Linux Server Maintenance Checklist

Server maintenance needs to be performed regularly to ensure that your server continues to run with minimal problems. While many maintenance tasks are now automated within the Linux operating system, there are still things that need to be checked and monitored regularly to ensure that Linux is running optimally. Below are the steps that should be taken to maintain your servers.


New package updates have been installed within the last month.
Keeping your server up to date is one of the most important maintenance tasks. Before applying updates, confirm that you have a recent backup (or a snapshot, if working with a virtual machine) so that you have the option of reverting if the updates cause unexpected problems. If you are applying updates to a production server, aim to test them on a test server first; this lets you confirm that the updates will not break your server and will be compatible with any other packages or software that you may be running.

You can update all packages currently installed on your server by running 'yum update' or 'apt-get upgrade', depending on your distribution (throughout the rest of this post, commands will be aimed at Red Hat based operating systems). Ideally this should be done at least once per month so that you have the latest security patches, bug fixes, and improved functionality and performance. You can automate the update by using crontab to check for and apply updates on a schedule rather than having to do it manually.
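As a sketch, a root crontab entry like the following would check for and apply updates weekly (the schedule and log path here are illustrative examples, not from the original post):

```shell
# m h dom mon dow  command
# Apply all available updates at 3 AM every Sunday; -y answers yes to prompts
0 3 * * 0  /usr/bin/yum -y update >> /var/log/yum-cron.log 2>&1
```

On Red Hat based systems, the yum-cron package provides a packaged version of the same idea.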

Other applications have been updated in the last month.
Web applications such as WordPress, Drupal, and Joomla need to be updated frequently, as these applications act as a gateway to your server: they are usually more accessible than direct server access and allow public access from the Internet. Many web applications also have third party plugins installed, which can be written by anyone and may contain security vulnerabilities in their unaudited code, so it is critical to update these applications very frequently. These content management systems are not managed by yum, so they will not be updated with a 'yum update' like the other installed packages. Updates are usually provided directly through the application itself; if you're unsure, contact the application provider for further assistance.

Reboot the server if a kernel update was installed.
If you ran a 'yum update' as previously discussed, check whether the kernel was listed as an update. Alternatively, you can explicitly update your kernel with 'yum update kernel'. The Linux kernel is the core of the Linux operating system and is updated regularly to include security patches, bug fixes, and added functionality. Once a new kernel has been installed, you must reboot your server to complete the process. Before you reboot, run 'uname -r', which prints the kernel version you are currently booted into. After the reboot, run 'uname -r' again and confirm that the newer version installed with yum is displayed. If the version number does not change, investigate which kernel is set to boot in /boot/grub/grub.conf; yum updates this file by default to boot the new kernel, so normally you shouldn't have to change anything.

It is possible to avoid rebooting your server by using third party tools such as Ksplice from Oracle or KernelCare from CloudLinux; however, by default on a standard operating system the reboot is required to make use of the newer kernel.
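The before/after check can be scripted. The helper below is a minimal sketch that compares two version strings; on a real system the inputs would come from `uname -r` and from the newest installed kernel package, as shown in the comments.

```shell
#!/bin/sh
# Decide whether a reboot is needed by comparing the running kernel
# with the newest installed kernel version.
# On a real Red Hat system the inputs would be gathered like this:
#   running=$(uname -r)
#   latest=$(rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort -V | tail -n 1)
reboot_needed() {
    running="$1"
    latest="$2"
    if [ "$running" = "$latest" ]; then
        echo "no"    # already booted into the newest installed kernel
    else
        echo "yes"   # a newer kernel is installed but not yet running
    fi
}
```

For example, `reboot_needed "$(uname -r)" "$latest"` printing "yes" tells you the reboot is still outstanding.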


Server access reviewed within the last 6 months.
To increase security, you should review who has access to your server. In an organization you may have staff who have left but still have accounts with access; these should be removed or disabled. There may also be accounts with sudo access that should not have it. Review this often to avoid a possible security breach, as granting root access is very powerful. You can check the /etc/sudoers file to see who has root access, and if you need to make changes, do so with the 'visudo' command. You can view recent logins with the 'last' command to see who has been logging into the server.
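A hedged sketch of one part of that review: listing the regular (human) accounts so they can be compared against your current staff list. The awk filter assumes the common convention that human accounts have UIDs of 1000 and above (500 on older Red Hat releases) and a real login shell.

```shell
#!/bin/sh
# List candidate human accounts from passwd-format input on stdin.
# On a live system:   list_users < /etc/passwd
# Recent logins:      last -n 20
# Sudo access:        grep -v '^#' /etc/sudoers   (edit only via visudo)
list_users() {
    # field 3 is the UID, field 7 the login shell; skip system/no-login accounts
    awk -F: '$3 >= 1000 && $7 !~ /(nologin|false)$/ { print $1 }'
}
```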

Firewall rules reviewed in the last 6-12 months.
Firewall rules should also be reviewed from time to time to ensure that you are only allowing required inbound and outbound traffic. A server's requirements change, and as packages are installed and removed, the ports it listens on may change, potentially introducing vulnerabilities, so it is important to restrict this traffic correctly. In Linux this is typically done with iptables, or perhaps with a hardware firewall that sits in front of the server. You can test for open ports by running nmap from another server, and view the current rules on the server by running 'iptables -L -v'.
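As a sketch of the review itself, the helper below compares a whitelist of ports you expect to be open against the ports actually found open (for instance from an nmap scan or `ss -tln`); anything unexpected warrants investigation. The port lists are illustrative.

```shell
#!/bin/sh
# Print any open ports that are not on the expected whitelist.
# Example inputs on a live system (nmap parsing shown as an assumption):
#   expected="22 80 443"
#   open=$(nmap -p- --open target | awk -F/ '/tcp/ {print $1}')
unexpected_ports() {
    expected="$1"   # space-separated list of allowed ports
    open="$2"       # space-separated list of ports found open
    for p in $open; do
        case " $expected " in
            *" $p "*) ;;        # port is expected: ignore
            *) echo "$p" ;;     # unexpected: flag for review
        esac
    done
}
```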

Confirm that users must change password.
User accounts should be configured so that passwords expire after a period of time; common periods are anywhere from 30 to 90 days. This ensures a password is only valid for a set amount of time before the user is forced to change it. It increases security because if an account is compromised, the stolen password will eventually stop working, so an attacker cannot maintain access through that account indefinitely.

If your accounts are held in an LDAP directory such as Active Directory, this can be set centrally for all accounts there. Otherwise, in Linux you can set it on a per account basis; this is not as scalable as using a directory, because you need to implement the changes on all of your servers individually, which takes time. This can be done using the chage command; 'chage -l username' displays the current settings for an account, for example:

[root@demo  ~]# chage -l root
Last password change                                    : Apr 07, 2014
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7

All of these parameters can be set for every user on the system.
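As a sketch, system-wide defaults for newly created accounts live in /etc/login.defs (the values below are illustrative, not recommendations), while existing accounts must be changed per user with chage:

```shell
# /etc/login.defs -- defaults applied to newly created accounts only
PASS_MAX_DAYS   90
PASS_MIN_DAYS   7
PASS_WARN_AGE   14

# Existing accounts are updated individually, for example:
#   chage -M 90 -m 7 -W 14 username
```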


Monitoring has been checked and confirmed to work correctly.
If your server is used in production, you most likely have it monitored for various services. It is important to check and confirm that this monitoring is working as intended and reporting correctly, so that you know you will be alerted if there are any issues. Incorrect firewall rules may disrupt monitoring, or your server may be performing different roles than when the monitoring was initially configured and may now need to be monitored for additional services.

Resource usage has been checked in the last month.
Resource usage is typically checked as a monitoring activity; however, it is good practice to review long term monitoring data to get an idea of any resource increases or trends which may indicate that you need to upgrade a component of your server so that it can handle the increased load. The details depend on your monitoring solution, but you should be able to monitor CPU usage, free disk space, free physical memory, and other variables against thresholds; if these start to trigger more often, you will know to investigate further. In Linux you'll typically be monitoring with SNMP/NRPE based tools such as Nagios or Cacti.

Hardware errors have been checked in the last week.
Critical hardware problems will likely show up in your monitoring and be obvious, as the server may stop working correctly. You can potentially avoid this scenario by monitoring your system for hardware errors, which may give you a heads up that a piece of hardware is having problems and should be replaced before it fails.

You can use mcelog, which processes machine checks (namely memory and CPU errors) on 64-bit Linux systems. It can be installed with 'yum install mcelog' and started with '/etc/init.d/mcelogd start'. By default mcelog checks hourly via crontab and reports any problems to /var/log/mcelog, so you will want to review this file regularly, every week or so.
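Checking the log can itself be automated. This is a minimal sketch: it only tests whether the log file is non-empty, which is a reasonable signal because nothing is written when no machine check events have occurred.

```shell
#!/bin/sh
# Report whether any machine check events have been logged.
# On a live system:  check_mcelog /var/log/mcelog
check_mcelog() {
    logfile="$1"
    if [ -s "$logfile" ]; then           # -s: file exists and is non-empty
        echo "WARNING: machine check events found in $logfile"
    else
        echo "OK: no machine check events logged"
    fi
}
```

A weekly cron job running this and mailing the output is one easy way to cover the checklist item.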


Backups and restores have been tested and confirmed working.
It is important to back up your servers in case of data loss, and it is equally important to actually test that your backups work and that you can successfully complete a restore. Check that your backups are running on a daily or weekly basis; most backup software can notify you if a backup task fails, and any failure should be investigated.

It is a good idea to perform a test restore every few months or so to ensure that your backups are working as intended. This may sound time consuming, but it's well worth it: there are countless stories of backups appearing to work until all the data is lost, and only then do people realize that they are not actually able to restore the data from backup.

You can back up locally to the same server, which is not recommended, or you can back up to an external location, either on your network or out on the Internet; this could be your own server or a cloud storage solution like Amazon's S3. An external backup is recommended. Keep in mind that if you are going to store sensitive data at a third party location, you will probably need to investigate encrypting the data so that it is stored safely.

Other general tasks

Unused packages have been removed.
You can save disk space and reduce your attack surface by removing old and unused packages from your server. Having fewer packages on your server is a good way to harden and secure it, as there is less code available for an attacker to make use of. The command 'yum list installed' displays all packages currently installed on your server, and 'yum remove package-name' removes a package; just be sure you know what the package is and that you actually want to remove it. Be careful when removing packages with yum: if you remove a package that another package depends on, the dependent package will also be removed, which can potentially remove a lot of things at once. After you run the command, yum will show the list of packages that will be removed, so double check it carefully before proceeding.

File system check performed in the last 180 days.
By default, after 180 days or 20 mounts (whichever comes first), your server's file systems will be checked with e2fsck on the next boot. This should run occasionally to ensure disk integrity and repair any problems. You can force a disk check by running 'touch /forcefsck' and then rebooting the server (the file is removed on the next boot), or with the 'shutdown -rF now' command, which forces a disk check on the next boot and performs the reboot now. Alternatively, you can use -f instead of -F to skip the disk check; this is known as a fast boot and can also be done with 'touch /fastboot'. This can be useful if, for example, you have just performed a kernel update, need to reboot, and want the server back up as soon as possible rather than waiting for the check to complete.

The mount count can be modified using the tune2fs command. The defaults are reasonable, but 'tune2fs -c 50 /dev/sda1' will increase the maximum mount count to 50, so a file system check will happen after the file system has been mounted 50 times. Similarly, 'tune2fs -i 210 /dev/sda1' will change the check interval so that the file system is only checked after 210 days rather than 180.
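You can also check whether a file system check is due by parsing `tune2fs -l` output. A sketch (the field names match tune2fs output; a maximum mount count of -1 means the mount-count check is disabled):

```shell
#!/bin/sh
# Read "tune2fs -l /dev/sdXN" output on stdin and print "yes" if the
# mount count has reached the maximum, i.e. a check is due on next boot.
fsck_due() {
    awk -F: '
        /^Mount count/         { mc  = $2 + 0 }   # current mount count
        /^Maximum mount count/ { max = $2 + 0 }   # -1 disables the check
        END { if (max > 0 && mc >= max) print "yes"; else print "no" }
    '
}
# Live usage:  tune2fs -l /dev/sda1 | fsck_due
```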

Logs and statistics are being monitored daily or weekly.
If you look through /var/log, you will notice a lot of different log files that are continually written to with different information; some of it is useful, but most of it is not relevant, leaving a large amount of information to go through. Logwatch can be used to monitor your server's logs and email the administrator a summary on a daily or weekly basis (you can control it via crontab). Logwatch can also send a summary of other useful server information, such as the disk space in use on all partitions, so it's a good way to get up to date notifications from your servers. You can install the package with 'yum install logwatch'.
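As a sketch, the report behaviour is controlled through logwatch's configuration file (commonly /etc/logwatch/conf/logwatch.conf on Red Hat systems); the values below are illustrative:

```shell
# /etc/logwatch/conf/logwatch.conf -- example settings
# Send the summary to this address
MailTo = admin@example.com
# Report on the previous day's logs
Range = yesterday
# Detail level: Low, Med, or High
Detail = Low
```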

Regular scans are being run on a weekly/monthly basis.
To stay secure, it is important to scan your server for malicious content. ClamAV is an open source antivirus engine which detects trojans, malware, and viruses, and it works well with Linux. You can set a cron job to run a weekly scan at 3 AM, for instance, and then email you a report outlining the results. Depending on how much content you have, the scan may take a while; it's recommended that you schedule an intensive scan once per week at a low resource usage time, such as on the weekend or overnight. Check the crontab and the /var/log/cron log file to ensure that the scans are running as intended. You can also configure an email summary to be sent to you, so confirm that you are receiving those alerts as well.
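A hedged sketch of such a job as a root crontab entry (the paths are examples): `clamscan -r` recurses into directories, `--infected` reports only infected files, and `--log` writes a report you can review or mail.

```shell
# Weekly recursive scan every Sunday at 3 AM; update signatures first
0 3 * * 0  /usr/bin/freshclam --quiet && /usr/bin/clamscan -r --infected --log=/var/log/clamscan-weekly.log /home /var/www
```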

OK, folks that’s it for this post. Have a nice day guys…… Stay tuned…..!!!!!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!

IT Services and Managed Service Providers (MSPs)

In-house tools can make you more efficient at monitoring, patching, providing remote support, and service delivery. But you also need to ensure regular scheduled maintenance of every client system. That’s where a Managed Service Provider (MSP) comes in.

What’s a Managed Service Provider (MSP)?

A managed service provider (MSP) caters to enterprises, residences, or other service providers. It delivers network, application, system and e-management services across a network, using a “pay as you go” pricing model.

A “pure play” MSP focuses on management services. The MSP market features other players – including application service providers (ASPs), Web hosting companies, and network service providers (NSPs) – who supplement their traditional offerings with management services.

You Probably Need an MSP if….

Your business has a network meeting any of the following criteria:

  • Connects multiple offices, stores, or other sites
  • Is growing beyond the capacity of current access lines
  • Must provide secure connectivity to mobile and remote employees
  • Could benefit from cost savings by integrating voice and data traffic
  • Anticipates more traffic from video and other high-bandwidth applications
  • Is becoming harder to manage and ensure performance and security, especially given limited staff and budget

What Can You Gain?

1. Future proof services, using top-line technology

IT services and equipment from an MSP are constantly upgraded, with no additional cost or financial risk to yourself. There’s little chance that your Managed IT Services will become obsolete.

2. Low capital outlay and predictable monthly costs

Typically, there’s a fixed monthly payment plan. A tight service level agreement (SLA) will ensure no unexpected upgrade charges or changes in standard charges.

3. Flexible services

A pay-as-you-go scheme allows for quick growth when necessary, or cost savings when you need to consolidate.

4. Converged services

A single “converged” connection can provide multiple Managed IT Services, resulting in cost-savings on infrastructure.

5. Resilient and secure infrastructure

A Managed Service Provider’s data centres and managed network infrastructure are designed to run under 24/7/365 management. Typically, their security procedures have to meet government approval.

6. Access to specialist skills

The MSP will have staff on hand capable of addressing specific problems. You may only need this skill once, and save the expense of training your staff for skills they’ll never use.

7. Centralized applications and servers

Access to centralized data centers within the network can also extend access to virtual services, as well as storage and backup infrastructure.

8. Increased Service Levels

SLAs can ensure continuity of service. A managed service company will also offer 24/7/365 support.

9. Disaster recovery and business continuity

MSPs have designed networks and data centers for availability, resilience and redundancy, to maintain business continuity. Your data will be safe and your voice services will continue to be delivered, even if your main office goes down.

10. Energy savings

By running your applications on a virtual platform and centralizing your critical business systems within data centers, you’ll lower your carbon footprint and reduce costs.

Functions of an MSP

Under Managed Services, the IT provider assumes responsibility for a client’s network, and provides regular preventive maintenance of the client’s systems. Technical support is delivered under a service level agreement (SLA) that provides specified rates, and guarantees the consultant a specific minimum income.

The core tools of Managed Services are:

  1. Patch Management
  2. Remote Access provision
  3. Monitoring tools
  4. Some level of Automated Response

Most MSPs also use a professional services automation (PSA) tool such as Autotask or ConnectWise. A PSA provides a Ticketing System, to keep track of service requests and their responses. It may also provide a way to manage Service Agreements, and keep track of technicians’ labor.

In essence, though, it boils down to this: If a system crashes, and the Managed Service Provider is monitoring the network, that MSP has total responsibility for the state of the backup and the health of the server.

As their client (and this should be spelled out in the SLA), you can hold the MSP totally responsible, up to and including court action, for failing to provide the service they're contracted to provide.

How to Choose an MSP

Here are five key characteristics to consider, when selecting a managed service provider:

1. Comprehensive Technology Suite

The MSP should have a broad set of solutions available to meet not only your current needs, but to scale and grow as your business develops new products and services.

A well-equipped MSP will offer support for virtual infrastructures, storage, co-location, end user computing, application management capabilities, etc. The MSP should be able to accommodate a range of applications and systems, under a service level agreement starting at the application layer, and extending all the way up the technology stack.

2. Customization and Best Practices

Look for a service provider with the expertise to modify each architecture based on individual business goals.

Their best practices should ensure seamless migration for customers, by taking an existing physical machine infrastructure and virtualizing it. Comprehensive support should be available throughout.

3. Customer-Centric Mindset

The MSP should provide a dedicated account manager who serves as the single point of contact and escalation for the customer. Support should be readily available, along with access to other service channels, as required.

The most effective MSPs will be available to address problems around the clock, and have effective troubleshooting capabilities.

4. Security

For customers working in regulated environments such as healthcare and financial services, security and compliance issues are paramount. The MSP should have a robust, tested infrastructure and an operational fabric that spans several geographical zones. This reduces susceptibility to natural disasters and service interruptions.

The provider should continuously monitor threats and ensure that each system is designed with redundancy at every level.

5. The Proper Scale

If a small business selects one of the largest service providers, they may not receive a high level of customer-centric, flexible and customized support. Conversely, if a business selects an MSP that’s too small, it may lack the scale and expertise to offer the necessary support.

Having direct access to a senior member of the MSP’s management team by direct email or cell phone can be a good measure of the degree of personalized attention a customer is likely to receive.

Understanding the different types of service providers is the first step in making the right decision for your organization.



Hi all, in this post I will be discussing two web server packages. One is Apache, which has long shown its ability to do many things in a single package with the help of modules, and which powers millions of websites on the Internet. The other is the relatively new web server called Nginx, created by the Russian programmer Igor Sysoev.

Many people in the industry are aware of the speed for which Nginx is famous. There are also some important differences between the working models of Apache and Nginx, and we will discuss those differences in detail.


Let's first discuss the two main working models used by the Apache web server; we will get to Nginx later. Most people who work with Apache will know these two models, through which Apache serves its requests:

1. Apache MPM Prefork
2. Apache MPM Worker

Note: there are many different MPM modules available, for different platforms and functionalities, but we will only discuss the two above here.
Let's look at the main difference between MPM Prefork and MPM Worker. MPM stands for "Multi Processing Module".

MPM Prefork :

Most of the functionality in Apache comes from modules; even MPM Prefork comes as a module and can be enabled or disabled. The prefork model of Apache is non-threaded, and it is a good model in that it isolates each connection from the others: if one connection is having problems, the others are not at all affected. If no MPM module is specified, Apache uses MPM Prefork by default. However, this model is very resource intensive.

Why is Prefork model resource intensive?

Because in this model a single parent process creates many child processes, which wait for requests and serve them as they arrive. This means each request is served by a separate process; in other words, it is "process per request". Apache also maintains a number of idle processes before requests arrive, so that requests can be served quickly when they do.

But each process uses system resources such as RAM and CPU, and a comparable amount of RAM is used by every process.

[Figure: the Apache prefork model]

If you receive a large number of requests at one time, Apache will spawn a large number of child processes, which results in heavy resource utilization, as each process uses a certain amount of system memory and CPU.

MPM Worker :

This model of Apache can serve a large number of requests with fewer system resources than the prefork model, because a limited number of processes serve many requests. This is the multi-threaded architecture of Apache: it uses threads rather than processes to serve requests. So what is a thread? In operating systems, a thread is a lightweight unit of execution within a process that does some work and exits; a thread is sometimes described as a process inside a process.

[Figure: the Apache worker model]

In this model there is also a single parent process, which spawns some child processes. But instead of "process per request" it is "thread per request": each child process contains a certain number of threads, some serving requests ("server threads") and some idle. The idle threads wait for new requests, so no time is wasted creating threads when requests arrive.
The directive "StartServers" in the Apache config file /etc/httpd/conf/httpd.conf controls how many child processes exist when Apache starts. Each child process handles requests with a fixed number of threads, specified by the "ThreadsPerChild" directive in the config file.
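A sketch of what the relevant worker MPM section in httpd.conf might look like (the numbers are illustrative defaults, not recommendations; Apache config comments must be on their own lines):

```apache
<IfModule worker.c>
    # Child processes created at startup
    StartServers          4
    # Upper limit on simultaneous requests being served
    MaxClients          300
    # Bounds on the pool of idle threads kept waiting for requests
    MinSpareThreads      25
    MaxSpareThreads      75
    # Fixed number of threads inside each child process
    ThreadsPerChild      25
    # 0 means child processes are never recycled
    MaxRequestsPerChild   0
</IfModule>
```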

Note: there are some PHP module issues reported when working with the Apache MPM worker model, since mod_php is not considered thread-safe.

Now let's discuss Nginx.


Nginx was created to solve the C10K problem that servers like Apache struggled with.

C10K: the name given to the problem of optimizing web server software to handle a large number of requests at one time, in the range of 10,000 concurrent connections, hence the name.
Nginx is known for its speed in serving static pages, much faster than Apache, while keeping machine resource usage very low.
Fundamentally, Apache and Nginx differ a lot. Apache uses a multi-process/multi-threaded architecture, while Nginx has an event-driven architecture with single-threaded worker processes (more on event-driven below). The main difference this event-driven architecture makes is that a very small number of Nginx worker processes can serve a very large number of requests.
Nginx is also sometimes deployed as a front end server, serving static content to clients quickly, with Apache behind it.
Each worker process handles requests using the event-driven model. Nginx does this with the help of efficient event notification interfaces in the Linux kernel, notably epoll (falling back to select/poll where necessary). Apache, even when run with its threaded model, uses considerably more system resources than Nginx.

Why does Nginx run more efficiently than Apache?

In Apache, when a request is served, either a thread or a process is dedicated to that request. If the request needs data from the database, files from disk, and so on, the process waits for it. So some processes in Apache just sit and wait for a task to complete, consuming system resources.
Suppose a client with a slow Internet connection connects to a web server running Apache. The Apache server retrieves the data from disk to serve the client, and even after sending the response, the process waits until a confirmation is received from the client, wasting that process's resources the whole time.
Nginx avoids dedicating a process or thread to each request. Within a worker, a single thread handles all requests with the help of an event loop: the thread wakes up whenever a new connection arrives or some other event needs attention, so no resources are wasted waiting.

Step 1: Get the request.
Step 2: The request triggers events inside the process.
Step 3: The process handles these events and returns the output (while simultaneously handling events for other requests).
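The event-driven setup described above maps to a very small amount of nginx configuration; a sketch with illustrative values:

```nginx
# /etc/nginx/nginx.conf (fragment)
# One worker process per CPU core is a common rule of thumb
worker_processes  2;

events {
    # Each single-threaded worker multiplexes up to this many
    # simultaneous connections through its event loop
    worker_connections  1024;
    # Use the Linux epoll interface for event notification
    use epoll;
}
```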

Nginx also supports the major features which Apache supports, such as the following:

  • Virtual Hosts
  • Reverse Proxy
  • Load Balancer
  • Compression
  • URL rewrite



Amazon Web Services vs. Microsoft Azure vs. Google Compute Platform

The rivalry is heating up in the cloud space as vendors offer innovative features and frequently reduce prices. In this blog we will highlight the competition among the three titans of the cloud: Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. Time will tell which of these three will thrive and win the battle. We also have IBM SoftLayer and Alibaba's AliCloud joining the bandwagon.

Although AWS (Amazon Web Services) has a noteworthy head start, Microsoft and Google are not out of the race. Google is currently developing 12 new cloud data centers over the next 18 months. Both of these cloud vendors have the money, power, marketing bling, and technology to draw enterprise and individual customers.

To help with decision making, this article provides a brief breakdown of these three market giants. It will also try to explain the advantages of adopting a multi-cloud strategy.

[Figure: Gartner Magic Quadrant for Infrastructure as a Service, 2016]

Source: Gartner (August 2016)

Amazon Web Services

AWS has well organized data centers distributed across the globe. Availability Zones are placed at quite a distance from each other, so that the failure of one Availability Zone does not impact the others.


Microsoft Azure

Microsoft has been quickly building more data centers all over the world to catch up with Amazon's vast geographical presence. From six regions in 2011, it currently has 22 regions, each of which contains one or more data centers, with five additional regions planned to open in 2016. While Amazon was the first to open a region in China, Microsoft was first to open an India region, at the end of 2015.


Google Cloud Platform

Google has the smallest footprint of the three cloud providers. It makes up for its geographical limitations with its worldwide network infrastructure, providing low-latency, high-speed connectivity between its data centers at both the regional and interregional level.


Compute

Amazon's Elastic Compute Cloud (EC2) is its core compute service, enabling users to create virtual machines from pre-configured or custom AMIs. You can choose the size, number, and memory capacity of your VMs, and select the availability zone from which to launch them. EC2 also provides auto-scaling and Elastic Load Balancing (ELB): ELB distributes incoming traffic across instances for improved performance, and auto-scaling lets users automatically scale EC2 capacity up or down.

In 2012, Google launched its cloud compute service, known as Google Compute Engine (GCE). GCE allows users to launch VMs, much like AWS, into regions and availability groups. However, Google Compute Engine was not available to everyone until 2013. Google subsequently added improvements such as broader operating system support, load balancing, faster persistent disks, live migration of virtual machines, and instances with more cores.

Microsoft also launched its cloud compute service in 2012, but it was not generally available until May 2013. Users select a Virtual Hard Disk (VHD), which is similar to Amazon's AMI, to create a VM. A VHD can be predefined by third parties, by Microsoft, or by the user. With every virtual machine, you specify the number of cores and the amount of memory.


Storage

Storage is one of the primary elements of IT. This section compares the storage services of the three large cloud providers across the two primary storage types: block storage and object storage.


Amazon's block storage service is known as Elastic Block Store (EBS), and it supports three different types of persistent disks: magnetic, SSD, and SSD with provisioned Input/Output Operations Per Second (IOPS). Volume sizes range up to a maximum of 1TB for magnetic disks and 16TB for SSD.

Amazon's world-leading object storage service, S3 (Simple Storage Service), has four different storage classes: standard, reduced redundancy, standard – infrequent access, and Glacier. Data is kept within a single region unless it is replicated manually across regions.


Microsoft refers to its storage services as Blobs. Disks and Page Blobs make up its block storage service, which can be provisioned as Premium or Standard, with volume sizes up to 1TB. Block Blobs are its object storage service. Like Amazon, it offers multiple redundancy levels: LRS (locally redundant storage), where redundant copies of data are kept within the same data center; ZRS (zone redundant storage), where redundant copies are maintained in different data centers in the same region; and GRS (geo-redundant storage), which performs LRS in two separate data centers for the highest level of availability and durability.


In Google’s cloud, storage is structured somewhat differently than at its two competitors. Block storage is not a separate product category but an add-on to instances within Google Compute Engine (GCE). Google provides two choices, magnetic or SSD volumes, though the IOPS count is fixed. The ephemeral disk is fully configurable and is part of the storage offering. Its object storage service, Google Cloud Storage, is divided into three classes: Standard; Durable Reduced Availability, for less critical data; and Nearline, for archives.


Amazon’s VPCs (Virtual Private Clouds) and Azure’s VNETs (Virtual Networks) enable users to group virtual machines into isolated networks in the cloud. Using VNETs and VPCs, users can define a network topology and create route tables, subnets, network gateways, and private IP address ranges. There is little to choose between them here, as both offer ways to extend your on-premise data center into the public cloud. Every GCE instance, by contrast, belongs to a single network, which defines the gateway address and address range for all instances attached to it. Firewall rules can be applied to an instance, and an instance can receive a public IP address.

Billing Structure

Amazon Web Services 

AWS organizes resources under accounts. Each account is a single billing unit within which cloud resources are provisioned. Organizations with numerous AWS accounts, though, may wish to receive one combined bill instead of several separate ones. AWS permits this through consolidated billing: one account is designated the paying account and the other accounts are linked to it. The bill then covers the paying account and all linked accounts together, collectively referred to as the consolidated billing account family.


Microsoft takes a tiered approach to account management. The subscription sits at the bottom of the hierarchy and is the entity that actually consumes and provisions resources. An account manages several subscriptions. This may sound like the AWS account structure, but Azure accounts are purely management units and do not consume resources themselves. For organizations without a Microsoft Enterprise Agreement, the hierarchy ends there. Those with Enterprise Agreements can register them in Azure and manage accounts under them, with departmental administration and optional cost-center hierarchies.


Google uses a flat structure for its billing. Resources are grouped under Projects. There is no entity higher than a project; however, several projects can be gathered under a consolidated billing account. Such a billing account resembles Azure’s accounts in that it is not a consuming entity and cannot provision services itself.


Cloud service vendors offer a variety of pricing and discount models for their services. Most of the complex pricing and discount models center on compute services, while simple bulk discounts are typically applied to the remaining services. This is primarily for two reasons. First, the vendors operate in a very competitive market and want to lock users into long-term commitments. Second, they have an interest in maximizing the utilization of their infrastructure, where every idle VM hour represents a real loss.

Amazon Web Services

AWS has the most diversified and complex pricing models for its Elastic Compute Cloud (EC2) service:

On-demand: customers pay for what they use, with no upfront cost.

Reserved Instances: customers reserve instances for one or three years, with an upfront cost depending on the payment option chosen. Payment options include:

  • All-upfront: the customer pays the total commitment upfront and receives the highest discount rate.
  • Partial-upfront: the customer pays 50-70 percent of the commitment upfront and the remainder in monthly installments, receiving a somewhat lower discount than all-upfront.
  • No-upfront: the customer pays nothing upfront and the full amount in monthly installments over the term of the reservation, receiving a considerably lower discount.
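The trade-off between the three payment options can be sketched numerically. All rates below (the $0.10/hour on-demand price and the 30-40 percent discounts) are made-up placeholders, not actual AWS prices, which vary by instance type, region, and term:

```python
# Illustrative comparison of EC2 pricing models. All prices and discount
# rates are assumed placeholders, not real AWS rates.

ON_DEMAND_HOURLY = 0.10          # assumed on-demand rate, $/hour
HOURS_PER_YEAR = 24 * 365

def on_demand_cost(years):
    """Pay-as-you-go: no upfront payment, full hourly rate."""
    return ON_DEMAND_HOURLY * HOURS_PER_YEAR * years

def reserved_cost(years, upfront_fraction, discount):
    """Reserved instance: part of the commitment is paid upfront, the
    remainder monthly, with an overall discount off the on-demand total."""
    total = on_demand_cost(years) * (1 - discount)
    upfront = total * upfront_fraction
    monthly = (total - upfront) / (12 * years)
    return total, upfront, monthly

# Assumed discounts: all-upfront deepest, no-upfront shallowest.
for label, frac, disc in [("all-upfront", 1.0, 0.40),
                          ("partial-upfront", 0.50, 0.35),
                          ("no-upfront", 0.0, 0.30)]:
    total, upfront, monthly = reserved_cost(3, frac, disc)
    print(f"{label}: total=${total:,.0f} upfront=${upfront:,.0f} "
          f"monthly=${monthly:,.0f}")
```

Even with placeholder numbers, the structure is visible: the deeper the upfront commitment, the lower the total cost over the reservation term.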


Microsoft bills its customers on demand, rounding up the number of minutes used. Azure also provides short-term commitments with discounts. Discounts are offered only for bulk financial commitments: through pre-paid subscriptions, which provide a 5 percent discount on the bill, or through Microsoft’s Enterprise Agreements, where higher discounts can be applied to an upfront financial commitment.


GCP bills for instances by rounding up the number of minutes used, with a 10-minute minimum. It recently announced sustained-use pricing for compute services, a simpler and more flexible approach. Sustained-use pricing automatically discounts the on-demand baseline hourly rate as a given instance is used for a larger percentage of the month.
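The mechanics of sustained-use discounting can be sketched as follows. The tier boundaries and rates below follow the originally announced scheme but should be treated as assumptions here; check current GCP pricing for real values:

```python
# Sketch of GCP-style sustained-use discounting. The tier boundaries and
# per-tier rates are assumptions for illustration, as is the base rate.

BASE_HOURLY = 0.05  # assumed on-demand baseline rate, $/hour

# (fraction-of-month usage band, fraction of the base rate charged there)
TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def sustained_use_cost(hours_used, hours_in_month=730):
    """Charge each successive quarter of the month at a lower rate."""
    cost = 0.0
    remaining = hours_used
    for band, rate in TIERS:
        band_hours = min(remaining, band * hours_in_month)
        cost += band_hours * BASE_HOURLY * rate
        remaining -= band_hours
        if remaining <= 0:
            break
    return cost

full = sustained_use_cost(730)            # instance runs the whole month
print(f"full month: ${full:.2f} vs on-demand ${730 * BASE_HOURLY:.2f}")
```

Under these assumed tiers, an instance running the entire month is effectively billed at 70 percent of the on-demand rate, with no reservation required.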

The Bottom Line

The public cloud war slogs on. With cloud computing still in an early maturing stage, it is tough to foresee exactly how things will change. It is safe to say, however, that prices will likely continue to drop and attractive, innovative features will continue to appear. Cloud computing is here to stay, and with the growing maturity of private and public cloud platforms and the massive adoption of IaaS, enterprises now understand that depending on a single cloud vendor is not a long-term option. Issues such as vendor lock-in, higher availability, and leveraging competitive pricing may push enterprises to look for an optimal mix of clouds for their requirements, rather than a sole provider.

OK, folks.. !!!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!

Server Monitoring Best Practices

As a business, you may be running many on-site or Web-based applications and services: security, data handling, transaction support, load balancing, or the management of large distributed systems. How well these run depends on the condition of your servers, so it’s vital to continuously monitor their health and performance.

Here are some guidelines designed to help you get to grips with server monitoring and the implications that it carries.

Understand Server Monitoring Basics

The basic elements of “Server monitoring” are events, thresholds, notifications, and health.

1. Events

Events are triggered on a system when a condition set by a given program occurs. An example would be when a service starts, or fails to start.

2. Thresholds

A threshold is the point on a scale that must be reached, to trigger a response to an event. The response might be an alert, a notification, or a script being run.

Thresholds can be set by an application, or a user.

3. Notifications

Notifications are the methods of informing an IT administrator that something (event, or response) has occurred.

Notifications can take many forms, such as:

  • Alerts in an application
  • E-mail messages
  • Instant Messenger messages
  • Dialog boxes on an IT administrator’s screen
  • Pager text messages
  • Taskbar pop-ups

4. Health

Health describes the set of related measurements defining the state of a variable being monitored.

For instance, the overall health of a file server might be defined by read/write disk access, CPU usage, network performance, and disk fragmentation.
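The event/threshold/health model above can be sketched in a few lines. The metric names and threshold values are illustrative choices, not taken from any particular monitoring product:

```python
# Minimal sketch of the threshold/health model described above. Metric
# names and threshold values are assumptions for illustration.

THRESHOLDS = {          # metric -> (warning, critical) upper bounds
    "cpu_percent": (80, 95),
    "disk_busy_percent": (70, 90),
    "net_util_percent": (75, 90),
}

def classify(metric, value):
    """Map a measurement to a severity level: green, yellow, or red."""
    warn, crit = THRESHOLDS[metric]
    if value >= crit:
        return "red"
    if value >= warn:
        return "yellow"
    return "green"

def overall_health(samples):
    """Overall health is the worst severity among the monitored metrics."""
    order = {"green": 0, "yellow": 1, "red": 2}
    return max((classify(m, v) for m, v in samples.items()),
               key=order.__getitem__)

print(overall_health({"cpu_percent": 55,
                      "disk_busy_percent": 72,
                      "net_util_percent": 40}))   # one metric in warning
```

A single metric crossing its warning threshold is enough to pull the server’s overall health down to yellow, which mirrors how dashboard roll-ups usually behave.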

Set Clear Objectives

Decide what it is you need to monitor. Identify the events most relevant to detecting potential issues that could adversely affect operations or security.

A checklist might include:

  1. Uptime and performance statistics of your web servers
  2. Web applications supported by your web servers
  3. Performance and user experience of your web pages, as supported by your web server
  4. End-user connections to the server
  5. Measurements of load, traffic, and utilisation
  6. A log of HTTP and HTTPS sessions and transactions
  7. The condition of your server hardware
  8. Virtual Machines (VMs) and host machines running the web server

Fit Solid Foundations

It’s safe to say that most IT administrators appreciate useful data, clearly presented, that lets them take in a lot of information at a glance. This means you should take steps to ensure that your monitoring output is easy to read and well presented.

A high-level “dashboard” can serve as a starting point. This should have controls for drilling down into more detail. Navigation around the monitoring tool and access to troubleshooting tools should be as transparent as possible.

It’s also necessary to:

• Identify the top variables to monitor, and set these as default values. Prioritise them in the user interface (UI).

• Provide preconfigured monitoring views that match situations encountered on a day-to-day basis.

• Have a UI that also allows for easy customization.

• Let users/IT managers choose what they want to monitor at any given time, adjust the placement of their tools, and decide the format in which they view the data.

• The UI text should be consistent, clear, concise, and professional. From the outset, it should state clearly what is being monitored – and what isn’t.

Build, to Scale

Organizations of different sizes naturally have different monitoring needs. Small Organization IT administrators often look to fix problems after they’ve been identified. Monitoring is generally part of the troubleshooting process. Monitoring applications should intelligently identify problems, and notify the users via e-mail and other means. Keep the monitoring UI simple.

Medium Organization IT administrators monitor to isolate big and obvious problems. A monitoring system should provide an overview of the system, and explanations to help with the troubleshooting process. Preconfigured views, and the automation of common tasks performed on receiving negative monitoring information (e.g., ping, traceroute), will speed response. Again, keep the monitoring UI simple.

Large Organization/Enterprise IT administrators require more detailed and specific information. Users may be dedicated exclusively to monitoring, and will appreciate dense data, with mechanisms for collaborating. Long-term ease of use will take precedence over ease of learning.

Set Up Red Flags

You should provide a set of “normal” or “recommended” values, as a baseline. This will give context to the information being monitored. The system may give the range of normal values itself, or provide tools for users to calculate their own.

Within the application, make sure that data representing normal performance can be captured. This can be used later, as a baseline for troubleshooting. In any case, users should be able to tell at a glance when a value is out of range, and is then a possible cause for concern. Your monitoring software can assist in this, by setting a standard alert scale, across the application.
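Capturing a baseline and flagging out-of-range values can be sketched like this. The three-sigma band used here is a common convention, not a requirement of any particular tool, and the sample numbers are invented:

```python
# Sketch: derive a "normal" range from captured baseline data, then flag
# samples outside it. The 3-sigma band and sample values are assumptions.
import statistics

def normal_range(baseline, k=3):
    """Return (low, high) bounds: mean +/- k standard deviations."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - k * sd, mean + k * sd

def out_of_range(samples, lo, hi):
    """Return the samples that fall outside the normal range."""
    return [v for v in samples if not lo <= v <= hi]

baseline = [48, 52, 50, 49, 51, 50, 47, 53]   # e.g. CPU % under normal load
lo, hi = normal_range(baseline)
print(out_of_range([50, 51, 97, 49], lo, hi))
```

With the baseline above, the normal range works out to roughly 44-56, so the spike to 97 is the only sample flagged for attention.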

In Western cultures, common colors for system alerts are:

  • Red = Severe or Critical
  • Yellow = Warning or Informational
  • Green = Good or OK

For accessibility, combine colors with icons for users who are sight-impaired; words that can be read aloud by a screen reader are also appropriate. Limit the use of custom icons in alerts, though, as users may resent having to learn too many new ones, and there may be conflicts with icons in other applications. That said, common, recognizable icons are fine, as there’s nothing new to learn.

Explain the Language

Don’t assume that your users will understand all the information your monitoring software provides. Help them interpret the data, by providing explanations, in the user interface.

  • Use roll-overs to display specific data points, such as a critical spike in a chart
  • Explain onscreen, how the monitoring view is filtered. For example, some variables or events might be hidden (but not necessarily trouble-free). The filter mechanism, an explanation of the filter, and the data itself should be positioned close together
  • Give easy access to any variables that are excluded in a view
  • State when the last sample of data was captured
  • Reference the data sources
  • There should be links to table, column, and row headings, with pop up explanations of the variables, abbreviations, and acronyms
  • Provide links beside the tables themselves, with pop up explanations of the entire table

Let Them Know

Alerts should be sent out, to indicate there is a problem with the system. Notifications should be informative enough to give IT administrators a starting point to address the problem. Information which helps the user take action should be displayed near the monitoring information. Probable causes and possible solutions should be prominently displayed.

Likewise, the tools needed for solving common problems should be easily accessible at the notification point.

You should log 24 to 48 hours of data. That way, when a problem arises, users will have enough information available to troubleshoot. Note that some applications need longer periods of monitoring, and some shorter. The log length will be determined by the scope of your day-to-day operations.

Provide multiple channels for notification (email, Instant Messages, pager text, etc.)

Users should be able (and encouraged) to update, document, and share the information needed to start troubleshooting.

Keep Them Informed

Users often need to use monitoring data for further analysis, or for reports. The monitoring application itself should assist, with built-in reporting tools. Performance statistics and an overall summary should be generated at least once a week. Analysis of critical or noteworthy events should be available, on a daily basis.

Allow users to capture and save monitoring data – e.g., the “normal” performance figures used as a baseline for troubleshooting. Users should be able to easily specify what they want recorded (variables, format, duration, etc.). They should also be allowed to log the information they’re monitoring.

There should be a central repository, for all logs from different areas of monitoring. A single UI can then be used, to analyze the data. Export tools (to formats such as .xls, .html, .txt, .csv) should be provided. This will help to facilitate collaboration in reporting and troubleshooting.
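Exporting data from a central log repository for reporting can be as simple as the sketch below, using Python’s standard csv module. The record fields (time, host, metric, value) are an assumed shape for illustration; any log record layout works the same way:

```python
# Sketch of exporting centralized monitoring data to CSV for reporting.
# The record fields are illustrative assumptions.
import csv
import io

records = [
    {"time": "2016-03-01T10:00", "host": "web01", "metric": "cpu", "value": 43},
    {"time": "2016-03-01T10:00", "host": "web02", "metric": "cpu", "value": 88},
]

def export_csv(rows):
    """Render monitoring records as CSV text, header row first."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["time", "host", "metric", "value"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(export_csv(records))
```

The same DictWriter approach covers the .txt and .csv targets mentioned above; .xls and .html exports would need additional libraries.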

Take Appropriate Measures

Different graph types should be appropriate to the type of information you are analyzing.

Line graphs are good for displaying one or more variables on a scale, such as time. Ranges, medians, and means can all be shown simultaneously.

Table format makes it easy for users to see precise numbers. Table cells can also contain graphical elements associated with numbers, or symbols to indicate state. The most important information should appear first, or be highlighted so it can be taken in at a glance.

Histograms or bar graphs allow values at a single point in time to be compared easily. Ranges, medians, and means can all be displayed simultaneously.

Some recommendations:

  • When using a line graph, show as few variables as possible. Five is a safe maximum. This makes the graph easier to read
  • Avoid using stacked bar graphs. It’s better to use a histogram, and put the values in clusters along the same baseline. Alternatively, break them up into separate graphs
  • When using a graph to show percentage data, always use a pie chart
  • Consider providing a details pane; clicking a graph will display details about the graph in the pane
  • Avoid trying to convey too many messages in one graph
  • Never use a graph to display a single data point (a single number)
  • Avoid the use of 3D in your charts; it can be distracting
  • Allow users to easily flip between different views of the same data

Push the Relevant Facts

Displaying a lot of stuff onscreen makes it harder for administrators to spot the information that is of most value – like critical error messages.

Draw attention to what needs attention, most:

  • by placing important items prominently
  • by putting more important information before the less important
  • by using visual signposts, such as text or an icon, to indicate important information

Preconfigured monitoring views will reduce the emphasis on users configuring the system. Allow users to customise the information and highlight what they think is important, so it can be elevated in the UI. Group similar events – and consider having a global overview of the system, visible at all times.

Hide the Redundant

If it hasn’t gone critical, or isn’t affecting anything, administrators don’t need to see it. At least, not immediately. If a failure recurs, don’t keep showing the same event over and over; try to group similar events into one.

Allow your users to tag certain events as ones they don’t want to view. Let them set thresholds that match their own monitoring criteria. This allows them to create environment-specific standards, and reduces false alarms. Use filters and views, to give users granular control of what they are monitoring.

Provide the ability to zoom in for more detailed information, or zoom out for aggregated data. Allow users to hide unimportant events, but still have them accessible.
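Collapsing repeated events into a single entry with a count, so a recurring failure surfaces once rather than flooding the view, can be sketched with a counter. The event shape (host, message) and the sample messages are assumptions for illustration:

```python
# Sketch of grouping repeated events into one entry with a count. The
# (host, message) event shape is an assumption for illustration.
from collections import Counter

events = [
    ("web01", "service httpd failed to start"),
    ("web01", "service httpd failed to start"),
    ("db01",  "disk 90% full"),
    ("web01", "service httpd failed to start"),
]

def group_events(stream):
    """Return unique events with occurrence counts, most frequent first."""
    return Counter(stream).most_common()

for (host, msg), count in group_events(events):
    print(f"{count}x {host}: {msg}")
```

Three identical httpd failures collapse into one line with a count of 3, while the unrelated disk warning keeps its own entry.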

Be Prepared, for the Worst

As well as probable causes, the application should suggest possible solutions, for any problems that occur. Administrators will likely have preferred methods of troubleshooting. But, in diagnostic sciences, it helps to get a second opinion. It’s essential to identify events most indicative of potential operational or security issues. Then, automate the creation of alerts on those events, to notify the appropriate personnel.

Being prepared also means that all data should be backed up and stored off the premises as well as on the network. This protects against the obvious such as hardware failure or malware attacks, but also against complete disaster such as a fire at the premises.

And the Best, that Can Happen

With proper monitoring measures in place, you greatly reduce the risk of losses due to poor server performance. This has a corresponding positive effect on your business – especially online services and transactions.

A well-tuned monitoring system will help facilitate the identification of potential issues, and accelerate the process of fixing unexpected problems before they can affect your users.

OK, folks that’s it for this post. Have a nice day guys…… Stay tuned…..!!!!!


Troubleshooting Network & Computer Performance Problems

Problem solving is an inevitable part of any IT technician’s job. From time to time you will encounter a computer or network problem that will, simply put, just leave you stumped. When this happens it can be an extremely nerve-wracking experience, and your first instinct might be to panic.

Don’t do this. You need to believe that you can solve the problem. Undoubtedly you have solved computer performance or network troubles in the past, either on your job or during your training and education. So, if you come across a humdinger that, at first glance at least, you just can’t seem to see a way out of, instead of panicking, try to focus and get into the ‘zone’. Visualize the biggest problem that you’ve managed to solve in the past, and remember the triumph and elation that you felt when you finally overcame it. Tell yourself, “I will beat this computer,” get in the zone, and prepare for battle.

Top 3 Computer & Network Issues You’re Likely To Experience

Network staff and IT security personnel are forever tasked with identifying and solving all manner of difficulties, especially on large networks. Thankfully there are, generally speaking, three main categories that the causes of these issues will fall into. These are: Performance Degradation; Host Identification; and Security.

Let’s take a closer look at each of these categories.

1. Performance Degradation

Performance degradation is when speed and data integrity start to lapse, normally due to poor-quality transmissions. All networks, no matter their size, are susceptible to performance issues; however, the larger the network, the more problems there are likely to be. This is due mainly to the greater distances involved and the additional equipment, endpoints, and midpoints.

Furthermore, networks that aren’t equipped with an adequate number of switches, routers, domain controllers, etc. will inevitably put the whole system under severe strain, and performance will suffer.

So, having an adequate amount of quality hardware is of course the start of the mission to reduce the risk of any problems that you may encounter. But hardware alone is not enough without proper configuration – so you need to get this right too.

2. Host Identification

Proper configuration is also key to maintaining proper host identification. Networking hardware cannot deliver messages to the right places without correct addressing. Manual addressing can be configured for small networks, but it is impractical in larger organizations. Domain controllers, DHCP servers, and their addressing protocols and software are absolutely essential when creating and maintaining a large, scalable network.

3. Security

Host identification and performance tuning will not make any difference to a network that finds itself breached by hackers. And so, security is also of utmost importance.

Network security means preventing unauthorized users from infiltrating a system and stealing sensitive information, maintaining network integrity, and protecting against denial-of-service attacks. Again, these issues all magnify in line with the size of the network, simply because there are more vulnerable points at which hackers may try to gain access. On top of this, more users mean more passwords, more hardware, and more potential entry points for hackers.

Your defenses against these types of threats will of course be firewalls, proxies, antivirus software, network analysis software, stringent password policies, and procedures that adequately compartmentalize large networks within internal boundaries – plenty of areas, then, which may encounter problems.

Troubleshooting the Problems

Ok, so those are the potential difficulties you are most likely to encounter. Identifying the source of any given problem among all of these things can of course cause a lot of stress for the practitioner tasked with solving it. So, once you’ve got into the ‘zone’, follow these five simple problem-solving strategies and you’ll get to the bottom of the snag in no time. Just believe.

1. Collect Every Piece of Information You Can

This means writing down precisely what is wrong with the computer or network. This very simple act starts to trigger your brain into searching for potential solutions. Draw a diagram to sketch out the problem as well; it will help you visualize the task at hand.

Next, ask around the office to find out if anything has changed recently – any new hardware, for instance, or any new programs that have been added. If something has changed, try the simple step of reversing the engines first: revert everything back to how it was and see if that fixes things.

One of the best troubleshooting skills that you can have is pattern recognition. So, look for patterns in scripts, check for anything out of the ordinary. Is there a spelling mistake somewhere? A file date that is newer than all the rest?
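The "file date newer than all the rest" check can be automated: list the most recently modified files under a directory and eyeball the top of the list. This sketch uses only the standard library; scanning the current directory is just an example starting point:

```python
# Sketch: surface the most recently modified files under a directory, a
# quick way to spot what changed just before a problem appeared.
from pathlib import Path

def newest_files(root, n=5):
    """Return the n most recently modified files under root."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    files.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return files[:n]

for p in newest_files("."):          # inspect the current directory
    print(p, p.stat().st_mtime)
```

Anything sitting at the top of this list with a timestamp matching the onset of the problem is a prime suspect.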

2. Narrow the Search Area

Firstly, figure out whether the problem is hardware or software. This will cut your search down by half immediately.

If it’s software, try to work out the scale of the problem – which programs are still running and which are not? Try uninstalling and then reinstalling the suspected program.

If it’s hardware, then try swapping the suspect component in question with something similar from a working machine.

3. Develop a Theory

Make a detailed list of all the possible causes of the problem at hand, and then ask yourself very seriously, using all of your experience, which one it is most likely to be. Trust your instincts, and also turn to the internet. The likelihood is that someone somewhere has encountered this very thing before, and may well have posted about it in a blog or forum. If you have an error number, that will improve your chances of finding a reference. From here, you are in the perfect position to start the process of trial and error.

4. Test Your Theories Methodically

The best troubleshooters test one factor at a time. This takes discipline, but it is essential in order to be thorough. Write down every single change that you make, keep listing potential causes as they occur to you, as well as possible solutions, and keep drawing diagrams to help you visualize the task.

5. Ask For Help!

Seriously, there is no shame in it, so don’t get precious. Figure out who the best person would be to solve the problem and get in touch with them. Send out emails, post to forums, call an expert, or contact the manufacturer. Do whatever it takes. It’s all part of the troubleshooting process, and you need to know when you require assistance.





What Are the Differences Between Routers and Switches?

In this article I will talk about the differences between two of the most common networking devices: routers and switches. You may already be somewhat familiar with these devices, even if you don’t work in an IT department. Home internet connections have become so common these days that we are practically addicted to them without even realizing it. Because technology has evolved so fast, newer, faster, and cheaper networking devices have been developed to fulfill our needs. Many of you may own a router to connect to the Internet. If you are an IT professional, you probably know how network devices work, but to a casual user these things may sound a bit like science fiction. If you’ve ever been curious about how routers and switches work, this is a perfect opportunity to learn about their role and functionality.

Protocol Stacks and Layers

Two main protocol stacks are used in today’s communications: OSI and TCP/IP. These models define the rules that govern data communications inside computer networks. Each stack is divided into several layers, and each layer is independent and plays an important, unique role in communications.

After you have a general idea of protocol stacks, you can identify the layer at which each networking device works. Based on a defined set of rules, both switches and routers make decisions on how and where data should be forwarded. Routers are also called layer 3 devices, while switches are layer 2 devices. But how did we get to this idea, and what is defined by each layer? The network layer (as it’s named in the OSI stack; the Internet layer in the TCP/IP model) is where routers make decisions based on information gathered from the network. The IP (Internet Protocol) was developed as the central piece of data transmission. There is much more to say about this layer, but it is not the main topic of this article.

IP Addresses

An IPv4 address is a 32-bit value used to identify a certain machine. Whenever data is sent between networking devices, it must be segmented into smaller pieces for easier manipulation and transmission. At the network layer, these pieces are called packets. Each packet carries all the elements needed for communication between devices. Layer 3 is responsible for the logical transmission between two devices. It’s called logical because even if the devices are not physically connected, at layer 3 the transmission is seen as an end-to-end communication. Source and destination IP addresses identify each machine involved, and based on this information routers make forwarding decisions. All routing information is stored in routing tables.
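A router’s forwarding decision boils down to a longest-prefix-match lookup in its routing table, which can be sketched with Python’s standard ipaddress module. The routes and next-hop addresses below are invented for illustration:

```python
# Sketch of a layer 3 forwarding decision: longest-prefix-match lookup in
# a routing table. Routes and next hops are invented for illustration.
import ipaddress

routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "192.168.1.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.1.2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.168.1.254"),  # default route
]

def next_hop(dst):
    """Pick the matching route with the longest prefix."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routing_table if dst in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(next_hop("10.1.2.3"))    # the /16 route wins over the /8
print(next_hop("8.8.8.8"))     # only the default route matches
```

The /16 route beats the /8 for addresses inside 10.1.0.0/16 because it is more specific, and the 0.0.0.0/0 default route catches everything else.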

What are Switches?

Switches are layer 2 devices because they make decisions based on the physical address (also known as the MAC address – Media Access Control). In the OSI stack, this layer is known as the Data Link Layer. Each physical device uses its MAC address to uniquely identify itself in a computer network (no two devices should share the same MAC address). Switches communicate with each other using physical addresses, and to exchange information they also use broadcast and ARP mechanisms. The PDU, or protocol data unit, defined at the Data Link Layer is the frame. A frame contains all the information involved in a layer 2 transmission; it is formed by adding a header (containing the source and destination MAC addresses) and a trailer (error checking and other information) to a packet. This mechanism is known as encapsulation.

The Wikipedia article on Ethernet frames shows how they are laid out in detail. Switches store their layer 2 information in MAC address tables, which hold bindings between MAC addresses and switch ports. The concept is simple: when a frame is received, the switch records the frame’s source MAC address against the ingress port (adding a new entry if it isn’t already in the table), then checks the frame’s destination MAC address. If the destination is found in the table, the frame is forwarded through the corresponding interface directly to the destination machine. If not, the switch floods the frame out all its interfaces except the one it was received on. Flooding is an important aspect of switch behavior, and remember that switches will forward broadcasts while routers will block them.
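The learn-then-forward-or-flood behavior described above can be simulated in a few lines. The MAC addresses and port numbers are invented for the example:

```python
# Minimal simulation of switch behavior: learn the source MAC on the
# ingress port, then forward out the known port or flood. MAC addresses
# and port counts are invented for the example.

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}          # MAC address -> port binding

    def receive(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port          # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # forward directly
        # unknown destination: flood on all ports except the ingress one
        return [p for p in range(self.ports) if p != in_port]

sw = Switch(ports=4)
print(sw.receive("aa:aa", "bb:bb", 0))   # unknown dst -> flood
print(sw.receive("bb:bb", "aa:aa", 2))   # aa:aa already learned
```

The first frame is flooded because bb:bb is unknown; the reply goes straight out port 0 because the switch learned aa:aa’s location from the first frame.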

Differences Between Switches and Routers

You may already know that routers define broadcast domains while switches define collision domains. A broadcast domain is bounded by a single physical interface on a router. We say that switches segment collision domains because, unlike hubs, each switch port defines a separate communication channel. In these channels collisions do not occur, and transmission is made in full duplex mode (sending and receiving can happen at the same time).

Another difference between these devices is that routers usually have a lower port density than switches. So why use routers when switches offer more ports? Because each router port connects a different network, transmission between routers uses the full available speed of the physical port, whereas on a switch the available capacity is shared among all the transmitting ports. So even though a router has fewer ports, each port forwards data at its highest available speed. This is why routers are used when sending data between two distant networks.

Switches are used to create LANs while routers are used to interconnect LANs. A group of interconnected LANs is known as a WAN (Wide Area Network).

Routers and switches can use different types of ports. Besides the usual FastEthernet, fiber, or serial ports, they can also be equipped with console or aux ports and other special interfaces. Some advanced networking devices are modular, meaning their configuration can be changed even while the device is powered on, reducing downtime. Modular devices are often redundant as well, meaning they have two or more components with the same functionality. Such network devices are expensive and are usually used by large enterprises or ISPs. Remember that the cost of a network device can vary from tens of dollars to many thousands.

Unlike switches, routers can also support additional services like DHCP, NAT, or packet filtering. These services can be activated using the router's GUI or its command line. Network devices use different technologies to support their functionality: for example, switches use VLANs, STP, or VTP, while routers use dynamic routing protocols, VLSM, or CIDR.

I hope all the important aspects of these two network devices have been pointed out. If you think that there is more to be added here don’t hesitate to leave a comment.

OK folks, that's it for this post. Have a nice day, and stay tuned!

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!



A core component in your company’s move to the Amazon Web Services cloud is the design of Amazon Virtual Private Cloud (VPC) network resources. Although AWS provides a VPC network design wizard, issues such as IP address range selection, subnet creation, route table configuration and connectivity options must be carefully evaluated.
Thinking through such issues ensures a smoother transition from in-house infrastructure while reducing the risk of time-consuming backtracking of your cloud architecture.

Single versus Multiple AWS Accounts

Before designing your VPCs, consider how many AWS accounts you will deploy into.

In some situations a single AWS account may be sufficient, for example when using AWS for disaster recovery or as a development sandbox. In many other situations, however, multiple AWS accounts should be considered.

For instance, you may wish to separate development and testing environments into one account and place a production environment into a second account. Additional AWS accounts may be created to reflect the organizational structure of larger enterprises. Multiple accounts can also be used to separate workloads based on security requirements, such as isolating PCI-compliant workloads from those that are less sensitive.


Single versus Multiple VPCs

Choosing whether your AWS infrastructure utilizes a single VPC or several VPCs is not a straightforward decision.

Using multiple VPCs provides for better isolation between the systems, contains the scope of security audits, and limits “blast radius” in case of an operator error or security breach. However, multiple VPCs increase the complexity of network topology, routing, and connectivity between the VPCs and on-premise data centers.

Using a single VPC simplifies the networking and connectivity but makes it harder to isolate workloads from one another. With a single VPC isolation of workloads, user accounts and network access leans heavily on the use of AWS Security Groups (SGs) and Network Access Control Lists (NACLs). The likelihood of running into AWS limits related to SGs and NACLs is higher in this scenario.

If you use multiple VPCs, consider how they should be isolated from each other. A dedicated VPC is also appropriate for shared infrastructure tools such as authentication stores, management tools, or common entry points (e.g., bastion servers).

Single vs Multiple Region Deployments

AWS regions by design are isolated from each other, which means that virtual networks are also inherently separated. For most uses, a single-region network configuration is sufficient. However, in circumstances that require low latency for active processing workloads in globally shared configurations, there are additional factors to weigh.

When evaluating interconnection between regions, scrutinize whether you can deploy to multiple regions within an isolated architecture or if a content delivery network (CDN) solution such as CloudFront meets your needs.


VPC Subnetting

There are a number of factors with respect to VPC subnetting that must be taken into account:

  • High availability of AWS Managed Services, such as RDS, is achieved by using multiple subnets in multiple AWS Availability Zones
  • AWS subnets cannot be resized
  • AWS subnets can either share a route table or have independent route tables assigned
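Because subnets cannot be resized, it pays to plan the CIDR carving up front. The sketch below uses Python's standard `ipaddress` module; the VPC CIDR and Availability Zone names are illustrative, not a recommendation.

```python
import ipaddress

# Hypothetical VPC CIDR block; AZ names are example values.
vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

# Carve the VPC into /24 subnets and assign one per Availability Zone,
# the pattern AWS managed services like RDS expect for high availability.
subnets = list(vpc.subnets(new_prefix=24))[:len(azs)]

for az, subnet in zip(azs, subnets):
    print(az, subnet)
# us-east-1a 10.0.0.0/24
# us-east-1b 10.0.1.0/24
# us-east-1c 10.0.2.0/24
```

Doing this arithmetic before creating any subnets makes it easy to reserve spare /24 ranges for future tiers, since the remaining 253 subnets of the /16 stay untouched.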

If your subnet IP space may not meet your future needs, consider adopting AWS's newly added IPv6 support from the start.

Network Connectivity Options

VPC Peering

Peering allows communication between VPCs over a virtual connection using private IPv4 or IPv6 addressing. This feature enables cross-account connections within a single AWS region and facilitates resource sharing between two or more VPCs, although it does not allow transitive peering relationships.


VPN Connections

AWS provides several flavors of VPN connectivity depending on your needs. These are used to connect your VPCs to remote networks such as your corporate intranet:

  • AWS managed hardware VPN – A high-availability, redundant IPsec connection compatible with major vendors’ routers
  • Customer managed software VPN – Consists of an EC2 instance within a VPC running a software VPN appliance obtained from a third-party
  • AWS VPN CloudHub – For connection to multiple remote networks

AWS Direct Connect

AWS Direct Connect provides a dedicated physical connection for high-performance and high-reliability connectivity between AWS and on-premises data centers. Often, VPNs are configured over Direct Connect connections.


Choosing the correct VPC architecture for your cloud migration is a critical first step in moving to the cloud. "Re-dos" are unfortunately common when poor system partitioning, CIDR sizing, or VPC options lead to hard-to-manage, insecure, or inefficient cloud infrastructure.


How Can Server Monitoring Improve Performance?

It's important to keep a careful watch over a company server, as misuse of this technology can lead to data loss and incur financial costs. Server monitoring tools give administrators an easy way to stay vigilant: they provide alerts and keep the administrator up to date on any problems, potential or current.

A network monitoring tool is a powerful application that can monitor bandwidth, availability, and server performance.

Server Systems

First, let's consider what a server is. A server is a computer system that provides services over a network; it is a collection of hardware and software working together for effective communication between computers. Web server monitoring is carried out through software that checks the working condition of the server and relays messages regarding CPU usage, network performance, and the health of the remaining disk space. Server monitoring can include many additional features, such as alerting and benchmarking.
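A minimal sketch of the kind of health data such software relays can be written with only Python's standard library. The 80% disk threshold below is an arbitrary example value, not a recommended setting.

```python
import os
import shutil

def disk_usage_percent(path="/"):
    """Percentage of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100 * usage.used / usage.total

def health_report(path="/", disk_threshold=80.0):
    """Collect a tiny health snapshot: disk usage and (on Unix) load average."""
    report = {"disk_percent": round(disk_usage_percent(path), 1)}
    # getloadavg() only exists on Unix-like systems, so guard the call.
    if hasattr(os, "getloadavg"):
        report["load_1min"] = os.getloadavg()[0]
    report["disk_ok"] = report["disk_percent"] < disk_threshold
    return report

print(health_report())
```

A real monitoring agent would collect such snapshots on a schedule and raise an alert whenever a threshold like `disk_ok` flips to false.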


Server monitoring can be divided into several categories.

Firewall monitoring

It's important to maintain a close watch on the security of your firewall, and monitoring tools can be used to perform this task. These tools are equipped with a number of different sensors and make the process of firewall monitoring easy. By monitoring a firewall carefully you can see exactly what data is flowing in and out of the system.

System security is greatly improved, as any malware attempting to gain access is automatically detected and a warning message appears. Monitoring tools ensure that you are in control of your internet usage and will indicate the top connections, top talkers, and top protocols.

Bandwidth monitoring

To monitor bandwidth usage on a server, monitoring software needs to be used. It identifies the actual problems affecting a network, helping administrators work on the problem rather than spending time trying to identify it. This saves time and ensures more effective bandwidth management.

Monitoring bandwidth usage keeps track of consumption on leased lines. It is responsible for monitoring network connections, tracing usage trends, and measuring the bandwidth used for billing purposes. This monitoring contributes to decisions about router traffic balancing, and it will warn the administrator if any anomalies in network load are identified.
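On Linux, the raw per-interface byte counters such tools read are exposed in /proc/net/dev. The sketch below parses a made-up sample in a simplified version of that layout (the real file has many more columns); it is an illustration of the parsing idea, not a drop-in agent.

```python
# Simplified, made-up sample in the spirit of /proc/net/dev:
# interface name, then rx bytes, rx packets, tx bytes, tx packets.
SAMPLE = """Inter-|   Receive                |  Transmit
 face |bytes packets|bytes packets
  eth0: 1000000 8000 500000 4000
    lo:  200000 1500 200000 1500
"""

def parse_counters(text):
    """Return {interface: {'rx_bytes': ..., 'tx_bytes': ...}}."""
    counters = {}
    # Skip the two header lines, then split each row at the colon.
    for line in text.splitlines()[2:]:
        name, data = line.split(":")
        fields = data.split()
        counters[name.strip()] = {"rx_bytes": int(fields[0]),
                                  "tx_bytes": int(fields[2])}
    return counters

print(parse_counters(SAMPLE)["eth0"])
# {'rx_bytes': 1000000, 'tx_bytes': 500000}
```

Since these counters are cumulative, actual bandwidth usage is computed as the difference between two readings divided by the time between them.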

Router monitoring

Routers are relatively expensive parts of a server system, which calls for effective monitoring to avoid incurring losses. By monitoring routers, administrators can track bandwidth and avoid oversubscription that could lead to paying for more than is needed. Careful bandwidth monitoring also helps administrators avoid congestion and other related network problems.

Monitoring a router allows bandwidth allocations to be optimized more effectively, ensuring that large networks run smoothly. Router monitoring entails spotting a problem and initiating upgrades, or even replacements if necessary. Importantly, the administrator is made aware of traffic trends, which in turn gives them the ability to plan for capacity and achieve the best ROI possible.

Switch monitoring

To avoid negative effects or failures on the LAN, protective switch monitoring systems should always be in place. These systems monitor port utilization and traffic, provide alerts if anything goes wrong, and detect potential problems before they occur.

The switch monitoring system will alert administrators if a port is heavily used and direct traffic to a more underutilized port. To prevent data loss, the administrator is notified if a port starts to discard packets, and switch port mappers are used to quickly check the status of any device connected to the port in question.

NetFlow monitoring and packet sniffing

It's important for an administrator to know how their bandwidth is being used, who is using it, and why. This is where NetFlow comes in: it keeps administrators up to date about who is using what on the network, and shows how current usage may affect it. If the administrator discovers that the bandwidth is overstretched and overused, they can take the necessary steps to negate the danger.

Packet sniffing can also be used when monitoring a network, as it captures and records data flow. This allows every single packet to be inspected and analyzed against predefined parameters. It extends standard bandwidth monitoring, and the sensors make use of the host machine's network cards.

Network and VoIP monitoring

Network monitoring software plays an important role by informing the administrator before a failure or malfunction occurs within a network. This lets the administrator take pre-emptive measures to prevent future faults, which in turn lowers the cost of repairing any issues. Effective network monitoring increases the efficiency of a network by keeping track of bandwidth and data consumption.

Network monitors are easy to install and just as easy to use. They support remote control, notification techniques, and multiple-location monitoring. VoIP monitors include powerful QoS sensors that can measure jitter, latency, and packet loss. With a good network monitoring tool, an administrator can be informed of data usage instantly and can be alerted if the monitoring tools detect a deterioration in quality.

These days, malware attacks are common and the need for network monitoring is more pressing than ever. Protecting data, especially that of customers, is vital to ensure that a business performs well in the modern connected age.

Is there anything else you monitor that you feel is critical to an IT department? If so, tell us about it in the comments below! And be sure to share this post with friends if you find it useful!


What is IPv6 and an IPv6 Address?


You are probably aware that today's communications rely very heavily on the Internet Protocol. An IP address is a unique identifier for each machine connected to the Internet. With the ever-growing size of the Internet, the demand for IP addresses has increased drastically. IPv4 supports 2 to the power of 32 (about 4.3 billion) addresses. That may sound like plenty of space to accommodate all the devices in the world, but the truth is that very few free address blocks remain, which means that in the coming years, or even earlier, the whole IPv4 space will be exhausted.

This raises a huge problem for the Internet's expansion, meaning that new technologies must be adopted to overcome the situation. This is why IPv6 was developed and has been implemented in some parts of the world. An IPv6 address is a 128-bit binary value represented by 32 hexadecimal digits. The IPv6 block is large enough to assign trillions of addresses to everyone in the world.
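To put the two bit widths in perspective, the address-space sizes can be computed directly:

```python
# Total addresses supported by each protocol, from the bit widths above.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"{ipv4_total:,}")        # 4,294,967,296
print(ipv6_total > 10 ** 38)    # True: roughly 3.4 x 10^38 addresses
```

The jump from roughly four billion to about 3.4 undecillion addresses is what makes per-device global addressing practical again.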

Plug and Play

There are some real advantages to using IPv6, and this article will cover the most important aspects of the protocol. IPv6 supports autoconfiguration, meaning that no human interaction is needed to configure an address. This provides a good plug-and-play mechanism for all network devices, along with global coverage and flexibility. With enough IP addresses for everyone, services like NAT (network address translation) are no longer required.

This means that the transition from private networks to public ones and vice versa happens instantly, without address translation. IPv6 supports multi-homing, in which a physical device can hold multiple IP addresses. IPv6 also has a simpler header than IPv4, which brings real advantages such as increased routing performance and no broadcasts or header checksums. IPsec support is built into IPv6, which provides increased security and easier portability for mobile users (they can change networks without reduced security).

Converting to IPv6 Will Take Some Time


The Internet's core communications will change to support the IPv6 protocol. To carry out this transition successfully, the whole operation must be transparent to end users, without any downtime. The process will take time and will not happen overnight, so IPv4 and IPv6 will function at the same time. The migration between the two technologies will use transition mechanisms such as 6to4 tunneling and dual-stack routing.

I've written earlier that IPv6 addresses are composed of 128 bits represented as 32 hexadecimal digits, and each hexadecimal digit converts to a 4-bit binary string. Let's take the following example: 2a80::9c62:ef40:769a:464b. The address contains a combination of numbers and letters; remember that in hexadecimal numbering 10 is represented by the letter A, 11 by B, and so on. By converting each group of 4 hexadecimal digits we can obtain the binary value of the whole address. Notice the double colon (::). This notation stands for one or more consecutive all-zero groups needed to complete the full 128-bit address, so the same address could be written as 2a80:0000:0000:0000:9c62:ef40:769a:464b. Remember that :: can be used only once per address. For example, the IPv6 loopback address is written simply as ::1. We can also write a single group of four zeros as one 0, as in 01a3:0:25b6:9c62:ef40:769a:0:464b.
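Python's standard ipaddress module makes these expansion rules easy to check; a quick sketch using the same example address:

```python
import ipaddress

# The :: shorthand expands to the missing all-zero groups.
addr = ipaddress.ip_address("2a80::9c62:ef40:769a:464b")
print(addr.exploded)    # 2a80:0000:0000:0000:9c62:ef40:769a:464b
print(addr.compressed)  # 2a80::9c62:ef40:769a:464b

# The loopback address ::1 is all zeros except the final bit.
print(ipaddress.ip_address("::1").exploded)
# 0000:0000:0000:0000:0000:0000:0000:0001
```

The `exploded` form always shows all eight 4-digit groups, which is handy when comparing addresses written with different amounts of compression.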

There are several types of IPv6 addresses:

Private addresses: these are reserved IP addresses used inside organizations and are not routable on the Internet. These IPv6 addresses start with FE followed by a digit in the 8-to-F range. They are divided into two categories:

Link-local addresses – IPs used for communication inside a physical network segment. They are not routable and are used for special tasks such as neighbor discovery and autoconfiguration. These addresses start with FE followed by a value from 8 to B, for example fe80::9c62:ef40:769a:464b.

Site-local addresses – although site-local addresses are deprecated, they are still worth knowing about. These are IPv6 addresses similar to IPv4 private IPs, and they begin with FE followed by a value from C to F.

Loopback address – we've talked about this earlier: an IPv6 address used for testing purposes. Pinging this IP redirects the traffic back to the same machine (looping back the traffic).

Reserved addresses – these IPs are not leased to anyone and are reserved for future development.

Global unicast addresses – IPs allocated to the five global RIRs. From Wikipedia:

“A regional Internet registry (RIR) is an organization that manages the allocation and registration of Internet number resources within a particular region of the world. Internet number resources include IP addresses and autonomous system (AS) numbers”.

Unspecified address – ::/128, the address with all bits set to 0 (corresponding to 0.0.0.0 in IPv4).
“This address must never be assigned to an interface and is to be used only in software before the application has learned its host’s source address appropriate for a pending connection. Routers must not forward packets with the unspecified address.
Applications may be listening on one or more specific interfaces for incoming connections, which are shown in listings of active internet connections by a specific IP address (and a port number, separated by a colon). When the unspecified address is shown it means that an application is listening for incoming connections on all available interfaces.”

Multicast addresses – though broadcasting is not implemented in IPv6, multicast addresses are used for several operations. In multicasting, one packet is transmitted from one node to several network devices at the same time using a single IPv6 address. Because IPv6 supports stateless address configuration, devices use a combination of multicast addressing, the Neighbor Discovery Protocol, and ICMPv6 to acquire a network address.
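The ipaddress module can also classify the address types described above; a quick sketch with illustrative example addresses:

```python
import ipaddress

examples = [
    "fe80::9c62:ef40:769a:464b",  # link-local (fe80::/10)
    "::1",                        # loopback
    "ff02::1",                    # multicast: all nodes on the link
    "2a80::1",                    # global unicast
]

for text in examples:
    a = ipaddress.ip_address(text)
    print(text, "link-local:", a.is_link_local,
          "loopback:", a.is_loopback, "multicast:", a.is_multicast)
```

These boolean properties mirror the prefix rules in the text: anything under fe80::/10 reports as link-local, and all ff00::/8 addresses report as multicast.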

I've written earlier that the transition from IPv4 to IPv6 will not happen overnight, so network devices will need to support both protocols at the same time. For this reason, new technologies were developed to support the transition:

Dual stack is one example of a transition mechanism, in which network devices support both IPv4 and IPv6 over the same network segment. These devices run two protocol stacks, one for IPv4 and one for IPv6; based on the destination address, a router chooses which stack to use.
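The stack-selection decision can be modeled in a toy Python sketch; this is only an illustration of the idea, and the addresses are example values.

```python
import ipaddress

def choose_stack(destination):
    """Pick the protocol stack from the destination address family,
    as a dual-stack router would."""
    version = ipaddress.ip_address(destination).version
    return "IPv6 stack" if version == 6 else "IPv4 stack"

print(choose_stack("192.0.2.10"))                 # IPv4 stack
print(choose_stack("2a80::9c62:ef40:769a:464b"))  # IPv6 stack
```

A real dual-stack device makes this choice per packet, consulting separate IPv4 and IPv6 routing tables once the family is known.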

IPv6 tunneling mechanism – IPv6 packets are encapsulated into IPv4 packets. The packet's header changes a little: it contains an identifier for the IPv6 packet, the IPv6 data, and an IPv4 header. This is not an efficient transmission mechanism, but it serves as an intermediate solution for transmitting IPv6 packets without replacing network devices. Tunneling can be configured in two ways: manual IPv6-to-IPv4 tunneling and dynamic 6to4 tunneling. Other well-known tunneling mechanisms include ISATAP and Teredo.

NAT Protocol Translation, or NAT-PT – in this technology, routers translate and forward packets between an IPv4 network and an IPv6 network and vice versa.

IPv6 is a complex Internet Protocol that will be adopted globally. There are many aspects to consider when implementing it, and I probably don't know them all. When using automatic IPv6 configuration, the MAC address of your physical interface is embedded in the IPv6 address to form a unique address. This raises a big security concern, because MAC addresses can be discovered by malicious users and users' actions can be tracked over the Internet. I don't know how this problem is resolved in IPv6, so please add anything you think is relevant to this topic.

If you've enjoyed this post, please share it with others, and stay tuned!