Your First Day on AWS: 10 Pitfalls and How to Avoid Them

Amazon Web Services is a great cloud platform for small companies and startups, as well as large enterprises. AWS has an amazing array of services that cover all the infrastructure requirements for running all types of computational work. AWS is also extremely scalable and configurable, making it the right type of environment for small and new companies that don’t quite know the dimensions of their future growth and the attendant increase in their IT infrastructure requirements.

While AWS offers a superb and economical Infrastructure-as-a-Service (IaaS) platform that lets you provision vast amounts of infrastructure components and services at the click of a button, many users, especially those who are new to AWS, often don’t understand how to harness AWS’s capabilities to capture the maximum benefit for their organizations.

They also aren’t aware of the security and cost implications of running applications and storing data in the AWS cloud. Often, companies are surprised to find that their costs are much higher than they had originally envisaged, because they were unaware of how AWS billing works.

This article summarizes the common pitfalls when you work with AWS, and how to work proactively so as to prevent major issues when you’re ramping up your usage of AWS services.

1. Reduce Your Exposure to Outages

AWS is extremely reliable, but it's not perfect. There have been well-publicized outages in which companies discovered that the Amazon data center hosting their data had suffered a major network issue, resulting in lost business and goodwill.

By using a simple strategy of spreading your workload across multiple data centers within an AWS region through AWS’s Availability Zones, you reduce your risk of outages. Not only that, you also get to load balance your workload across multiple servers.

Use horizontal partitioning to implement redundancy at each tier of an application. This involves running a minimum of two instances of each application tier. To make horizontal partitioning even more robust, place the redundant instances in different Availability Zones to avoid application failures and downtime due to data center failures, which are all but inevitable over the long run.
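The placement idea can be sketched in a few lines of Python. The zone names and instance counts below are illustrative, and this is not an AWS API call, just the round-robin logic:

```python
# Sketch of round-robin placement across Availability Zones.
# Zone names are illustrative; this is not an AWS API call.

def spread_across_zones(num_instances, zones):
    """Assign each instance index to a zone, round-robin."""
    if num_instances < 2 or len(zones) < 2:
        raise ValueError("run at least two instances across at least two zones")
    return {i: zones[i % len(zones)] for i in range(num_instances)}

placement = spread_across_zones(4, ["us-east-1a", "us-east-1b"])
# With four instances in two zones, each zone holds two, so losing
# one data center still leaves the tier serving traffic.
```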

2. Avoid Problems through Efficient Monitoring

It’s always better to catch problems before they develop into full-blown crises. Many new AWS users don’t take advantage of AWS’s CloudWatch service, which can monitor various AWS resources such as EC2 instances and EBS volumes. Basic monitoring is free, and more detailed monitoring is inexpensive.

Treat monitoring as being as important as application design, and integrate rigorous monitoring into your application right from the outset. Proactively avoiding disasters is far better than constantly fighting fires!
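To make this concrete, here's a minimal sketch (plain Python, not the real CloudWatch API) of the basic shape of alarm evaluation: a metric must breach its threshold for several consecutive evaluation periods before an alarm fires.

```python
def alarm_fires(datapoints, threshold, evaluation_periods):
    """True when the most recent `evaluation_periods` datapoints all
    exceed `threshold` -- the basic shape of a CloudWatch-style alarm."""
    recent = datapoints[-evaluation_periods:]
    return len(recent) == evaluation_periods and all(d > threshold for d in recent)

cpu = [35, 40, 82, 85, 91]  # percent CPU, illustrative samples
alarm_fires(cpu, threshold=80, evaluation_periods=3)  # -> True
```

Requiring several consecutive breaches keeps a single noisy datapoint from paging you at 3 a.m.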

3. Set Up Proper Alerts

The Simple Notification Service (SNS) can deliver alerts to you via email, SMS, and HTTP. You can also hook CloudWatch up to SNS so that CloudWatch alarms are delivered directly to you.

To use SNS, you first create a topic within your AWS account and subscribe your endpoints to it, so it can start sending you notifications.
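The topic-and-subscription model behind SNS is easy to picture. This toy Python sketch (no real SNS calls, made-up endpoints) shows the fan-out idea:

```python
class Topic:
    """Toy model of an SNS topic: subscribers register endpoints,
    and every published message is fanned out to all of them."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message):
        # In real SNS, delivery happens over email, SMS, HTTP, etc.
        return [(endpoint, message) for endpoint in self.subscribers]

alerts = Topic("ops-alerts")
alerts.subscribe("ops@example.com")
alerts.subscribe("+1-555-0100")
deliveries = alerts.publish("CPU alarm on i-abc123")
# Both endpoints receive the same message.
```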

4. Prevent Waste through a Continuous Utilization Review

It’s amazing how many AWS users fail to stay on top of their resource usage. Many users end up underusing, or not using at all, the AWS resources they’ve paid for. AWS keeps billing you for the resources you’ve requested, regardless of whether you’re fully using them.

Here are some strategies to prevent wasting your AWS resources and to cut your spending on AWS services:

  • Use AWS EC2 reserved instances to cut the computing costs of your applications.
  • Diligently review your AWS bills to ensure that you’re not unwittingly “using,” and hence getting billed for, resources or applications you aren’t aware of.
  • During application design, take care that you can add or remove resources from an application so as to maintain a high resource utilization rate.
  • Use either AWS’s Trusted Advisor service or a commercial resource utilization and cost tracking service to help you control your AWS costs.
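A utilization review can start as simply as flagging anything below a usage floor. A hedged sketch, with made-up resource IDs and utilization figures:

```python
def flag_underused(utilization_by_resource, floor=0.2):
    """Return IDs of resources whose average utilization is below `floor`.
    Resource IDs and numbers here are illustrative."""
    return [rid for rid, u in utilization_by_resource.items() if u < floor]

flag_underused({"i-abc123": 0.05, "i-def456": 0.85, "vol-789": 0.0})
# -> ["i-abc123", "vol-789"]
```

Running a check like this against your billing or monitoring exports each week catches idle instances and orphaned volumes before they quietly accumulate.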

5. Avoid Over-Provisioning of Resources by Taking Advantage of Elasticity

It’s extremely easy to add AWS instances and other infrastructure components such as load balancers, but every piece of infrastructure you add increases your bill. It’s smarter to take advantage of AWS’s built-in elasticity features such as auto-scaling.

In order to reduce the workload of your operations groups, use AWS’s Auto Scaling groups so Amazon takes care of dynamically scaling your resource pools.
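The intuition behind target-tracking scaling can be sketched as follows. This mimics the idea, not AWS's actual algorithm, and the numbers are illustrative:

```python
import math

def desired_capacity(current, avg_cpu, target_cpu=0.5, min_size=2, max_size=10):
    """Size the pool so average CPU utilization approaches `target_cpu`,
    clamped to the scaling group's min/max bounds."""
    desired = math.ceil(current * avg_cpu / target_cpu)
    return max(min_size, min(max_size, desired))

desired_capacity(4, avg_cpu=0.9)  # overloaded: scale out to 8 instances
desired_capacity(4, avg_cpu=0.1)  # idle: scale in, clamped to the minimum of 2
```

The min/max clamp is what keeps elasticity from becoming either an outage (too few instances) or a runaway bill (too many).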

Remember that organizations routinely waste a large share of the processing capacity available through their pool of EC2 instances, and you end up paying for the unused capacity. EC2 is the largest component of total AWS spending for the vast majority of AWS users. Therefore, it pays to watch closely how you’re using EC2 instances and to optimize your usage of these resources.

6. Adopt the Most Suitable EC2 Pricing Model

As mentioned earlier, EC2 is in most cases the largest component of your total AWS expenses. EC2 has different pricing models: On-demand, reserved, and spot pricing.

Reserved instances are cheaper than on-demand instances, and instances acquired through spot pricing are the cheapest of all. Roughly three quarters of EC2 usage is through on-demand instances, with spot-priced instances making up less than 10% of the total. Smart planning can trim your total EC2 bill by 50-60 percent.

Most companies are leery of committing to reserved instances. However, if you plan to use a set of EC2 instances for longer than three months, you’re better off reserving them. Spot instances are where AWS lets you bid for its unused capacity. The spot pricing model can reduce total costs significantly, especially for handling spikes in business demand. There’s no real drawback to a spot-oriented strategy, since you can always fall back to regular on-demand instances if your spot bids don’t succeed.
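The reservation break-even point is simple arithmetic. The hourly rates and upfront fee below are made up for illustration; check current AWS pricing before committing:

```python
def break_even_months(on_demand_hourly, reserved_hourly, upfront, hours_per_month=730):
    """Months of continuous use after which a reserved instance
    becomes cheaper than paying the on-demand rate."""
    monthly_saving = (on_demand_hourly - reserved_hourly) * hours_per_month
    return upfront / monthly_saving

break_even_months(on_demand_hourly=0.10, reserved_hourly=0.04, upfront=131.40)
# -> 3.0, i.e. with these example rates, reserving pays off
#    after about three months of steady use
```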

7. Don’t Ignore the Services in the “Other” Category

Typically, the services listed under the “Other” category in AWS itemized bills account for anywhere between 15-20% of total spending on AWS services. While this seems small compared to what companies usually spend on EC2 (typically about 60-65 percent), these services, which include SQS and SNS, can add up to a sizable amount at the end of the month.

You must monitor your usage of the services in the “Other” category with the same diligence with which you track EC2 instance usage.

8. Don’t Leave the Lights On

New users are often beguiled by the ease with which they can commission new instances in AWS. A byproduct of this ease is that organizations often don’t keep track of how many instances they’ve started and how many unneeded instances are still running, adding to the AWS bill.

AWS is fully transparent regarding its charges. However, AWS has so many services that it’s hard to keep track of all the resources you’ve contracted for. Make sure you have a system in place that checks for unused instances and ensures they’re turned off when your applications finish using them.

9. Don’t Overdo the EBS Snapshots

Newcomers are often leery about losing their data in the cloud environment and hence create too many EBS (Elastic Block Store) snapshots. Over time, the accumulated snapshots add significantly to your storage costs while doing very little extra to protect your data.

Create a sensible EBS snapshot strategy at the outset, and ensure you create only a moderate number of them. While we’re on the topic of EBS, it may be worth noting that about a sixth of all EBS volumes aren’t attached to instances – but you do keep paying for the unattached volumes!
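A snapshot retention policy can be as simple as: keep the newest few snapshots plus anything recent, and treat the rest as deletion candidates. A sketch with illustrative dates and parameters:

```python
from datetime import date

def snapshots_to_delete(snapshot_dates, today, keep_latest=7, keep_days=30):
    """Keep the newest `keep_latest` snapshots plus anything taken within
    `keep_days` days; everything older is a deletion candidate."""
    ordered = sorted(snapshot_dates, reverse=True)
    keep = set(ordered[:keep_latest])
    keep |= {d for d in ordered if (today - d).days <= keep_days}
    return [d for d in ordered if d not in keep]

dates = [date(2024, 1, d) for d in (1, 5, 10, 15, 20, 25, 28, 30)]
snapshots_to_delete(dates, today=date(2024, 3, 1), keep_latest=3)
# Keeps the three newest (Jan 30, 28, 25); the older five are candidates.
```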

10. Secure your AWS System

Regardless of how good your applications are, you’re extremely vulnerable in the cloud without an iron-clad security strategy. You can start off with a strong security strategy by making sure your system is patched with all available updates so you’re protected against known security vulnerabilities.

Tighten access to your AWS account, so as to protect your AWS resources from accidental or intentional damage. You must also ensure that you control all network traffic to and from your Amazon EC2 instances. Make sure you take advantage of Amazon’s VPC (Virtual Private Cloud) capability to create private subnets in your network so that unauthorized outsiders can’t break into your systems from the internet.

That’s it! Follow these 10 basic steps when you start out with AWS in order to fully realize the potential of your investment in AWS as your cloud platform of choice.

By following the simple guidelines presented here, you’ll minimize your spending on AWS services. You’ll also be on your way to securing your network against intruders and security bugs. Finally, your system will be far more resilient to outages caused by various types of system failures.

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!


Free or Open Network Solutions


There are endless free and open source software solutions and services out there for use in the network—for small, medium, and enterprise environments. You aren’t limited to Microsoft, Cisco, and other commercial giants. You can save money by using free, open source, or less expensive solutions. Here I’ll share a couple you might consider.

Vyatta Enterprise-level Firewall/Router

Vyatta is a network operating system targeted at enterprise-level networks for cloud, virtual, and physical deployments. It offers the core network services: NAT, routing, DHCP, firewall, VPN, QoS, IPS, and more. It can be installed on any standard x86-based system; run under VMware, Citrix XenServer, Xen, and Red Hat KVM hypervisor environments; or run on Amazon VPC. The core open source software is free and comes with documentation, but isn’t recommended for critical networks. Paid subscriptions offer additional features, updates, add-ons, and support.

Endian Enterprise-level Firewall/Router

Endian is a Unified Threat Management (UTM) operating system, enabling you to turn any PC or server into a full-featured security appliance. In addition to providing the basic LAN services (NAT, firewall, DHCP, etc.), it also offers VPN, hotspot functionality, anti-virus, anti-spam, web security, and email content filtering. Like Vyatta, the free open source community version of the OS isn’t designed for critical networks.

RouterOS Enterprise-level Firewall/Router

RouterOS is a closed source Linux-based operating system (OS) designed to implement a router. It’s the same OS offered on the RouterBOARD hardware from MikroTik. However, the OS is freely downloadable and installable onto regular PCs and servers, turning them into enterprise-level routers. It offers all the network services for a LAN, including routing, firewall, bandwidth management, wireless access point, backhaul link, hotspot gateway, VPN server, and more. However, only basic functionality and limited use of some features is free. After the 24-hour fully functional trial, you can purchase a license starting at $45.

Untangle SMB-level Firewall/Router

Untangle is another network operating system, but is targeted more at small-to-medium sized businesses. It can be installed and run on a normal dedicated PC. It can provide the router (with NAT, firewall, DHCP, etc.) for your network and/or provide additional security and control, such as web and spam filtering, virus and spyware protection, a captive portal, and a VPN server. Paid premium services offer enhancements to these features plus additional user and bandwidth management functionality.

389 Directory Server

The 389 Directory Server, previously known as the Fedora Directory Server, is a free and open Linux-based enterprise-class LDAP server. It can serve as an alternative to, or a complement to, Microsoft’s Active Directory. It supports installation on Linux, Solaris, and HP-UX 11 systems. It’s highly reliable and scalable with multi-master replication. It features a graphical console for managing the server, users, and groups.

Citadel for Email, Calendar, and Collaboration

Citadel is a Linux-based open source groupware solution that manages and offers email, calendar, contact management, and other collaboration features to your users. It’s available as a Debian/Ubuntu package or as a VMware appliance. Users can access the services via a web-based interface or via supported clients, such as Outlook, Mozilla Thunderbird and Sunbird, Evolution, and KOrganizer.

SOGo for Email, Calendar, and Collaboration

SOGo is another Linux-based open source groupware solution, offering email, calendar, and contact management to your users. Binary packages are available for several Linux distributions, and it’s also available as a virtual appliance. Unlike most other open source groupware solutions, the connectors for Outlook and Thunderbird are available for free. Additionally, it supports all the major mobile smartphones and devices, either natively or via a free SOGo connector.

OpenDNS for Fast and Intelligent DNS and Content Filtering

OpenDNS is a third-party DNS service you can use instead of your ISP’s. It’s a fast and intelligent DNS service that can protect against DNS-based attacks and malware/botnet activity. Additionally, it can provide DNS-based content filtering to automatically block phishing, malware-infested, proxy, adult, and other dangerous sites. You can also view usage reports on Internet activity. It even features the ability to create shortcuts, or words you can type into a browser that will automatically point to a website or IP address.

FreeNAS for a Network File Server

FreeNAS is one of the most popular open source network-attached storage (NAS) software solutions. It installs onto a CompactFlash card, USB flash drive, or hard drive, or can be booted directly from a LiveCD. It offers sharing of attached drives via the following native protocols: SMB/CIFS (Windows), AFP (Apple/Mac), and NFS (Unix/Linux). Additional supported protocols include FTP, TFTP, RSYNC, Unison, iSCSI, and UPnP.

It supports advanced networking with VLAN tagging, link aggregation, and Wake-on-LAN (WoL). The monitoring features include S.M.A.R.T. (smartmontools), email alerts, SNMP, Syslog, and UPS support. You’ll also find extra services: a BitTorrent client (Transmission), UPnP server (FUPPES), iTunes/DAAP server (Firefly), web server (lighttpd), and network bandwidth measurement (Iperf).

Nagios for Network Monitoring

Nagios is a monitoring and alerting system that keeps tabs on your servers, switches, applications, and services. It features a web-based interface, email and SMS alerts, escalation capabilities, and event handlers to automatically restart failed services. Reports can provide records of alerts, notifications, outages, and alert responses. The solution is also highly customizable and extendable via add-ons, APIs, or code modifications. You can use the free open source version for solid monitoring, and a separate version is offered for enterprise-class environments.

DD-WRT Wi-Fi Router Firmware Replacement

DD-WRT is one of the most popular, feature-rich, and well-maintained open source firmware replacements for consumer-level wireless routers, and can also run on embedded systems and PCs. It provides the typical wireless router features in addition to some advanced or enterprise-level features, including VLAN support, multiple or virtual SSIDs, hotspot functions, and a VPN client and server. It also offers great customization, such as with the startup and firewall scripts. The wireless access point can also operate as a client, bridge, or repeater.

CoovaAP Wi-Fi Router Firmware Replacement for Hotspot Solutions

CoovaAP is an OpenWRT-based firmware replacement for wireless routers, specifically designed to implement a Wi-Fi hotspot. It includes the CoovaChilli access controller, an embedded captive portal, and features bandwidth traffic shaping. It supports a variety of hotspot schemes, including free access with Terms of Service agreement, commercial or paid access, and even WPA Enterprise security with RADIUS accounting.



Microsoft Azure Cloud

This article provides an overview of the services and features of the Microsoft Azure cloud solution.

What is Microsoft Azure?

Microsoft Azure is a public cloud platform based on Windows Server 2012 that offers on-demand, scalable hosted services for the deployment and execution of applications on off-premises infrastructure. Microsoft currently operates around a dozen data centers to support Microsoft Azure: four in the United States, two in Europe, two in Japan, two in Southeast Asia, and one in Brazil. Four more data centers are under development, including two in China and two in Australia. To provide redundancy, data centers operate in geographical pairs. Microsoft Azure is presently offered in 89 countries, with data hosted and replicated between the data centers in a client’s region. Microsoft Azure is supported by a high-speed fiber optic network that interconnects the infrastructure.

Microsoft Azure provides a range of features that are categorized into compute, data, app, and network services. Table 1 provides a breakdown of the features available in each category.


Compute Services: Cloud Services, Mobile Services, Virtual Machines, Web Sites

Data Services: SQL Database, Recovery Manager

App Services: Active Directory, BizTalk Services, Media Services, Multi-Factor Authentication, Notification Hubs, Service Bus, Visual Studio Online

Network Services: Traffic Manager, Virtual Network

Table 1: Microsoft Azure Services and Features

Microsoft Azure Compute

Azure Compute includes Cloud Services, Mobile Services, Virtual Machines, and Web Sites. With this selection of compute models, you have the ability to deploy anything from simple websites to complex multi-tiered applications without the cost associated with an on-premises infrastructure. You also gain the ability to scale Azure resources based on changing application performance demands.

Cloud Services

Azure Cloud Services allow you to build and deploy an application in the cloud using XML configuration files to define how your application should execute. By defining roles and resources for your application, you can run one or more instances of the roles, and have Azure replicate the role to run on multiple computers. Microsoft Azure supports Web (e.g., front end) and Worker (e.g., application logic) roles for applications.

Mobile Services

Azure Mobile Services provides a bundle of features that allow you to build mobile applications that deliver a common experience across Windows, Android, iOS, and HTML devices. These features include data storage, user authentication, and push notifications.

Data storage choices range from Azure SQL database to third party data services, and even on-premises databases for restricted or confidential data. Mobile applications that require cross-platform integration for game media and status data can use Microsoft Azure data storage from a multitude of devices, in addition to Windows-based devices.

User authentication features allow you to integrate with well-known systems like Facebook, Twitter, Microsoft, and Google accounts for authentication, and avoid writing custom code. However, application specific authentication systems are also possible, as well as authenticating to the Azure Active Directory.

Virtual Machines

Azure Virtual Machines provide you with the ability to deploy a variety of Windows and Linux guest operating systems in a virtualized server, eliminating the need to purchase, deploy, and maintain physical servers. You can run applications in multiple virtual machines and balance load traffic between them to tune performance. Because Azure virtual machines are Hyper-V based and use VHD and VHDX encapsulation of virtual machines, you can move virtual machines from on-premises to Azure and back, if required. When running in Azure, virtual machine performance can scale up or down based on real-time requirements.

Web Sites

Microsoft Azure Web Sites supports the development, deployment, management, and scaling of simple or complex websites. Provisioning a website can be done using the Azure Management Portal, an integrated development environment such as Visual Studio, scripting, and other common tools. Azure Web Sites supports custom domain names and SSL, and provides automatic scaling and load balancing.

Microsoft Azure Data Services

Microsoft Azure Data Services provide you with the ability to store, manage, and report on data in the Azure cloud.


Backup

Backup services offer the ability to perform full or incremental backups off-premises, instead of protecting important information using on-site backup media and off-site data transport and retrieval for recovery purposes. Azure backups use encrypted data transmission and encrypted data storage. Windows Server and System Center Data Protection Manager tools can be used to perform backups in Azure and provide a common experience for on- and off-premises backup procedures.


Cache

Azure Cache enables fast data access for high-performance applications by providing distributed, in-memory storage of critical data. Azure Cache reduces data roundtrips to back end data storage by updating the cache at set intervals, reducing application data access time.


HDInsight

HDInsight, which is based on Apache Hadoop, provides a solution for the distributed and scalable processing of large data sets. HDInsight integrates with Microsoft Business Intelligence tools to provide data analysis. With HDInsight, Hadoop clusters are deployed, provisioned, and decommissioned easily through PowerShell scripts. It is also possible to delete and recreate larger Hadoop clusters without data loss to ensure data set scalability.

SQL Database

Microsoft Azure offers SQL database as a service in multiple tiers that scale to application performance requirements. Application continuity options range from basic protection to geo-replication with failover control. Application data recovery is client controlled from existing data backups.


Storage

Microsoft Azure Storage options include block blobs for storage of text or binary data files, page blobs optimized for random access and frequent updates (such as VHDs), tables for storage of unstructured, non-relational data, and queues for reliable, asynchronous messaging. Azure Storage supports different options to transfer on-premises data to the cloud. StorSimple devices automate data uploads from an on-premises iSCSI storage array with a cloud-connected back end to Azure Storage. Another option is Azure Import/Export to move large data sets into and out of blobs using physical devices such as hard disk drives. Azure Storage also supports a command line utility, AzCopy, to transfer smaller or incremental data sets.

Microsoft Azure App Services

Microsoft Azure App Services provide a broad range of application support services including authentication, process automation, messaging, push notification, job scheduling, media delivery, BizTalk integration, and application development.

Active Directory

Azure Active Directory provides a tiered, cloud-based solution for user authentication and access management. You can integrate on-premises Active Directory with Azure to enable corporate credentials for authentication to cloud resources. Premium tier services include self-service password reset, self-service group management, group-based application access management, company branding, and security reports and alerts.


Automation

Azure Automation enables the orchestration of frequent tasks using runbooks to manage cloud resources such as Azure Web Sites, Cloud Services, Virtual Machines, Storage, and SQL Database. The Windows PowerShell workflow engine provides the execution environment for Azure Automation runbooks.

BizTalk Services

Azure BizTalk Services provides Business-to-Business (B2B) and Enterprise Application Integration (EAI) for discrete applications, both on-premises and cloud-based solutions. Azure BizTalk Services supports Electronic Data Interchange (EDI), and line-of-business (LOB) application integration for SAP, Oracle EBS, SQL Server, and PeopleSoft.

Content Delivery Network (CDN)

Azure CDN caches Azure blobs and static data that cloud-based services retrieve frequently, at pre-defined, strategic physical nodes for fast access. When data is first requested from a CDN node, the node retrieves it directly from the Azure Storage blob, caches it, and sets a time-to-live (TTL) for the cached data; subsequent requests from nearby locations are served from that cached copy until the TTL expires.
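The TTL mechanism can be sketched as a tiny cache in front of an origin fetch. This is an illustration of the idea, not Azure's implementation; the keys and timings are made up:

```python
class TTLCache:
    """Serve from cache until an entry's time-to-live expires,
    then fetch from the origin again -- the basic CDN caching idea."""
    def __init__(self, origin_fetch, ttl_seconds):
        self.origin_fetch = origin_fetch
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry and now < entry[1]:
            return entry[0], "cache"
        value = self.origin_fetch(key)
        self.store[key] = (value, now + self.ttl)
        return value, "origin"

cdn = TTLCache(origin_fetch=lambda key: f"blob:{key}", ttl_seconds=60)
cdn.get("logo.png", now=0)   # -> ("blob:logo.png", "origin")
cdn.get("logo.png", now=30)  # -> ("blob:logo.png", "cache")
cdn.get("logo.png", now=90)  # TTL expired: fetched from origin again
```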

Media Services

Azure Media Services supports the development of scalable and cost-efficient distribution solutions for media content. Azure Media Services includes uploading, encoding, format conversion, content protection, live streaming, and on-demand media delivery. Windows devices, Xbox, and devices running iOS, Android, and MacOS are all supported.

Multi-Factor Authentication

Azure Multi-Factor Authentication provides a method of authentication using more than one verification method for user sign-in and transactions. Verification methods include mobile applications, phone calls, and text messages. Multi-Factor Authentication applications are available for Windows Phone, iOS, and Android.

Notification Hubs

Azure Notification Hubs offer push notifications to mobile devices that scale from individual users to thousands or millions of users at one time. In addition to the support for broadcast notifications to Windows and other devices, individual users can subscribe to multiple tags that define and target specific user segments. Templates are available to specify the push notification format based on user preferences.


Scheduler

Azure Scheduler is a multi-tenant service for scheduling actions to run recurrently or on a one-time basis. For example, Azure Scheduler can be used to execute backups of application data on a regular schedule, or to gather data from an application on a periodic basis and aggregate it for distribution.

Service Bus

Azure Service Bus is a service that offers relayed and brokered messaging between applications. Relayed messaging supports one-way messages, request-and-response message pairs, and peer-to-peer messages. Brokered messaging supports asynchronous communication using constructs such as queues and subscriptions.

Visual Studio Online

Azure Visual Studio Online is a tiered service based on Team Foundation Server that provides the ability to develop and store code in the Azure cloud. You can plan and track projects, validate code, build and rebuild projects as needed, test code, and perform load testing of applications, services, and web sites.

Microsoft Azure Network Services

Microsoft Azure Network Services provide specialized connectivity and routing services to support secure communications between Azure cloud resources and on-premises components, as well as traffic load balancing.


ExpressRoute

Azure ExpressRoute allows the creation of private network connections between on-premises components and Azure data centers. Because ExpressRoute connections don’t cross public networks such as the Internet, they are more secure, more reliable, and faster. ExpressRoute requires establishing network connections to Azure at an Exchange Provider facility or directly from a client WAN.

Traffic Manager

Azure Traffic Manager lets you shape the distribution of user traffic (load balancing) across Azure data centers and services, improving application responsiveness and content delivery time. Traffic Manager optimizes the availability of applications hosted on Azure services by providing automatic failover when an Azure service becomes unavailable, and by directing users to the closest service based on network latency.
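The performance-routing idea reduces to: among healthy endpoints, pick the one with the lowest measured latency. A sketch with made-up endpoint names and latency figures; failover falls out naturally when an endpoint is marked unhealthy:

```python
def route_user(latency_ms, healthy):
    """Pick the healthy endpoint with the lowest measured latency --
    the core of performance-based traffic routing."""
    candidates = {ep: ms for ep, ms in latency_ms.items() if healthy.get(ep)}
    if not candidates:
        raise RuntimeError("no healthy endpoint available")
    return min(candidates, key=candidates.get)

latency = {"us-east": 42, "west-europe": 95, "east-asia": 180}
route_user(latency, {"us-east": True, "west-europe": True, "east-asia": True})
# -> "us-east"
route_user(latency, {"us-east": False, "west-europe": True, "east-asia": True})
# -> "west-europe" (automatic failover)
```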

Virtual Network

Azure Virtual Network provides the ability to create a virtual private network in Azure and connect to on-premises data center resources or a specific server using an IPSec connection. You can also configure virtual machines and services to point to DNS services on-premises or running in a virtual network.


In this article, you learned about the many features and services available with a Microsoft Azure cloud solution. Microsoft Azure provides the ability to migrate away from on-premises infrastructure deployment solutions. Microsoft Azure delivers a 99.95% monthly SLA and supports automatic operating system and service patches, network load balancing, and resiliency to hardware failure.



Infrastructure Considerations for Cloud Computing

This article explains that although it can be beneficial to outsource various resources to the cloud, you must ensure that you prepare your on-premise infrastructure to meet the unique challenges that come with using hosted services.


As the cloud computing trend continues to gain momentum, it is likely that you will eventually consider outsourcing some of your IT operations to a cloud service provider. Before you do however, it is important to realize that the cloud comes with an entirely new set of challenges that you may not experience with a traditional on-premise data center. Since some of these challenges revolve around your network infrastructure, I wanted to take the opportunity to talk about how making use of cloud services may impact your network infrastructure.

Service Level Agreements

Most larger organizations impose Service Level Agreements (SLAs) on their IT staff for mission critical applications. For example, an organization may have an SLA that requires E-mail to be available 99.999% of the time. This so-called “five nines” of availability means that the service is only permitted to be unavailable for roughly five minutes per year.
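The downtime budget implied by an availability figure is simple arithmetic; five nines works out to just over five minutes a year:

```python
def downtime_minutes_per_year(availability_percent):
    """Annual downtime budget implied by an availability percentage."""
    return (1 - availability_percent / 100) * 365 * 24 * 60

downtime_minutes_per_year(99.999)  # about 5.26 minutes per year
downtime_minutes_per_year(99.9)    # about 525.6 minutes -- nearly nine hours
```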

My point is that if you are bound by a strict SLA for your mission critical applications, then it would not be in your best interest to outsource those applications to the cloud. The reason is simple. No cloud service provider can guarantee that your SLA will be met.

Don’t get me wrong. There are some cloud service providers that will agree to be bound by an SLA. Ultimately though, the chances that the provider will actually be able to adhere to the SLA are slim. After all, your organization will be connecting to the service provider over the Internet. Neither you, the cloud service provider, nor your ISP controls the entire Internet. Even if a cloud service provider is able to achieve 100% availability for a hosted application, an Internet failure could render the application inaccessible. If you read the fine print on the cloud provider’s SLA, it will most likely say that the cloud provider is not responsible for Internet outages.

Internet Redundancy

In the previous section, I explained how an Internet failure could render a hosted application or service inaccessible. It is currently impossible to completely avoid Internet failures, but you can sometimes use redundancy as a means of reducing the chances of an Internet related outage.

Of course, simply having multiple Internet connections isn’t enough. The key to achieving effective redundancy is to acquire Internet service from multiple providers. Suppose, for example, that you had two separate broadband connections from the same Internet provider. If that provider experienced an outage then the outage would most likely affect both of your Internet connections, which would completely defeat the purpose of having redundant Internet connections.

In some cases it may be impossible to get Internet access from multiple providers. In those types of situations you must consider whether or not you should outsource anything to the cloud. For example, I live in a rural area that is serviced by a single “mom and pop” ISP who has a monopoly on the entire area. I couldn’t get service from a second provider even if I wanted to. Although I could get redundant connections from my current ISP, my ISP has an outage about once a week. As such, I would have to be out of my mind to outsource anything important to the cloud.

Admittedly, the biggest thing stopping me from using the cloud is that my ISP isn’t very reliable. But what if you are in a situation in which your ISP is reliable, but you can’t get service from a secondary ISP?

In a situation like that, redundancy may still be beneficial even if you can only get service from a single ISP. Redundant Internet connections from a single service provider will not protect you in the event that your ISP has an outage. They will, however, help to protect you against a hardware failure. For example, if an Internet router on your network were to fail, network hosts would still be able to reach the Internet through the redundant connection.
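The difference between redundant links from independent providers and redundant links behind a single ISP can be modeled with simple probability. A sketch, assuming link failures are independent and using illustrative availability figures (not measured values):

```python
def combined_availability(*links):
    """Availability of redundant links, assuming independent failures:
    the service is down only if every link is down at once."""
    unavailability = 1.0
    for a in links:
        unavailability *= (1 - a)
    return 1 - unavailability

def same_isp_availability(isp, *links):
    """Both links ride on one ISP, so an ISP-wide outage takes out
    everything; the ISP's own availability caps the result."""
    return isp * combined_availability(*links)

# Two 99.5% links from independent ISPs:
print(combined_availability(0.995, 0.995))          # roughly 0.999975
# Two 99.5% links, but both from one ISP that is itself 99% available:
print(same_isp_availability(0.99, 0.995, 0.995))    # capped near 0.99
```

The model is crude, but it shows why two connections from the same provider protect you against local hardware failure while doing little against a provider-wide outage.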


Latency

Another factor that you will have to take into account when you are considering whether or not to outsource services to the cloud is the unpredictable latency that comes with using a service over the Internet.

Lately, a lot of organizations have been using cloud storage. Doing so provides them with an unlimited, on demand storage pool. While there is no denying the benefits that cloud storage can provide, it is also worth noting that cloud storage does not currently provide the same level of performance that is available through on-premise storage solutions. Furthermore, because the storage pool is Internet based, the latency is somewhat unpredictable.

To give you a better idea of what I mean, consider my own situation. I work out of my home and most of the time my Internet connection performs fairly well. However, in the late afternoon I always notice my Internet connection getting slower as my neighbors start getting home from work. Sometimes the performance decreases to the point that my connection is borderline unusable.

Granted, I work from my home, but the same problem can occur in a corporate environment. Factors such as what the users are doing at a given moment and your ISP’s overall capacity can lead to fluctuations in Internet performance, and these fluctuations can translate directly to latency if you are connected to a cloud storage pool.
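When latency fluctuates like this, an average hides the problem; percentiles expose it. A small sketch using hypothetical round-trip times (the sample values are made up for illustration):

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

# Hypothetical round-trip times to a cloud storage endpoint, in ms.
# Two late-afternoon spikes sit among otherwise steady measurements.
samples = [22, 24, 23, 25, 21, 180, 26, 22, 24, 310]

print(sum(samples) / len(samples))  # mean ~67.7 ms, dragged up by the spikes
print(percentile(samples, 50))      # median stays near 24 ms
print(percentile(samples, 95))      # tail latency shows the real worst case
```

If you are evaluating cloud storage, measuring tail latency over a full working day tells you far more than a single speed test at noon.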

Client Side Software

One last infrastructure requirement that I want to mention is client side software. If you are only using the cloud for Infrastructure as a Service (IaaS), as would be the case with cloud storage, then the client software isn’t really an issue. However, if you are actually hosting applications in the cloud then the software that the clients use to interface with those applications needs to be reliable.

For example, I know someone who decided to use Microsoft Office Web App rather than having the expense of licensing Microsoft Office 2010 on all of their PCs. In case you are not familiar with Office Web App, it is a collection of free, Web-based versions of the Microsoft Office applications. These applications are designed to be accessible through a Web browser.

To make a long story short, the person in question ended up visiting a malicious Web site that used a virus to disable Internet Explorer. This caused them to not only lose access to the Internet, but also to the basic productivity applications that they use every day.

Even though this event didn’t occur within a corporate environment it very well could have. Not all cloud applications are browser based, but some are. Regardless of whether a cloud application is browser based or not though, it is critical that you take measures to preserve the integrity of whatever software the client computers use to access the hosted application.


In this article I have explained that although there are advantages to using cloud services, hosted services present a set of challenges that are often different from what one might experience in an on-premise deployment. As such, it is important to anticipate as many of these challenges as possible and to plan accordingly before you begin outsourcing resources to the cloud.



Considerations for AWS

When utilizing AWS, there are a few things one should consider in order to side-step common blunders. In this article we will look at the areas that deserve careful consideration so that these errors can be avoided as far as possible, ultimately enhancing the AWS user experience.


AWS makes computing in the cloud simple, cost effective and efficient. Utilizing its resources is far easier than the traditional ways in which we previously managed our IT infrastructure. Sometimes that ease encourages a carefree approach: we stop managing our resource use, and the control we should have over what is running at any given time slips. When AWS services are used haphazardly, mistakes become pronounced and can negatively impact service, cost and user experience.

These mistakes typically involve misusing resources, or not using them in the correct manner, leading to overspending and decreased efficiency.

There are some common areas where blunders often occur, and being aware of them gives us a better chance of avoiding them or preventing them from transpiring.

Some common mistakes worth avoiding when utilizing AWS

Overspending when there is no need for this

Many of the errors made when utilizing AWS result in unnecessary overspending. The increased cost is avoidable, yet many organizations still tend to overspend. Because AWS enables the swift allocation of resources, organizations often end up running resources they are not even aware of: resources scattered across regions, and resources that are not being used at all. If a resource is not needed, you should not be running it. Even a resource that was provisioned for future use incurs cost while it sits idle. It’s essential to properly manage resource usage: know what resources are being utilized, which are no longer needed and which have become stale. Better resource management is key to maintaining sensible costs.

It’s best to keep track of what you are spending on and to review it on a quarterly basis, ensuring that what you have subscribed to, and are using, is still required.
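A quarterly review works best against a simple inventory. A minimal sketch of the idea, flagging resources that have sat idle past a threshold; the resource records, IDs and region names here are hypothetical (in practice you would export this data from your billing or tagging reports):

```python
from datetime import date, timedelta

# Hypothetical inventory records exported from a usage report.
resources = [
    {"id": "i-0abc", "region": "us-east-1", "last_used": date(2015, 1, 10)},
    {"id": "i-0def", "region": "eu-west-1", "last_used": date(2014, 6, 2)},
    {"id": "vol-99", "region": "us-east-1", "last_used": None},  # never used
]

def stale(resources, today, max_idle_days=90):
    """Flag resources idle longer than the threshold, or never used at all."""
    cutoff = today - timedelta(days=max_idle_days)
    return [r["id"] for r in resources
            if r["last_used"] is None or r["last_used"] < cutoff]

print(stale(resources, today=date(2015, 3, 1)))
```

Even this crude check surfaces the forgotten resources that quietly accumulate cost between reviews.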

Fail to plan, plan to fail

Planning should be a fundamental part of the process. Planning is especially important with regards to AWS instances and resources.

Plan your instance use so that the correct type, quantity and size are chosen and are fit for purpose, keeping performance optimal and costs realistic.

It’s fundamental that you plan so that you don’t over-provision. You may think that over-provisioning allows for further flexibility, but instance types are optimized for certain functions, each with varying costs and capabilities. Only you know your specific deployment requirements, hence you must take the responsibility for getting this right.

Moreover, ensure you understand the different instance payment models available (on-demand, reserved and spot) and plan ahead so that you opt for the appropriate one, or combination.

Devise a plan for managing your resources and the subscriptions, stick to it and keep it up-to-date.

Instance misuse and mismanagement

Managing instances, and how we use them, has the potential to be quite tricky: from choosing the size and quantity of instances we require through to the most suitable instance type, it is essential that we make the right decisions. AWS prides itself on flexibility, but sometimes the diverse selection makes the decision-making process more complex.

Instance type may be a problem for some. Each instance type is devised differently and for distinctive purposes. The problem occurs when you utilize an instance type for the wrong purpose, resulting in overspending where you shouldn’t be and less than optimal performance.

To be on the safe side (so we think), we often choose oversized instances. These provide more power than is required, which again incurs needless cost.

Running more instances than is needed is also a common occurrence. There is no need for this, as AWS offers an auto-scaling capability to scale when and if required. Even more worrying is running instances without even knowing that they are running. Leaving instances to run without being used is futile and wasteful, equivalent to leaving the water running unnecessarily.

If that wasn’t enough to consider, the instance payment models are also a matter of decision making. The three models available are on-demand, reserved and spot. How you choose to pay for your instance resources heavily depends on your deployment and application. Each has its own benefits; differing usage will favor one model over another, and sometimes a combination of models leads to the best cost efficiency. Knowing when and how to utilize reserved and spot instances can significantly reduce your costs.
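The on-demand versus reserved decision usually comes down to a break-even calculation on expected hours of use. A sketch with illustrative prices (these are not real AWS rates, and real reserved terms vary; the structure of the comparison is the point):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def yearly_cost_on_demand(hourly_rate, hours_used):
    """On-demand: you pay only for the hours you actually run."""
    return hourly_rate * hours_used

def yearly_cost_reserved(upfront, hourly_rate):
    """Reserved: an upfront fee plus a discounted rate for every hour
    of the term, whether or not the instance is actually running."""
    return upfront + hourly_rate * HOURS_PER_YEAR

# Illustrative prices, chosen for the example only:
on_demand_rate = 0.10     # $/hour
reserved_upfront = 300.0  # $ per one-year term
reserved_rate = 0.04      # $/hour

for hours in (2000, 5000, 8760):
    od = yearly_cost_on_demand(on_demand_rate, hours)
    ri = yearly_cost_reserved(reserved_upfront, reserved_rate)
    cheaper = "reserved" if ri < od else "on-demand"
    print(f"{hours} h/yr: on-demand ${od:.0f} vs reserved ${ri:.0f} -> {cheaper}")
</```

With these numbers, reserved pricing only wins for an instance that runs close to continuously; a workload used a few hours a day stays cheaper on demand. Running your own prices through the same comparison is the planning step this section argues for.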

Security mismanagement

Although a very well known and highlighted ‘not to do’, this mistake still happens way too often: using your root account for everyday work instead of creating separate users. This should not be occurring. Your root account gives full access to all your resources, for all your AWS services, without restriction.

You should create an IAM user with admin privileges and individual IAM users with relevant permissions. Ensure that you set your permissions correctly. It is best to follow the least privilege approach whenever possible.
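Least privilege in IAM is expressed through JSON policy documents that allow only the specific actions and resources a user needs. A minimal sketch of such a policy; the bucket name is hypothetical, and a real deployment would scope actions and resources to its own needs:

```python
import json

# A least-privilege policy granting read-only access to a single
# (hypothetical) S3 bucket, instead of blanket access to everything.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports",      # the bucket itself
            "arn:aws:s3:::example-reports/*",    # the objects inside it
        ],
    }],
}

print(json.dumps(policy, indent=2))
```

A user holding only this policy can read reports but cannot delete data, launch instances or touch any other service, which is exactly the containment the least privilege approach is after.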

Errors in properly configuring security groups lead to weaknesses in security and make you more susceptible to potential security threats.

Always utilise encryption to ensure the privacy and security of your data.

Don’t undervalue security, encrypt whenever possible, and for more sensitive data consider using Amazon Virtual Private Cloud (VPC).

Not obtaining the necessary education or keeping abreast with changes

Amazon ensures their AWS services are well documented and all documentation is kept up-to-date. Online webinars are available to view free of charge and are very informative (take advantage of this). AWS educational events, conferences and talks occur globally. It’s important to have a good knowledge base of the various services and best practices to ensure the best possible outcome. Education is key to enhancing your user experience and what you are able to achieve with the AWS service.

Availability and Backup Blunders

EBS snapshots allow you to create point-in-time copies of your volumes, which doubles as an effective backup solution. It is not very effective, however, if you take too few, too many or none at all. Failing to keep backups current is a big mistake: should an unfortunate event occur, it may lead to loss of data. Getting the balance right is essential.
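"Getting the balance right" usually means a retention policy: keep frequent recent snapshots, thin out older ones. A sketch of one such policy (keep every daily snapshot for a week, then one weekly snapshot for a month); the schedule is an assumption for illustration, not an AWS default:

```python
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates, today, daily=7, weekly=4):
    """Keep every snapshot from the last `daily` days, plus one weekly
    (Sunday) snapshot for the `weekly` weeks before that. Anything
    older falls outside the policy and can be deleted."""
    keep = set()
    for d in snapshot_dates:
        age = (today - d).days
        if age < daily:
            keep.add(d)                       # recent: keep daily snapshots
        elif age < daily + 7 * weekly and d.weekday() == 6:
            keep.add(d)                       # older: keep Sundays only
    return keep

# One snapshot per day for the past 40 days:
today = date(2015, 3, 1)
snaps = [today - timedelta(days=i) for i in range(1, 41)]
print(sorted(snapshots_to_keep(snaps, today)))
```

Of the forty daily snapshots, the policy retains ten: enough history for recovery without paying to store every copy forever.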

Spreading your workload across Availability Zones (multiple data centers within a region) is a great feature of AWS, yet many make the mistake of not taking advantage of it. Utilized correctly, it increases availability and safeguards you should an outage occur.
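The simplest way to spread a tier across zones is round-robin placement, so no single zone holds all your instances. A sketch of the idea; the zone and instance names are illustrative:

```python
from itertools import cycle

def spread(instance_ids, zones):
    """Assign instances to Availability Zones round-robin, so the tier
    survives the loss of any single zone."""
    placement = {}
    zone_cycle = cycle(zones)
    for instance in instance_ids:
        placement[instance] = next(zone_cycle)
    return placement

# Illustrative zone names for a region with three zones:
zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
print(spread(["web-1", "web-2", "web-3", "web-4"], zones))
```

With at least two instances per tier placed this way, a zone outage degrades capacity rather than taking the application down, which is the whole point of the feature.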


This is by no means a comprehensive collection of pitfalls, but rather a few of the commonly occurring blunders that stand out.

Spending a little more time on planning can bring substantial advantages in the long run. Planning will help to optimize the service and stop costs from spiraling out of control unnecessarily.

Although AWS is a pay-for-what-you-use service, it’s important to remember that a lot of the time we are paying for resources that we are not actually using, merely because we have forgotten about them. Either they become lost among the many resources we have running at any given time, or we provisioned them for later use. These unused, yet provisioned, resources still incur cost, cost that could be avoided.

Considering the areas where mistakes are often made and trying to work on these areas to avoid them could improve your experience with AWS, reduce unnecessary spending, enhance efficiencies and reduce potential vulnerability and security risk.
