Introduction to the Simple Network Management Protocol (SNMP)

In this article we will look at SNMP, the Simple Network Management Protocol, and how to install the service on Windows Server 2003. We will cover the essentials of the SNMP protocol, how it is used, and how to install and configure it to work within a community.

SNMP is a popular protocol for network management. It is used for collecting information from, and configuring, network devices such as servers, printers, hubs, switches, and routers on an Internet Protocol (IP) network. SNMP can collect information such as a server’s CPU load or chassis temperature; the list of what you can do with SNMP, if it is configured properly, is nearly endless.

Microsoft Windows Server 2003 provides SNMP agent software that works with third-party SNMP management software to monitor the status of managed devices and applications. Many SNMP-based network management applications come with ‘client’ software that installs on your Windows Server 2003 system, but some suites do not include a client portion, and this is where you may need to install and configure the server’s SNMP service.

SNMP is a simple protocol that can be used on just about any networking device in use today. Some view it as a security threat; others see it as a way to efficiently manage some of their key systems. However you decide to see it, SNMP is easy to use, easy to set up, and not very difficult to understand.

The SNMP protocol was designed to provide a “simple” method of centralizing the management of TCP/IP-based networks. If you want to manage devices from a central location, SNMP is what facilitates the transfer of data from the client portion of the equation (the device you are monitoring) to the server portion, where the data is centralized in logs for viewing and analysis. Many vendors supply network management software: IBM’s Tivoli, Microsoft’s MOM, and HP OpenView are three of the more than 100 applications available today to manage just about anything imaginable, and the protocol is what makes this happen. The goal of the original SNMP protocol revolves around one main factor that is still central today: remote management of devices.


SNMP runs over UDP, the User Datagram Protocol. UDP is in many ways the opposite of TCP (Transmission Control Protocol), which is a very reliable but high-overhead protocol.

User Datagram Protocol is low overhead, fast, and unreliable. It is defined by RFC 768. UDP is easier to implement and use than a more complex protocol such as TCP, yet it provides plenty of functionality to allow a central manager station to communicate with a remote agent residing on any managed device it can reach. The unreliability shows up in the lack of checks and balances: when TCP sends something, it waits for an acknowledgment and resends if it doesn’t hear back; UDP does neither. Since the polling of devices usually happens on a cyclic schedule, a lost datagram simply means you missed that sample and will catch it on the next cycle. The tradeoff is that the low-overhead protocol is simple to use and doesn’t eat up your bandwidth the way TCP-based applications can across your WAN.
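To make the fire-and-forget behaviour concrete, here is a minimal Python sketch (not SNMP itself; the port number and payload are made up for illustration) of a manager-style sender handing a datagram to the OS with no handshake and no acknowledgment:

```python
import socket

AGENT_ADDR = ("127.0.0.1", 16100)  # illustrative port; real SNMP agents listen on UDP 161

def make_agent_socket(addr=AGENT_ADDR):
    """Bind a UDP socket, standing in for the agent side of the exchange."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(addr)
    sock.settimeout(2.0)
    return sock

def send_poll(payload: bytes, addr=AGENT_ADDR) -> int:
    """Fire-and-forget: hand the datagram to the OS and return immediately.
    There is no connection setup and no acknowledgment -- this is exactly
    the low overhead the text describes."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(payload, addr)
```

If a poll is lost, the sender never knows; it simply polls again on the next cycle, which is why UDP suits this kind of periodic monitoring.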

SNMP Operation

There are two main players in SNMP: the manager and the agent. The manager is generally the ‘main’ station, such as HP OpenView. The agent is the SNMP software running on a client system you are trying to monitor.


The manager is usually a software program running on a workstation or larger computer that communicates with agent processes that run on each device being monitored. Agents can be found on switches, firewalls, servers, wireless access points, routers, hubs, and even users’ workstations – the list goes on and on. As seen in the illustration, the manager polls the agents making requests for information, and the agents respond when asked with the information requested.

Network Management Station (NMS)

The manager is also called a Network Management Station, or NMS for short. The software used to build the NMS varies in functionality as well as expense. You can get cheaper applications with less functionality, or pay through the nose and get the Lamborghini of NMS systems. Other functions of the NMS include reporting features, network topology mapping and documentation, tools that let you monitor the traffic on your network, and so on. Some management consoles can also produce trend analysis reports. These types of reports can help you do capacity planning and set long-range goals.

SNMP Primitives

SNMP has three control primitives that initiate data flow from the requester, which is usually the manager: get, get-next, and set. The manager uses the get primitive to retrieve a single piece of information from an agent. When the data the manager needs consists of more than one item, such as a table of values, the get-next primitive is used to retrieve the data sequentially. The manager uses the set primitive to request that the agent running on the remote device set a particular variable to a certain value. There are two control primitives the responder (usually the agent) uses to reply: get-response and trap. One is used in response to the requester’s direct query (get-response), and the other is an asynchronous message used to get the requester’s attention (trap). Although SNMP exchanges are usually initiated by the manager software, the trap primitive lets the agent take the initiative when it needs to inform the manager of some important event. This is what is commonly known as a ‘trap’ sent by the agent to the NMS.

The Management Information Base (MIB)

We just learned what the primitives are: the means by which the agent and the manager exchange data. The data they exchange also has a structure. The types of data the agent and manager exchange are defined by a database called the management information base (MIB). The MIB is a virtual information store: a small database of information that resides on the agent. Information collected by the agent is stored in the MIB. The MIB is precisely defined; the current Internet standard MIB contains more than a thousand objects, and each object in the MIB represents some specific entity on the managed device.
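As a rough illustration of how the get, get-next, and set primitives operate against a MIB, here is a toy Python model. The OIDs and values are illustrative stand-ins, and nothing here reflects real SNMP message encoding:

```python
# Toy MIB: a few object identifiers (OIDs) mapped to values. The OIDs
# are real-looking examples from the standard system group, but the
# values are made up for illustration.
MIB = {
    "1.3.6.1.2.1.1.3.0": 123456,      # sysUpTime (illustrative value)
    "1.3.6.1.2.1.1.5.0": "server01",  # sysName
    "1.3.6.1.2.1.2.1.0": 4,           # ifNumber
}

def snmp_get(oid):
    """get: the manager asks for the value of a single object."""
    return MIB.get(oid)

def snmp_get_next(oid):
    """get-next: return the first (oid, value) pair that sorts after `oid`.
    Repeated get-next calls are how a manager retrieves a table of values
    ("walking" the MIB). NOTE: real agents compare OIDs numerically,
    component by component; plain string sorting is only good enough for
    this illustration."""
    for candidate in sorted(MIB):
        if candidate > oid:
            return candidate, MIB[candidate]
    return None  # end of MIB

def snmp_set(oid, value):
    """set: the manager asks the agent to write a variable."""
    MIB[oid] = value
    return MIB[oid]
```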

SNMPv2 and SNMPv3

As with many TCP/IP-related protocols, it’s a well-known fact that anything dating from before the creation of IPv6 (or IPng) has security weaknesses such as passwords sent in cleartext. SNMP in its original form is very susceptible to attack if not secured properly: messages are sent in cleartext, exposing community string passwords, and the default community strings of public and private can be ‘guessed’ by anyone who knows how to exploit SNMP. SNMPv1 lacked encryption and authentication mechanisms entirely. Despite these weaknesses, SNMP in its original implementation is very simple to use and has been widely adopted throughout the industry. Although SNMPv1 was good enough, work began to make it better with SNMPv2 in 1994. Besides some minor enhancements, the main updates in this version are two new pieces of functionality: traps can be sent from one NMS to another NMS, and a ‘get-bulk’ operation allows larger amounts of information to be retrieved with a single request. SNMPv3 incorporates the best of both earlier versions and adds enhanced security. It provides secure access to devices through a combination of authenticating and encrypting packets over the network. The security features provided in SNMPv3 are message integrity, which ensures that a packet has not been tampered with in transit; authentication, which determines that the message is from a valid source; and encryption, which secures the packet by scrambling its contents.
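The message-integrity and authentication ideas in SNMPv3 can be sketched with a keyed digest: the sender attaches a digest computed from the message and a shared secret, and the receiver recomputes it to verify both integrity and origin. This is only an illustration of the concept; the key derivation and wire format of SNMPv3’s actual User-based Security Model are more involved:

```python
import hmac
import hashlib

def sign(message: bytes, key: bytes) -> bytes:
    """Compute a keyed digest over the message (concept only; not the
    actual SNMPv3 USM algorithm or encoding)."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, key: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time. A mismatch
    means the message was tampered with or came from the wrong source."""
    return hmac.compare_digest(sign(message, key), digest)
```

A tampered message, or one signed with the wrong key, fails verification, which is exactly the integrity and authentication guarantee described above; encryption of the payload itself is a separate step.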


In this article we covered the basics of SNMP, the Simple Network Management Protocol, versions 1, 2 and 3. We also covered some of the terminology used such as MIBs, traps and so on.

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!



Red Hat Virtualization Technology


Hello guys… finally I have decided to cover one of the most in-demand technologies nowadays, and that is virtualization. So here we are going to start a new series on Red Hat Virtualization. I hope this will help you understand virtualization technology; keep in touch for more upcoming posts.


Virtualization is a technology that allows a physical system to be partitioned into multiple virtual systems, each of which may run its own operating system simultaneously. These virtual machines are isolated (independent) from each other. Some major advantages of virtualization are:

  1. Reduced management costs.
  2. Lower operating costs.
  3. Reduced power consumption.
  4. Greater overall reliability.
  5. Less physical space required.
  6. Support for live migration, and much more.

Virtualization Types:

  1. Full virtualization = Allows unmodified operating systems and software to run on a virtual machine exactly as if they were running on real hardware.
  2. Hardware-assisted virtualization = The development of Intel VT-x and AMD-V made hardware-assisted virtualization possible, where the CPU helps the hypervisor virtualize more efficiently.
  3. Para-virtualization = The same kernel can be used for both physical and virtual machines, which allows better performance.

There are many organizations involved in this technology, and each offering has its own features and limitations. In this post, however, I am just going to give an overview of Red Hat’s virtualization technology.

The Red Hat Enterprise Virtualization platform consists of one or more hosts and at least one manager. The virtual machines run on the hosts. To understand this technology, you should be familiar with some components of virtualization.

Red Hat Enterprise Virtualization Manager (RHEV-M)

The Red Hat Enterprise Virtualization Manager acts as a centralized management system that allows system administrators to view and manage virtual machines and images. It is a tool that runs on the RHEL 6 OS.

The manager provides a graphical user interface to administer the whole virtual environment infrastructure. RHEV-M can be managed through the Administration Portal, the User Portal, and an Application Programming Interface (API).

  1. The Administration Portal is used to perform setup, configuration, and management of the Red Hat Enterprise Virtualization environment.
  2. The User Portal is used to start, stop, reboot, and connect to virtual machines, but it does not expose all the tasks available in the Administration Portal.
  3. The REST API provides an interface for automation of tasks.

Red Hat Enterprise Virtualization Hypervisor (RHEV-H)

RHEV-Hypervisor is a fully featured virtualization platform for quick, easy deployment and management of virtualized guests.

Red Hat Enterprise Linux Host(s)

The Red Hat Enterprise Virtualization Manager also supports the use of systems running the RHEL 6 AMD64/Intel 64 versions as virtualization hosts (meaning KVM as the hypervisor).


Red Hat Enterprise Virtualization Platform Overview




  1. This section provides authentication for users, which may be an LDAP/IPA or Active Directory service.
  2. PostgreSQL: This section defines the database storage for the entire virtual platform, where all the configuration data will be stored.
  3. VDSM is a service used by RHEV-Manager to manage RHEV-Hypervisor and RHEL 6 hosts.
  4. This section describes the protocols (SPICE or VNC) that can be used to take the console (meaning the display) of RHEV-H.
  5. This is the central point of administration for the Linux virtual platform, managed by a tool called RHEV-M.
  6. This is the point where an administrator can administer RHEV-M through a web interface, a user can access virtual machines through a web interface, or the same can be achieved via the REST API (a script-driven, XML-based interface).



The Significance of the Administrator Account in Windows Servers

It is critical that you protect the Administrator account in a manner that is suitable for your organization. The local Administrator account has complete control over your server, and the domain Administrator account has complete control over your network! So, it makes sense to have a very strong password for this account.

In larger organizations, Administrator is effectively an anonymous account. Take a look at your security logs in the Event Viewer and ask yourself, “How do I know who did what using the Administrator account?” It is because of this that you should create a user account with suitable administrative or delegated rights for any administrator who needs them. Using the default Administrator account is often banned unless there is an emergency. This allows every member of IT to be accurately audited via the Security log. To do this, you’ll need to create administrator user accounts for each administrator, and then ensure that each administrator has only the rights and permissions they need to do their job, and no more.

Some organizations choose to disable the Administrator account altogether. That’s one solution that you might not be big on because this account is a great backdoor in the case of password lockouts. Administrator is the one user who cannot be locked out. Those organizations could take an alternative approach. You can think of it as the “nuclear” option. You’ve all seen those movies where two generals have to turn two different keys in order to start a nuclear missile launch. You can do the same thing with the Administrator password. It can be set by two different individuals or even departments, one typing the first half of the password and the other typing the second half. Organizations needing this sort of option probably have an IT security or internal audit department that is the holder of one half of the password while the server administration team retains the other half.

One final option is to rename the Administrator account. There’s some debate about this option because the security identifier (SID; a code that Windows uses internally to uniquely identify an object) of the account can be predicted once you have access to the server or the domain. Some argue that renaming the account is pointless. However, most Internet-based attacks are actually rather robotic and unintelligent. They target typical names such as SA, root, or Administrator and try brute-force attacks to guess the password. It is still worthwhile to rename the Administrator account to defend against these forms of attack.

In the end, the same old security rules apply. Set a very strong password on your Administrator accounts, restrict knowledge of the passwords, restrict remote access where you can, and control physical access to your servers.


nf_conntrack: table full, dropping packet — A solution for CentOS Dedicated Servers

A common problem you may experience is sluggish performance or disconnections from your CentOS dedicated server, even though there is sufficient CPU, RAM, disk I/O, etc. After some troubleshooting, you may come to believe you are being DDoS attacked, but you don’t see an unusual amount of traffic, and there’s no single IP or handful of IPs making an unusually large number of connections to your server. Looking over /var/log/messages, you’ll see a lot of messages like the following:

nf_conntrack: table full, dropping packet

This happens when your iptables or CSF firewall is tracking too many connections. It can happen when you are being attacked, but it is also very likely to happen on a busy server even if there is no malicious activity. Connections will be tracked if you have a firewall rule that does NAT or SNAT, or if you are tracking the number of connections per IP for rate-limiting reasons. These scenarios are common on Linux routers and firewalls, and with firewall rules that are there for brute-force or DDoS protection.

By default, CentOS will set this maximum to 65,536 connections. This is enough for lightly loaded servers, but it can easily be exhausted on heavily trafficked servers with a lot of firewall rules. On heavy production servers, I recommend increasing this limit to half a million, which will make a big improvement in the amount of workload those servers can handle.

It is interesting to note that the kind of servers most likely to have this problem are the ones where the user has set a lot of strict firewall rules to “help ward off attacks”. Unfortunately, the reality is that the firewall rules themselves are causing the downtime, not any attack! One way to solve the problem is to disable your firewall entirely, but before you go to that extreme, it is worth trying to increase the maximum number of tracked connections.

In this article, I’ll give you instructions on how to increase the maximum allowed connections for the conntrack connection tracker in CentOS. CentOS 5 and CentOS 6 store the relevant data in different places, so I’ll have instructions for each below. The instructions assume you’ll be entering commands in an SSH shell / command prompt window:

CentOS 5.x: Increasing maximum connection tracking for ip_conntrack

First of all, you may want to know what the current maximum connection limit is:

cat /proc/sys/net/ipv4/ip_conntrack_max

This will output the current maximum number of connections that IPtables can track.

If you want to see the current number of connections being tracked, you can run the following command:

cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count

You’ll be given a number of connections here. If this number is more than 20% of the maximum, it’s probably a good idea to increase the maximum.
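If you want to script the 20% rule above, a small helper along these lines works. This is a sketch; the /proc paths are the CentOS 5 ones from this section, so adjust them for CentOS 6:

```python
# CentOS 5 paths from this article; on CentOS 6 use
# /proc/sys/net/netfilter/nf_conntrack_{count,max} instead.
COUNT_PATH = "/proc/sys/net/ipv4/netfilter/ip_conntrack_count"
MAX_PATH = "/proc/sys/net/ipv4/ip_conntrack_max"

def read_proc_int(path):
    """Read a single integer value from a /proc file."""
    with open(path) as f:
        return int(f.read().strip())

def usage_ratio(count, maximum):
    """Fraction of the conntrack table currently in use."""
    return count / maximum

def should_raise_limit(count, maximum, threshold=0.20):
    """True when tracked connections exceed the given fraction of the max,
    i.e. the 20% rule of thumb described in the text."""
    return usage_ratio(count, maximum) > threshold

def report():
    """Read the live values and print a one-line summary (Linux only)."""
    count = read_proc_int(COUNT_PATH)
    maximum = read_proc_int(MAX_PATH)
    print(f"{count}/{maximum} tracked ({100 * usage_ratio(count, maximum):.1f}%)")
    if should_raise_limit(count, maximum):
        print("More than 20% in use -- consider raising ip_conntrack_max.")
```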

If you want to temporarily increase this to a half million, enter the following:

echo 524288 > /proc/sys/net/ipv4/ip_conntrack_max

And if you’d like the change to persist across reboots, you’ll need to edit the following file:

nano /etc/rc.d/rc.local

Copy / paste the following line to the end of the file, and then save your changes:

echo 524288 > /proc/sys/net/ipv4/ip_conntrack_max

That’s all there is to it. On heavily trafficked servers, it’s not unusual to see 100k – 200k connections being tracked even if there is no malicious activity. 500k should be a safe maximum, but if you really need to you could increase this further.

CentOS 6.x: Increasing maximum connection tracking for nf_conntrack

On CentOS 6, the general idea is the same as on CentOS 5, but the file locations are slightly different.

To view the current maximum configured connections, run:

cat /proc/sys/net/netfilter/nf_conntrack_max

To see the current used connections, run:

cat /proc/sys/net/netfilter/nf_conntrack_count

To temporarily increase this to a half million, run:

echo 524288 > /proc/sys/net/netfilter/nf_conntrack_max

To make this change persist after a reboot, you’ll need to edit the following file:

nano /etc/rc.d/rc.local

And copy and paste the following line to the end of the file, and then save your changes:

echo 524288 > /proc/sys/net/netfilter/nf_conntrack_max

That’s it. You should be in good shape now. Just as on CentOS 5, on heavily trafficked servers it’s not unusual to see 100k – 200k connections being tracked even if there is no malicious activity. Therefore, 500k should be a safe maximum, but if you really need to, you could increase this further.


The reason I have CentOS instructions above is because we’re most familiar with CentOS, using it for most of our internal systems. I understand that a lot of other people prefer Ubuntu or Debian. I don’t want to leave those folks out in the cold here; I am just not familiar with this fix for those OSes. If you have any instructions on doing the same for Ubuntu, Debian, or other Linux distributions, please share them with me by email. If you do send them along, I will be glad to post an update with that information.


Do You Need Cloud Computing?


This article looks at whether it is time for you to “use the cloud”, what it means, and what a cloud offers.


Everyone has heard of cloud computing. The term is used over and over on TV, in the media, at tradeshows, and by salespeople. Unfortunately, it means so many things to so many different people that no one really knows what someone is referring to when they talk about “cloud computing” without getting a lot more information. Microsoft says that they store your photos in the cloud. You can get cloud email. Who knows, your printer could even be “cloud capable”. This week, I saw a TV commercial for a company that offered insurance, payroll services, and cloud computing. It seems like everyone just wants to say that they sell “the cloud” without ever taking the time to define why you need it or how it really works.

As IT Pros we want to embrace cutting edge technology and leverage that technology to the benefit of our companies. Of course, we also want to use cutting edge technology to make our lives, as IT pros, better. So is it time for you to “use the cloud”? What does that even mean? What does a cloud offer? Let’s find out.

What’s the Difference Between a Virtual Infrastructure and Cloud?

Cloud infrastructures abstract away the objects you are used to managing with your virtual infrastructure. For example, you are used to managing hosts, resource pools, clusters, and virtual machines for your own datacenter. With a cloud infrastructure you have a (seemingly) infinite, secure, and highly available pool of computing resources in which you don’t worry about the typical virtual infrastructure objects (you don’t even know about them). With a cloud, you are able to use a self-service portal to quickly deploy pre-built groups of virtual machines for specific roles (such as a database with a web server front-end). You manage your “virtual datacenter” with no knowledge of what else is happening. Typically, you are just billed for what you use.

Public vs. Private Cloud Computing

To know if you “need the cloud”, let’s first understand what “the cloud” is. First, there is the public cloud and the private cloud. A public cloud is the type of cloud that you see big companies (like Microsoft) advertising on TV. When they say “take it to the cloud”, they want you to move your data to their datacenter. That data could be actual data that you have or, as with software as a service (discussed more below), the data from your applications that is stored in their cloud (such as your photo-sharing software). The public cloud doesn’t have to be just someone keeping your data; it could really be someone else doing anything for you (such as editing your photos). When it comes to server virtualization, the public cloud is where your virtual machines run in someone else’s datacenter.

So if public cloud is someone else doing something for you, private cloud is you doing it for yourself. All forms of cloud services can be built in house, in your own datacenter with you retaining total control. Private cloud eliminates the concerns related to security breach in a shared datacenter or compliance concerns related to shared storage. While private cloud removes those types of concerns, it adds some additional concerns that, with public cloud, you can dump on the cloud provider. For example, with your own private cloud you are responsible for monitoring capacity, planning for the future, and ensuring high availability. If the private cloud needs to ramp up resources quickly, you are responsible.

A very controversial difference between public and private cloud revolves around the associated cost. With public cloud you know that the public cloud provider is offering their services to make a profit. You as the customer know that you are paying them (based on your usage) to do something you could do and adding some level of profit on top. The flip side to that is that the provider will claim that they can offer the same services for less money than you are able to do because they are doing it on a larger scale. Of course, all this is debatable.

What happens when you connect your private cloud to a public cloud? You get a “hybrid cloud” where these clouds can work together (which may be the best of both worlds).

3 Types of Cloud Computing Services

No matter whether you use public or private cloud, there are multiple types of cloud services that can be offered. Here are the three most common forms of cloud services:

  • Infrastructure as a Service (IaaS) – just like your virtual infrastructure in your existing datacenter, IaaS clouds add cloud services on top (multi-tenancy, security, self-service). With public IaaS clouds, elasticity gives you the ability to expand and contract your infrastructure as needed. You could connect your private cloud to your public IaaS cloud to create a hybrid cloud.
  • Platform as a Service (PaaS) – for developers who need to deliver an application (usually to the Internet), platform as a service cloud offerings have the backend databases already available, clustered web servers available, and all you need to provide is your application code to get your application up and running (potentially to millions of users) as quickly as possible.
  • Software as a Service (SaaS) – likely the cloud service that all of us have already used is software as a service. Free Internet webmail services (like Gmail or Yahoo Mail) are the most common examples of software as a service. Other Internet applications, like Microsoft’s Office 365 and Dropbox, are also common software as a service applications. What makes these cloud services is that these companies are doing something for you (such as providing email services) that you could have provided in house, using your own datacenter.

While those are the three most common cloud computing services, providers around the world have made up their own services and acronyms for spinoffs of these. For example:

  • DRaaS – disaster recovery as a service
  • BUaaS – backup as a service
  • DaaS – desktop as a service
  • STaaS – storage as a service

(and there are more).

Do You Need Cloud Computing?

Now let’s get back to the question that originally started this article – do you need cloud computing? Hopefully, this will help you decide:

  • Software as a Service – Yes, you likely already use this and will continue to use it
  • Platform as a Service – If you are a developer, this could be the right option for you – research to learn more about it.
  • Infrastructure as a Service – if your business is quickly growing or regularly expands and contracts then IaaS may be a great option for you. Test it and compare costs to learn more.
  • Private IaaS cloud – if you are at a large enterprise and want to provide self-service with chargeback (or showback) to different divisions of your company, then consider private IaaS cloud

Other services mentioned like DRaaS and BUaaS are becoming more and more popular. If you could use help with disaster recovery or backup then these types of cloud services may be the most logical entry point for you (and your company) to test and adopt public cloud computing services.


Cloud computing is “raging hot” in the technology world but marketing people and companies trying to sell cloud computing have really pushed “the cloud” so hard that IT people are confused as to what is what and why they need it. I recommend taking a step back from all the cloud computing marketing propaganda; learn about the different types of cloud and the different cloud services to determine if any of these are a good fit for you and your company. Cloud computing is not “one size fits all”.


How Cloud Computing Works

If you have used e-mail, you have already used the cloud. What you load on your machine as an e-mail user is simply an application: you log in to a web service, and all the programs necessary to actually run the application are located on a remote machine owned by another company. The real storage and software do not exist on your computer; they exist on the cloud.

The cloud computing model is now widely used in e-commerce, and in essentially all businesses, and is changing how the entire industry grows.

Why Cloud?

It is estimated that there is over 1 exabyte of data stored in the cloud at the moment, or roughly 1,073,741,824 gigabytes of data. Gartner predicts that by year-end 2016, more than 50% of Global 1000 companies will have stored customer-sensitive data in the public cloud.

By adopting the cloud, companies rely on automated decision-making systems to reduce the number of staff they need to perform complex calculations and analysis, and to actually maintain the system.

Cloud users:

  • can access their data from anywhere, at any time (like e-mail), using only an internet connection
  • are relieved of the stress of buying software licenses for each tool they need to install
  • bring hardware costs down (data is stored and copied on the cloud)
  • need computers with less processing power (without losing performance)
  • save money on IT support, and on server and data storage space that usually requires high maintenance
  • can solve more complex problems easily and speedily, by using a grid of computers available on the cloud instead of a single computer
  • can grow their business as much as they like, without experiencing failures in the system due to a large number of customers

Cloud services are usually paid for as and when they are used, like a taxi fare: the meter stops running as soon as you stop using the cloud. With regular hosting, by contrast, you pay for the server maintenance and the staff all of the time, without any guarantee that the service will always be optimal for your end users.
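The taxi-fare billing model can be sketched in a few lines; the rates below are made-up numbers for illustration only:

```python
def cloud_cost(hours_used: float, rate_per_hour: float) -> float:
    """Metered billing: you pay only for the hours you actually run;
    the meter stops when you stop using the cloud."""
    return hours_used * rate_per_hour

def hosting_cost(months: int, flat_fee_per_month: float) -> float:
    """Regular hosting: the fee accrues whether the server is busy or idle."""
    return months * flat_fee_per_month
```

For a workload that only runs 200 hours in a month, a metered rate of $0.10/hour comes to $20, while a hypothetical $50/month flat-fee server costs $50 no matter how little it is used; the comparison flips, of course, once the machine runs around the clock.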

What Is It Made Of?

We could say that the cloud is made of layers, usually known as the front end layer and the back end layer. The network layer is then used to connect end users’ devices.

On the front end of a cloud computing system there is:

  • a client
  • an application
  • user interface
  • basically, what the user “sees and interacts with”

On the back end of the system, there are:

  • computers that run the applications
  • servers (with a central server(s))
  • data storage systems
  • basically, what we call “the cloud”

Each application will have its own server, and a central server monitors traffic on all the other servers and communicates through protocols. The software used in a cloud computing system to allow computers to communicate is known as middleware. As for data storage, a cloud computing system must copy all data and store it on at least one other device. This requires twice the number of storage devices (or more) for the company providing the service, and is known as redundancy.
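The redundancy point can be put as back-of-the-envelope arithmetic: if every byte is stored at least twice, the provider needs at least twice the raw capacity. The numbers below are illustrative only:

```python
import math

def devices_needed(data_gb: float, device_capacity_gb: float, copies: int = 2) -> int:
    """Storage devices required when every byte is stored `copies` times
    (copies=2 is the minimum redundancy described in the text)."""
    return math.ceil(data_gb * copies / device_capacity_gb)
```

So 10 TB of customer data on 2 TB drives needs 5 drives without redundancy, but 10 drives once every byte is duplicated.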

The resources, or the electronic equipment needed to handle the data, are stored and housed in data centers, also known as server farms. A vast amount of digital information now depends on these facilities.

Types of Cloud Computing

Cloud computing services will usually cover one of three things: they either use virtual servers to create a virtual IT infrastructure, provide remotely hosted software, or provide network storage with an archive of the data. Those services are usually known as:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)
  • Storage as a Service (STaaS)
  • API as a Service (APIaaS)
  • …and more


Cloud Computing Types (Source: Wikipedia)

With all these types of cloud computing, it becomes unnecessary even to own your own infrastructure, or your own software. You can get what you need from the cloud, when you need it.

Remaining Questions

The main argument against cloud computing remains the one about privacy and security. However, the growth of the mobile industry, the impatience of the average internet user, and the milliseconds of page load time that make all the difference in the e-commerce race make it impossible for company owners to ignore the benefits the cloud brings to their business.

Another discussion revolves around whether cloud computing is the greener option. Generally, it reduces power consumption by decreasing the number of hardware components within individual businesses, and by not using natural resources such as paper it reduces the carbon footprint significantly. However, large data centers also use enormous amounts of energy; in the US alone, for example, consumption equals more than sixty billion kWh of electricity.
