How Can Server Monitoring Improve Performance?

It’s important to maintain a careful watch over a company server, as misuse of this technology can lead to data loss and incur financial costs. Server monitoring tools give administrators an easy way to stay vigilant: they provide alerts and keep the administrator up to date on any problems, potential or current.

A network monitoring tool is a powerful application that can monitor bandwidth, availability, and server performance.

Server Systems

Firstly, let’s consider what a server is. A server is a computer system that provides network services: a collection of hardware and software working together for effective communication between computers. Web server monitoring is carried out through the use of web server monitoring software. This software checks the working conditions of the server and relays messages about CPU usage, the performance level of the network in use, and the health of the remaining disk space. Server monitoring can include many additional features, such as alerting and benchmarking.
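
As an illustration, here is a minimal sketch of such a health check using Python’s psutil library (assuming psutil is installed; the 80% thresholds are arbitrary values chosen for the example):

    import psutil  # third-party library: pip install psutil

    def check_server_health(cpu_threshold=80.0, disk_threshold=80.0):
        """Report CPU usage and remaining disk space, flagging anything above a threshold."""
        alerts = []
        cpu = psutil.cpu_percent(interval=1)   # average CPU usage over one second
        disk = psutil.disk_usage("/")          # usage statistics for the root filesystem
        if cpu > cpu_threshold:
            alerts.append(f"High CPU usage: {cpu:.1f}%")
        if disk.percent > disk_threshold:
            alerts.append(f"Low disk space: {disk.percent:.1f}% used")
        return alerts or ["All checks passed"]

    for message in check_server_health():
        print(message)

A real monitoring product wraps checks like this in a scheduler and a notification system, but the underlying measurements are the same.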


Server monitoring can be divided into several categories.

Firewall monitoring

It’s important to maintain a close watch on the security of your firewall, and monitors can be used to perform this task. These tools are equipped with a number of different sensors and make the process of firewall monitoring easy. By monitoring a firewall carefully you can see exactly what is going on in terms of data flow in and out of the system.

The security of the system is greatly boosted, as any malware attempting to gain access is automatically detected and a warning message appears. Monitoring tools ensure that you are in control of your internet usage and will indicate the top connections, top talkers, and top protocols.

Bandwidth monitoring

To monitor bandwidth usage on a server, monitoring software needs to be used. This involves fully identifying the actual problems affecting a network, which helps administrators work on the problem rather than spending time trying to identify it. This saves time and ensures more effective bandwidth management.

Monitoring bandwidth usage keeps track of consumption as far as leased lines are concerned. It covers the effective monitoring of network connections, tracing usage trends, and measuring the bandwidth used for billing purposes. This monitoring contributes to decisions about router traffic balancing and will warn the administrator if any flaws in the network load are identified.
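
A rough sketch of how such a tool can estimate bandwidth consumption by sampling interface counters, again using psutil (the interval and per-interface handling are simplified for illustration):

    import time
    import psutil  # third-party library: pip install psutil

    def sample_bandwidth(interval=1.0):
        """Estimate send/receive rates by sampling system-wide interface counters."""
        before = psutil.net_io_counters()
        time.sleep(interval)
        after = psutil.net_io_counters()
        sent_rate = (after.bytes_sent - before.bytes_sent) / interval
        recv_rate = (after.bytes_recv - before.bytes_recv) / interval
        return sent_rate, recv_rate

    sent, recv = sample_bandwidth()
    print(f"Upload: {sent / 1024:.1f} KB/s, Download: {recv / 1024:.1f} KB/s")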

Router monitoring

Routers are relatively expensive parts of server systems, and their cost calls for effective monitoring to avoid incurring losses. By monitoring routers, administrators can track bandwidth and avoid oversubscription, which could lead to paying for more than is needed. By monitoring bandwidth carefully, administrators can avoid congestion and other related network problems.

Monitoring a router helps optimize bandwidth allocations, ensuring that large networks run smoothly. Router monitoring entails spotting a problem and initiating upgrades, or even replacements if necessary. Importantly, the administrator is made aware of traffic trends, which in turn makes it possible to plan for capacity and achieve the best possible ROI.

Switch monitoring

To avoid negative effects or failures on a LAN, protective switch monitoring systems should always be in place. These systems allow monitoring of port utilization and traffic, and will provide the necessary alerts if anything goes wrong. The monitors detect potential problems and in turn help prevent them.

The switch monitoring system will alert administrators if a port is heavily used and direct them to a more underutilized port. To prevent data loss, the administrator will be notified if a port starts to discard packets; switch port mappers are used to quickly check the status of any device connected to the port in question.

NetFlow monitoring and packet sniffing

It’s important for an administrator to know when their bandwidth is being used, who is using it, and why. This is where NetFlow comes in: it keeps administrators up to date about who is using what on the network. It is an important tool that shows the administrator how current usage may affect the network. If the administrator discovers that the bandwidth is overstretched and overused, they can take the necessary steps to avert the danger.

Packet sniffing can also be used when monitoring a network, as it captures and records data flow. This approach allows every single packet to be examined and analyzed against predefined parameters. It is an addition to standard bandwidth monitoring, and the sensors make use of the host machine’s network cards.
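
As a small illustration of packet capture, the third-party Scapy library can sniff traffic in a few lines (assuming Scapy is installed and the script runs with sufficient privileges; dedicated monitoring products use purpose-built sensors rather than a script like this):

    from scapy.all import sniff  # third-party library: pip install scapy

    def summarize(packet):
        """Print a one-line summary of each captured packet."""
        print(packet.summary())

    # Capture 10 packets from the default interface; requires root/administrator rights.
    sniff(count=10, prn=summarize)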

Network and VoIP monitoring

Network monitoring software plays an important role, as it keeps the administrator informed before a failure or malfunction occurs within a network. This is useful because the administrator can take pre-emptive measures to prevent future flaws, which in turn lowers the cost of repairing any issues. Effective network monitoring increases the efficiency of a network, as it keeps track of bandwidth and data consumption.

Network monitors are easy to install and just as easy to use. They support remote control, providing notification techniques and multiple-location monitoring. VoIP monitors are built on powerful QoS sensors that can measure jitter, the latency of a given network, and packet loss. With a good network monitoring tool, an administrator can be informed of data usage instantly and can be alerted and warned if the monitoring tools detect a deterioration in quality.
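
To make the QoS terms concrete, here is a small sketch that computes average latency and jitter (here taken as the mean absolute difference between consecutive latency samples); the round-trip times are invented values for illustration:

    # Hypothetical round-trip times in milliseconds, e.g. gathered by pinging a VoIP gateway.
    latencies = [20.1, 22.4, 19.8, 25.0, 21.2]

    average_latency = sum(latencies) / len(latencies)

    # Jitter: mean absolute difference between consecutive latency samples.
    diffs = [abs(b - a) for a, b in zip(latencies, latencies[1:])]
    jitter = sum(diffs) / len(diffs)

    print(f"Average latency: {average_latency:.1f} ms, jitter: {jitter:.1f} ms")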

These days, malware attacks are common and the need for network monitoring is more pressing than ever. Protecting data, especially that of customers, is vital to ensure that a business performs well in the modern connected age.

Is there anything else you monitor that you feel is critical to an IT department? If so, tell us about it in the comments below! And be sure to share this post with friends if you find it useful!


What is IPv6 and an IPv6 Address?


You are probably aware that today’s communications rely very heavily on the Internet Protocol. An IP address is a unique identifier for each machine connected to the Internet. With the ever-growing size of the Internet, the demand for IP addresses has increased drastically. Today’s IPv4 supports 2^32 addresses. This number may make it sound like there is plenty of space to accommodate all the devices in the world, but the truth is that there are not many free IPv4 address blocks left. This means that in the coming years, or even earlier, the whole IPv4 space will be exhausted.

This raises a huge problem for the Internet’s expansion, meaning that new technologies must be adopted to overcome the situation. This is why the new IPv6 technology was developed and has been implemented in some parts of the world. An IPv6 address is a 128-bit binary value that is represented by 32 hexadecimal digits. The IPv6 block would be sufficient to assign trillions of addresses to everyone in the world.

Plug and Play

There are some real advantages to using IPv6, and this article will cover the most important aspects of the protocol. IPv6 supports autoconfiguration, meaning that no human interaction is needed for its configuration. This feature provides a good plug-and-play mechanism for all network devices, along with global coverage and flexibility. By having enough IP addresses, services like NAT (network address translation) are no longer required.

This means that the transition from private networks to public ones, and vice versa, happens instantly without address translation. IPv6 supports multi-homing, in which a physical device can hold multiple IP addresses. IPv6 has a simpler header than IPv4, which brings some real advantages, like increased routing performance and no broadcasts or checksums. IPsec support is built into IPv6, which is why it provides increased security and easier portability for mobile users (mobile users can change networks without decreasing security).

Converting to IPv6 Will Take Some Time


The Internet’s core communications will change to support the IPv6 protocol. To carry out this transition successfully, the whole operation must be done transparently to end users, without any downtime. This process will take time and will not happen overnight, meaning that both IPv4 and IPv6 will function at the same time. The migration between the two technologies will be done using transition mechanisms like 6to4 tunneling, dual-stack routing and others.

I’ve written earlier that IPv6 addresses are composed of 128 bits represented as 32 hexadecimal digits. Each hexadecimal digit can be converted to a binary string. Let’s take the following example: 2a80::9c62:ef40:769a:464b. This address contains a combination of numbers and letters; remember that in hexadecimal numbering 10 is represented by the letter A, 11 by B, and so on. By converting each group of 4 hexadecimal values we can obtain the binary value of the whole address. You can see that there is a double colon (::) in the address. This notation means that one or more all-zero groups have been omitted to complete the whole 128-bit address; the same address could be written as 2a80:0000:0000:0000:9c62:ef40:769a:464b. Remember that this mechanism can be used only once per address. Simply put, :: stands for a run of all-zero blocks. For example, the IPv6 loopback address is written as ::1. Also, with IPv6 we can write a group of four zeros as a single 0, as in the following example: 01a3:0:25b6:9c62:ef40:769a:0:464b.
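
Python’s standard ipaddress module can verify this notation; the following sketch expands and compresses the addresses from the example above:

    import ipaddress  # standard library

    addr = ipaddress.ip_address("2a80::9c62:ef40:769a:464b")
    print(addr.exploded)    # 2a80:0000:0000:0000:9c62:ef40:769a:464b
    print(addr.compressed)  # 2a80::9c62:ef40:769a:464b

    # The loopback address written in full and in shorthand:
    print(ipaddress.ip_address("::1").exploded)  # 0000:...:0000:0001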

There are several types of IPv6 addresses:

Private addresses: these are reserved IP addresses used inside organizations and are not routable on the Internet. These IPv6 addresses start with FE followed by a digit in the 8 to F range. These private IP addresses are divided into two categories:

Link-local addresses – IPs used for communications inside a physical network segment. These IPs are not routable and are used for special tasks like neighbor discovery or autoconfiguration. These addresses start with FE followed by a value from 8 to B, for example fe80::9c62:ef40:769a:464b.

Site-local addresses – even though site-local IP addresses are deprecated, you may still be interested to learn about them. These are IPv6 addresses similar to IPv4 private IPs. They begin with FE followed by a value from C to F.

Loopback address – we’ve talked about this earlier; an IPv6 address used for testing purposes. By pinging this IP we redirect the traffic to the same machine (looping back the traffic).

Reserved addresses – these IPs are not leased to anyone and are reserved for future development.

Global unicast addresses – IPs allocated to the five global RIRs. From Wikipedia:

“A regional Internet registry (RIR) is an organization that manages the allocation and registration of Internet number resources within a particular region of the world. Internet number resources include IP addresses and autonomous system (AS) numbers”.

Unspecified address – ::/128, the address with all bits set to 0 (corresponding to 0.0.0.0/32 in IPv4).
“This address must never be assigned to an interface and is to be used only in software before the application has learned its host’s source address appropriate for a pending connection. Routers must not forward packets with the unspecified address.
Applications may be listening on one or more specific interfaces for incoming connections, which are shown in listings of active internet connections by a specific IP address (and a port number, separated by a colon). When the unspecified address is shown it means that an application is listening for incoming connections on all available interfaces.”

Multicast addresses – though broadcasting is not implemented in IPv6, multicast addresses are used for several operations. In multicasting, one packet is transmitted from one node to several network devices at the same time using one IPv6 address. Because IPv6 supports stateless address configuration, devices use a combination of multicast addressing, the Neighbor Discovery Protocol and ICMPv6 to acquire a network address.
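
The same ipaddress module can classify several of the address types listed above, which makes for a quick way to check the rules described in this section:

    import ipaddress  # standard library

    samples = ["fe80::9c62:ef40:769a:464b", "::1", "::", "ff02::1", "2a80::1"]
    for text in samples:
        addr = ipaddress.ip_address(text)
        kind = ("link-local" if addr.is_link_local else
                "loopback" if addr.is_loopback else
                "unspecified" if addr.is_unspecified else
                "multicast" if addr.is_multicast else
                "global/other")
        print(text, "->", kind)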

I’ve written earlier that the transition from IPv4 to IPv6 will not happen overnight, so network devices will need to support both protocols at the same time. For this reason, new technologies were developed to support this transition:

Dual stack is one example of a transition mechanism in which network devices support both IPv4 and IPv6 over the same network portion. These devices use two protocol stacks, one for IPv4 and another one for IPv6. Based on the destination address, a router will choose which stack it will use.
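
On the host side, a dual-stack server can be approximated in Python by opening an IPv6 socket and clearing the IPV6_V6ONLY option, so the same socket also accepts IPv4 clients via mapped addresses (a minimal sketch; behaviour varies by operating system, and the port number is arbitrary):

    import socket

    server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # Allow the IPv6 socket to accept IPv4 connections as well (mapped addresses).
    server.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    server.bind(("::", 8080))  # "::" listens on all interfaces, both protocols
    server.listen(5)
    print("Dual-stack server listening on port 8080")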

IPv6 tunneling mechanism – IPv6 network packets are encapsulated into IPv4 packets. The packet header changes a little, meaning that it will contain an identifier for the IPv6 packet, the IPv6 data and an IPv4 header. This is not an efficient transmission mechanism, but it serves as an intermediate solution for transmitting IPv6 packets without changing the network devices. This tunneling mechanism can be configured in two ways: manual IPv6-to-IPv4 tunneling and dynamic 6to4 tunneling. Other well-known tunneling mechanisms include ISATAP and Teredo.

NAT Protocol Translation or NAT-PT – in this technology, routers will forward packets from an IPv4 to an IPv6 network and vice versa.

IPv6 is a complex Internet Protocol that will be adopted globally. There are many aspects to consider when implementing this protocol, and I probably don’t know all of them. When using automatic IPv6 configuration, the MAC address of your physical interface is integrated into the IPv6 address to form a unique address. This raises a big concern in terms of security, because MAC addresses can be discovered by malicious users and users’ actions can be tracked over the Internet. I don’t know how this problem is resolved in IPv6, so please add anything you think is relevant to this topic.

If you’ve enjoyed this POST please SHARE it with others and stay tuned…..!!!

 

Step by Step Linux OS Boot Sequence

In this topic we will discuss the Linux boot sequence in depth: how does a Linux system boot? This will help administrators in troubleshooting boot-up problems. Before going into the details, I will note down the major components responsible for the booting process.

1. BIOS (Basic Input/Output System)

2. MBR (Master Boot Record)

3. LILO or GRUB

   LILO: LInux LOader

   GRUB: GRand Unified Bootloader

4. Kernel

5. init

6. Run levels

1. BIOS

i. When we power on the system, the BIOS performs a Power-On Self-Test (POST) on all of the different hardware components to make sure everything is working properly.

ii. It also checks whether the computer is being started from an off position (cold boot) or from a restart (warm boot).

iii. It retrieves information from CMOS (Complementary Metal-Oxide Semiconductor), a battery-operated memory chip on the motherboard that stores the time, date, and critical system information.

iv. Once the BIOS sees that everything is fine, it begins searching for an operating system boot sector, i.e. a valid master boot record, on all available drives such as hard disks, CD-ROM drives, etc.

v. Once the BIOS finds a valid MBR, it loads and executes the first 512-byte boot sector, that is, the first sector (“sector 0”) of a partitioned data storage device such as a hard disk or CD-ROM.

2. MBR

i. Normally we use a multi-level boot loader. Here, by MBR I am referring to the DOS MBR.

ii. After the BIOS executes a valid DOS MBR, the DOS MBR searches for a valid primary partition marked as bootable on the hard disk.

iii. If the MBR finds a valid bootable primary partition, it executes the first 512 bytes of that partition, which contain the second-level boot loader.

iv. In Linux we have two common second-level boot loaders of the kind mentioned above, known as LILO and GRUB. The MBR layout can be inspected directly, as the sketch below shows.
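
The following sketch reads the first 512 bytes of an MBR dump and checks the boot signature and the bootable flag of each primary partition (the file name mbr.bin is a placeholder; such a dump can be created with dd, and reading a disk device directly requires root):

    # Minimal sketch: inspect an MBR dump (e.g. created with `dd if=/dev/sda of=mbr.bin count=1`).
    with open("mbr.bin", "rb") as f:
        mbr = f.read(512)

    # The last two bytes of a valid MBR are the boot signature 0x55 0xAA.
    print("Valid MBR:", mbr[510:512] == b"\x55\xaa")

    # The partition table starts at offset 446: four 16-byte entries.
    for i in range(4):
        entry = mbr[446 + i * 16: 446 + (i + 1) * 16]
        bootable = entry[0] == 0x80  # 0x80 marks the active (bootable) partition
        print(f"Partition {i + 1}: bootable={bootable}, type=0x{entry[4]:02x}")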

3. LILO

i. LILO is a Linux boot loader which is too big to fit into a single 512-byte sector.

ii. So it is divided into two parts: an installer and a runtime module.

iii. The installer module places the runtime module in the MBR. The runtime module has the information about all the operating systems installed.

iv. When the runtime module is executed, it selects the operating system to load and transfers control to the kernel.

v. LILO does not understand filesystems; it treats the boot images to be loaded as raw disk offsets.

GRUB

i. GRUB MBR consists of 446 bytes of primary bootloader code and 64 bytes of the partition table.

ii. GRUB locates all the operating systems installed and presents a menu to select the operating system to be loaded.

iii. Once the user selects an operating system, GRUB passes control to the kernel of that operating system. See below for the differences between LILO and GRUB.

4. Kernel

i. Once GRUB or LILO transfers control to the kernel, the kernel does the following tasks:

  • Initializes devices and loads the initrd module
  • Mounts the root filesystem

5. Init

i. The kernel, once it is loaded, finds init in /sbin (/sbin/init) and executes it.

ii. Hence the first process started in Linux is the init process.

iii. This init process reads the /etc/inittab file and sets the path, starts swapping, checks the file systems, and so on.

iv. It runs all the boot scripts (/etc/rc.d/*, /etc/rc.boot/*).

v. It starts the system at the run level specified in the /etc/inittab file.

6. Runlevel

i. There are 7 run levels in which the Linux OS runs, and different run levels serve different purposes. The descriptions are given below.

  • 0  – halt
  • 1  – Single user mode
  • 2  – Multiuser, without NFS (The same as 3, if you don’t have networking)
  • 3  – Full multiuser mode
  • 4  – unused
  • 5  – X11
  • 6  – Reboot

ii. We can set which run level we want the operating system to run at by defining it in the /etc/inittab file.

Now, as per the setting in /etc/inittab, the operating system boots to the specified run level and finishes the boot-up process.
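
As a small illustration, the default run level can be read out of /etc/inittab programmatically. This sketch looks for the initdefault entry, which applies to classic SysV-init systems that still use /etc/inittab:

    # Sketch: find the default run level in /etc/inittab (SysV-init systems).
    def default_runlevel(path="/etc/inittab"):
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Entries look like "id:3:initdefault:"; skip comments and blanks.
                if line and not line.startswith("#") and "initdefault" in line:
                    return line.split(":")[1]
        return None

    print("Default run level:", default_runlevel())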

Below are a few important differences between LILO and GRUB:

LILO                                                      | GRUB
LILO has no interactive command interface                 | GRUB has an interactive command interface
LILO does not support booting from a network              | GRUB supports booting from a network
If you change the LILO config file, you have to rewrite the LILO stage-one boot loader to the MBR | GRUB automatically detects any change in its config file and loads the OS accordingly
LILO supports only the Linux operating system             | GRUB supports a large number of operating systems

To know more about the booting process you can follow the links below:

http://www.ibm.com/developerworks/linux/library/l-linuxboot/ [HTML Web version]

OR

https://s3.ap-south-1.amazonaws.com/global-live-docs/MyPassionBehindBlogging/linuxboot_IBM.pdf [Downloadable PDF]


Database Scalability : Vertical Scaling vs Horizontal Scaling

Every organization must have the ability to handle an increase or decrease in business demands. So, it is crucial that businesses are equipped with database scalability. Let’s read more about it.

Database Scalability

Scalability means the ability to expand computer resources to handle exponential growth of work. It refers to the system’s capacity to handle an increase in load by increasing the total output when resources are added. Database scalability means the ability of a system’s database to scale up or down as required. If the database isn’t scalable, processes can slow down or even fail, which can be quite detrimental to business operations. Further, scalability enables the database to grow to a larger size to support more transactions as the volume of business and/or customer count grows.
There are two types of database scalability:

  1. Vertical Scaling or Scale-up
  2. Horizontal Scaling or Scale-out

Let us look at each of them in detail.

Scale-up or Vertical Scaling

It refers to the process of adding more physical resources, such as memory, storage and CPU, to the existing database server to improve performance. Vertical scaling upgrades the capacity of the existing database server and results in a robust system.

Pros of Scaling-Up

  • It consumes less power than running multiple servers
  • Administrative effort is reduced, as you need to handle and manage just one system
  • Cooling costs are lower than with horizontal scaling
  • Reduced software costs
  • Implementation isn’t difficult
  • Licensing costs are lower
  • Application compatibility is retained

Cons of Scaling up

  • There is a greater risk of hardware failure, which can cause bigger outages
  • Limited scope for future upgrades
  • Severe vendor lock-in
  • The overall cost of implementation can be high

Scale-Out or Horizontal Scaling

When you add more servers with less RAM and fewer processors, it is known as horizontal scaling. It can also be defined as the ability to increase capacity by connecting multiple software or hardware entities in such a manner that they function as a single logical unit. It is cheaper as a whole and can scale almost without limit, though some limits are imposed by software or other attributes of an environment’s infrastructure. When the servers are clustered, the original server is scaled out horizontally. If a cluster requires more resources to improve its performance and provide high availability, the administrator can scale out by adding more servers to the cluster.

Pros of Scaling-out

  • Much cheaper than scaling up
  • Takes advantage of smaller systems
  • Easy to upgrade
  • Improved resilience due to the presence of multiple, discrete systems
  • Easier to achieve fault tolerance
  • Supports linear increases in capacity

Cons of Scaling-out

  • Licensing fees are higher
  • Utility costs such as cooling and electricity are higher
  • It has a bigger footprint in the data center
  • More networking equipment, such as routers and switches, may be needed

Scaling out is not a new concept, but it has gained momentum as storage startups create new structures that leverage the power of modern x86 servers, addressing various limitations of older scale-up architectures. However, this model has some risks. Each x86 server is a failure domain that doesn’t exist in a scale-up environment; this is usually handled by the layout of data across the cluster, where copies of data are kept on at least two nodes. Another risk of scaling out is that upgrading the nodes is quite complex: if you want to roll out new software to a hundred nodes, it becomes quite a tedious task.


 

Horizontal scaling means that you scale by adding more machines to your pool of resources, whereas vertical scaling means that you scale by adding more power (CPU, RAM) to an existing machine.

An easy way to remember this is to think of a machine on a server rack: we add more machines in the horizontal direction and add more resources to a machine in the vertical direction.


In the database world, horizontal scaling is often based on partitioning the data, i.e. each node contains only part of the data; in vertical scaling the data resides on a single node and scaling is done through multi-core, i.e. spreading the load between the CPU and RAM resources of that machine.
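
A toy sketch of that kind of partitioning: hash each key to pick which node stores the record (the node names are hypothetical, and real systems use consistent hashing and replication, which this example omits):

    import hashlib

    NODES = ["node-a", "node-b", "node-c"]  # hypothetical database servers

    def node_for_key(key: str) -> str:
        """Map a record key to one of the nodes by hashing (simple modulo sharding)."""
        digest = hashlib.md5(key.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    for user_id in ["alice", "bob", "carol"]:
        print(user_id, "->", node_for_key(user_id))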

With horizontal scaling it is often easier to scale dynamically by adding more machines to the existing pool; vertical scaling is often limited to the capacity of a single machine, and scaling beyond that capacity often involves downtime and comes with an upper limit.

Good examples of horizontal scaling are Cassandra and MongoDB; a good example of vertical scaling is MySQL on Amazon RDS (the cloud version of MySQL), which provides an easy way to scale vertically by switching from small to bigger machines. This process often involves downtime.

In-memory data grids such as GigaSpaces XAP, Coherence etc. are often optimized for both horizontal and vertical scaling, simply because they’re not bound to disk: horizontal scaling through partitioning, and vertical scaling through multi-core support.

Database scalability helps in eliminating performance bottlenecks. Understand your company’s scalability needs and implement accordingly. It is critical that you weigh the pros and cons of both vertical and horizontal scaling before you decide what to implement. What works for other companies may not work for you. Check the benefits of both types against your requirements and implement the right one. Undoubtedly, you will be amazed by the results you achieve.

SQL vs NoSQL: High-Level Differences

  • SQL databases are primarily called relational databases (RDBMS), whereas NoSQL databases are primarily called non-relational or distributed databases.
  • SQL databases are table-based databases, whereas NoSQL databases are document-based, key-value pairs, graph databases or wide-column stores. This means that SQL databases represent data in the form of tables consisting of a number of rows, whereas NoSQL databases are collections of key-value pairs, documents, graphs or wide columns which do not have standard schema definitions that they need to adhere to.
  • SQL databases have a predefined schema, whereas NoSQL databases have a dynamic schema for unstructured data.
  • SQL databases are vertically scalable, whereas NoSQL databases are horizontally scalable. SQL databases are scaled by increasing the horsepower of the hardware. NoSQL databases are scaled by adding database servers to the pool of resources to reduce the load.
  • SQL databases use SQL (structured query language) for defining and manipulating data, which is very powerful. In NoSQL databases, queries are focused on collections of documents; this is sometimes called UnQL (Unstructured Query Language), and the syntax of UnQL varies from database to database. (See the query sketch after this list.)
  • SQL database examples: MySQL, Oracle, SQLite, Postgres and MS SQL Server. NoSQL database examples: MongoDB, BigTable, Redis, RavenDB, Cassandra, HBase, Neo4j and CouchDB.
  • For complex queries: SQL databases are a good fit for query-intensive environments, whereas NoSQL databases are not a good fit for complex queries. On a high level, NoSQL databases don’t have standard interfaces to perform complex queries, and the queries themselves are not as powerful as SQL.
  • For the type of data to be stored: SQL databases are not the best fit for hierarchical data storage, but NoSQL databases fit better, as they follow the key-value pair way of storing data, similar to JSON. NoSQL databases are highly preferred for large data sets (i.e. for big data); HBase is an example for this purpose.
  • For scalability: in most typical situations, SQL databases are vertically scalable. You can manage increasing load by adding CPU, RAM, SSD, etc., on a single server. On the other hand, NoSQL databases are horizontally scalable. You can just add a few more servers to your NoSQL database infrastructure to handle large traffic.
  • For highly transactional applications: SQL databases are the best fit for heavy-duty transactional applications, as they are more stable and promise atomicity as well as integrity of the data. While you can use NoSQL for transactional purposes, it is still not comparable or stable enough under high load and for complex transactional applications.
  • For support: excellent support is available for all SQL databases from their vendors. There are also a lot of independent consultants who can help you with SQL databases for very large scale deployments. For some NoSQL databases you still have to rely on community support, and only limited outside experts are available to help you set up and deploy large scale NoSQL deployments.
  • For properties: SQL databases emphasize ACID properties (Atomicity, Consistency, Isolation and Durability), whereas NoSQL databases follow Brewer’s CAP theorem (Consistency, Availability and Partition tolerance).
  • For DB types: on a high level, we can classify SQL databases as either open source or closed source from commercial vendors. NoSQL databases can be classified, on the basis of the way they store data, as graph databases, key-value store databases, document store databases, column store databases and XML databases.
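
To make the query-style difference concrete, here is a sketch comparing a SQL query (using Python’s built-in sqlite3) with the equivalent document-style filter as it would look in a MongoDB-like store (the pymongo call is shown as a comment, since it needs a running MongoDB server; the data is invented for the example):

    import sqlite3  # standard library

    # SQL style: tables with a predefined schema, queried with SQL.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, age INTEGER)")
    db.executemany("INSERT INTO users VALUES (?, ?)", [("alice", 34), ("bob", 25)])
    print(db.execute("SELECT name FROM users WHERE age > 30").fetchall())

    # NoSQL (document) style: schema-free documents, queried with filter objects.
    # With pymongo this would look roughly like:
    #   users.find({"age": {"$gt": 30}})
    users = [{"name": "alice", "age": 34}, {"name": "bob", "age": 25}]
    print([u["name"] for u in users if u["age"] > 30])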

Popular SQL databases and RDBMS’s

  • MySQL—the most popular open-source database, excellent for CMS sites and blogs.
  • Oracle—an object-relational DBMS written in the C++ language. If you have the budget, this is a full-service option with great customer service and reliability. Oracle has also released an Oracle NoSQL database.
IBM DB2—a family of database server products from IBM that are built to handle advanced “big data” analytics.
  • Sybase—a relational model database server product for businesses primarily used on the Unix OS, which was the first enterprise-level DBMS for Linux.
  • MS SQL Server—a Microsoft-developed RDBMS for enterprise-level databases that supports both SQL and NoSQL architectures.
  • Microsoft Azure—a cloud computing platform that supports any operating system, and lets you store, compute, and scale data in one place. A recent survey even put it ahead of Amazon Web Services and Google Cloud Storage for corporate data storage.
  • MariaDB—an enhanced, drop-in version of MySQL.
  • PostgreSQL—an enterprise-level, object-relational DBMS that uses procedural languages like Perl and Python, in addition to SQL-level code.

Popular NoSQL Databases

  • MongoDB—the most popular NoSQL system, especially among startups. A document-oriented database with JSON-like documents in dynamic schemas instead of relational tables that’s used on the back end of sites like Craigslist, eBay, Foursquare. It’s open-source, so it’s free, with good customer service.
  • Apache’s CouchDB—a true DB for the web, it uses the JSON data exchange format to store its documents; JavaScript for indexing, combining and transforming documents; and, HTTP for its API.
  • HBase—another Apache project, developed as a part of Hadoop, this open-source, non-relational “column store” NoSQL DB is written in Java, and provides BigTable-like capabilities.
  • Oracle NoSQL—Oracle’s entry into the NoSQL category.
  • Apache’s Cassandra DB—born at Facebook, Cassandra is a distributed database that’s great at handling massive amounts of structured data. Anticipate a growing application? Cassandra is excellent at scaling up. Examples: Instagram, Comcast, Apple, and Spotify.
  • Riak—an open-source key-value store database written in Erlang. It has fault-tolerance replication and automatic data distribution built in for excellent performance.
  • Redis – an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

What database solution is right for you???

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!


Introduction to OpenStack


OpenStack is an open-source cloud infrastructure solution for public and private clouds. It is composed of several modules that control large pools of compute, storage and networking resources throughout a datacenter. To facilitate handling of these components, OpenStack implements a dashboard (Horizon).

OpenStack was originally developed in 2010 as a joint project of Rackspace and NASA. In September 2012 control of OpenStack was transferred to the OpenStack Foundation to promote development, distribution and adoption within the community, and nowadays more than 500 companies are part of the project, some as big as AT&T, IBM, Red Hat or Intel. OpenStack is provided under an Apache 2.0 license.


Components:

OpenStack consists of many components, developed as independent projects, that can be combined in custom deployments that only expose the functionality required for the intended applications.

There are two kinds of OpenStack components as regards governance:

Core components, which are common to most deployments and are developed and released in a unified way;

And “big tent” components, which are developed and released independently, but adhere to OpenStack processes: an open-source license, an open community, and use of the OpenStack build model.

Core Components:


The following list gives a brief description of the main components. For more information about the components you can visit the OpenStack software page:

  1. Compute (Nova): the main part of an IaaS system. It is designed to launch and manage instances, and it also coordinates between services.
  2. Object Storage (Swift): an object storage system offering cloud storage software so that you can store and retrieve arbitrary data with a simple API. Its main feature is that it is built for scale; it is highly recommended for storing unstructured data that can grow without bounds.
  3. Networking (Neutron): the project that provides network connectivity as a service between devices managed by OpenStack services.
  4. Block Storage (Cinder): designed to present block storage resources to end users that can be consumed by Nova.
  5. Identity (Keystone): the identity service for authentication and authorization. It currently supports token-based authN and user-service authorization; future versions will support OAuth, SAML and OpenID.
  6. Image (Glance): designed to manage images and snapshots, which can be stored using Swift.
  7. Dashboard (Horizon): the canonical implementation of OpenStack’s dashboard, which provides a web-based user interface to OpenStack services including Nova, Swift, Keystone, etc.
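
For a feel of how these services are consumed, here is a hedged sketch using the openstacksdk client library (assuming it is installed and a cloud named “mycloud” is configured in clouds.yaml; the cloud name is a placeholder):

    import openstack  # third-party library: pip install openstacksdk

    # Connect using credentials defined for "mycloud" in clouds.yaml (placeholder name).
    conn = openstack.connect(cloud="mycloud")

    # Nova: list compute instances.
    for server in conn.compute.servers():
        print("server:", server.name)

    # Glance: list available images.
    for image in conn.image.images():
        print("image:", image.name)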

Releases

OpenStack is developed and released in 6-month cycles, although big tent components are released independently of OpenStack releases.

This is a brief summary of what we consider important for anyone who wants to enter the world of OpenStack. If you want to expand on this knowledge, we recommend stopping by the official website of the project.

References: https://www.openstack.org/

Don’t forget to like & share this post on social networks!!! I will keep on updating this blog. Please do follow!!!