Containers: Passing fad or tech nirvana?

Linux containers are hot right now in software development.

What are the main benefits of Linux containers?

There are two main categories of containers: host-based and application-based.

Host-based containers are akin to lightweight VMs. They run an entire “host” of applications inside of them and function like little machines. Their initial benefit is that they have less overhead than virtualization technologies like KVM or VMware. You can more fully utilize your hardware resources by cramming more containers onto a host box than you could with VMs.

Application-based containers run just a single process or, for the slightly impure, a tight collection of processes that provide a single service. Enthusiasts have declared these containers perfect for the equally trendy microservices. One of their benefits is isolating services’ deployment and implementation. The application developer has complete control over the library versions and code in the container, and being encapsulated, the container runs on any capable host machine. Equally important, the developer doesn’t have to worry about extra dependencies or software conflicts. Need Python 3 instead of 2? No problem! It runs in its own container, so it won’t clash with all the other Python 2 applications. Think of an application container as the ultimate packaging system.
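
As a quick illustration (a sketch assuming Docker is installed and using the public python:3 image), a Python 3 program can run in its own container without touching a host full of Python 2 applications:

# pull and run a throwaway Python 3 container; it is removed on exit
docker run --rm python:3 python3 -c 'print("hello from Python 3")'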

Since a container’s root filesystem contains only the code that your application or host needs to do its job (forget about the extraneous OS files), its disk footprint can be much smaller. This is a clear win for higher-density workloads.

Another benefit that I have perceived with both types of containers is their portability and speed. While there has been a lot of work with VMs to make them portable using packaging standards, it is still a non-trivial process to get a VM to run under multiple hypervisor technologies. Containers are much simpler, some consisting of just metadata and a tarball of the root filesystem. Popular container formats are already being passed around from one public cloud provider to another quite easily. Try this with a VM by exporting an EC2 AMI and importing it into GCE–it isn’t seamless.

Containers are fast because there is no virtualized hardware to boot. You start the container and it is almost instantly running. In many cases we are talking sub-second start-up times. Because there is no virtual hardware to pass through, they don’t experience a CPU or I/O penalty.
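
A rough, informal way to see this start-up speed for yourself (a sketch assuming Docker and the small alpine image; the first run will be slower while the image downloads):

# time how long it takes to start a container, run a no-op, and exit
time docker run --rm alpine true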

A final benefit in my mind is that containers are being realized foremost by open-source developers and tools. Unlike other data center trends of past years, open source appears to be the leading force behind Linux containers.

What are the downsides to Linux containers?

The kernel tech used to enable Linux containers has been around for a while, but the tools to easily manage them are nascent and rapidly changing. Best practices are still being formed–some may even turn out to be anti-patterns. Betting important projects on a toolset that may soon become extinct is a definite downside.

Some areas in which there are still no battle-tested solutions for containers include networking and persistent storage. There are lots of ways to do this, but they are all very new, and there are no clear winners. And what about security? When VMs first came out, there were (still are?) concerns about lurking vulnerabilities that allow people to break out of the guest OS and infiltrate the host. Likewise, containers and related tools are still building up their security defenses. There are known security concerns and likely more than a few undiscovered risks.

Lastly, how does one sanely manage a data center filled with tens of thousands of containers? These things, by design, are supposed to run in higher densities than VMs. Think about multi-tenancy, resource management, service discovery, advanced scheduling, data locality, troubleshooting, etc. In my estimation, we don’t yet have tools that help us maintain this volume of complexity.

Don’t get me wrong, these problems are being worked on right now by very motivated people. Most of these downsides will be addressed sooner rather than later.

Are Linux containers here to stay?

Yes, they are. For one, containers have already been around for a long time. Upstream kernel support for Linux containers may be new to the party, but AIX LPARs, Solaris Zones, and BSD jails have been in use for years. Even on Linux, OpenVZ has made containers a staple of multi-tenant web hosting.

These non-Linux platforms have proven the utility of this technology. Containers, if nothing else, will remain a tool for developing, testing, deploying, and isolating software. Time will tell whether they become indispensable staples like VMs or are relegated to lighter use. My prediction is that they will be heavily utilized and, over time, supplant VMs for many use cases.

There are a lot of tools popping up to help conquer the container landscape (e.g., LXC, Docker, Kubernetes, rkt, LXD, Mesosphere). Some of these will not survive: they will not gain the necessary adoption rates or will suffer from anemic investment. Eventually, one methodology may win out over another. It is still much too soon to foresee how things will play out.

Is now the best time for container adoption?

Because the technology is complex, IT staff will need to find creative and persuasive ways to help decision-makers understand why containers are worth adopting today. Describing the benefits to enterprises, which are many, should make a good impression: faster configuration and setup than virtual machines, shorter application delivery cycles, and a standard way for suppliers to develop and deliver software. What is most likely to get leaders’ attention, though, is the cost savings. Once you start using containers in your enterprise, you are sure to find more benefits of your own.


Networking Subnet Cheat Sheet

The tables below list commonly used subnet masks with their subnet and host counts.

Class C

Mask  Notation   Subnets   Hosts 
255.255.255.0 /24 1 256
255.255.255.128 /25 2 128
255.255.255.192 /26 4 64
255.255.255.224 /27 8 32
255.255.255.240 /28 16 16
255.255.255.248 /29 32 8
255.255.255.252 /30 64 4
255.255.255.254 /31 128 2
255.255.255.255 /32 256 1

Class B

Mask  Notation   Subnets   Hosts 
255.255.0.0 /16 1 65,536
255.255.128.0 /17 2 32,768
255.255.192.0 /18 4 16,384
255.255.224.0 /19 8 8,192
255.255.240.0 /20 16 4,096
255.255.248.0 /21 32 2,048
255.255.252.0 /22 64 1,024
255.255.254.0 /23 128 512
255.255.255.0 /24 256 256

Class A

Mask  Notation   Subnets   Hosts 
255.0.0.0 /8 1 16,777,216
255.128.0.0 /9 2 8,388,608
255.192.0.0 /10 4 4,194,304
255.224.0.0 /11 8 2,097,152
255.240.0.0 /12 16 1,048,576
255.248.0.0 /13 32 524,288
255.252.0.0 /14 64 262,144
255.254.0.0 /15 128 131,072
255.255.0.0 /16 256 65,536
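
The host counts in the tables above follow directly from the prefix length: a /n prefix leaves 32-n host bits, giving 2^(32-n) addresses. A quick sanity check in bash:

# addresses in a /26: 2^(32-26) = 64
echo $(( 2 ** (32 - 26) ))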

Netmask (binary) CIDR Notes
_____________________________________________________________________________
255.255.255.255 11111111.11111111.11111111.11111111 /32 Host (single addr)
255.255.255.254 11111111.11111111.11111111.11111110 /31 Unusable
255.255.255.252 11111111.11111111.11111111.11111100 /30 2 usable
255.255.255.248 11111111.11111111.11111111.11111000 /29 6 usable
255.255.255.240 11111111.11111111.11111111.11110000 /28 14 usable
255.255.255.224 11111111.11111111.11111111.11100000 /27 30 usable
255.255.255.192 11111111.11111111.11111111.11000000 /26 62 usable
255.255.255.128 11111111.11111111.11111111.10000000 /25 126 usable
255.255.255.0 11111111.11111111.11111111.00000000 /24 “Class C” 254 usable

255.255.254.0 11111111.11111111.11111110.00000000 /23 2 Class C’s
255.255.252.0 11111111.11111111.11111100.00000000 /22 4 Class C’s
255.255.248.0 11111111.11111111.11111000.00000000 /21 8 Class C’s
255.255.240.0 11111111.11111111.11110000.00000000 /20 16 Class C’s
255.255.224.0 11111111.11111111.11100000.00000000 /19 32 Class C’s
255.255.192.0 11111111.11111111.11000000.00000000 /18 64 Class C’s
255.255.128.0 11111111.11111111.10000000.00000000 /17 128 Class C’s
255.255.0.0 11111111.11111111.00000000.00000000 /16 “Class B”

255.254.0.0 11111111.11111110.00000000.00000000 /15 2 Class B’s
255.252.0.0 11111111.11111100.00000000.00000000 /14 4 Class B’s
255.248.0.0 11111111.11111000.00000000.00000000 /13 8 Class B’s
255.240.0.0 11111111.11110000.00000000.00000000 /12 16 Class B’s
255.224.0.0 11111111.11100000.00000000.00000000 /11 32 Class B’s
255.192.0.0 11111111.11000000.00000000.00000000 /10 64 Class B’s
255.128.0.0 11111111.10000000.00000000.00000000 /9 128 Class B’s
255.0.0.0 11111111.00000000.00000000.00000000 /8 “Class A”

254.0.0.0 11111110.00000000.00000000.00000000 /7
252.0.0.0 11111100.00000000.00000000.00000000 /6
248.0.0.0 11111000.00000000.00000000.00000000 /5
240.0.0.0 11110000.00000000.00000000.00000000 /4
224.0.0.0 11100000.00000000.00000000.00000000 /3
192.0.0.0 11000000.00000000.00000000.00000000 /2
128.0.0.0 10000000.00000000.00000000.00000000 /1
0.0.0.0 00000000.00000000.00000000.00000000 /0 IP space

Net Class  Addr Range  NetMask          Net Addr Bits  Host Addr Bits  Total Hosts
A          0-127       255.0.0.0        8              24              16,777,216 (e.g. 114.0.0.0)
B          128-191     255.255.0.0      16             16              65,536 (e.g. 150.0.0.0)
C          192-223     255.255.255.0    24             8               256 (e.g. 199.0.0.0)
D          224-239     (multicast)
E          240-255     (reserved)
F          208-215     255.255.255.240  28             4               16
G          216/8       ARIN – North America
G          217/8       RIPE NCC – Europe
G          218-219/8   APNIC
H          220-221     255.255.255.248  29             3               8 (reserved)
K          222-223     255.255.255.254  31             1               2 (reserved)

(ref: RFC1375 & http://www.iana.org/assignments/ipv4-address-space )
( http://www.iana.org/numbers.htm )
———————————————————-

The current list of special use prefixes:
0.0.0.0/8
127.0.0.0/8
192.0.2.0/24
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
169.254.0.0/16
all D/E space
(ref: RFC1918 http://www.rfc-editor.org/rfc/rfc1918.txt )
(rfc search: http://www.rfc-editor.org/rfcsearch.html )

 

Martians: (updates at: www.iana.org/assignments/ipv4-address-space )

no ip source-route
access-list 100 deny ip host 0.0.0.0 any
deny ip 0.0.0.0 0.255.255.255 any log ! antispoof
deny ip 0.0.0.0 0.255.255.255 0.0.0.0 255.255.255.255 ! antispoof
deny ip any 255.255.255.128 0.0.0.127 ! antispoof
deny ip host 0.0.0.0 any log ! antispoof
deny ip host [router intf] [router intf] ! antispoof
deny ip xxx.xxx.xxx.0 0.0.0.255 any log ! lan area
deny ip 0/8 0.255.255.255 any log ! IANA – Reserved
deny ip 1/8 0.255.255.255 any log ! IANA – Reserved
deny ip 2/8 0.255.255.255 any log ! IANA – Reserved
deny ip 5/8 0.255.255.255 any log ! IANA – Reserved
deny ip 7/8 0.255.255.255 any log ! IANA – Reserved
deny ip 10.0.0.0 0.255.255.255 any log ! IANA – Private Use
deny ip 23/8 0.255.255.255 any log ! IANA – Reserved
deny ip 27/8 0.255.255.255 any log ! IANA – Reserved
deny ip 31/8 0.255.255.255 any log ! IANA – Reserved
deny ip 36-37/8 0.255.255.255 any log ! IANA – Reserved
deny ip 39/8 0.255.255.255 any log ! IANA – Reserved
deny ip 41-42/8 0.255.255.255 any log ! IANA – Reserved
deny ip 50/8 0.255.255.255 any log ! IANA – Reserved
deny ip 58-60/8 0.255.255.255 any log ! IANA – Reserved
deny ip 69-79/8 0.255.255.255 any log ! IANA – Reserved
deny ip 82-95/8 0.255.255.255 any log ! IANA – Reserved
deny ip 96-126/8 0.255.255.255 any log ! IANA – Reserved
deny ip 127/8 0.255.255.255 any log ! IANA – Reserved
deny ip 169.254.0.0 0.0.255.255 any log ! link-local network
deny ip 172.16.0.0 0.15.255.255 any log ! reserved
deny ip 192.168.0.0 0.0.255.255 any log ! reserved
deny ip 192.0.2.0 0.0.0.255 any log ! test network
deny ip 197/8 0.255.255.255 any log ! IANA – Reserved
deny ip 220/8 0.255.255.255 any log ! IANA – Reserved
deny ip 222-223/8 0.255.255.255 any log ! IANA – Reserved
deny ip 224.0.0.0 31.255.255.255 any log ! multicast
deny ip 224.0.0.0 15.255.255.255 any log ! unless MBGP-learned routes
deny ip 224-239/8 0.255.255.255 any log ! IANA – Multicast
deny ip 240-255/8 0.255.255.255 any log ! IANA – Reserved

filtered source addresses

0/8 ! broadcast
10/8 ! RFC 1918 private
127/8 ! loopback
169.254.0/16 ! link local
172.16.0.0/12 ! RFC 1918 private
192.0.2.0/24 ! TEST-NET
192.168.0/16 ! RFC 1918 private
224.0.0.0/4 ! class D multicast
240.0.0.0/5 ! class E reserved
248.0.0.0/5 ! reserved
255.255.255.255/32 ! broadcast

ARIN administrated blocks:
24.0.0.0/8 (portions of)
63.0.0.0/8
64.0.0.0/8
65.0.0.0/8
66.0.0.0/8
196.0.0.0/8
198.0.0.0/8
199.0.0.0/8
200.0.0.0/8
204.0.0.0/8
205.0.0.0/8
206.0.0.0/8
207.0.0.0/8
208.0.0.0/8
209.0.0.0/8
216.0.0.0/8
———————————————————-

well known ports: (rfc1700.txt)
www.iana.org/assignments/port-numbers

protocol numbers:
www.iana.org/assignments/protocol-numbers
www.iana.org/numbers.htm

ICMP(Types/Codes)

Testing Destination Reachability & Status
(0/0) Echo-Reply
(8/0) Echo
Unreachable Destinations
(3/0) Network Unreachable
(3/1) Host Unreachable
(3/2) Protocol Unreachable
(3/3) Port Unreachable
(3/4) Fragmentation Needed and DF set (Pkt too big)
(3/5) Source Route Failed
(3/6) Network Unknown
(3/7) Host Unknown
(3/9) DOD Net Prohibited
(3/10) DOD Host Prohibited
(3/11) Net TOS Unreachable
(3/12) Host TOS Unreachable
(3/13) Administratively Prohibited
(3/14) Host Precedence Unreachable
(3/15) Precedence Unreachable
Flow Control
(4/0) Source-Quench [RFC 1016]
Route Change Requests from Gateways
(5/0) Redirect Datagrams for the Net
(5/1) Redirect Datagrams for the Host
(5/2) Redirect Datagrams for the TOS and Net
(5/3) Redirect Datagrams for the TOS and Host
Router
(6/-) Alternate-Address
(9/0) Router-Advertisement
(10/0) Router-Solicitation
Detecting Circular or Excessively Long Routes
(11/0) Time to Live Count Exceeded
(11/1) Fragment Reassembly Time Exceeded
Reporting Incorrect Datagram Headers
(12/0) Parameter-Problem
(12/1) Option Missing
(12/2) No Room for Option
Clock Synchronization and Transit Time Estimation
(13/0) Timestamp-Request
(14/0) Timestamp-Reply
Obtaining a Network Address (RARP Alternative)
(15/0) Information-Request
(16/0) Information-Reply
Obtaining a Subnet Mask [RFC 950]
(17/0) Address Mask-Request
(18/0) Address Mask-Reply
Other
(30/0) Traceroute
(31/0) Conversion-Error
(32/0) Mobile-Redirect


Common paths in cPanel and WHM

Common paths in cPanel and WHM, useful for regular reference. Source: http://www.webhostingbuzz.com/wiki/common-paths-cpanel-and-whm/

Note: I don’t take any credit for this article (all credit goes to the original author); I’ve just converted it into markdown format for regular reference (personal preference).

For those who run cPanel/WHM on their virtual or dedicated servers, the following article describes common system paths and utilities.

Paths of Base Modules


PHP /usr/bin/php

MySQL /var/lib/mysql

Sendmail /usr/bin/sendmail

ImageMagick /usr/local/bin/convert or /usr/bin/convert

Tomcat /usr/local/jakarta/tomcat

Perl /usr/bin/perl

Ruby /usr/lib/ruby/

Ruby Gems /usr/lib/ruby/gems

FFMPEG /usr/bin/ffmpeg

Mplayer /usr/bin/mplayer

LAME /usr/local/bin/lame

FLV Tool /usr/local/bin/flvtool2

 

User Directories


Document Root /home/username

WWW Directory /home/username/public_html

CGI Directory /home/username/public_html/cgi-bin

 

cPanel / WHM Core Directories and Files


/usr/local/cpanel/bin Houses only scripts and binaries which provide installation and configuration of many cPanel services.

/var/cpanel Houses proprietary configuration data for cPanel, including:

  • Primary cPanel configuration
  • User configurations
  • Reseller configurations
  • Accounting, conversion, and update logs
  • Bandwidth data
  • Customized service templates

 

/var/cpanel/cpanel.config The primary cPanel configuration file. Each variable within influences the way cPanel behaves. Variables are line-delimited, with name and value separated by an equals sign. If this file does not exist, cPanel falls back to defaults.

/var/cpanel/resellers Lists each reseller with a comma-delimited list of WHM resources that reseller has access to.

/var/cpanel/accounting.log Contains a list of accounting functions performed through WHM, including account removal and creation.

/var/cpanel/bandwidth Contains bandwidth history files for each account, each named after its respective user. History files are stored in human-readable format, while the actual bandwidth data are stored in round-robin databases.

/var/cpanel/features Each file name is inherited from the feature list name. Contains a line-delimited list of feature variables with a value of zero or one. These variables control which cPanel resources are available to users.

/var/cpanel/packages Contains files named after the packages they represent. If a package belongs to a reseller, the file name is prefixed with the reseller’s username. Each of these values determines the value created in the cPanel user file.

/var/cpanel/users Contains a list of cPanel user configuration files named after the user they pertain to. Variables define account resources, themes, domains, etc.

Other /var/cpanel Directories

LOGS This directory contains logs from account copies/transfers.

UPDATELOGS Contains the output of each cPanel update executed on the server.

MAINIPS Named after the respective reseller users they represent, each contains only the IP address which should be used as resellers main shared IP.

ZONETEMPLATES Contains customized DNS zone templates created in WHM.

/scripts This directory houses a large number of scripts which serve as building blocks for many cPanel/WHM features. These scripts can be used to:

  • Update cPanel and many of its services
  • Customize account creation routines
  • Perform backups of cPanel accounts
  • Install and update cPanel managed services

 

cPanel Maintenance Scripts


By default, cPanel applies nightly updates at 2:13 AM server time via the root crontab. /scripts/upcp dispatches these updates using the following key components:

  • /scripts/updatenow – synchronizes /scripts directory
  • /scripts/sysup – updates cPanel managed rpms
  • /scripts/rpmup – applies all other system package updates

Updates are logged to timestamped files in /var/cpanel/updatelogs. Update configuration is stored in /etc/cpupdate.conf.
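
As a quick sketch (assuming root shell access on the server), you can trigger the same update manually and then inspect the newest log:

# run a manual cPanel update
/scripts/upcp
# list the newest update log first
ls -lt /var/cpanel/updatelogs | head -n 2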

Account Management Scripts

  • /scripts/wwwacct – account creation
  • /scripts/killacct – account termination
  • /scripts/suspendacct – account suspension
  • /scripts/unsuspendacct – account resuming
  • /scripts/addpop – create a POP account
  • /scripts/updateuserdomains – updates the user:owner and user:domain tables stored in /etc/userdomains, /etc/trueuserdomains, and /etc/trueuserowners. These tables are used to enumerate and keep track of accounts and their owners.

Package Management

/scripts/ensurerpm Takes argument list of rpms, which are then passed to the underlying package manager.

/scripts/ensurepkg The equivalent of ensurerpm for FreeBSD. Updates specified packages from ports.

/scripts/realperlinstaller Takes an argument list of Perl modules to install via CPAN. Each of the aforementioned scripts can accept a '--force' argument to force package installation.

/scripts/mysqlup Can be called to apply MySQL updates independent of upcp.

/scripts/cleanupmysqlprivs Will clean up the default MySQL privilege tables, by installing a more restrictive privilege schema.

/scripts/mysqlconnectioncheck Will verify that MySQL is accessible with the password stored in /root/.my.cnf, and force a reset with a random 16-character string if it is inaccessible.

/scripts/eximup Can be called to apply exim updates independent of upcp.

/scripts/buildeximconf Will rebuild exim.conf, and merge local, distribution, and cPanel configurations.

/scripts/rebuildnamedconf Rebuild named.conf based on existing zone files

/scripts/easyapache Download, extract, and execute apache build script

/scripts/rebuildhttpdconf Rebuilds httpd.conf based on DNS entries found in each cPanel user configuration

Other cPanel Scripts

/scripts/restartsrv_{servicename} The majority of cPanel-managed services can be restarted via scripts named after the service (e.g., /scripts/restartsrv_httpd).

/scripts/makecpphp Will rebuild the PHP interpreter used internally by cpsrvd.

/usr/local/cpanel/bin/checkperlmodules Will scan for and install any Perl modules required by cPanel.

/scripts/fullhordereset Updates Horde and resets the Horde MySQL user password.

/scripts/fixquotas Will attempt to rebuild the quota database per information stored in /etc/quota.conf.

Source: webhostingbuzz.com


Changing default RDP port of Amazon EC2 server (Microsoft Windows OS)

Below are the steps to change the RDP port for an EC2 server (Microsoft Windows OS) in AWS. Important: perform all the steps below in the same sequence, or you can lose RDP access to the server.

1. Configure your Security Group to allow inbound access on the custom port you want to use for RDP (say, 7777).

Image 1 – Allow the port via the Security Group firewall

2. Once done, the next step is to open port 7777 in the server’s firewall so that external systems can connect on this port.

You can open the port manually from the Windows Firewall settings, or run the following command if you have admin access.

netsh advfirewall firewall add rule name="Custom RDP Port" dir=in action=allow protocol=TCP localport=7777

Image 2 – Allow the port via a Windows Firewall inbound rule

3. The last step is to change the RDP listening port:

To change the port that Remote Desktop listens on, follow these steps.

Important:  This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs.

  • Start Registry Editor.
  • Locate and then click the following registry subkey:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber
  • On the Edit menu, click Modify, and then click Decimal. Type the new port number (7777 in our case), and then click OK. (A one-line alternative using reg add is shown below.)
  • Quit Registry Editor.
  • Restart the computer.
Image 3 – Port modification via Registry Editor
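
Alternatively, a sketch of the same registry change from an elevated command prompt (assumes administrator rights; port 7777 as above):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 7777 /f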

4. Now, initiate an RDP connection to the server on the new port:

Image 4 – Initiating the RDP connection

Image 5 – RDP successful over port 7777

Bingo! Our RDP connection to the EC2 server is successful.


Automated EBS snapshots using Amazon CloudWatch Events

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications running on them. CloudWatch helps you collect and track metrics for your AWS resources, and you can configure alarms to help you react when changes happen to those resources. For example, you can create an alarm for when your EC2 instance is using more CPU than its normal usage limit.

You can do more than create alarms by leveraging the CloudWatch Events feature. In this tutorial, we are going to look at one such use case of CloudWatch Events: automating the creation of EBS (Elastic Block Store) snapshots.

What is CloudWatch Events?

Amazon CloudWatch Events helps you respond to changes in your AWS resources and take the necessary corrective actions. You can create rules that self-trigger on an automated schedule in CloudWatch Events using cron or rate expressions. All scheduled events use the UTC time zone, and the minimum precision for schedules is 1 minute.

CloudWatch Events supports the following formats for schedule expressions.

Formats

  • Cron Expressions
  • Rate Expressions

Cron Expressions

Cron expressions have six required fields, which are separated by white space.

Syntax

cron(fields)
Field         Values           Wildcards
Minutes       0-59             , - * /
Hours         0-23             , - * /
Day-of-month  1-31             , - * ? / L W
Month         1-12 or JAN-DEC  , - * /
Day-of-week   1-7 or SUN-SAT   , - * ? L #
Year          1970-2199        , - * /

Wildcards

  • The , (comma) wildcard includes additional values. In the Month field, JAN,FEB,MAR would include January, February, and March.
  • The - (dash) wildcard specifies ranges. In the Day field, 1-15 would include days 1 through 15 of the specified month.
  • The * (asterisk) wildcard includes all values in the field. In the Hours field, * would include every hour.
  • The / (forward slash) wildcard specifies increments. In the Minutes field, you could enter 1/10 to specify every tenth minute, starting from the first minute of the hour (for example, the 11th, 21st, and 31st minute, and so on).
  • The ? (question mark) wildcard specifies one or another. In the Day-of-month field you could enter 7 and if you didn’t care what day of the week the 7th was, you could enter ? in the Day-of-week field.
  • The L wildcard in the Day-of-month or Day-of-week fields specifies the last day of the month or week.
  • The W wildcard in the Day-of-month field specifies a weekday. In the Day-of-month field, 3W specifies the weekday closest to the third day of the month.
  • The # wildcard in the Day-of-week field specifies a certain instance of the specified day of the week within a month. For example, 3#2 would be the second Tuesday of the month: the 3 refers to Tuesday because it is the third day of each week, and the 2 refers to the second day of that type within the month.

Limits

  • You can’t specify the Day-of-month and Day-of-week fields in the same cron expression. If you specify a value (or a *) in one of the fields, you must use a ? (question mark) in the other.
  • Cron expressions that lead to rates faster than 1 minute are not supported.

Examples

You can use the following sample cron strings when creating a rule with schedule.

Minutes  Hours  Day-of-month  Month  Day-of-week  Year  Meaning
0        10     *             *      ?            *     Run at 10:00 am (UTC) every day
15       12     *             *      ?            *     Run at 12:15 pm (UTC) every day
0        18     ?             *      MON-FRI      *     Run at 6:00 pm (UTC) every Monday through Friday
0        8      1             *      ?            *     Run at 8:00 am (UTC) every 1st day of the month
0/15     *      *             *      ?            *     Run every 15 minutes
0/10     *      ?             *      MON-FRI      *     Run every 10 minutes Monday through Friday
0/5      8-17   ?             *      MON-FRI      *     Run every 5 minutes Monday through Friday between 8:00 am and 5:55 pm (UTC)

Rate Expressions

A rate expression starts when you create the scheduled event rule, and then runs on its defined schedule.

Rate expressions have two required fields. Fields are separated by white space.

Syntax

rate(value unit)

value – A positive number.

unit – The unit of time. Valid values: minute | minutes | hour | hours | day | days.

Limits

If the value is equal to 1, then the unit must be singular. Similarly, for values greater than 1, the unit must be plural. For example, rate(1 hours) and rate(5 hour) are not valid, but rate(1 hour) and rate(5 hours) are valid.
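
As an aside, these schedule expressions also work outside the console. A sketch using the AWS CLI (assuming it is installed and configured; the rule name is arbitrary):

# create a rule that fires at 8:00 am (UTC) on the 1st of every month
aws events put-rule \
    --name automated-ebs-snapshot \
    --schedule-expression "cron(0 8 1 * ? *)"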

Prerequisites for this tutorial

  1. You must have an active AWS account and access to AWS management console.
  2. An EBS volume for which snapshot creation will be automated. Ideally, you can use an EBS-backed EC2 instance.

Create CloudWatch Events rule

  • Open the CloudWatch console (https://console.aws.amazon.com/cloudwatch/).
  • For this example, I’m using the region EU (Frankfurt).
  • Choose Events in the left navigation pane. Then choose Create rule.
  • For the Event Source, choose Schedule.
  • For this example, we are going to schedule the EBS snapshots with a cron expression.
Image 1

  • Choose Add target. Select the EC2 CreateSnapshot API call from the drop down.
  • For Volume ID, grab your EBS volume ID and paste it here. Then choose Configure details.
Image 2

Note: To get EBS volume ID, go to EC2 console and choose Volumes under ELASTIC BLOCK STORE in left navigation pane.

  • In the next step, provide the details for the rule definition. Enter a name for the rule and an optional description.
Image 3

  • For AWS permissions, choose Create new role, then select Basic events execution role. This automatically creates a new IAM role that allows CloudWatch to access your EC2 resources.
  • You will be taken to the IAM console, which will request your permission to access the resources in your AWS account.
  • Give the role a name of your choice. For this example, I have used the name My_AWS_Events_Role.
  • You can review the policy document that is attached to this IAM role.
  • Choose View Policy Document. You should see the policy listed in JSON format.

Image 4

  • You can edit the policy document as per your need. For example, you might want to allow the ec2:TerminateInstances action for the role. For this tutorial, I’m leaving the default policy document unchanged. Now choose Allow.
  • Now you will be taken back to Configure rule details page with the IAM role selected.
  • Choose Create rule.
  • You should now see a success message confirming the rule was created.
Image 5

Check your EBS Snapshots

You have created the CloudWatch Events rule to automate EBS snapshot creation with the help of a cron expression. Now go to the EC2 console and check Snapshots under ELASTIC BLOCK STORE in the left navigation pane.

Now, as per the schedule, you should see the EBS snapshot created.

Image 6

Clean up your resources

Disable or delete the CloudWatch Events rule

Go to the CloudWatch console and disable or delete the events rule.

Image 7

 

Clean up the EC2 instance and EBS volume

If you launched an EC2 instance for this tutorial, stop or terminate it in order to stop incurring charges.


Make sure to delete the EBS volume if you created it for this tutorial. Also delete the automatically created EBS snapshots if you no longer need them.

Delete the AWS Events IAM role

Go to the IAM console and delete the IAM role (My_AWS_Events_Role) created as part of this tutorial.

Conclusion

You can do more with CloudWatch Events. For example, you can invoke an AWS Lambda function to update DNS entries when your EC2 instance is ready.


CPanel To CPanel – Server Webmail Migration

Moving email folders to a new server during a server migration ought to be easy.

In cPanel we can make a full backup of a server account. This backup stores emails and email account information in the directory homedir/mail/. But what do we do when we want to move emails from one server to another without taking a full backup of the server space? How do we migrate webmail between servers when the origin server has multiple addon domains, or when we want only the emails and no other data?

Here is a quick and easy way to export all emails from one server and import them into another.

Assumptions

  • The domain name of the email accounts has not changed.
  • The new web server is devoid of emails for the email accounts being moved to it. The email restoration process deletes existing email profile data.
  • Both servers use cPanel and Dovecot (Dovecot is usually installed with cPanel). This email migration process can be adapted to non-cPanel server management software, but this guide provides instructions for backing up and restoring emails for cPanel accounts only.

Server webmail migration guide

Step 1: Backup and download email folders

In this step we create an export file that contains the emails stored on server 1. This will be our email backup file.

  1. Login to cPanel on server 1
  2. Open File Manager
  3. Go up one directory above public_html (see image 1)
  4. Enter the mail directory (see image 2)
  5. Right-click the directory with the same name as the domain the emails belong to (see image 2)
  6. Compress the directory
  7. Download the compressed file. You might need to reload the directory to see it

The ‘mail’ directory contains sub-directories for each email domain. Each sub-directory is named after the domain the emails belong to, and contains the mail folders for that domain. For example, the emails for all of test.com’s email addresses are stored in /mail/test.com, e.g. /mail/test.com/webmaster and /mail/test.com/noreply.
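
If you have shell access to both servers, the same backup and transfer can be scripted. A sketch, where example.com, user, and server2 are placeholders for your own domain and credentials:

# on server 1: archive the domain's mail directory
tar -czf mail-example.com.tar.gz -C ~/mail example.com

# copy the archive to server 2 (recreate the email accounts in cPanel first)
scp mail-example.com.tar.gz user@server2:~/mail/

# on server 2: extract the archive into the mail directory
tar -xzf mail-example.com.tar.gz -C ~/mail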

Image 1 – How to back up cPanel emails (screenshot 1)

Image 2 – How to back up cPanel emails (screenshot 2)

Image 3 – Import email file backups into cPanel

Step 2: Restore email backups

In this step we import the email backup file into server 2.

  1. Login to cPanel on server 2
  2. Use cPanel to recreate the email accounts (see image 3)
  3. Open File Manager
  4. Go up one directory above public_html (see image 1)
  5. Enter the mail directory (see image 2)
  6. Upload the email backup file
  7. Unzip the file
  8. Delete the zip file
  9. Open webmail
  10. Configure Webmail to show the new email folders

We recreate the email addresses on server 2 before we unzip the backup file because otherwise cPanel deletes the migrated emails when new accounts are created.

Email migration can be easy

Email migration between servers is straightforward: create an email directory export file, recreate the email accounts on the target server, then upload and extract the export file on the target server.

The webmail migration method explained above works when the domain name on the source server is the same as the domain name on the target server.


Microsoft Azure Online Resources

Azure

Below I will try to list all the useful links related to Microsoft Azure: management portals, documentation, training, and tools.

Management portals:

Azure Portal (the new / modern / Ibiza portal)

Azure Classic Portal (legacy portal, but still needed in some cases)

Azure subscription management (check your subscription and billing)

Social:

Microsoft Tech Community Azure (official forum)

Microsoft Azure on Facebook

Microsoft Azure on Twitter

Support:

Azure Support on Twitter (free support if you don’t have support plan)

Azure status (health status of all Azure services across all regions)

Tooling:

Azure Resource Explorer (explore Azure resources as “code”)

armviz.io (graphic representation for ARM templates)

Azure Quickstart Templates (ARM templates on GitHub)

PowerShell Gallery Azure RM module

Visual Studio Code (free code editor)

Visual Studio IDE (free community edition)

Learning and training:

Azure Documentation (official documentation portal for all Azure products)

Microsoft Virtual Academy Azure Courses

Microsoft Learning OpenEdx (new learning platform for Azure)

Microsoft Mechanics Azure playlist (YouTube)

Microsoft Azure YouTube Channel

Microsoft Ignite YouTube Channel (not only Azure)

Channel 9 Azure Friday (also published on YouTube)

Offers:

Azure free trial (currently $200 for one month)

Microsoft IT Pro Cloud Essentials (training and offers for IT Pro)

Visual Studio Dev Essentials (training and offers for Developers)

I will keep updating this list as new Microsoft Azure resources become available.


How to record your Putty Terminal session in RHEL?

In this post, we’ll look at how to record your PuTTY terminal session on RHEL machines.

The process is simple. Open a new PuTTY terminal and run the command below to start recording:

# script -t 2> timing.log -a output.session

-t – dump timing data to STDERR

2> – redirect that STDERR timing data to timing.log

-a – append the terminal output to output.session

To stop the recording, press Ctrl+D.

To play back the recorded session, run the command below:

# scriptreplay timing.log output.session
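
Optionally, scriptreplay accepts a speed divisor as a third argument; for example, to replay at twice the recorded speed:

# scriptreplay timing.log output.session 2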

This makes it easy to review the tasks that were performed on the server.

Hope you find this feature useful. If you have any questions, leave a comment below.


Storage Terminology Concepts in Linux

Introduction

Linux has robust systems and tooling to manage hardware devices, including storage drives. In this post I’ll cover, at a high level, how Linux represents these devices and how raw storage is made into usable space on the server.

What is Block Storage?

Block storage is another name for what the Linux kernel calls a block device. A block device is a piece of hardware that can be used to store data, like a traditional spinning hard disk drive (HDD), solid state drive (SSD), flash memory stick, etc. It is called a block device because the kernel interfaces with the hardware by referencing fixed-size blocks, or chunks of space.

So block storage is what you think of as regular disk storage on a computer. Once it is set up, it acts as an extension of the current filesystem tree, and you can write information to or read it from the drive seamlessly.

What are Disk Partitions?

Disk partitions are a way of breaking up a storage drive into smaller usable units. A partition is a section of a storage drive that can be treated in much the same way as a drive itself.

Partitioning allows you to segment the available space and use each partition for a different purpose. This gives the user a lot of flexibility allowing them to potentially segment their installation for easy upgrading, multiple operating systems, swap space, or specialized filesystems.

While disks can be formatted and used without partitioning, some operating systems expect to find a partition table, even if there is only a single partition written to the disk. It is generally recommended to partition new drives for greater flexibility down the road.

MBR vs GPT

When partitioning a disk, it is important to know what partitioning format will be used. This generally comes down to a choice between MBR (Master Boot Record) and GPT (GUID Partition Table).

MBR is the traditional partitioning system, which has been in use for over 30 years. Because of its age, it has some serious limitations. For instance, it cannot be used for disks over 2TB in size, and can only have a maximum of four primary partitions. Because of this, the fourth partition is typically set up as an “extended partition”, in which “logical partitions” can be created. This allows you to subdivide the last partition to effectively allow additional partitions.

GPT is a more modern partitioning scheme that attempts to resolve some of the issues inherent with MBR. Systems running GPT can have many more partitions per disk. This is usually only limited by the restrictions imposed by the operating system itself. Additionally, the disk size limitation does not exist with GPT and the partition table information is available in multiple locations to guard against corruption. GPT can also write a “protective MBR” which tells MBR-only tools that the disk is being used.

In most cases, GPT is the better choice unless your operating system or tooling prevent you from using it.
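
A minimal sketch of preparing a blank disk with a GPT partition table (assuming a hypothetical second disk at /dev/sdb; this destroys any existing partition data on it):

# write a new GPT label, then create one partition spanning the disk
sudo parted /dev/sdb mklabel gpt
sudo parted -a opt /dev/sdb mkpart primary ext4 0% 100%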

Formatting and Filesystems

While the Linux kernel can recognize a raw disk, the drive cannot be used as-is. To use it, it must be formatted. Formatting is the process of writing a filesystem to the disk and preparing it for file operations. A filesystem is the system that structures data and controls how information is written to and retrieved from the underlying disk. Without a filesystem, you could not use the storage device for any file-related operations.
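
As a sketch (using the /dev/sdb1 partition from the example above; formatting permanently erases its contents), writing an Ext4 filesystem looks like this:

sudo mkfs.ext4 /dev/sdb1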

There are many different filesystem formats, each with trade-offs across a number of different dimensions, including operating system support. On a basic level, they all present the user with a similar representation of the disk, but the features that each supports and the mechanisms used to enable user and maintenance operations can be very different.

Some of the more popular filesystems for Linux are:

  • Ext4: The most popular default filesystem is Ext4, or the fourth version of the extended filesystem. The Ext4 filesystem is journaled, backwards compatible with legacy systems, incredibly stable, and has mature support and tooling. It is a good choice if you have no specialized needs.
  • XFS: XFS specializes in performance and large data files. It formats quickly and has good throughput characteristics when handling large files and when working with large disks. It also has live snapshotting features. XFS uses metadata journaling as opposed to journaling both the metadata and data. This leads to fast performance, but can potentially lead to data corruption in the event of an abrupt power loss.
  • Btrfs: Btrfs is a modern, feature-rich copy-on-write filesystem. This architecture allows some volume management functionality to be integrated within the filesystem layer, including snapshots, cloning, volumes, etc. Btrfs still runs into some problems when dealing with full disks. There is some debate over its readiness for production workloads, and many system administrators are waiting for the filesystem to reach greater maturity.
  • ZFS: ZFS is a copy-on-write filesystem and volume manager with a robust and mature feature set. It has great data integrity features, can handle large filesystem sizes, has typical volume features like snapshotting and cloning, and can organize volumes into RAID and RAID-like arrays for redundancy and performance purposes. In terms of use on Linux, ZFS has a controversial history due to licensing concerns. Ubuntu is now shipping a binary kernel module for it however, and Debian includes the source code in its repositories. Support across other distributions is yet to be determined.

How Linux Manages Storage Devices

Device Files in /dev

In Linux, almost everything is represented by a file. This includes hardware like storage drives, which are represented on the system as files in the /dev directory. Typically, files representing storage devices start with sd or hd followed by a letter. For instance, the first drive on a server is usually something like /dev/sda.

Partitions on these drives also have files within /dev, represented by appending the partition number to the end of the drive name. For example, the first partition on the drive from the previous example would be /dev/sda1.

While the /dev/sd* and /dev/hd* device files represent the traditional way to refer to drives and partitions, there is a significant disadvantage in using these values by themselves: the Linux kernel decides which device gets which name on each boot, which can lead to confusing scenarios where a device’s node changes between boots.

To work around this issue, the /dev/disk directory contains subdirectories corresponding to different, more persistent ways to identify disks and partitions on the system. These contain symbolic links, created at boot, that point back to the correct /dev/[sh]d* files. The links are named according to the directory’s identifying trait (for example, by partition label for the /dev/disk/by-partlabel directory). These links will always point to the correct devices, so they can be used as static identifiers for storage spaces.

Some or all of the following subdirectories may exist under /dev/disk:

  • by-label: Most filesystems have a labeling mechanism that allows the assignment of arbitrary user-specified names for a disk or partition. This directory consists of links named after these user-supplied labels.
  • by-uuid: UUIDs, or universally unique identifiers, are a long, unique string of letters and numbers that can be used as an ID for a storage resource. These are generally not very human-readable, but are pretty much guaranteed to be unique, even across systems. As such, it might be a good idea to use UUIDs to reference storage that may migrate between systems, since naming collisions are less likely.
  • by-partlabel and by-partuuid: GPT tables offer their own set of labels and UUIDs, which can also be used for identification. This functions in much the same way as the previous two directories, but uses GPT-specific identifiers.
  • by-id: This directory contains links named using the hardware’s own serial numbers and the hardware they are attached to. This is not entirely persistent, because the way the device is connected to the system may change its by-id name.
  • by-path: Like by-id, this directory relies on the storage device’s connection to the system itself. The links here are constructed using the system’s interpretation of the hardware used to access the device. It has the same drawback as by-id: connecting a device to a different port can alter this value.

Usually, by-label or by-uuid are the best options for persistent identification of specific devices.
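
For example (output differs per system), you can list these persistent identifiers and see which device nodes they currently point to:

# symlinks named by filesystem UUID, pointing back to the current /dev nodes
ls -l /dev/disk/by-uuid/

# show filesystem type, label, and UUID for each block device
lsblk -f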

Mounting Block Devices

The device files within /dev are used to communicate with the kernel driver for the device in question. However, a more helpful abstraction is needed in order to treat the device as a segment of available space.

In Linux and other Unix-like operating systems, the entire system, regardless of how many physical devices are involved, is represented by a single unified file tree. As such, when a filesystem on a drive or partition is to be used, it must be hooked into the existing tree. Mounting is the process of attaching a formatted partition or drive to a directory within the Linux filesystem. The drive’s contents can then be accessed from that directory.

Drives are almost always mounted on dedicated empty directories (mounting on a non-empty directory means that the directory’s usual contents will be inaccessible until the drive is unmounted). There are many different mounting options that can be set to alter the behavior of the mounted device. For example, the drive can be mounted in read-only mode to ensure that its contents won’t be altered.

The Filesystem Hierarchy Standard recommends using /mnt or a subdirectory under it for temporarily mounted filesystems. If this matches your use case, this is probably the best place to mount it. It makes no recommendations on where to mount more permanent storage, so you can choose whichever scheme you’d like. In many cases, /mnt or /mnt subdirectories are used for more permanent storage as well.
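
A minimal sketch of a manual mount (assuming the formatted /dev/sdb1 partition from earlier and a dedicated mount point):

# create an empty mount point and attach the filesystem there
sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data

# or mount read-only instead:
# sudo mount -o ro /dev/sdb1 /mnt/data

# detach when finished
sudo umount /mnt/data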

Making Mounts Permanent with /etc/fstab

Linux systems look at a file called /etc/fstab (filesystem table) to determine which filesystems to mount during the boot process. Filesystems that do not have an entry in this file will not be automatically mounted (the exception being those defined by systemd .mount unit files, although these are not common at the moment).

The /etc/fstab file is fairly simple. Each line represents a different filesystem that should be mounted. This line specifies the block device, the mount point to attach it to, the format of the drive, and the mount options, as well as a few other pieces of information.
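
A sketch of a matching /etc/fstab entry (the UUID is a placeholder; substitute the value reported for your partition, e.g. by lsblk -f):

# <device>                                  <mount point>  <type>  <options>  <dump>  <pass>
UUID=0a1b2c3d-1111-2222-3333-444455556666   /mnt/data      ext4    defaults   0       2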

More Complex Storage Management

While most simple use cases do not need additional management structures, more performance, redundancy, or flexibility can be obtained by more complex management paradigms.

What is RAID?

RAID stands for redundant array of independent disks. RAID is a storage management and virtualization technology that allows you to group drives together and manage them as a single unit with additional capabilities.

The characteristics of a RAID array depend on its RAID level, which basically defines how the disks in the array relate to each other. The level chosen has an impact on the performance and redundancy of the set. Some of the more common levels are:

  • RAID 0: This level indicates drive striping. This means that as data is written to the array, it is split up and distributed among the disks in the set. This offers a performance boost, as multiple disks can be written to or read from simultaneously. The downside is that a single drive failure can destroy all of the data in the entire array, since no one disk contains enough information about the contents to rebuild.
  • RAID 1: RAID 1 is basically drive mirroring. Anything written to a RAID 1 array is written to multiple disks. The main advantage is data redundancy, which allows data to survive the loss of a hard drive on either side of the mirror. Because multiple drives contain the same data, usable capacity is reduced by half.
  • RAID 5: RAID 5 stripes data across multiple drives, similar to RAID 0. However, this level also implements distributed parity across the drives. This basically means that if a drive fails, the remaining drives can rebuild the array using the parity information shared between them. The parity information is enough to rebuild any one disk, meaning the array can survive the loss of any one disk. The parity information reduces the available space in the array by the capacity of one disk.
  • RAID 6: RAID 6 has the same properties as RAID 5, but provides double parity. This means that RAID 6 arrays can withstand the loss of any 2 drives. The capacity of the array is again affected by the parity amount, meaning that the usable capacity is reduced by two disks worth of space.
  • RAID 10: RAID 10 is a combination of levels 1 and 0. First, two sets of mirrored arrays are made. Then, data is striped across them. This creates an array that has some redundancy characteristics while providing good performance. This requires quite a few drives however, and the total capacity is half of the combined disk space.
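
As a sketch, a two-disk RAID 1 mirror can be assembled with mdadm (assuming the hypothetical partitions /dev/sdb1 and /dev/sdc1; any data on them is lost):

# create a mirrored array at /dev/md0 from two member partitions
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1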

What is LVM?

LVM, or Logical Volume Management, is a system that abstracts the physical characteristics of the underlying storage devices in order to provide increased flexibility and power. LVM allows you to create groups of physical devices and manage them as if they were one single block of space. You can then segment the space as needed into logical volumes, which function as partitions.

LVM is implemented on top of regular partitions and works around many of the limitations inherent in classical partitions. For instance, using LVM volumes, you can easily expand partitions, create partitions that span multiple drives, take live snapshots of partitions, and move volumes to different physical disks. LVM can be used in conjunction with RAID to provide flexible management with traditional RAID performance characteristics.
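
A minimal LVM sketch (assuming the hypothetical partition /dev/sdb1 is free to use):

# register the partition as a physical volume, group it, and carve out a logical volume
sudo pvcreate /dev/sdb1
sudo vgcreate vg_data /dev/sdb1
sudo lvcreate -n lv_data -L 10G vg_data

# the logical volume can then be formatted and mounted like any partition
sudo mkfs.ext4 /dev/vg_data/lv_data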


wp-config.php File – An In-Depth View on How to Configure WordPress

One of the most important files of a WordPress installation is the configuration file. It resides in the root directory and contains constant definitions and PHP instructions that make WordPress work the way you want.
The wp-config.php file stores data like database connection details, table prefix, paths to specific directories and a lot of settings related to specific features we’re going to dive into in this post.

The basic wp-config.php file

When you first install WordPress, you’re asked to input required information like database details and table prefix. Sometimes your host will set up WordPress for you, and you won’t be required to run the set-up manually. But when you’re manually running the 5-minute install, you will be asked to input some of the most relevant data stored in wp-config.

Image: when you run the set-up, you will be required to input data that will be stored in the wp-config.php file

 

Here is a basic wp-config.php file:

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'database_name_here');

/** MySQL database username */
define('DB_USER', 'username_here');

/** MySQL database password */
define('DB_PASSWORD', 'password_here');

/** MySQL hostname */
define('DB_HOST', 'localhost');

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

define('AUTH_KEY',	'put your unique phrase here');
define('SECURE_AUTH_KEY',	'put your unique phrase here');
define('LOGGED_IN_KEY',	'put your unique phrase here');
define('NONCE_KEY',	'put your unique phrase here');
define('AUTH_SALT',	'put your unique phrase here');
define('SECURE_AUTH_SALT',	'put your unique phrase here');
define('LOGGED_IN_SALT',	'put your unique phrase here');
define('NONCE_SALT',	'put your unique phrase here');

$table_prefix = 'wp_';

/* That's all, stop editing! Happy blogging. */

Usually, this file is automatically generated when you run the set-up, but occasionally WordPress does not have privileges to write to the installation folder. In this situation, you should create an empty wp-config.php file, copy and paste the content of wp-config-sample.php into it, and set the proper values for all defined constants. When you’re done, upload the file into the root folder and run WordPress.

Note: constant definitions and PHP instructions come in a specific order that we should never change, and we should never add content below the following comment line:

/* That's all, stop editing! Happy blogging. */

First come the definitions of the database constants, whose values you should have received from your host:

  • DB_NAME
  • DB_USER
  • DB_PASSWORD
  • DB_HOST
  • DB_CHARSET
  • DB_COLLATE

Following the database details, eight security keys make the site more secure against hackers. When you run the installation, WordPress automatically generates the security and salt keys, but you can change them anytime by supplying any arbitrary string. For better security, consider using the online key generator.

The $table_prefix variable stores the prefix of all WordPress tables. Unfortunately, everyone knows its default value, and this could expose the WordPress database to vulnerabilities; this can easily be fixed by setting a custom value for $table_prefix when running the set-up.
To change the table prefix on a working website, you would have to run several queries against the database and then manually edit the wp-config.php file. If you don’t have access to the database, or you don’t have the required knowledge to build custom queries, you can install a plugin like Change Table Prefix, which will rename database tables and field names and update the config file with no risk.

Note: it’s good practice to back up the WordPress files and database even if you change the table prefix with a plugin.

So far, the analysis has been limited to the basic configuration. But we have many more constants at our disposal that we can define to enable features, and to customize and secure the installation.

Beyond basic configuration: editing the file system

The WordPress file system is well known to users and hackers alike. For this reason, you may consider changing the built-in file structure by moving specific folders to arbitrary locations and setting the corresponding URLs and paths in the wp-config file.
First, we can move the content folder by defining two constants. The first one sets the full directory path:

define( 'WP_CONTENT_DIR', dirname(__FILE__) . '/site/wp-content' );

The second sets the new directory URL:

define( 'WP_CONTENT_URL', 'http://example.com/site/wp-content' );

We can move just the plugin folder by defining the following constants:

define( 'WP_PLUGIN_DIR', dirname(__FILE__) . '/wp-content/mydir/plugins' );
define( 'WP_PLUGIN_URL', 'http://example.com/wp-content/mydir/plugins' );

The same way, we can move the uploads folder, by setting the new directory path:

define( 'UPLOADS', 'wp-content/mydir/uploads' );

Note: the UPLOADS path is relative to ABSPATH, and it should not contain a leading slash.

When done, arrange the folders and reload WordPress.

Image: the built-in file structure compared to a customized structure

It’s not possible to move the /wp-content/themes folder from the wp-config file, but we can register an additional theme directory in a plugin or in a theme’s functions file, as shown below.
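
As a minimal sketch (the /wp-content/mydir/themes path below is a hypothetical example), registering an extra theme directory is a single call placed in a plugin or in the theme’s functions.php:

// Tell WordPress to also look for themes in a custom folder.
register_theme_directory( WP_CONTENT_DIR . '/mydir/themes' );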

Features for developers: debug mode and saving queries

If you are a developer, you can force WordPress to show the errors and warnings that will help you debug themes and plugins. To enable debug mode, you just have to set the WP_DEBUG value to true, as shown below:

define( 'WP_DEBUG', true );

WP_DEBUG is set to false by default. If you need to disable debug mode, you can simply remove the definition or set the constant’s value to false.
When you’re working on a live site, you should disable debug mode: errors and warnings should never be shown to site visitors, because they can provide valuable information to hackers. But what if you have to debug anyway?
In such situations, you can force WordPress to log errors and warnings to a debug.log file placed in the /wp-content folder. To enable this feature, copy and paste the following code into your wp-config.php file:

define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );
define( 'WP_DEBUG_DISPLAY', false );
@ini_set( 'display_errors', 0 );

To make this feature work, we first need to enable debug mode. Then, by setting WP_DEBUG_LOG to true, we force WordPress to store messages in the debug.log file, while setting WP_DEBUG_DISPLAY to false hides them from the screen. Finally, we set the PHP display_errors variable to 0 so that error messages won’t be printed to the screen at all. wp-config is never loaded from the cache, which makes it a good place to override php.ini settings.

Note: this is a great feature you can take advantage of to register messages that WordPress would not otherwise print on the screen. As an example, when the publish_post action is triggered, WordPress loads a script that saves the data and then redirects the user to the post editing page. In this situation you can log messages, but not print them on the screen; the log file is the way out.
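
As a hypothetical illustration (the callback below is mine, not part of WordPress), with the settings above the following snippet writes a line to /wp-content/debug.log every time a post is published:

// With WP_DEBUG and WP_DEBUG_LOG enabled, error_log() output
// ends up in /wp-content/debug.log.
add_action( 'publish_post', function( $post_id ) {
	error_log( 'publish_post fired for post ' . $post_id );
} );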

Another debugging constant determines which versions of core scripts and styles get loaded. Set SCRIPT_DEBUG to true if you want to load the uncompressed versions:

define( 'SCRIPT_DEBUG', true );

If your theme or plugin shows data retrieved from the database, you may want to store query details for later review. The SAVEQUERIES constant forces WordPress to store query information in the $wpdb->queries array. These details can then be printed by adding the following code to the footer template:

if ( current_user_can( 'administrator' ) ) {
	global $wpdb;
	echo '<pre>';
	print_r( $wpdb->queries );
	echo '</pre>';
}
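
Remember that, for this to work, query saving must first be enabled in wp-config.php:

define( 'SAVEQUERIES', true );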

Content-related settings

When your website grows, you may want to control autosaves and post revisions. By default, WordPress automatically saves edits every 60 seconds. We can change this value by setting a custom interval (in seconds) in wp-config as follows:

define( 'AUTOSAVE_INTERVAL', 160 );

Of course, you can decrease the auto-save interval as well.
Each time we save our edits, WordPress adds a row to the posts table so that we can restore previous revisions of posts and pages. This is a useful functionality that can turn into a problem as the site grows, because revisions inflate the posts table. Fortunately, we can limit the maximum number of revisions to be stored, or disable the functionality altogether.
If you want to disable post revisions, define the following constant:

define( 'WP_POST_REVISIONS', false );

If, instead, you want to limit the maximum number of revisions, add the following line:

define( 'WP_POST_REVISIONS', 10 );

By default, WordPress stores trashed posts, pages, attachments and comments for 30 days, then deletes them permanently. We can change this value with the following constant:

define( 'EMPTY_TRASH_DAYS', 10 );

We can even disable the trash by setting the value to 0, but be aware that WordPress will then delete items permanently, with no chance to restore them.
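
To disable the trash completely:

define( 'EMPTY_TRASH_DAYS', 0 ); // deleted items are removed immediately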

Allowed memory size

Occasionally you may receive a message like the following:

Fatal error: Allowed memory size of xxx bytes exhausted …

The maximum memory size depends on the server configuration. If you don’t have access to the php.ini file, you can increase the memory limit just for WordPress by setting the WP_MEMORY_LIMIT constant in the wp-config file. By default, WordPress tries to allocate 40MB to PHP for single sites and 64MB for multisite installations. If the memory PHP has already been allocated is greater than these values, WordPress will use the larger one.
That being said, you can set a custom value with the following line:

define( 'WP_MEMORY_LIMIT', '128M' );

If needed, you can also raise the maximum memory limit, used in the administration area, with the following statement:

define( 'WP_MAX_MEMORY_LIMIT', '256M' );

Automatic updates

Starting from version 3.7, WordPress supports automatic updates for security releases. This is an important feature that allows site admins to keep their websites secure at all times.
You can disable all automatic updates by defining the following constant:

define( 'AUTOMATIC_UPDATER_DISABLED', true );

Disabling security updates is probably not a good idea, but it’s your choice.
By default, automatic updates do not apply to major releases, but you can enable (or disable) core updates of any kind by defining WP_AUTO_UPDATE_CORE as follows:

# Disables all core updates:
define( 'WP_AUTO_UPDATE_CORE', false );

# Enables all core updates, including minor and major:
define( 'WP_AUTO_UPDATE_CORE', true );

The default value is 'minor', which enables minor updates only:

define( 'WP_AUTO_UPDATE_CORE', 'minor' );

An additional constant disables auto-updates and, more generally, any change to any file. If you set DISALLOW_FILE_MODS to true, all file edits will be disabled, including theme and plugin installations and updates. For this reason, its usage is not generally recommended.
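
For reference, this is the definition; handle it with care:

define( 'DISALLOW_FILE_MODS', true );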

Security settings

We can use the wp-config file to increase site security. In addition to the changes to the file structure we’ve looked at above, we can lock down some features that could open unnecessary vulnerabilities. First of all, we can disable the file editor provided in the admin panel. The following constant will hide the Appearance Editor screen:

define( 'DISALLOW_FILE_EDIT', true );

Note: some plugins may not work properly if this constant is set to true.

Another security feature is Administration over SSL. If you’ve purchased an SSL certificate and it’s properly configured, you can force WordPress to transfer data over SSL for every login and admin session. Use the following constant:

define( 'FORCE_SSL_ADMIN', true );

Check the Codex if you need more information about Administration over SSL.

Two other constants allow you to block external requests and list the admitted hosts:

define( 'WP_HTTP_BLOCK_EXTERNAL', true );
define( 'WP_ACCESSIBLE_HOSTS', 'example.com,*.anotherexample.com' );

In this example, we first block all requests to external hosts, then explicitly list the allowed hosts, separated by commas (wildcards are allowed).

Other advanced settings

WP_CACHE, when set to true, makes WordPress include the wp-content/advanced-cache.php script. This constant has effect only if you install a persistent caching plugin, which typically creates that file.
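
If your caching plugin requires it, the definition looks like this:

define( 'WP_CACHE', true );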

CUSTOM_USER_TABLE and CUSTOM_USER_META_TABLE set custom user tables in place of the default wp_users and wp_usermeta tables. These constants enable a useful feature that allows users to access several websites with just one account. For this feature to work, all the installations must share the same database.
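
A minimal sketch, assuming a second installation whose wp-config should reuse the first site’s user tables:

// Both installations share one database; the second one points
// at the first site's user tables instead of creating its own.
define( 'CUSTOM_USER_TABLE', 'wp_users' );
define( 'CUSTOM_USER_META_TABLE', 'wp_usermeta' );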

Starting from version 2.9, WordPress supports Automatic Database Optimizing. Thanks to this feature, setting WP_ALLOW_REPAIR to true enables a maintenance screen, reachable at wp-admin/maint/repair.php, from which WordPress can attempt to repair (and optionally optimize) a corrupted database.
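
To enable the screen, add:

define( 'WP_ALLOW_REPAIR', true ); // remove this line once the database is repaired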

WordPress creates a new set of images each time you edit an image; if you then restore the original, all the generated sets remain on the server. You can override this behavior by setting IMAGE_EDIT_OVERWRITE to true, so that when you restore the original image all the edits are deleted from the server.
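
The definition is the usual one-liner:

define( 'IMAGE_EDIT_OVERWRITE', true );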

Locking down wp-config.php

Now we know why wp-config.php is one of the most important WordPress files. So, why don’t we hide it from hackers? First of all, we can move wp-config one level above the WordPress root folder (just one level). However, this technique is a bit controversial, so I would suggest adopting other solutions to protect the file. If your website is running on the Apache web server, you can add the following directives to the .htaccess file:

<Files wp-config.php>
order allow,deny
deny from all
</Files>
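
Note: on Apache 2.4 and later, the same rule can be expressed with the newer authorization syntax:

<Files wp-config.php>
Require all denied
</Files>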

If the website is running on Nginx, you can add the following directive to the configuration file:

location ~* wp-config.php { deny all; }

Note: these instructions should be added only after the set-up is complete.

Conclusions

In this post, I’ve listed a number of WordPress constants that we can define in the wp-config file. Some of these constants are in common usage and their functions are easy to understand; others enable advanced features that require a deeper knowledge of WordPress and site administration.
I’ve covered the most common features, leaving aside some of the more advanced ones.

OK folks, that’s it for this post. Have a nice day, and stay tuned!
