It’s not possible for a person who only understands Spanish to communicate with another who only understands English. Similarly, computers need common standards of communication so that the machines involved understand each other. These standards are called protocols.
In the 1980s, the TCP/IP protocol suite came into widespread use, and with it came the proliferation of the Internet (the word comes from “inter-network” – a network of computer networks). TCP/IP defined how computers connected via a network would communicate using the client-server paradigm.
People started inventing all sorts of application protocols on top of TCP/IP for different purposes (file sharing, email, and so on).
In the late 1980s and early 1990s, Sir Tim Berners-Lee, who was then working at CERN, invented a new protocol on top of TCP/IP to share research documents among scientists there. This protocol, called HTTP (short for HyperText Transfer Protocol), became the basis for the World Wide Web. It defined the concepts of webpages (the documents scientists at CERN were sharing), URLs (a human-friendly way of referencing those documents), and hyperlinks (the “links” we click on to get to a new webpage). Before Google and other search engines came along, URLs and hyperlinks were the only ways to navigate the Web.
In the first decade of the new millennium, Amazon wanted to introduce something new. It found that at any given time its servers were using only about 10% of their full computational capacity. So why not provide the unused capacity as a service to those who need it?
Before, webservers served only webpages. Now webservers could serve computational power itself (storage, processing, and network bandwidth). Users could build Web applications, host them on Amazon’s unused capacity, and serve them to their own users. Better yet, it was billed just like a utility service: the more you use, the more you pay, and “no use, no bill”. Before, you could only buy a fixed amount of server storage and network bandwidth and pay a fixed bill. Now, as need arose, you could stretch or squeeze your computational power – from this came the names “elastic computing” and “utility computing”.
An important consequence follows. Before, you could not only buy a fixed amount of server storage, but that storage lived on a fixed computer or a fixed set of computers. Now the notion of a fixed set of computers (in a huge data center consisting of thousands of machines) was gone. You have no idea which machine is running your code and which is running Amazon’s or some other company’s. The platform assigns work to whichever machines happen to be free. Thus hardware is abstracted away from software – a form of virtualization.
[Image: Cloud Computing in action]
The virtualized servers were called the Cloud, and a new computing paradigm was born – Cloud Computing.
So, what’s the big deal?
Well, before the advent of cloud computing, the computational resources you could use at any one time had a fixed upper limit. Now, as need arose, you could stretch or squeeze that limit to suit your needs.
Suppose you own a soccer news website whose traffic spikes just before and during important games – that is, users visit your site most heavily around match time. If you host your website on a cloud computing platform, the platform stretches and squeezes the computational resources it allocates according to demand, and bills you accordingly.
On the other hand, if you host on a non-cloud platform, users might not be able to access your site during periods of high usage – once the fixed limit is exceeded. To make sure your users can always reach your site on a non-cloud platform, you would have to determine the peak amount of computational resources your website uses during a spike and pay for that peak capacity for a whole month or a whole year – even though the peak occurs only, say, once or twice a week.
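The cost difference is easy to see with some arithmetic. The sketch below simulates one week of the soccer-site scenario; all the numbers (baseline demand, spike size, the $0.10 per unit-hour rate, and the spike times) are made up for illustration and do not reflect any real cloud provider’s prices.

```python
# Hypothetical cost comparison: provisioning for peak demand all week
# versus paying only for what each hour actually uses.
# All figures below are invented for illustration.

HOURS_IN_WEEK = 7 * 24
RATE = 0.10  # assumed cost per resource-unit per hour

# Assumed demand: a baseline of 2 units, spiking to 20 units
# for 4 hours around each of two games per week.
demand = [2] * HOURS_IN_WEEK
for spike_start in (40, 130):          # two arbitrary game times
    for h in range(spike_start, spike_start + 4):
        demand[h] = 20

# Non-cloud: you must provision for the peak, all week long.
fixed_cost = max(demand) * HOURS_IN_WEEK * RATE

# Cloud: you pay only for what each hour actually consumes.
elastic_cost = sum(demand) * RATE

print(f"fixed (peak) provisioning: ${fixed_cost:.2f}/week")
print(f"pay-as-you-go:             ${elastic_cost:.2f}/week")
```

With these assumed numbers, peak provisioning costs seven times more than pay-as-you-go, even though the site is busy for only eight hours a week – which is exactly the economic argument for elasticity.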
Or suppose you need some computation-heavy work done in a short time – image processing, scientific calculation, or something similar. You could use a cloud computing platform for a limited period and pay only for that usage.
So Cloud Computing gives you the opportunity to use computational power according to your needs, at rates cheaper than ever before.
You might be wondering about the influence of Cloud Computing on ordinary consumers. When you hear the phrase “moving to the Cloud”, it sounds like a complete revolution – and my explanation of Cloud Computing so far seems nothing like that.
Well, “the Cloud” also refers to the Web, and “moving to the Cloud” means moving all your computational needs to the Web.
Need word processing? Use Google Docs. Need storage space? No need to rely on your hard drive – use Google Drive. Want to manage your company or your customers? Use Google Apps, Salesforce, or Basecamp. If you move everything to the Cloud, you are no longer tied to a single device or set of devices, but can access your data and work from anywhere.
The fact is, just as the server-side Cloud Computing paradigm was unfolding at Amazon, Google, and other companies, network speeds and processing power were increasing, and with them the culture of “moving to the Cloud”. So both happened at the same time –
- a particular way of utilizing servers just like other utilities and
- moving all your computational needs to the Web.
Both the technical concept (servers as a utility) and the popular concept (“moving to the Cloud”) came to be called “Cloud Computing”. More importantly, each helped the other grow.
It’s up to you to understand the right meaning from the context.
Cloud Computing Platforms