Vision Of Collaborative Research Platform

With the proliferation of the World Wide Web – a global collaboration platform – we need to direct research in the right direction: one that centers on solving real problems collaboratively, not just publishing random research papers (for getting promoted!).

A collaborative research platform could define substantial problems – each of which, if solved, could bring about a revolution – and break them into manageable pieces (sub-problems), and those into still more manageable pieces (sub-sub-problems). Individual researchers or research groups could pick manageable problems that match their abilities and interests, work on them, and publish and share results on the platform. Researchers could also define new problems and sub-problems, and suggest improvements and changes to the platform.

A research paper could be co-authored by hundreds or even thousands of researchers scattered throughout the world.

For example, a high-level problem could be “Codify Biology to the point that you can control biological processes and organisms”. The problem should have measurable, quantifiable goals. This high-level problem could be divided into more manageable pieces, and then into even more manageable pieces, until the pieces are solvable by individual researchers or research groups.

This would help the entire scientific community move toward practical goals much more rapidly.

The Story Of Computer Networking: Progression From Internet To Web and Cloud


In the 60s and 70s, scientists and engineers at DARPA [1] and elsewhere were experimenting with ways for computers – and computer networks – to communicate. The paradigm was different from the peer-to-peer communication we see when we use our smart-phones (or not-so-smart-phones!). Mainframes were already in use, with terminals sharing their computational resources. Applications demanded that computer networking be similar – client-server in nature, with clients sharing the computational resources (storage, important files kept in that storage, supercomputing power) of servers, which are always online.

It’s not possible for a person who only understands Spanish to communicate with another who only understands English. Similarly, we need common standards of communication between computers, so that the computers involved understand each other. These standards of communication are called protocols.

In the 80s, the TCP/IP protocol suite came into widespread use, and with it came the proliferation of the Internet (the word comes from “inter-network” – a network of computer networks). TCP/IP defined how computers connected via a network would communicate using the client-server paradigm.
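The client-server exchange described above can be sketched with Python’s standard socket module, which rides on TCP/IP. This is a minimal illustration, not any real protocol: the port number and the messages are invented, and the “always-online server” is simulated by a background thread on the same machine.

```python
import socket
import threading
import time

def run_server(port):
    # A tiny TCP server: accept one client, echo back what it sends.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)            # the client's request
            conn.sendall(b"echo: " + data)    # the server's response

def ask_server(port, message, attempts=50):
    # A client: connect to the server, send a request, read the response
    # -- the client-server paradigm in miniature. Retry briefly in case
    # the server thread has not started listening yet.
    for _ in range(attempts):
        try:
            with socket.create_connection(("127.0.0.1", port), timeout=2) as cli:
                cli.sendall(message)
                return cli.recv(1024)
        except ConnectionRefusedError:
            time.sleep(0.05)
    raise RuntimeError("server never came up")

PORT = 50007  # an arbitrary port chosen for this demo
threading.Thread(target=run_server, args=(PORT,), daemon=True).start()
reply = ask_server(PORT, b"hello")
print(reply.decode())  # -> echo: hello
```

The asymmetry is the point: the server sits waiting (listen/accept) while the client initiates (connect/send) – exactly the relationship between terminals and mainframes that early networking applications wanted to reproduce.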


People started inventing all sorts of application protocols on top of TCP/IP for different purposes (file sharing, email, etc.).

In the late 80s and early 90s, Sir Tim Berners-Lee, who was at that time working at CERN [2], invented a new protocol on top of TCP/IP to share research documents among scientists working at CERN. This protocol, called HTTP (short for HyperText Transfer Protocol), became the basis for the World Wide Web. It defined the concepts of webpages (the documents the scientists at CERN were sharing), URLs (a human-friendly way of referencing the documents), and hyperlinks (the “links” we click on to get to a new webpage). Before Google and other search engines came along, URLs and hyperlinks were the only ways you could navigate the Web.
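The “human-friendly” structure of a URL is easiest to see by taking one apart. A minimal sketch using Python’s standard library (the example URL is the oft-cited address of CERN’s first webpage; any URL would do):

```python
from urllib.parse import urlparse

# Decompose a URL into the pieces the Web defined.
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

print(parts.scheme)  # "http" -- the protocol to speak
print(parts.netloc)  # "info.cern.ch" -- the server to contact
print(parts.path)    # "/hypertext/WWW/TheProject.html" -- which document
```

So a single clickable hyperlink encodes everything a browser needs: which protocol to use, which always-online server to ask, and which document to ask for.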

Cloud Computing

In the first decade of the new millennium, Amazon wanted to introduce something new. It had found that, at any given time, its servers were using only about 10% of their full computational power. So why not provide the unused computational power as a service to those who need it?

Before, web servers served only webpages. Now web servers could serve computational power (storage, processing, and network bandwidth). Users could build web applications and use Amazon’s unused computational power to host those applications and serve them to their own users. Better yet, it was priced just like a utility service – the more you use, the more you pay, and “no use, no bill”. Before, you could only buy a fixed amount of server storage and network bandwidth and pay a fixed bill. Now, if the need arose, you could stretch or squeeze computational power – from this came the names “elastic computing” and “utility computing”.

An important consequence follows. Before, not only did you buy a fixed amount of server storage, but that storage lived on a fixed computer or a fixed set of computers. Now, the notion of a fixed set of computers (inside a huge data center consisting of thousands of machines) was gone. You have no idea where your code is running, or where Amazon’s or some other company’s code is running; the platform decides which machines to use based on which ones are idle. Thus, hardware is abstracted away from software – a form of virtualization.

Cloud Computing in action

These virtualized servers were called the Cloud, and a new computing paradigm was born – Cloud Computing.

So, what’s the big deal?

Well, before the advent of cloud computing, the computational resources you could use at any given time had a fixed upper limit. Now, if the need arose, you could stretch or squeeze that limit to suit your needs.

Suppose you own a soccer news website whose traffic spikes right before and during important games. If you host your website on a cloud computing platform, the platform stretches and squeezes the computational resources it uses to match demand, and determines your bill accordingly.

On the other hand, if you host on a non-cloud platform, users might not be able to access your site during periods of high usage – once the upper limit is crossed. If you wanted to make sure your users could always reach your site, you had to determine the highest amount of computational resources your website uses during a spike, and pay for that peak capacity for a whole month or a whole year – even though the peak is reached, say, only once or twice a week!
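The economics of the soccer-site scenario can be made concrete with some back-of-the-envelope arithmetic. All the numbers below (capacity units, spike schedule, price) are invented purely for illustration:

```python
# Invented scenario: the site needs 100 units of capacity during a
# 2-hour spike twice a week, and only 10 units the rest of the time.
HOURS_PER_WEEK = 7 * 24                  # 168
spike_hours = 2 * 2                      # two 2-hour spikes per week
normal_hours = HOURS_PER_WEEK - spike_hours
price_per_unit_hour = 0.01               # hypothetical price

# Non-cloud hosting: provision for the peak, and pay for it all week.
fixed_cost = 100 * HOURS_PER_WEEK * price_per_unit_hour

# Cloud hosting: pay only for what each hour actually uses.
elastic_cost = (100 * spike_hours + 10 * normal_hours) * price_per_unit_hour

print(f"fixed:   ${fixed_cost:.2f}/week")    # -> fixed:   $168.00/week
print(f"elastic: ${elastic_cost:.2f}/week")  # -> elastic: $20.40/week
```

With these made-up numbers, peak provisioning costs roughly eight times as much as elastic billing – which is exactly why paying only for actual usage was such a compelling pitch.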

Or suppose you need some computation-heavy work – image processing, scientific calculation, or something like that – done in a short time. You could use a cloud computing platform for a limited time and pay your bill accordingly.

So, Cloud Computing gives you the opportunity to use computational power according to your needs, at rates cheaper than ever before.

You might be wondering about the influence of Cloud Computing on general consumers. When you heard the phrase “moving to the Cloud”, it sounded like a complete revolution, and my explanation of Cloud Computing seems nothing like that.

Well, “the cloud” also refers to the Web, and “moving to the cloud” refers to moving all your computational needs to the cloud, i.e., the Web.

Need word processing? Use Google Docs. Need storage space? No need to rely on your hard drive – use Google Drive. Want to manage your company or customers? Use Google Apps, Salesforce, or Basecamp. If you move everything to the cloud, you are no longer tied to a single device or set of devices, but can access your data and work from anywhere.

The fact is, just as the server-side Cloud Computing paradigm at Amazon, Google, and other companies was unfolding, network speeds and processing power were increasing – and with them the culture of “moving to the cloud”. So both happened at the same time:

  • a particular way of utilizing servers just like other utilities and
  • moving all your computational needs to the Web.

Both the technical concept (servers as a utility) and the popular concept (“moving to the cloud”) came to be called “Cloud Computing”. More importantly, each helped the other grow.

It’s up to you to infer the right meaning from the context.

Cloud Computing Platforms