WWW - How It All Begun

Published on: Friday 9th October 1998 By: Janus Boye

Introduction

Just after World War II was over, in July 1945, Dr. Vannevar Bush, Director of the Office of Scientific Research and Development in the US, who had been leading more than 6,000 American scientists in the application of science to warfare, wrote an article called As We May Think in The Atlantic Monthly.

He urged that men of science should turn to the massive task of making our bewildering store of knowledge more accessible, now that it was no longer necessary to build new and ever more amazing weapons.

"Now", said Dr. Bush, "instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work."

More than 40 years later, in 1989, we had hypertext, personal computing, and the Internet TCP/IP protocol. These were the main building blocks of what Tim Berners-Lee and Robert Cailliau, while working at CERN (a High Energy Physics lab in Switzerland), later decided to call the World Wide Web (it was only meant to be a temporary name, but it stuck).

In 1989, Tim Berners-Lee and Robert Cailliau independently made proposals for a networked hypertext project for international High-Energy Physics collaborations.

A mutual friend and colleague brought the two together, and in 1990 they made a joint, third proposal for a hypertext project at CERN, which was finally accepted by management.

Prehistory

Many things had to happen before we could have the World Wide Web. Three main developments led to its invention: hypertext, personal computing, and networking.

Allow me to move slowly through each.

Ted Nelson, famous for his Xanadu project, coined the term hypertext back in the sixties. Hypertext is a means of navigating text, using the computer to help you find things. The basic idea is to turn some of the text into links, which you can follow with your mouse to move on to another hypertext document. HTML is built on this. You can read more in Michael Bednarek's article: An Introduction to HTML.
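
To make that concrete, here is a minimal sketch of a hypertext link as it is written in HTML; the file name and link text are invented for the example:

    <P>You can read more about <A HREF="xanadu.html">Ted Nelson's
    Xanadu project</A> by following this link.</P>

Clicking the highlighted words tells the browser to fetch and display xanadu.html, which is exactly the navigation idea described above.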

In personal computing, Doug Engelbart did revolutionary work at the Stanford Research Institute back in the sixties. He was committed to "Augmenting Human Intellect." Doug envisioned a networked system that would become a human intelligence amplifier, allowing knowledge workers to operate at a much higher level, not just as individuals, but as members of project teams grappling with complex, interrelated system and sub-system designs over computer networks.

Out of his work grew the mouse (also known as an "X-Y Position Indicator for a Display System"), the idea of multiple windows on the same screen, the first implementation of hypertext links and nodes, and the first working electronic mail system.

The Apple II in 1977 and, later, the Macintosh also played a big role in the development of personal computing, and perhaps still do.

The last important development is networking. Most important here is the Internet's TCP/IP protocol suite, developed under DARPA (the US Defense Advanced Research Projects Agency) in the mid-1970s. The idea was to have a protocol, a way of communicating, that could withstand a nuclear attack: if New York were blown off the face of the planet, the system needed to keep working. The decentralized network was born.

TCP/IP is lightweight and cheap, and it supports many different types of hardware and operating systems.

Despite all the hype about rapid change, all three of these technologies, hypertext, the PC, and TCP/IP, are very old, especially in the computer world, where almost everything you do should have been done yesterday. I'll return to this later.

In the early eighties people were very enthusiastic about electronic communication. But the problem was that we had many different, incompatible systems. If you wanted to access a few different networks, you would probably need a few different machines, with different operating systems, just to gain access to information. This caused inefficiency and frustration. Our networks were heavily fragmented and built on closed systems.

This was fertile soil for the invention of the World Wide Web at CERN. Tim Berners-Lee wanted a medium where the scientists working at CERN could exchange information with other HEP (High Energy Physics) labs in one global, open system.

The system he created for the physics community was totally unable to do math, and even less able to do any sort of physics, but it became very popular in other parts of society.

A browser

Just before we move on to some browser history, it is important to know that the Internet does not equal the Web. The World Wide Web is one part of the Internet; Usenet, Gopher, FTP, and email are other parts.

Towards the end of 1990, Tim had the first WYSIWYG (What You See Is What You Get) browser/editor and server ready on the NeXTSTEP operating system. His browser read HTML (HyperText Markup Language) and his server delivered it over HTTP (the HyperText Transfer Protocol).
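
To give a feel for how simple this was, here is a rough sketch of an exchange in the earliest version of HTTP; the host and file names are invented for the example. The browser opened a connection and sent a single line:

    GET /physics/minutes.html

and the server answered with the raw HTML document and closed the connection:

    <HTML>
    <HEAD><TITLE>Minutes</TITLE></HEAD>
    <BODY><P>Minutes of the last collaboration meeting ...</P></BODY>
    </HTML>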

To promote their idea, Tim and Robert gave away their browser for free. It was distributed over the Internet and became very popular. Perhaps it went too fast, as it became increasingly difficult to control how HTML matured.

Another important aspect is that the browser was also an editor: you were able to edit HTML documents directly, just as you can in the Amaya browser from the W3C today.

The Web then became read-only when NCSA (the National Center for Supercomputing Applications, located at the University of Illinois, USA) released NCSA X-Mosaic in 1993. X-Mosaic, by Marc Andreessen and Eric Bina, also introduced images, and it was basically the start of the fragmentation of HTML: proprietary tags started being introduced. On a side note, in 1993 the Web accounted for 0.1% of all Internet traffic, and in early 1993 the world had 50 web sites!

Not much more than a year later, Andreessen left NCSA and, together with Jim Clark of Silicon Graphics, founded Mosaic Communications Corporation, which was later renamed Netscape.

Netscape, as you know, went on to become a big company just by selling its Navigator browser. Another big software company, Microsoft, was very passive for a long time, as it considered the Web to be a place for computer geeks and the elite of American college students. It considered Usenet to be a nerdy place, and the Web to be even worse... It changed its mind, and the rest is history.

Woodstock of the Web

More than 25 years ago, in upstate New York, a very famous Music and Arts Fair was held, with many famous artists playing music.

More than 4 years ago, at CERN, a very famous Web conference was held. It attracted more than 600 people, but only 400 could be admitted to the conference site.

1994, when the first World Wide Web conference was held, was indeed the "Year of the Web". The WWW finally overtook Gopher traffic, 2,500 servers existed, and, more importantly, Tim Berners-Lee left CERN to establish a vendor-neutral consortium to keep the Web's standards open and non-proprietary.

The level of activity was much higher in the US, as it still is, so this consortium was set up at MIT's Laboratory for Computer Science in Cambridge, Massachusetts, USA. The World Wide Web Consortium (W3C) was born.

A year earlier, due to the enormous success of X-Mosaic (that thing with embedded images got popular), CERN had abandoned the development of browsers and the quest for better HTML. This very important work was now picked up by the W3C.

Everybody is a publisher

When the Web was invented, it was very important to reach a critical mass of data quickly. This was done by making the first browsers support already existing systems via the protocols that already existed (e.g. FTP, NNTP, Gopher).
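
The mechanism that made this possible was the Web's address format, in which the protocol name comes first, so a single browser could reach all of these older systems. A few illustrative addresses (the host names are placeholders):

    http://www.example.org/welcome.html     (a Web page over HTTP)
    ftp://ftp.example.org/pub/README        (a file on an FTP archive)
    gopher://gopher.example.org/            (a Gopher menu)
    news:comp.infosystems.www.misc          (a Usenet group over NNTP)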

Providing information on the Web was, and still is, very simple, and this has played a very big role in putting all the current content (both good & bad, serious & unserious) onto the Web.

Another very important aspect of HTML is that it is designed so that browsers ignore everything they do not understand. For example, if you created your own browser that, to make things worse, supported your own proprietary FOOTER tag, that tag would simply be ignored by any other browser, since it would not recognize it. This helps ensure backwards compatibility. Other exciting possibilities that the Web brought are efficient document caching and the reduction of redundant, out-of-date copies.
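
A small sketch of that behaviour, using the same imaginary FOOTER tag:

    <P>This paragraph is displayed by every browser.</P>
    <FOOTER>A browser that has never heard of FOOTER simply skips the
    unknown tags and still displays the text between them.</FOOTER>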

All of this has led to the increasing popularity of the Web. It has indeed become the greatest library on this planet. Virtually everything is online, perhaps even twice. You can find extremely detailed maps and legal information, read the Starr report, listen to music, trade stocks, chat, and I'm sure you can come up with even more bizarre things.

In 1996, the first problems arose with content on the Web, when Germany saw the first court case on the decency of content, and more recently the Web has taken center stage in a Belgian case on child pornography.

To ensure high-quality content, the W3C has in the last two years been pushing technologies such as CSS, XML, XSL, and RDF. These technologies, along with DHTML, have perhaps made it a bit more complicated to publish content on the Web, but in return they focus on the production of high-quality content.
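
As one small, concrete sketch of what CSS adds (the class name and colour are arbitrary), a style rule keeps presentation out of the markup, so the look of every matching paragraph can be changed by editing a single rule:

    <STYLE TYPE="text/css">
      P.abstract { font-style: italic; color: #333333; }
    </STYLE>

    <P CLASS="abstract">The appearance of every paragraph of this class
    is controlled by the one rule above, not by the markup itself.</P>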

Open Source on the Web

The very first browsers, created at CERN, were given away for free, with the source code and everything. This allowed others to improve on the code, and it made the Web expand extremely rapidly. X-Mosaic and the first versions of Netscape were also given away for free, though without the source code. The Web tsunami had begun.

When you browse the Web, you are always able to view the HTML source behind it all. This, combined with the simplicity of HTML, has made HTML very easy to learn.

The first HTML specification was developed by Dan Connolly, who had joined the Web project in 1991. HTML was developed as an application of SGML. X-Mosaic (1993), which introduced images, used this HTML standard. HTML then moved forward in November 1995 with the HTML 2.0 standard, which introduced forms and better image handling. This version is currently regarded as the "lowest common denominator."

HTML 3.0, from 1995, never really went anywhere, since it was so different from what was actually in use. So soon after, in July 1996, the W3C announced HTML 3.2, which is basically an update to HTML 2.0. For the first time, the Web officially had tables, applets, and floating images with wrapped text. HTML 3.2 was approved in January 1997.

The newest version of HTML is HTML 4.0, which became a W3C Recommendation in December 1997. HTML 4.0 extends HTML with mechanisms for style sheets, scripting, frames, and embedded objects, and it also offers improved accessibility for people with disabilities.
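
A minimal HTML 4.0-flavoured page touching several of those mechanisms is sketched below; the file names and the script are invented for the example:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
    <HTML>
    <HEAD>
      <TITLE>HTML 4.0 sketch</TITLE>
      <LINK REL="stylesheet" TYPE="text/css" HREF="site.css">
      <SCRIPT TYPE="text/javascript">
        function greet() { alert("Hello, Web!"); }
      </SCRIPT>
    </HEAD>
    <BODY>
      <P><IMG SRC="cern.gif" ALT="Aerial view of the CERN site">
      The ALT text is what a speech or text-only browser presents.</P>
      <P><A HREF="#" onclick="greet(); return false;">Say hello</A></P>
    </BODY>
    </HTML>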

HTML and the Web are also platform independent, and you can now access the Web in many different ways: from your home Mac, from your office PC, from a Unix server, from your PalmPilot, from Web kiosks, and much more.

The Web has indeed become a universal information space.

Open Questions

The Web is still far from its full potential.

"We've only just begun,"

as Tim Berners-Lee said the other day.

Open source has, as mentioned earlier, played a very big role in shaping the Web as we know it today. With all the current fuss about Linux and Netscape's release of the source code behind Mozilla, it will be interesting to see what impact this will have on the Web. And also vice versa: what impact will the Web have on Linux and Mozilla?

Is it possible to ever have too much content on the Web? Doesn't the enormous load of content currently on the Web devalue the Web user experience and content in general?

Another interesting point, made by Robert Cailliau at WWW7 in Brisbane, Australia, is that when the train was invented in 1833 in England, it played a very big role in the industrial revolution less than 10 years later. The very first plane, by the Wright brothers, took off in 1903, and planes played a very big role in World War I only 11 years later. The Web is now 9 years old, the Internet is in its thirties, and what has happened?

More than 10 years ago, I was unable to use a Mac WordStar document on my PC. Today, I am still unable to take an HTML document optimized for Microsoft Internet Explorer 4, perhaps using DHTML and JavaScript (and perhaps even ActiveX and VBScript), and use it in Netscape Navigator. Where is the improvement? Have we gotten just a little bit more clever in the last decade?

References

The original proposal of the WWW, HTMLized - Information Management: A Proposal
http://www.w3.org/History/1989/proposal.htm

An executive summary of the World-Wide Web initiative
http://www.w3.org/summary.html

HTML 4.0 Specification
http://www.w3.org/TR/REC-html40

HTML 3.2 Recommendation
http://www.w3.org/TR/REC-html32.html

HTML 2.0 Specification
http://www.w3.org/MarkUp/html-spec/

CERN
http://www.cern.ch
