AN INTERNET PRIMER
Originally published in The Florida Bar Journal, November 1996
Copyright 1996, Florida Bar Journal, Robert Craig Waters
By Robert Craig Waters
The Internet is an entirely new medium, just as radio and television were new media earlier in the Twentieth Century. Its growth has been phenomenal. Yet even with the tremendous attention and study the Internet is receiving, it is still too early to say exactly what form this new medium finally will take. As a resource accessible by the general public, the Internet really only dates to the early 1990s, though it has existed in more limited forms since 1969. Many people now are investing tremendous amounts of money exploring ways to exploit the Internet for profit, but even in this regard the Internet is proving to be somewhat unpredictable. The portion of the Internet most easily used by everyday people — called the World-Wide Web (also known as the "Web" or "WWW") — to this day remains flavored by a raucous pioneering spirit that arose from the Internet's unusual history. In this unique environment, novelty and entertainment are more potent forces than the desire to make money. Over time this situation undoubtedly will change to some degree, but it is the reality today. And it perhaps explains why the Internet is often dismissed as an "unserious" medium by those who focus only on its more frivolous aspects.
In my opinion the Internet at its best is a serious medium, and it also is here to stay. I say so because the economic necessities that brought it into being are very powerful, and they cannot be addressed in any other way. These economic forces very likely will impel a rapid evolution that has the potential to forever alter the way information and entertainment are distributed around the globe. This is because of several qualities the Internet bundles into a single unique package: The Internet can deliver information rapidly and inexpensively, in great detail, to virtually any computer on a world-wide basis. No other medium can do this. In fact, the medium we call the "Internet" can be defined quite simply as an international supernetwork linking all varieties of computers and computer networks together to exchange information. It is a "network of networks." The power of the Internet arises from the fact that, by linking so many computers together, the capacity of each to obtain and manipulate information in useful ways is greatly enhanced. The whole is far greater than the sum of its parts.
It is very nearly a truism that we now live in a world in which information is a highly valuable commodity. More importantly, some kinds of information — though not all — are "perishable" in the sense that they quickly lose their value. As a result, any medium that can deliver information as far, as rapidly, and as cheaply as the World-Wide Web will continue to thrive. For example, some programmers already have developed prototype stock market "tickers" that can monitor specific stocks over the Internet and automatically alert you from your desk-top computer whenever the prices change. This would free an investor from having to repeatedly check on these stocks as the trading day continues. In a very real sense the Internet has become a kind of global computerized bulletin board capable of instantaneously delivering even the most esoteric information to the hardest-to-reach audience anywhere on Earth. More to the point, it can deliver information intelligently and interactively, tailoring it to individual needs.
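A "ticker" of this kind is simple to sketch in outline. The few lines below are an illustration only, not any particular product: the get_quote and alert functions are hypothetical stand-ins for whatever service actually supplies the prices and whatever notification the investor's computer displays.

```python
def watch(symbols, get_quote, alert, polls):
    """Poll each stock `polls` times; call alert() whenever its price changes.

    get_quote and alert are hypothetical stand-ins supplied by the caller."""
    last = {s: get_quote(s) for s in symbols}   # remember the opening prices
    for _ in range(polls):
        for s in symbols:
            price = get_quote(s)
            if price != last[s]:
                alert(s, last[s], price)        # the desk-top notification
                last[s] = price
```

In practice the polling would run all through the trading day, freeing the investor from checking the prices by hand.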
And the means needed to "publish" through this new global medium are incredibly cheap compared to all other media. For a few hundred dollars a year, most people can rent space on computer devices that tap directly into the Internet. Nothing else is needed. Unlike broadcast media, there is none of the expense of buying and maintaining costly transmitting devices. And unlike print media, an Internet "publisher" does not even need a printing press or photocopying machine (or the cost of borrowing someone else's). Information can be quickly designed inside a personal computer and placed on the Internet, where it then can be redisplayed by any other computer with a World-Wide Web hookup. This can be done without ever once reproducing the information on a sheet of paper or recording it on a diskette.
The potential of this new medium is so far-reaching that all of its possibilities certainly have not yet been imagined. Concrete examples from the here-and-now, however, may suffice to show its potential. Some physicians around the world, for instance, have begun to use the Internet as a means of sharing and gathering information about unusual or emerging diseases. This can greatly speed the diagnosis of rare ailments while also allowing public health officers to better monitor the outbreak of new strains of infection. A doctor might place a description of the patient's symptoms in an area of the Internet frequently used by other physicians. Any doctor with pertinent information then can read this material and quickly E-mail a response that might help in a medical diagnosis or disease tracking, which could save lives. Health care providers in highly developed nations such as the United States could even use this type of communication to provide an "electronic consultation" with doctors in less developed countries. Try to imagine something similar happening via any of the other forms of mass communication that exist today.
This example also shows another aspect of the Internet that is unique: It serves a vital purpose even if it is used in a way that does not itself generate a profit. For this very reason the Internet has not been viewed as a profit-making entity for the bulk of its history. Indeed, its very existence today would not have been possible without enormous subsidies from the governmental, educational, and research organizations that shaped it. These same groups will continue to have a strong incentive to keep the Internet alive whether or not they make any money. Profit-making is not likely to dominate the Internet nearly so much as it does other media, at least for the next few years.
The use of the Internet by the legal community presently is still more of a novelty than a useful tool. Yet the economic realities of practicing law eventually will force lawyers, judges, and legal scholars onto the Web. Consider how expensive legal research is today. Law books alone cost a fortune, and computer-assisted legal research comes at a very dear price. More than a few lawyers have stumbled into malpractice because they could not afford the cost of state-of-the-art legal research technology. In other words, they could not afford access to the medium that contained the information they needed. But imagine what the practice of law would be like if all of this same information and sophisticated research technology were available on the Internet at minimal cost. Even the most cash-strapped attorney and quite a few skilled laypersons would be able to do the same kind of research a wealthy law firm could afford.
And this example shows the economic pressures that will force law-related resources onto the World-Wide Web in ever greater numbers. The costs of legal research today arise not so much from the information itself, but from the media used to distribute it. This is a point that deserves great emphasis as we consider the future of legal research. In theory the most fundamental types of legal information — court opinions and statutes, for example — are public material available to all. But in practice, the costs of duplicating or transmitting that information have created a substantial financial hurdle. For instance, courts spend enormous sums buying and maintaining photocopying machines. As a result, they typically charge the public for the expense of creating photocopies of court documents. In 1996 the Supreme Court of Florida was asking a dollar per page, which translates into a hefty outlay for a "public" document like the 450-page Florida Family Law Rules the Court issued that same year.
Yet the Court now has placed that same document on its World-Wide Web site, where it can be obtained instantaneously almost anywhere in the world at no cost. Even with the relatively new state of Web technology today, the Florida Family Law Rules can be viewed and searched directly on the Internet, can be saved to any personal computer, or can be quickly printed on the end-user's own printer. And it will appear on screen in exactly the same format as the official $450 version available at the Supreme Court Clerk's Office.
As a legal research device, the Internet clearly is in an embryonic stage and has not yet supplanted the "primary" resources lawyers use on a daily basis. But do not count on this situation continuing forever. Simple economics will force law onto the Web, and lawyers necessarily will be required to learn how to use this new medium. This Primer is intended as a simple introduction to the Internet for legal researchers who have used it either very little or not at all. The focus of this Primer is on the needs of persons using the Internet in Florida's legal system, though much of what will be discussed below can be useful for any newcomer to the World-Wide Web.
Anyone hoping to master a new technology must first be able to use and understand the specialized language of its creators. For that reason I am beginning this Primer with a kind of walking tour through today's Internet, using pieces of Web jargon as stopping points along the way. The language is English, of course, but a special version described by computer programmers as "hackish." All items of jargon have been chosen with considerable thought, and they are arranged not alphabetically, but in an order designed to aid a better understanding of the World-Wide Web. The overall goal is to show you where the Internet came from, how it works today, and where it may be heading. All three are necessary for anyone to fully understand why the Internet works the way it does.
This tour is aimed at novices — "newbies" in the Internet's "hackish slang" — and those who perhaps remember their Selectric typewriters with sentimental longing. These are precisely the ones who suffer the most serious jolt when they beam themselves onto the World-Wide Web. The futurist Alvin Toffler called it "future shock" — the feeling so aptly captured by Dorothy in The Wizard of Oz: "Toto, I've a feeling we're not in Kansas anymore." Something very similar happens whenever a newbie first ventures into the Looking-Glass world of the Web.
Nothing on Earth today better exemplifies Toffler's future shock than the World-Wide Web. Not only is the Internet a completely synthetic environment, but the technology underlying it changes on a daily basis. Yesterday's innovation is today's ho-hum. The Web's "digerati" (digital literati or trend-setters) constantly strive to outpace each other both in technology and style, resulting in a milieu in which science and fashion overlap. When you enter the vast reaches of "Cyberspace" (the artificial environment of the Internet), brace yourself for a takeoff shock comparable to a ride on the Space Shuttle. The Internet is growing at breakneck speed, which means change is the only dependable constant. So keep in mind: The World-Wide Web is not a place where you can expect to learn the rules of the game once, and once only. Rules change each day. Even the computer programs used on the Internet show an unnerving habit of becoming obsolete after only a few months' use, sometimes even a few days'.
Nevertheless, there are basic aspects of the 'Net solid enough to support a little education. As a starting point, new Web explorers need to understand something of the Internet's peculiar history. It is a curious tale of a secretive Cold War military project that rocketed utterly out of control. Within a few years, this project not only lost any semblance of secrecy; it also came under the control of a group of eccentric geniuses proudly calling themselves "geeks" and "hackers." This strange chronicle begins in the tumultuous 1960s with a novel conglomeration of computers called "ARPANet," a fledgling network destined to become today's Internet — whose unwitting parent was none other than Uncle Sam.
ARPANet. After the Cuban missile crisis, United States military planners faced a grim prospect: that America some day might be attacked with nuclear weapons. The question on everyone's mind was how the national government and its military could continue operating effectively after a nuclear assault. One problem was clear: Even a few nuclear blasts over key cities would wipe out the American telecommunications system as it then existed. And without the ability to communicate, the central government would be powerless. Pentagon officials presented this problem to the Free World's premier think tank, the Rand Corporation, which in the early 1960s developed a startling solution.
The trouble with the American communications system, Rand concluded, was the very thing that made it work so well in peace-time — its centralization. A few well placed bombs could cripple the entire grid simply by destroying crucial relay junctions. This meant the system was unreliable in a postnuclear environment, a conclusion Rand believed would also apply to any other conceivable system. Because bombing sites could not be predicted, big chunks of the communications network might vanish no matter how it was built. It was entirely possible that only a few components of the grid would remain intact, but even these would be useless if the central switching stations were gone.
Realizing this, Rand began its proposal with three assumptions. First, any communications system designed at least in part to survive nuclear war must be completely decentralized. Second, even a decentralized system must be presumed unreliable in the aftermath of the attack, because portions of it would be destroyed. And third, all remaining portions must be able to transcend the system's unreliability by restoring links with each other. Using these assumptions, Rand laid down a communications blueprint that, however unintentionally, was the foundation of today's Internet. America's postnuclear grid would be an "anarchic" or decentralized system, Rand decided. In other words, there would be no central controller.
This was a decidedly unmilitary structure, everyone conceded, but necessity dictated it. In this new anarchic system, each and every "node" (computerized communications link) would be equal to all others, with independent ability to send and receive information. The grid eventually could be as widespread as the existing telecommunications system, but centralized relay switches would give way to something quite new: Each and every node would be able to act as a switching point. Moreover, all messages would be encapsulated in a kind of "electronic envelope" with an "address" on it. And any node would be able to read the address and send the message more-or-less toward its eventual destination. As a result, the system might be somewhat slower, but it at least could reroute messages to bypass any segment of itself that had been destroyed. By the same token, a message literally could pass through dozens or even hundreds of nodes before reaching its final stop. It might even travel in circles for a while. "Anarchic" indeed.
There is an irony here. Little thought was given to the implications of the word "anarchy" — a word often used to describe today's Internet.
Military strategists throughout the West were persuaded by Rand's ideas, and the first experimental network on the Rand model was built in England in 1968. But it was not until 1969 that today's Internet actually was born, and from its first day it followed the Rand formula to a tee. The place was the University of California at Los Angeles, where in late 1969 a tiny network of four nodes was placed in service. It was called ARPANet — "Advanced Research Projects Agency Network." The name came from a subdivision of the Pentagon called the Advanced Research Projects Agency, which supervised the initiative.
For its day ARPANet was a technological marvel, requiring the use of what then were considered "supercomputers." All together they had less processing power than today's smallish Pentium-based desktop models, but at the time they were a very rare commodity indeed. And for that reason, scientists and other researchers were very eager to use ARPANet's computers.
This perhaps explains why ARPANet's military origins became obscured almost from the get-go. Since the initial network was only a tiny prototype, it could not feasibly be used for military communications even in the event of war. Rather than let it sit idle, the government provided access to researchers — especially those working on military contracts. The original idea was to keep adding nodes so scientists and others could take advantage of ARPANet's computers from remote locations. Such growth also had a benefit for the military: It eventually would result in exactly the type of expansive postnuclear network the Rand study had envisioned. Or so everyone thought.
At least in the early years, the Pentagon seemed oblivious to any hint that ARPANet's "anarchic" structure gave it serious potential to run amok. Perhaps the network was not altogether a Frankenstein's monster, but it certainly had the capacity to develop "a mind of its own." ARPANet had been planned that way; and evidence of this potential was available almost from the beginning. One early study, for example, greatly disturbed the Pentagon when it showed that a large portion of network traffic consisted of personal messages we today would call E-mail. Network supervisors were never able to quell this practice, and eventually came to view it as a necessary evil. After all, the scientific uses of ARPANet were important enough that no one seriously considered shutting it down just because researchers also were using it to exchange gossip and football scores.
But E-mail was only the start. Almost imperceptibly, control of ARPANet slipped from the Pentagon's hands, spurred on by snowballing technology. By the late 1970s, ARPANet's overseers had developed ever more efficient methods of packing information inside electronic "envelopes" with "addresses" that could be quickly read and rerouted by any node on the system. These "envelopes" also had a unique feature that gave them universal appeal: They worked on just about any kind of computer. Here again, Cold War logic was at work. As was obvious to all, military planners could not afford to lose the services of even a single computer in postnuclear America. So the methods for creating electronic "envelopes" — called "protocols" by programmers — had to work "across platforms" (on any computer).
For a variety of reasons, these protocols were never patented or copyrighted. Anyone in the world could use them for free — a fact that made ARPANet even more attractive to programmers throughout the world. And they came in droves. With the passing years, ARPANet expanded well beyond America's shores, following the military and its ever-present swarm of contractors. These contractors, of course, worked for private companies that had many interests beyond military concerns. So in time, an awful lot of nonmilitary types found their way onto ARPANet.
It was during this stage that a group of brilliant computer jocks placed a very distinctive cultural stamp on the evolving network. They largely came from the Cold War generation that had absorbed something of the 1960s irreverence, though many might have been dismissed as "nerds" by their longhaired contemporaries. These emerging digerati were fond of the books of J.R.R. Tolkien and George Orwell's 1984, as well as the absurdist skits of the British comic group Monty Python — facts later evident in the Internet jargon they created. And in time they proudly gave themselves labels such as "geeks" and "hackers" that once had been used to shun more than a few of their number.
"Geek," for example, had been a college-campus label for any student whose fondness for studies was balanced by obliviousness to current fashion. Yet among these elite network programmers, "geek" came to mean anyone whose nose seemed umbilically attached to a computer monitor — a condition not necessarily frowned upon by their peers. Likewise, to outsiders "hacker" meant anyone skillful enough to "hack through" the security features of a computer or its programming. But on ARPANet it took on a tone of praise for exceptional programming skill of any kind. Even today programmers worldwide talk about a mythic figure named J. Random Hacker who is the quintessence of a demanding ethical standard called the Hacker's Code.
All through the 1970s the Pentagon was generous to these clever if unconventional computer programmers. Why shouldn't it be? Anyone who wanted to plug in to ARPANet could do so simply by using the protocols for creating electronic "envelopes," and the demand for access was not onerous at the time. In the 1970s, computers and their networks still were expensive enough to be confined largely to research-oriented facilities. Most were used only by the brightest and the best. Besides, the military remained intent on creating a larger and larger postnuclear network. So much the better if private research facilities helped foot part of the bill by expanding the grid a little more each year. If war ever broke out, the military would simply step in and take charge. So it seemed.
But in the early 1980s, everything changed all at once. A sudden explosion of personal computers and inexpensive networking devices sent the demand for ARPANet through the roof. Network after network joined the grid. Because of the ARPANet's decentralized nature, Washington found itself increasingly unable to control its own Cold War "military" communications system. This was because each node, being coequal with all others, could link up with other computers and networks without necessarily getting the Pentagon's permission. ARPANet's growth had become uncontrollable, and military-related traffic quickly shrank to a small minority of the whole. Worse still, cheap computer equipment was drawing in a variety of "fringe" groups whose philosophies hardly corresponded with those of the military establishment. Discussions of punk-rock lyrics now flowed over the same lines that once had carried military secrets. ARPANet was not quite a Frankenstein's monster, but the fictional computer "Hal" from the film 2001: A Space Odyssey sometimes came to mind.
Increasingly frustrated, the Pentagon in 1983 moved its own network communications into an entirely new, more secure system. But doing so by no means ended the network's life. After all, ARPANet had been designed to survive even a nuclear attack that sheared away large chunks of itself. So it treated the loss of its military component exactly the same way, simply rerouting affected transmissions. The remainder of the network hummed on, just as it had been designed to do.
Something quite remarkable had happened: ARPANet had become a freestanding network capable of growing in an almost organic sense, under no one's effective control, held together by a system of computer protocols anyone could use for free. Computers could link up by connecting to an existing node anywhere on Earth, and there already were hundreds. Though ARPANet was not formally dissolved until 1990, it de facto had become the Internet as we know it today.
With the military no longer present to object, the Internet grew rapidly in the late 1980s. Large science foundations, government research facilities, and educational institutions quickly moved in to take advantage of this new method of communication and research. Their vast computer resources quickly filled the technological vacuum created when the Pentagon left. Biggest of all was the National Science Foundation, which in 1985 constructed a new "information backbone" linking American colleges to supercomputing resources. Called NSFNet, this new facility quickly expanded and diversified until it made the old ARPANet facilities obsolete. One Web commentator said no one on the Internet even noticed when ARPANet's staff "signed off" (shut down) for the last time in June 1990.
By this time, the Internet already formed a massive grid reaching across the globe. It especially had become popular as a means of sending E-mail, a relatively simple task most computer users could learn. But E-mail was only one small part of the Internet. Its most useful features included accessing remote computers to perform calculations, exchanging information on request, and searching through large amounts of information quickly. Yet the vast majority of everyday users could not be expected to develop some of the more sophisticated skills needed to perform these tasks.
Bothered by this problem, Internet programmers associated with a physics research facility called CERN in Switzerland began experimenting with ways to make the Internet more "user friendly" (easy to use). What they developed, in an effort led by the researcher Tim Berners-Lee, was a brand new feature of the Internet called the World-Wide Web. (Its standards today are coordinated by a consortium based at the Massachusetts Institute of Technology.) The Web is not a physical component of the Internet; it quite simply is a new set of "protocols" for transferring information. Its goal was to convert the Internet into something that could be used by anyone able to switch on a computer.
By any standards, the World-Wide Web was a mind-boggling success. Traffic on the Web mushroomed so fast in the 1990s that many people were caught by surprise. Those oblivious to the Internet's history thought the Web must have exploded onto the scene from nowhere. Companies, governmental agencies, and even grade school systems rushed to hook up to this phenomenal new communications system. Some publishing companies disclosed their own surprise with the first messages they posted on the Web: a desperate plea for authors who could write intelligently about the Internet. Even the gargantuan Microsoft Corporation found itself in a hasty game of catch-up as it watched smaller and newer companies quickly dominate the entirely new information industry centered around the World-Wide Web.
What the Web does is really very simple: It sends information over the Internet in a way that can be displayed in an eye-pleasing, easy-to-use format on computer screens. Much of the "techspeak" (programmers' jargon) is eliminated. The user then can manipulate or exchange information over the Internet using control devices displayed directly on-screen. If a computer is using the Windows operating system, for example, the controls are displayed using the standard Windows format most users already understand. Even better, any kind of computer anywhere in the world can join the Web. How? The answer to that question involves a piece of equipment called the "server."
Server. Servers are computers that connect individual personal computers or networks to the Internet, usually over phone lines, and that store and exchange the information found there. Servers typically can both send and receive information. Any exchange of information over the Internet involves at least two machines: one that requests the information (strictly speaking, a "client"), and a server that sends the information back. And this information may well pass through scores of nodes around the globe before reaching its destination. The act of requesting information from a server via the Internet is called a "hit." System administrators often brag about how many "hits" their servers have taken on a particular day.
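At bottom, a "hit" is nothing more than a short piece of text that the requesting machine transmits to the server. The sketch below is a simplification (real Web software adds further bookkeeping lines), showing roughly what such a request looks like under the Web's hypertext transfer protocol, using the Florida courts' server as the example:

```python
def build_request(host, path):
    """Compose the plain-text message used to "hit" a Web server.

    A simplified sketch of the hypertext transfer protocol's request
    format; real software adds further header lines."""
    return (
        "GET " + path + " HTTP/1.0\r\n"   # ask for one specific document
        "Host: " + host + "\r\n"          # name the server being addressed
        "\r\n"                            # a blank line ends the request
    )

print(build_request("www.flcourts.org", "/courts/supct/"))
```

The server's reply is likewise plain text: a short status line, a few descriptive headers, and then the requested document itself.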
Problems can develop whenever one or both of the servers in the exchange are too busy with other Internet requests to respond, which often happens on the crowded "information superhighway." Other problems occur if a server has "crashed" (quit working), has been programmed not to send the type of information requested, or where the connection between the two servers is bad. It is also possible for information to simply vanish on the Internet. Your message, for example, could enter a node that crashes before it can relay the electronic "envelope" on toward its destination. It also is possible for someone to intercept an Internet message before it reaches its destination, which is why companies now are pouring millions into "encryption" technology that converts information into complex secret codes.
When information you have requested fails to reach you, your computer usually will display an error message containing strange acronyms and cryptic requests. My own computer often flashes something like, "Cannot locate DNS entry. Please check URL." Messages of this type may sound rather frightening, but they really are only your computer's way of grumbling. Whatever you see on your own screen, just remember one thing: Don't panic. People who write computer programs are an odd bunch who get a little lazy (or perhaps mischievous) when it comes time to actually write the words for error messages. So they simply develop a single message that is sufficiently disquieting to cover just about any mishap short of nuclear holocaust. It is always a good idea to follow a "Rule of Three" when requesting information over the Internet: Try at least three times before you give up. The Internet is still very new and a lot of things can go wrong, but usually it's only temporary.
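The "Rule of Three" is easy to automate. As an illustration only, the sketch below wraps a hypothetical fetch function (standing in for whatever program actually requests the page) in a simple retry loop:

```python
def fetch_with_retries(fetch, attempts=3):
    """Follow the "Rule of Three": retry a request before giving up.

    fetch is a hypothetical stand-in that raises IOError when the
    server is busy, has crashed, or the connection is bad."""
    for attempt in range(attempts):
        try:
            return fetch()                 # success: hand back the page
        except IOError:
            if attempt == attempts - 1:    # out of tries; report the failure
                raise
```

Because most Internet failures are temporary, the second or third attempt very often succeeds where the first did not.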
Finally, each and every server must do one thing before it can join the Internet — register its "address."
Address. We mentioned the subject of "addresses" earlier in reviewing the Internet's history. One of the key features of the Internet is the ability to stamp each packet of electronic information with an address that can be read by any node in the grid, for forwarding to its final stop. You, the user, obviously must provide the address if this journey is to be successful — which of course means you must actually know the address beforehand. In simple terms, an address is nothing more than a series of keyboard characters, containing no spaces, that tells Internet computers where a particular server is located. The address also can include more detailed information that tells exactly where specific information is located within a server's various directories. The address is sometimes called a "URL" ("uniform resource locator") or, more loosely, a "site" or "Web site."
All addresses must follow an international standard called the Domain Name System ("DNS") established by computer programmers world-wide and maintained by a coordinating authority called InterNIC. Most addresses begin with the prefix "http://," meaning hypertext transfer protocol (the international standard developed by CERN for transmitting information on the World-Wide Web). Other prefixes include "ftp://" ("file transfer protocol," referring to one of several approved methods of transmitting computer files between computers over the Internet).
Apart from the prefix, addresses can contain many separate pieces of information, which always are separated by a period (often called a "dot") or by a forward slash ("/"). Each piece of information tells the computer a little bit more about the location of information on the Internet. For example, the piece ".us" means that the information is in the United States, and ".fl" means it is in Florida. The Florida Supreme Court can be accessed at the following address:

http://www.flcourts.org/courts/supct/
You can easily parse this address into its individual components. The segment "http://" lets the computer know it is looking for a server capable of transferring information using the hypertext transfer protocol. Next, the segment "www.flcourts.org/" provides the unique DNS address for the Florida Supreme Court's server, which is located in the Supreme Court Building in Tallahassee. And the segments "courts/" and "supct/" respectively identify a directory and subdirectory located on the server. By using this full address on the Internet, you will load the Florida Supreme Court's "home page" onto your computer.
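This dissection can even be performed mechanically. As an illustration only, the standard library of the Python programming language will split any such address into the pieces just described:

```python
from urllib.parse import urlparse

# The Florida Supreme Court address discussed above.
parts = urlparse("http://www.flcourts.org/courts/supct/")

print(parts.scheme)  # the transfer protocol: "http"
print(parts.netloc)  # the server's DNS name: "www.flcourts.org"
print(parts.path)    # directory and subdirectory: "/courts/supct/"
```

Every Web program performs essentially this same analysis each time it is handed an address.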
Home page (or simply "page"). Home pages are computer files containing information that, when stored inside a server, can be accessed over the Internet and displayed on a remote video monitor as though they were documents. They are the most characteristic feature of the World-Wide Web, having been invented by CERN to permit a more user-friendly environment for Web users. "Home page" sometimes is used more loosely to mean the same thing as "address," "site," or "URL." If a person or business has posted an entire collection of pages on the Internet, the phrase "home page" commonly is used to refer only to the main page, with all others being termed simply "pages."
The information in a home page is written in a computer language that can be transmitted over the Internet in electronic form. When that information reaches its destination, it is interpreted by the end user's computer and displayed on the monitor. Because different computers may interpret the graphic aspects of the information differently, these documents can look quite different from computer to computer. However, the basic text of the electronic document remains the same because it is written in a special computer language called HTML.
Hypertext Markup Language (or "HTML"). The Hypertext Markup Language is a relatively simple computer language designed to reproduce words on a computer video screen no matter what type of hardware or software the computer is using. It was invented by CERN. All home pages are written in HTML. Like its Internet predecessors, HTML was specifically designed to function "across platforms" (between different types of computers). Today's World-Wide Web would not have been possible without HTML because so many different kinds of computers are used throughout the world, and not all talk the same language. HTML can be deciphered by any computer that has the appropriate "helper software," though again, with variation from computer to computer.
Variation occurs because the people who invented HTML did not care if the words looked different on different computers: The original idea was simply to transmit information and nothing more. In other words, HTML originally was concerned only with substance, not with style. Some of the most annoying problems of HTML arise from the fact that it was never designed to do anything other than reproduce words on a monitor, but the Internet now is being used for many other things. For this reason, you may notice that fancy graphics, animations, or other decorations make everything slow down on your computer, whereas pages containing only text work very quickly. This is because HTML text "downloads" very fast over the Internet, but space-hogging graphic files do not — a problem that nevertheless is on the verge of being solved.
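This substance-over-style design can be seen in miniature with a short Python sketch (Python is used here only for illustration; the page fragment is invented). Any computer, whatever its hardware, can strip away the markup and recover the same words:

```python
from html.parser import HTMLParser

# A fragment of HTML: the tags mark up the words but say little about style.
page = ("<html><body><h1>Opinions</h1>"
        "<p>The Court releases opinions on <b>Thursdays</b>.</p>"
        "</body></html>")

class TextExtractor(HTMLParser):
    """Collects only the words, ignoring the markup entirely."""
    def __init__(self):
        super().__init__()
        self.words = []

    def handle_data(self, data):
        # Called only for the text between tags, never for the tags themselves.
        self.words.append(data)

extractor = TextExtractor()
extractor.feed(page)
text = "".join(extractor.words)  # the substance, with every tag discarded
```

How the words are ultimately styled on the screen is left to the receiving computer, which is exactly the variation described above.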
Download (or simply "load"). Downloading information is the act of receiving a home page or any other kind of electronic information from the Internet into your computer. "Loading" can mean somewhat different things in different contexts. You have "loaded" a home page into your computer's temporary memory when it appears on your video monitor. The computer coding for this home page also is often stored on your hard drive in a special temporary file called a "cache," where it can be quickly accessed if you decide to return to it later. However, you also can "load" that same home page onto your computer's hard drive by "saving" it to one of your own files. This puts it in a more permanent form.
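The idea behind a cache can be sketched in a few lines of Python (used here purely for illustration; `fetch_from_internet` is a made-up stand-in for the real, and slow, network transfer):

```python
# A toy sketch of a browser cache.

def fetch_from_internet(address):
    # Placeholder for a real download; here it simply fabricates a page.
    return "<html>page at %s</html>" % address

cache = {}  # address -> page already downloaded once

def load(address):
    """Return a page, using the cache to avoid a second slow download."""
    if address not in cache:
        cache[address] = fetch_from_internet(address)  # the slow path
    return cache[address]  # the fast path on every later visit

first = load("http://www.flcourts.org/")
second = load("http://www.flcourts.org/")  # served from the cache this time
```

The second visit never touches the network at all, which is why returning to a page you viewed moments ago is so much quicker than loading it the first time.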
Today, Internet technology has advanced so significantly that you now can "load" whole programs directly from the Internet, which then will operate on your computer like any other program. At present some of the most popular kinds of "downloadable" programs are simple animated graphics that appear to be "embedded into" (part of) a home page you are viewing. A brand new "cross-platform" programming language called Java is making animations and other similar programs more common on the Internet. Other companies are rushing into this market at break-neck speed, and some already offer a way of transmitting "multimedia" programs (those including both sound and visual elements) that display a kind of interactive movie.
Some people think that in the future you will be able to access almost any type of program you want, including word processors and spreadsheets, directly from the Internet without having to store them on your own hard disk. A big advantage here would be that the program you are using could be automatically updated and you would not need to store bulky computer programs on your own hard drive. One possible profit-making use of the Internet would be to sell access to a package of software that the user's computer automatically would load over the Internet.
With some exceptions, all the information you download from the Internet will be interpreted by a "browser." It is your browser that decides exactly how the electronic document will appear on your video monitor.
Browser (or "Web browser"). Browsers are computer programs that, in their earliest forms, could interpret HTML to correctly display text on a video monitor. Today browsers have become much more sophisticated and often can also download and display graphics and can load programs into the computer. The most popular and sophisticated Web browser in the world is Netscape, though it is by no means the only one now available. After being stunned by the World-Wide Web's sudden emergence, Microsoft Corporation now has distributed its own competing browser called the Internet Explorer. It is clear that Netscape and Microsoft will engage in a global battle to dominate the information markets created by the Internet.
Keep in mind that different browsers will tend to display electronic documents differently, especially graphic or stylistic elements of those documents. Thus, what you see on your screen may not be exactly the same thing someone else will see. But the one thing all browsers share in common is the ability to display "hypertext links."
Hypertext link (or simply "link"). Perhaps the most distinctive feature of the World-Wide Web is the hypertext link. This is underlined text in a home page that usually (though not always) is highlighted in a color different from the surrounding text. Clicking on any such link with your mouse arrow automatically tells your computer to find yet another home page on the Internet and load it into your computer. Whoever designed the page can program the link to load any other page located anywhere on the World-Wide Web. Such documents are "linked."
The ability to create hypertext links in home pages is one of the special features that make HTML and the Internet unique. Hypertext links allow you to "browse" (follow links) through many different home pages. A hypertext link is something like an automated footnote that takes you directly to the entire document that has been referenced. Hypertext links thus provide an important way of performing research on the Internet. When using the Internet, you may not be able to find the particular home page or information you want, but you often can find a home page dealing with the same or a similar subject. If this page contains more hypertext links on that same subject, you can "follow them" (click on them) to see if they might lead you to more pertinent information.
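In concrete terms, a link is nothing more than an address embedded in a page's markup, and a program can harvest every one of them. A Python sketch (illustrative only; the sample page is invented):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Gathers the addresses behind each hypertext link on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Hypertext links are written as <a href="address"> in HTML.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = ('<p>See the <a href="http://www.flcourts.org/">Florida courts</a> '
        'and the <a href="http://www.loc.gov/">Library of Congress</a>.</p>')

collector = LinkCollector()
collector.feed(page)
# collector.links now holds the two addresses a reader could "follow"
```

When you click a link, your browser is simply doing this harvesting for you and then loading whichever address you chose.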
A special kind of hypertext link in documents is called a "mail-to link." When you click on such a link, a special computer form pops up on your screen for you to send Internet E-mail to a person designated by that link. Not all browsers have this feature. Be careful, however, about how freely you allow such links to be used. The Internet has a wonderful "global village" atmosphere, but there also are those who "surf the 'Net" (use the Internet) for mischief. For example, if you had a "mail-to link" on your own personal home page, someone might use it to "flame" you.
Flaming. "Flaming" means using the Internet to heartily insult someone, which perhaps understates the more flamboyant forms of this art. If you receive E-mail comparing your finer qualities to those of Beelzebub and your mother's virtue to that of a lower life form — though typically in less genteel language — you can rest assured you have been "flamed." The content of that message is "flamage." While you might be inclined toward indignation, any Internet geek would see this as merely a wonderful opportunity for an insult duel.
Though the word "flaming" is new, the actual practice has ancient and respectable roots. Many emerging civilizations in world history have developed similar traditions of a "match" involving an exchange of insults. The ancient Irish, for example, had such a tradition and developed it into a kind of literary art form quite popular in its day. Congress presently seems bent on reviving the custom, at least within its own halls. Something similar also is now happening on the Internet, which perhaps reflects how young this medium is. I recently stumbled across a master's thesis on this subject, which unsurprisingly found a causal link between flaming and testosterone. You need not worry about being flamed, however, just because you are using the Internet — unless you publicize your E-mail address to people who might not like your ilk. (And remember: Lawyers are not the most loved folk these days.) One of the most common ways this might happen is if you take part in a "chat group" and someone takes offense at something you have said — or perhaps even the simple fact that you exist.
Chat group. This is a special kind of exchange on the Internet in which people can send messages back and forth to anyone who has been given access to the group. Barring any special problem, these exchanges can occur in real time, so that an actual discussion takes place. Although the name is somewhat deceptive, "chat" lines to date have worked chiefly by exchanging written messages, not by using live audio — though this also is now changing. Netscape and Microsoft, for example, already have added new features to allow live audio-conferencing over the Internet. The best thing about Internet audio-conferencing is its cost — nothing, beyond the price of hooking up to the Internet. This feature could even be used as a means of conducting audio classes or seminars over the Internet, and at very little cost compared to conferences held live in a meeting room. There also are now a variety of new programs that allow you to use the Internet as though it were a telephone line, though one that still is not very reliable. Recently I was able to exchange a few words over the Internet with a woman from South Africa. Though our line quickly went dead, I still marvel at the fact that this exchange of greetings spanned half the globe, at no cost.
Other chat lines are very specialized, and there already are chat groups confined to legal issues. Moderated chat groups (those in which irrelevant messages are weeded out by the "owner" of the chat group) tend to remain on point better, though at the cost of censorship. It also is possible to set up a chat line, written or in audio, so that users can access it only if they know the proper password. For example, anyone conducting a legal seminar on a chat line easily could restrict access by using password protection.
Unmoderated chat groups and those without passwords have a pronounced tendency to get way out of control in today's rambunctious Internet environment. Being flamed is a distinct possibility. A serious discussion about the intricacies of stare decisis, for example, could only digress when suddenly barraged with a flurry of nouns and adjectives mother never uttered. Who might do such a thing? — Lawyer-hating "cyberpunks" (hackers described by the Hacker's Dictionary as "trendoids in black leather") or "weenies" (adolescent geeks or the adult equivalent). Worse still, the chat group or some of its members might be subjected to a "virtual" (computer generated) attack by people who simply don't like your kind of folk. Lawyers once again come to mind. In other words, the entire chat group might become the victim of a "spam."
Spamming. This is the act of sending E-mail or other Internet transmissions abusively. Spamming originally meant simply sending junk E-mail over the Web, especially commercial solicitations. Predictably, the most notorious early case of spamming involved lawyers who inundated servers in the southwestern United States with thousands of unwanted solicitations. But the meaning of "spam" has broadened over time so that it can include "flaming" aimed at a group, or the act of bombarding a single server with repetitious E-mail (also called "mailbombing").
If you are wondering where the name "spam" originated, look no further than your grocer's shelf. Though I am sure the makers of Spam brand potted meats are writhing in their boardroom chairs, the Internet's use of "spam" reportedly began as a kind of analogy: The informational value of spam E-mail was thought to be rather poor on the nutrition scale. Internet lore holds that this usage was inspired by a comic skit of Monty Python, a group of British comedians popular among early Internet geeks and hackers. All spamming is widely thought to violate "netiquette" (a dimly defined standard for behavior on the Internet). There presently is no recognized "Miss Manners" of the Web, so netiquette largely is in the eye of the beholder.
Spamming is one of several new problems created or worsened by the Internet, which is only to be expected when a new medium is being created. Two other significant problems are the spread of computer "viruses" over the Internet and the possibility of someone using an Internet server to obtain restricted information. These problems now are being addressed by new technologies such as "firewalls" and "antivirus software."
Firewall. A firewall is a method of configuring a server and its software to restrict Internet access going in or out of a computer or its "local area network" or LAN. (LANs often link personal computers inside a particular office or company). As noted above, a person who might try to break through your firewall originally would have been called a "hacker." Today, a hacker who intends to breach security for malicious reasons is sometimes called a "cracker" (one who cracks security). I have even stumbled across Web pages discussing the fine distinction between "hackerdom" and "crackerdom," and how far "hackish" misbehavior must go before it is "crackerly." (And they honestly think they're different from lawyers.)
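At its core, a firewall is just a set of rules consulted before any connection is allowed to pass. A minimal Python sketch (real firewalls inspect traffic inside the operating system; the rule lists and host names here are invented for illustration):

```python
# A toy firewall: two rule lists, one inbound and one outbound.
BLOCKED_SOURCES = {"badguy.example.com"}           # keep intruders out
BLOCKED_DESTINATIONS = {"forbidden.example.com"}   # keep prohibited sites out of reach

def allow(source, destination):
    """Decide whether a connection may pass through the firewall."""
    if source in BLOCKED_SOURCES:
        return False  # inbound rule: known troublemakers are refused
    if destination in BLOCKED_DESTINATIONS:
        return False  # outbound rule: users cannot reach prohibited material
    return True       # everything else passes
```

Real firewalls apply far more elaborate rules, but every one of them reduces to this same pass-or-refuse decision made at the boundary of the network.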
Incidentally, there actually are benevolent reasons why someone might try to hack through a firewall. System administrators often challenge their distinguished colleagues (i.e. "competitors") to break through a firewall, which of course helps identify weak points in the security arrangement. I understand administrators win bragging rights if their firewalls prove the bigger and the stronger. There is something Freudian here.
Firewalls can be used not only to keep certain people out of your computer or LAN, but they also can work in reverse: to prevent legitimate computer users from obtaining prohibited material over the Internet. The government of China recently carried the art of firewall creation to new heights by building one around their entire nation, presumably to prevent the billion or so Chinese from being contaminated with Western ideas. But the same idea also works on a smaller scale. Employers no doubt would have legitimate concern if a worker was spending an inordinate amount of time on-line to a Swedish Web site with far too many X's in its address. Firewalls can prevent such transactions.
They also can provide some protection from the spread of computer viruses, though anti-virus software is far more effective in this regard.
Virus. Yes, even computers catch cold. We call them "viruses" — small computer programs designed to "reproduce" by inserting exact copies of themselves into other benign programs without the user's knowledge. Viruses usually come to life only if a user runs the "Trojan horse" program in which they reside. This can be a special problem if the program is one used frequently, such as a word processor. Another less common variety of computer ailment called a "worm" does not attach itself to programs, but instead is designed to propagate itself over an entire network. The so-called "Great Worm of 1988," for example, clogged memory in ten percent of all Internet hosts, leaving administrators the distinctly peculiar task of "deworming" their computers. We can only hope computers do not develop fleas. The term "virus" often is used in a generic form to include "worm." (The word "worm" probably came from Tolkien's books, where it meant "dragon.")
Viruses can range from the merely annoying to the grotesquely destructive. For example, a virus might simply flash a message describing you as a species of toad. Or it could wipe out every last speck of information on your computer. All viruses are created by people — geeks who apparently have nothing better to do with their time. We fortunately have not yet reached the point where computer viruses evolve in the manner that human viruses do. When that day comes, we really ought to start looking for a new planet.
Viruses often are given fanciful names, a fact that reflects the use (or misuse) of creative energy on the Internet. One of the most famous is the Michelangelo virus that was designed to destroy computer memory only on the day of the Renaissance artist's birthday. Others have military monikers, like the "Stealth" family of viruses. Movie buffs are well represented among the authors of computerized pestilence, as demonstrated by viruses called Terminator, Raptor, Psyco, Alien.1356, and Arachnophobia. A few viruses come from people who have some familiarity with the mental health profession, like the Psychosis virus. One of the largest families of viruses is called "Silly," which of course is quite distinct from another named "Sirius." And lastly, there are viruses given names that are appealing to anyone fixated at a more-or-less adolescent level (i.e. a classic "weenie"), such as the Puke and StinkFoot viruses.
The emergence of computer viruses has created an entirely new class of programs that can detect and eradicate them, much as antibodies do with biological viruses. The medical analogy works at another level, too. Like antibodies, antivirus software must be programmed to recognize each virus individually. As a result, antivirus software needs to be updated constantly. Viruses can be fabricated in a few minutes by anyone familiar enough with computers, so new ones are constantly emerging. Today's antivirus programs typically are designed to be updated with new data files posted for free on the Internet, which allow the programs to detect new kinds of viruses. Most browsers include at least some antivirus capabilities, but no one seriously thinks that this alone provides sufficient protection.
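The antibody analogy can be made concrete with a Python sketch of signature scanning (illustrative only; the virus names and byte patterns below are invented, and real scanners use far subtler techniques):

```python
# A miniature signature scanner: like an antibody, it recognizes only
# the viruses it has been taught, which is why the data files must be
# updated constantly.
SIGNATURES = {
    "Michelangelo-like": b"\xde\xad\xbe\xef",
    "Silly-like": b"SILLY",
}

def scan(file_bytes):
    """Return the names of any known virus signatures found in the file."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in file_bytes]

clean = b"just an ordinary word-processor file"
infected = b"program header" + b"SILLY" + b"rest of program"
```

A brand-new virus, having no entry in the signature table, would sail through undetected, which is precisely why out-of-date antivirus software offers so little protection.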
Viruses are now so common and widespread that anyone operating a LAN or freestanding computer without up-to-date antivirus software is taking a big risk. Even if the LAN or computer has no link to the Internet, an automatic check for viruses should be conducted at least every day. Many automatic programs are available, and they generally are not expensive.
A whole new class of antivirus programs now is being developed that adds yet another layer of protection for persons using the Internet: They automatically check certain kinds of information for viruses before they are downloaded from the Web. As a general rule, the greatest risk of "catching a virus" on the Internet comes from downloading software, not from simply browsing through home pages. Thus, LANs or computers that will be used to download software should be protected with the type of software described above.
A word of caution is in order about overreacting to the threat of viruses. Some may be tempted simply to prohibit the downloading of any computer program from the Internet; it is possible to configure a firewall to achieve this effect. However, the Internet is evolving in such a way that this type of solution will not be workable in the long run. As noted earlier, there is a general movement toward using the Internet as a place where large amounts of information — including computer programs — are stored on a more-or-less permanent basis. This frees up memory on LANs and personal computers.
This trend probably will continue as computer programs become ever more sophisticated and ever larger. One manufacturer already is marketing a "stripped down" and rather inexpensive computer that is specifically designed to access the programming it needs over the Internet, allowing for a smaller and less costly memory system on the computer itself. Of more immediate importance, however, is the fact that there are many kinds of Internet programs now available for downloading that actually enhance a computer. Many of these are free. Of special importance to lawyers are "portable document viewers."
Portable document viewers. One of the biggest problems on the Internet today involves the transmission of documents created with word processors. As a general rule, such documents cannot be displayed as "home pages" because they are not written in HTML. One solution is a new type of program called a "portable document viewer," which can display a word-processor document exactly the way it originally was written.
There is a pressing need for portable document technology today, especially for legal documents. HTML was never written with lawyers in mind, and it presently is incapable of reproducing all legal documents in a manner that meets the needs of daily law practice. Portable documents, however, typically are displayed on the video screen exactly as if a snapshot had been taken of the original paper version or "hard copy." Moreover, people using portable document viewers can print all or part of the text, can save it to disk, can conduct a word search, or can even cut and paste pieces of the document into another text file.
The two most widely used portable document programs today are Adobe's Acrobat and Corel Corporation's Envoy, both of which may be downloaded for free over the Web. Both have strengths and weaknesses, and neither can be considered the perfect answer to the needs of the legal profession. Acrobat is the one most scrupulous about preserving a document's original appearance. But this result is possible only because Acrobat documents — and the Acrobat viewer program itself — take up more memory and are slower. Envoy creates smaller and faster files, but this speed is achieved only because Envoy does not reproduce all typefaces or "fonts" exactly the way they originally appeared. As a general rule, the reproduction of fonts only becomes a problem in Envoy if the document contains a font that is not the standard alphabet used by European languages such as English. Thus, with some forethought, an Envoy author can work around the problem. Envoy preserves all other features, including footnotes and page breaks, very faithfully.
Many people wonder why companies just give away portable document viewers, but there is a very real and potent economic reason for doing so: Portable documents are useless — i.e. they produce no profits for the company — unless the general public has free and ready access to the means of viewing them. The leader in this area was Adobe, which wisely realized it still could make money by selling the program needed to convert normal text files into Acrobat format. In other words, you can view the documents for free, but you have to pay to make them. This puts the financial burden on the one who wants to publish over the Internet.
Portable document viewers are growing increasingly sophisticated. Today, both Acrobat and Envoy are available either as freestanding programs or as Web browser "plug-ins."
Web browser plug-ins. A plug-in is a computer program that allows a browser to do something it normally would not be able to do. Both Acrobat and Envoy now have plug-ins that allow users to view portable documents directly inside the Netscape browser, for example. This can save considerable time. Previously, portable documents had to be downloaded onto disk from Netscape and then viewed by leaving the browser and opening the freestanding viewer program. In other words, you needed one program to download the document, but another to view or print it. The plug-in lets you do all of these things inside the browser.
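Conceptually, a plug-in system is just a registry: each plug-in announces the kind of file it can handle, and the browser consults the registry before falling back on saving the file to disk. A Python sketch (the names and behavior here are invented for illustration, not how any particular browser is built):

```python
# A toy plug-in registry: file kind -> handler function.
plugins = {}

def register(kind, handler):
    """A plug-in announces which kind of file it can display."""
    plugins[kind] = handler

def display(kind, data):
    """Use a registered plug-in when one exists; otherwise save to disk."""
    handler = plugins.get(kind)
    if handler is None:
        return "saved %s file to disk" % kind  # the old, two-program routine
    return handler(data)                       # viewed inside the browser

# A portable-document plug-in registers itself.
register("portable-document",
         lambda data: "viewing %d bytes inside the browser" % len(data))

inside = display("portable-document", b"%PDF...")  # handled by the plug-in
outside = display("movie", b"...")                 # no plug-in: saved instead
```

The convenience described above comes entirely from that first branch: once a handler is registered, the detour through a separate freestanding program disappears.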
Plug-ins also can do many other things. There now are plug-ins that allow Netscape to display movies on the monitor, to play audio recordings on computers equipped with speakers, and to display complex interactive animations. The day is fast approaching when courts can "broadcast" their proceedings directly on the Internet to anyone who cares to watch. Some states, notably Washington, already are experimenting with this new technology. Continuing legal education seminars might be conducted "on-line" (over the Internet) in much the same manner, eliminating both time wasted on travel and — sadly — the associated tax write-offs. With the use of proper security software, lawyers and judges could even interact with one another over the Internet for depositions or court hearings.
In the more distant future we may see the Internet and other forms of mass media begin to overlap each other, a process likely to be hastened by recent changes in federal telecommunications law. While cable companies today boast a hundred or so television channels, it is entirely possible that the Internet could offer tens of thousands of "channels" that consist of Web pages displaying video with accompanying audio. People would watch them just like television. Eventually it will be possible for people to "rent" a home video directly over the Internet for viewing at leisure. The Internet also can do many things previously conducted by telephone, as demonstrated by the sophisticated audioconferencing now emerging on the World-Wide Web. A day may come when the media we today know as television, the Internet, and the telephone are indistinguishable from one another, forming a single gigantic medium through which entertainment and communications are distributed on a global basis. This explains why some very large telecommunications companies such as AT&T have jumped into the Internet in a big way, even though the immediate prospects for profit are not all that great.
Two hurdles remain, however, before these futuristic ideas can be fulfilled. First, not all browsers can handle such sophisticated programming. High-tech home pages, in other words, can limit your audience in today's world. For that reason, many people who create home pages generally avoid them — at least for now. Obviously, the continuing advances in browser technology mean this hurdle soon will fall. Even today it is possible to create "intelligent" Web pages that automatically conform themselves to the browser actually in use. For example, programmers can create a page that checks out the software of the end user's computer and then writes a page specifically designed for the needs of that software.
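That browser-checking idea fits in a few lines of Python (illustrative only; the identifying strings below are examples, and real pages negotiate far more gracefully):

```python
# An "intelligent" page in miniature: the server looks at which browser
# is asking and tailors its reply accordingly.
def build_page(user_agent):
    """Return a fancy page only for browsers known to handle it."""
    if "Mozilla" in user_agent or "MSIE" in user_agent:
        return "<html><body>page with frames and graphics</body></html>"
    return "<html><body>plain text page</body></html>"
```

Every browser announces itself by name when it requests a page, so the server can make this choice without the reader ever noticing.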
The second problem is something called "bandwidth," which poses a somewhat more expensive dilemma.
Bandwidth. Bandwidth refers to the capacity of telecommunications cable and connectors to transmit electronic information. Think of bandwidth as the volume of a pipeline. If the pipe is narrow, anything passing through it will move more slowly. Moreover, even if the pipe is wide, you still have a problem if the valves at each end are so small they can only let a little bit through at a time. Information traveling over the Internet works much the same way. The server sending information over the Internet acts as a valve that will slow everything else down if it cannot let information pass quickly. The same applies to the server that is receiving the information. If it is slow, then the display of information on the end-user's video monitor will also be slow.
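The pipeline analogy reduces to simple arithmetic: a transfer can go no faster than the narrowest link in the chain. A Python sketch (the speeds are illustrative figures, not measurements):

```python
# Transfer time through a chain of links, each with its own capacity
# in bits per second.
def transfer_seconds(size_bytes, link_speeds_bps):
    """Time to move a file through links of the given speeds."""
    bottleneck = min(link_speeds_bps)   # the narrowest part of the pipe
    return size_bytes * 8 / bottleneck  # 8 bits per byte

# A 100,000-byte page through a fast server but a 28,800 bps modem:
# the modem, not the server, sets the pace (roughly half a minute).
seconds = transfer_seconds(100_000, [1_500_000, 28_800])
```

Widening any link other than the bottleneck changes nothing, which is why upgrading the "information valve" at the home or office matters so much.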
Telecommunications cables developed earlier in this century obviously were never designed to transmit large amounts of "digitized information" (information converted into the binary dots and blips used by computers). However, new technologies are likely to overcome current problems. Perhaps the biggest hurdle today is the "information valve" located at your home or office, also known as the traditional telephone. In simple terms, today's telephones and their related hardware handle digitized information very poorly. This is because many are still using the basic technology developed by Alexander Graham Bell.
One method of solving this problem is a new technology called Integrated Services Digital Network ("ISDN"), which allows a single telecommunications cable to act as though it actually consists of several cables. Using ISDN, a phone line can transmit enormous volumes of information in both directions every second. It is even possible for different people in a household to conduct two separate phone conversations over the exact same cable without having to install a second phone line (and without having to listen to each other). This opens the "information valve" much wider and thus can speed all Internet transactions.
Although ISDN has the advantage of using existing cables, it generally requires end users to buy new hardware and special phone sets. This can be a considerable expense for Internet users. As a result, some have looked for other solutions. One idea would actually move the Internet onto cable television wires, which can transmit huge amounts of information quickly. However, commercial and legal obstacles may make television cable unrealistic in the immediate future. This is because the cable industry consists of many different companies that each own their own wiring systems, which in turn are regulated to some degree by hundreds of local governments. Stringing all of these systems into a single network could be a daunting task. Other possible solutions presently are too expensive, such as converting the entire national communications system over to fiber-optic cable.
The current bandwidth limits explain why Internet connections sometimes are painfully slow or falter altogether. Compounding the problem are still other factors: Servers may be overloaded at certain times of the day, and Web pages may be designed poorly. For example, you and your firm may be especially proud of that fifty megabyte (i.e. BIG) logo your graphic artist created for you. But do not expect Internet users to sit around for the thirty-odd minutes it may take to download. A home page bloated with graphics can be the proverbial camel squeezing through the eye of a needle. That is why good Web page designers restrict the number of graphics they use on any single page. It is also why Web pages should never be too long.
Even with limited bandwidth, the Internet today still is capable of remarkable back-and-forth exchanges of information. You ask for information, which then is sent back to you from a distant server. Because the collective "memory" of the Internet is potentially limitless, the most arcane and detailed kinds of information can be stored on it in retrievable form, vastly increasing the world-wide ability to gather knowledge. Archivists throughout the world clearly have gotten the message. They now are rushing to post information that previously could not be widely distributed because the cost overwhelmingly outweighed the benefits. Repositories of knowledge as lofty as the Vatican and the Library of Congress already are on-line, with countless others following suit.
Look to the future, then, and you will see law moving onto the Web at a rising pace. Growing traffic is likely to bring still more technological innovation to the field of legal research. One day in the not too distant future, the "global village" of the World-Wide Web may well offer the most sophisticated kinds of legal research any place there is a telephone, even one of the cellular variety. And the costs may be only a fraction of what they are today.
So brush up on your Netiquette and always remember: Be hackish in all your Web dealings. Avoid the temptation of Crackerdom, and turn the other cheek when you're flamed by a weenie. Pledge never to suborn a spam. As J. Random Hacker would say, "Live up to the Code, man."
This article was written by Robert Craig Waters, the Florida Supreme Court's Director of Public Information. Mr. Waters graduated with honors from the University of Florida College of Law in 1986, and is a 1979 honors graduate of Brown University. Before attending law school, he was an award-winning journalist with the Gannett newspapers.