[Reader-list] The Internet's Invisible Hand

Harsh Kapoor aiindex at mnet.fr
Wed Jan 16 04:44:18 IST 2002


The New York Times
January 10, 2002

The Internet's Invisible Hand

By KATIE HAFNER

No one owns it. And no one in particular actually runs it. Yet more 
than half a billion people rely on it as they do a light switch.

The Internet is a network whose many incarnations - as obscure 
academic playpen, information superhighway, vast marketplace, 
sci-fi-inspired matrix - have seen it through more than three decades of 
ceaseless evolution.

In the mid-1990's, a handful of doomsayers predicted that the 
Internet would melt down under the strain of increased volume. They 
proved to be false prophets, yet now, as it enters its 33rd year, the 
Net faces other challenges.

The demands and dangers - sudden, news-driven traffic, security 
holes, and a clamor for high-speed access to homes - are concerns 
that bear no resemblance to those that preoccupied the Internet's 
creators. For all their genius, they failed to see what the Net would 
become once it left the confines of the university and entered the 
free market.

Those perils are inextricably linked to what experts consider the 
Internet's big promise: evolving into an information utility as 
ubiquitous and accessible as electricity. That, too, was not foreseen 
by most of the engineers and computer scientists who built the Net in 
the 1960's and 70's.

Ten years ago, at the end of 1991, the same year that the World Wide 
Web was put in place but a good two or three years before the term 
Web browser became part of everyday speech, the Net was home to some 
727,000 hosts, or computers with unique Internet Protocol, or I.P., 
addresses. By the end of 2001, that number had soared to 175 million, 
according to estimates by Matrix Net Systems, a network measurement 
business in Austin, Tex.

For all that growth, the Net operates with surprisingly few hiccups, 
24 hours a day - and with few visible signs of who is responsible for 
keeping it that way. There are no vans with Internet Inc. logos at 
the roadside, no workers in Cyberspace hard hats hovering over 
manholes.

Such is yet another of the Internet's glorious mysteries. No one 
really owns the Net, which, as most people know by now, is actually a 
sprawling collection of networks owned by various telecommunications 
carriers. The largest, known as backbone providers, include WorldCom, 
Verizon, Sprint and Cable & Wireless USA.

What, then, is the future of this vital public utility? Who 
determines it? And who is charged with carrying it out?

For the Internet's first 25 years, the United States government ran 
parts of it, financed network research and in some cases paid 
companies to build custom equipment to run the network. But in the 
mid-1990's the Net became a commercial enterprise, and its operation 
was transferred to private carriers. In the process, most of the 
government's control evaporated.

Now the network depends on the cooperation and mutual interests of 
the telecommunications companies. Those so-called backbone providers 
adhere to what are known as peering arrangements, which are 
essentially agreements to exchange traffic at no charge.

"Peering fits right in with the overly loose way the Internet is 
provided," said Scott Bradner, a senior technical consultant at 
Harvard University, "which is unrelated commercial interests doing 
their own thing." Mr. Bradner, co-director of the Internet 
Engineering Task Force, an international self-organized group of 
network designers, operators and researchers who have set technical 
standards for the Internet since the late 1980's, said that peering 
remains a remarkably robust mechanism.

And for now, capacity is not a particularly pressing problem because 
the backbone providers have been laying high-speed lines at 
prodigious rates over the last few years.

"We've got a lot of long-distance fiber in the ground, a lot of which 
isn't being used, but it's available," said Craig Partridge, a chief 
scientist at BBN Technologies, an engineering company that oversaw 
the building of the first network switches in the late 1960's and is 
now owned by Verizon.

Still, the fear that the Net is not up to its unforeseen role gnaws 
at prognosticators. Consider the gigalapse prediction.

In December 1995, Robert Metcalfe, who invented the office network 
technology known as Ethernet, wrote in his column in the industry 
weekly Infoworld that the Internet was in danger of a vast meltdown.

More specifically, Dr. Metcalfe predicted what he called a gigalapse, 
or one billion lost user hours resulting from a severed link - for 
instance, a ruptured connection between a service provider and the 
rest of the Internet, a backhoe accidentally cutting a cable or the 
failure of a router.
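
To make the arithmetic concrete: a gigalapse is one billion lost 
user-hours, however those hours accumulate. The rough tally below is 
only an illustration of that bookkeeping, not anything from Dr. 
Metcalfe's column; the outage sizes are invented for the example.

    # Illustrative only: a gigalapse is one billion lost user-hours,
    # however those hours pile up. The outages below are hypothetical.
    GIGALAPSE = 1_000_000_000  # user-hours

    outages = [              # (users affected, hours offline)
        (20_000_000, 10),    # a severed backbone link
        (100_000_000, 5),    # a failed router at a major exchange
        (1_000_000, 300),    # a long regional outage
    ]

    lost = sum(users * hours for users, hours in outages)
    print(f"{lost:,} user-hours lost = {lost / GIGALAPSE:.1f} gigalapse(s)")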

The disaster would come by the end of 1996, he said, or he would eat his words.

The gigalapse did not occur, and while delivering the keynote address 
at an industry conference in 1997, Dr. Metcalfe literally ate his 
column. "I reached under the podium and pulled out a blender, poured 
a glass of water, and blended it with the column, poured it into a 
bowl and ate it with a spoon," he recalled recently.

The failure of Dr. Metcalfe's prediction apparently stemmed from the 
success of the Net's basic architecture. It was designed as a 
distributed network rather than a centralized one, with data taking 
any number of different paths to its destination.

That deceptively simple principle has, time and again, saved the 
network from failure. When a communications line important to the 
network's operation goes down, as one did last summer when a 
freight-train fire in Baltimore damaged a fiber-optic loop, data 
works its way around the trouble.
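
The principle is easy to sketch. The toy network below - a handful of 
made-up routers and links, not real Internet topology and not the 
routing protocols backbone carriers actually run - simply searches for 
any surviving path once a link is cut.

    from collections import deque

    # A toy network: four routers with redundant links. A sketch of
    # the "many paths" principle, not of real routing protocols.
    links = {
        "A": {"B", "C"},
        "B": {"A", "D"},
        "C": {"A", "D"},
        "D": {"B", "C"},
    }

    def find_path(net, src, dst):
        """Breadth-first search for any working path from src to dst."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in net[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return None

    print(find_path(links, "A", "D"))   # ['A', 'B', 'D']

    # The backhoe severs the A-B link; data works around the trouble.
    links["A"].discard("B")
    links["B"].discard("A")
    print(find_path(links, "A", "D"))   # ['A', 'C', 'D']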

It took a far greater crisis to make the Internet's vulnerabilities clearer.

On Sept. 11, within minutes of the terrorist attacks on the World 
Trade Center, the question was not whether the Internet could handle 
the sudden wave of traffic, but whether the servers - the computers 
that deliver content to anyone who requests it by clicking on a Web 
link - were up to the task.

Executives at CNN.com were among the first to notice the Internet's 
true Achilles' heel: the communications link to individual sites that 
become deluged with traffic. CNN.com fixed the problem within a few 
hours by adding server capacity and moving some of its content to 
servers operated by Akamai, a company providing distributed network 
service.

Mr. Bradner said that most large companies have active mirror sites 
to allow quick downloading of the information on their servers. And 
as with so many things about the Net, responsibility lies with the 
service provider. "Whether it's CNN.com or nytimes.com or anyone 
offering services, they have to design their service to be reliable," 
he said. "This can never be centralized."

Guidelines can help. Mr. Bradner belongs to a Federal Communications 
Commission advisory group called the Network Reliability and 
Interoperability Council, which just published a set of recommended 
practices for service providers, including advice on redundant 
servers, backup generators and reliable power. "Still, there are no 
requirements," Mr. Bradner said.
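
A minimal sketch of the failover idea behind mirror sites and 
redundant servers, assuming a purely hypothetical list of mirror URLs; 
real operators push this logic into DNS and dedicated load balancers 
rather than client code.

    import urllib.request

    # Hypothetical mirrors for illustration only; a real site would
    # typically use DNS round-robin or a content delivery network.
    MIRRORS = [
        "http://www1.example.com/headlines",
        "http://www2.example.com/headlines",
        "http://www3.example.com/headlines",
    ]

    def fetch_with_failover(mirrors, timeout=5):
        """Try each mirror in turn until one answers."""
        for url in mirrors:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError:
                continue  # this mirror is down or overloaded; try the next
        raise RuntimeError("all mirrors unreachable")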

If the government is not running things, exactly, at least it is 
taking a close look.

Dr. Partridge of BBN Technologies recently served on a National 
Research Council committee that published a report on the Internet. 
One of the group's main concerns was supplying households with 
high-speed Internet service, known as broadband.

Some 10.7 million of the nation's households now have such access, or 
about 16 percent of all households online, according to the Yankee 
Group, a research firm.

Only when full high-speed access is established nationwide, Dr. 
Partridge and others say, will the Internet and its multimedia 
component, the Web, enter the next phase of their evolution.

"We need to make it a normal thing that everyone has high-speed 
bandwidth," said Brian Carpenter, an engineer at I.B.M. (news/quote) 
and chairman of the Internet Society, a nonprofit group that 
coordinates Internet-related projects around the world.

Yet there is no central coordination of broadband deployment. Where, 
when and how much access is available is up to the individual 
provider - typically, the phone or cable company. As a result, 
availability varies widely.

Control falls to the marketplace. And in light of recent bankruptcies 
and mergers among providers, like Excite at Home's failure and AT&T 
Broadband's sale to Comcast late last year, universal broadband 
deployment may be moving further into the future.

The one prominent element of centralized management in Internet 
operations - the assignment of addresses and top domain names, like 
.com or .edu - reflects the tricky politics of what is essentially a 
libertarian arena. That is the task of the Internet Corporation for 
Assigned Names and Numbers, or Icann, which operates under the 
auspices of the Commerce Department. Its efforts to establish an open 
decision-making process became mired in disputes over who the 
Internet's stakeholders actually were.

And even as Icann and its authorized registrars take over 
administration of the Internet's naming system, a different problem 
nags at computer scientists: the finite number of underlying I.P. 
addresses.

In the current version of Internet Protocol, the software for the 
routers that direct Internet traffic, there is a theoretical limit of 
four billion addresses. Some 25 percent are already spoken for.

The solution, Mr. Carpenter said, is bigger addresses. "This means 
rolling out a whole new version of I.P.," he said.
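
The four-billion ceiling follows from the 32-bit addresses in the 
current protocol, IPv4; the successor the task force developed, IPv6, 
widens addresses to 128 bits. The article does not name the versions; 
the figures below are simply the arithmetic behind them.

    # The "four billion" ceiling is 2**32, from IPv4's 32-bit addresses;
    # the replacement protocol, IPv6, uses 128-bit addresses.
    ipv4_space = 2 ** 32
    ipv6_space = 2 ** 128

    print(f"IPv4: {ipv4_space:,} addresses")           # 4,294,967,296
    print(f"IPv6: about {ipv6_space:.2e} addresses")   # roughly 3.40e+38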

Although the assignment of I.P. addresses falls to Icann, inventing a 
new protocol is essentially a research problem that falls to the 
Internet Engineering Task Force.

As the Internet continues to grow and sprawl, security is also a 
nagging concern. The Internet was not built to be secure in the first 
place: its openness is its core strength and its most conspicuous 
weakness.

"Security is hard - not only for the designers, to make sure a system 
is secure, but for users, because it gets in the way of making things 
easy," Mr. Bradner said.

There is no centralized or even far-flung security management for the 
Internet. The Computer Emergency Response Team at Carnegie Mellon 
University is mainly a voluntary clearinghouse for information about 
security problems in Internet software.

The lack of a central security mechanism "is a mixed bag," Mr. 
Bradner said. A centralized system that could authenticate the origin 
of all traffic would be useful in tracing the source of an attack, he 
said.

That is where a delicate balance must be struck: between the ability 
to trace traffic and the desire to protect an individual's privacy or 
a corporation's data. "It's not at all clear that there's a 
centralizable role, or that there's a role government could play 
without posing a severe threat to individuals," Mr. Bradner said.

Past plans for identity verification have failed because of the 
complexity of making them work on a global scale, he said.

Such are the challenges that face the Internet as it continues its march.

"The really interesting question to ask is whether we can build a 
next generation of applications," Mr. Carpenter said. "Can we move 
from what we have now, which is an information source, to a network 
that's really an information utility, used for entertainment, 
education and commercial activities? There's tremendous potential 
here, but we've got a lot of work to do."

As that work progresses, another question centers on what role the 
government should play. Many carriers that bear the cost of expanding 
the infrastructure favor federal incentives to invest in new broadband 
technology. The Federal Communications Commission is 
also mulling policy changes, soliciting suggestions from the 
communications industry for making broadband access more widely 
available.

Dr. Metcalfe predicts that the next big step is what he calls the 
video Internet. "We're done with just voice and text," he said. "No 
one is quite sure what the killer app will be, but we want to see 
stuff move, and we want it to be better than television."

Despite his joke about eating his words, Dr. Metcalfe said he was 
unrepentant about his forecast of a gigalapse.

"There's a gigalapse in our future," he said. "The Net's getting 
bigger all the time and there are basic fragilities." Since there is 
no formal tracking mechanism for connection failures, he argues, his 
gigalapse may very well have happened already without anyone noticing.

"I'm sure there are outages every day, but because of the Internet's 
robust nature they are generally not noticed," he said. "We do 
control-alt-delete and chant, and eventually the connection comes 
back."

Indeed it does.


