Experienced gridlock on the Internet lately? Here's why.
What we have here on the Internet today is a failure to communicate. There's not only traffic gridlock, there are failures in the Domain Name Service (DNS) system. Some Internet providers have set their routers to drop routes for networks with long addresses. The upshot? If your agency counts on the Internet for business or public communications, you're less reachable than you think.
Maj. Mike St. Johns, program manager at the Advanced Research Projects Agency's
Information Technology Office, is among those who have spotted the deterioration in
Internet performance. "It's tough to say what the cause is," St. Johns said.
He and other federal Net experts said three distinct problems seem to be coming to a
head. The most perplexing problem concerns the way individual Internet service providers
interact with root domain name servers.
To resolve addresses, service providers' domain name servers must communicate with one
of 11 redundant root DNS servers spread around the Internet. The smaller DNS machines keep
tables of most-used addresses in cache. If tables aren't updated often enough and an
address doesn't appear, they must consult one of the root servers.
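As a rough illustration of what happens on a cache miss, the sketch below uses the third-party dnspython package (not something mentioned in the article) to send a lookup straight to a root server and print whatever answer or referral comes back; the hostname is hypothetical.

    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT_SERVER = "198.41.0.4"  # a.root-servers.net

    def ask_root(hostname):
        # Build an A-record query and send it straight to a root server,
        # the way a provider's name server does when its cache has no entry.
        query = dns.message.make_query(hostname, dns.rdatatype.A)
        response = dns.query.udp(query, ROOT_SERVER, timeout=5)
        # The root rarely answers directly; it usually refers the resolver
        # to the servers for the top-level domain in the authority section.
        records = response.answer if response.answer else response.authority
        return [str(rrset) for rrset in records]

    print(ask_root("www.example.gov"))  # hypothetical hostname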
That's where the second problem comes into play. Demand for Net bandwidth has grown
dramatically, and heavy traffic slows response. A requested connection slows down even
more when the provider's server must poll a root server that already has hundreds of
requests in queue.
"In the old days of NSFnet, domain name access was about 3 percent to 5 percent of
total traffic," said George Strawn, director of the National Science Foundation's
division of research and infrastructure for networking and communications.
"I don't know if that statistic still holds, but if 5 percent of network traffic
is attempting to access domain name servers to set up packets that need to be sent on
their way and the network backbones get full of traffic, a clog in one slows down the
other," he said.
"The Internet has grown so large that DNS has suddenly become a more important
part of it, and that's bogging everything down," said Marcel Schlapfer, a network
engineer who manages a root DNS server at NASA Ames Research Center in California.
Although connections usually work, there are occasional spikes in the number of
host systems that refuse to answer, in e-mail that's undeliverable and in servers that
suddenly say they've never heard of a common location.
Depending on how a provider connects to the Internet, its domain name server might be
separated from a root server by other DNS machines, Schlapfer said. If the provider's name
server goes down, or parts of a network go down so it can't reach its usual DNS server,
there's trouble.
"That's why you want at least one secondary DNS server available," Schlapfer
said. "Service providers should spread out their own DNS work. It's important that
more than one of their machines provide name services for the whole domain. They should
delegate different name servers for each subdomain."
Another stress factor: Net-accessing PCs and Macintoshes, unlike Unix workstations,
tend to lose any local address cache they've built up when they're turned off. St. Johns
said the growing number of PCs creates extra daily traffic when they come back up and
start looking for addresses again.
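The cache those machines lose is essentially a table of name-to-address entries with expiry times. The toy sketch below illustrates the idea; it is not any particular operating system's implementation.

    import time

    class AddressCache:
        """Toy name-to-address cache that honors a time-to-live."""

        def __init__(self):
            self._entries = {}  # hostname -> (address, expires_at)

        def put(self, hostname, address, ttl_seconds):
            self._entries[hostname] = (address, time.time() + ttl_seconds)

        def get(self, hostname):
            entry = self._entries.get(hostname)
            if entry is None or entry[1] < time.time():
                return None  # miss: the resolver has to go back out on the Net
            return entry[0]

    cache = AddressCache()
    cache.put("mail.example.gov", "192.0.2.10", ttl_seconds=3600)
    print(cache.get("mail.example.gov"))  # hit while the machine stays up
    # Power the machine off and this in-memory table is gone; every lookup
    # after reboot starts cold and lands on the name servers again.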
Some networks, including SprintLink, now filter out routes for what they consider long or
vanity addresses, blocks that give just a few machines in an organization their own IP
network numbers.
In September, SprintLink started dropping any route whose network prefix is longer than 18
bits, to control explosive growth in its router tables. SprintLink agreed with InterNIC,
which registers domain names, to apply the filter only to registered IP addresses that
start with 206 and higher numbers.
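Read that way, the filter amounts to a prefix-length check on the newer address space. The sketch below shows the general idea with Python's standard ipaddress module; the 18-bit cutoff and the 206-and-up range come from the article, and the example prefixes are made up.

    import ipaddress

    FILTER_START = ipaddress.ip_address("206.0.0.0")
    MAX_PREFIX_LEN = 18

    def accept_route(prefix):
        net = ipaddress.ip_network(prefix)
        # Older address space is left alone; the filter applies from 206 up.
        if net.network_address < FILTER_START:
            return True
        # Announcements more specific than 18 bits are dropped.
        return net.prefixlen <= MAX_PREFIX_LEN

    for route in ["204.70.0.0/15", "206.16.0.0/16", "206.24.128.0/24"]:
        print(route, "accepted" if accept_route(route) else "dropped")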
Paul Mockapetris, director of engineering for @Home, Palo Alto, Calif., said he
considers today's Internet only "arbitrarily reliable." He was one of the
original architects of the Internet domain name system when he worked for the former
Defense Advanced Research Projects Agency.
Mockapetris said he did a test a few years back and found that half of Internet service
providers had configuration problems ranging from mild to disastrous. He suspects the
problem has grown worse with the growth in startup providers.
"You're supposed to have at least two redundant servers," Archpetras said.
"If you put them both on the same Ethernet with only one link to the outside, that
link can go down and you're basically off line." Depending on configuration, he said,
users may receive a message saying an address doesn't exist, when it should say
something like, "This address couldn't be resolved at this time."
Mockapetris believes routing problems are going to get worse. "People need to start
voting with their feet, walking away from providers" that don't give good service, he
said.