Automatically imported from: http://commons.somewhere.com:80/rre/1997/Evolving.Internet.Infras.html
Evolving Internet Infrastructure
---
This message was forwarded through the Red Rock Eater News Service (RRE). Send any replies to the original author, listed in the From: field below. You are welcome to send the message along to others but please do not use the "redirect" command. For information on RRE, including instructions for (un)subscribing, send an empty message to rre-help@weber.ucsd.edu
---
Date: Tue, 7 Jan 1997 10:57:47 -0600
From: Gordon Cook
EVOLVING INTERNET INFRASTRUCTURE:
Volume Two of a Continuing Handbook on the Commercial Internet's Business, Technology and Structural Issues
An Anthology of Recent Articles from The COOK Report and Interviews from the 37th IETF December 9 -13, 1996
Our publication of last April 17, entitled Tracking Internet Infrastructure, an Anthology of Articles from the COOK Report on Internet, was very successful. The Tracking report provided an indexed handbook that those trying to evaluate the complex and fast-changing phenomenon known as the Internet could use as a reference work. For subscribers it became a means of organizing material from past newsletters for reference and review. For many who became new subscribers, it served as a worthwhile introduction to the COOK Report and a summary of what they had missed before joining us.
Our new Handbook: "Evolving Internet Infrastructure" is divided into seven sections covering the following seven critical issues:
relationship of network topology to cost of doing business
complexity of business arrangements governing positioning within the topology
provider business models and case studies
routers and routing tables
Quality of Service as a part of the Internet business model
Internet governance - the Internet Law and Policy Forum, ISOC and the DNS issue
local and state infrastructure and the future - public interest problems
The reorganization of the articles into topical threads that unfold over time, and the addition of a detailed index, make it easier to track otherwise complex developments. While this new volume covers 10 months instead of 8, it is, at 222 pages, fully a third longer than the Tracking anthology.
The rest of this message contains:
1. Reflections from the San Jose IETF Meeting -- a summary of our understanding of the current state of the Internet.
2. The complete Table of Contents and partial Section Summaries of the full report.
3. Description of the Handbook's Audience
4. Pricing and Ordering Information
REFLECTIONS FROM THE SAN JOSE IETF MEETING
With considerable awe, we watched in early December nearly two thousand people come together for five days at the San Jose, California IETF. This meeting, which takes place three times a year, serves as a kind of "super-brain" for the network. What happens there is not only the design of new protocols but a cross fertilization of ideas among people and companies involved in all aspects of the Internet.
Despite some earlier predictions of doom and recent reports that a major cable operator like TCI is pulling back from the network, the Internet is coping amazingly well with its continued growth. In talking with a couple of dozen key figures, we can confirm that, among those running the network, there is a general awareness of how the pieces of the Internet "engine rooms" have developed and how they have assisted in the continued inter-operability and expansion of other aspects of the network. And although the pieces of the Internet may appear to an outsider at first glance to be unrelated, they definitely fit together as distinct parts of an unwritten master plan.
While the stratification of the network continues, there are encouraging signs that technology is being developed that will give the Internet a viable business model through differentiated levels of service. Furthermore there are signs that these levels of service do not necessarily mean that the low end of the market will be priced out of reach of the millions who currently enjoy access to it. Currently, the big issues are: (1) a potential shortage of fiber; (2) efforts to build backbones and become tier one providers; (3) attempts through Quality of Service to offer a more lucrative business model than the current one of best effort service; (4) creation of both the tools to assess ISP service and organizations to facilitate ISP coordination.
Coming: a Fiber Crunch Fueled by Bandwidth Demand?
The most troublesome sign on the horizon is a potential bandwidth crunch. Given an estimated 500 percent increase in bandwidth consumption in 1996, we may need to seriously reexamine an Internet business model that assumes inexhaustible bandwidth when, in reality, bandwidth sufficient to fill demand can be squeezed from deployed fiber only with the greatest difficulty. The generally held view has been that the bandwidth obtainable from the current supply of fiber is inexhaustible. Signs are emerging that this may not be the case.
We offer four examples. First, in communications with some of the companies building national backbones, we have found them talking about large gaps in SONET coverage where the available capacity is either non-existent (Southwest) or already committed in the North East, that is to say, not actually in use but promised to someone else.
They say that no suppliers exist that can provide OC3 and OC12 consistently throughout the US. As a result, often the only way to get to OC3 is to go to multiple vendors and end up taking temporary DS3 circuits, which must then be tied together to obtain bandwidth equivalent to OC3c fiber. The cost of adding new fiber to the current supply will be phenomenal. Trenching 96-strand single-mode fiber bundles currently costs between $125 and $350 a foot in metropolitan areas. It is much less in open areas, provided you have right-of-way.
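Back-of-the-envelope arithmetic on the trenching figures above (our own illustration, not from the report) shows why adding new metro fiber is so daunting:

```python
# Metro trenching cost per mile for a 96-strand bundle, using the
# $125-$350 per foot range quoted above.  Purely illustrative arithmetic.
FEET_PER_MILE = 5280

low = 125 * FEET_PER_MILE
high = 350 * FEET_PER_MILE
print(f"${low:,} to ${high:,} per metropolitan mile")  # $660,000 to $1,848,000
```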
Second, in November, we were approached by a major consulting company with questions about Internet bandwidth availability and acquisition. It turned out that they are under contract to produce a feasibility study for an entity that is thinking very seriously about installing new fiber from coast to coast. The consulting firm was employed to find out whether a market for new fiber existed. It was beginning to seem likely that the market was indeed there.
Then, in his December 2, 1996 Network World (p. 34) column, Scott Bradner wrote of the problems facing those trying to build Internet backbones or increase the robustness of existing ones: "many . . . are quite worried about where the bandwidth will come from in the next few years. The existing fiber plant is becoming exhausted, and new fiber builds take planning, time and money.
On top of that there may be a problem in getting the fiber itself. The New York Times reported on November 4 that the worldwide demand for fiber used in telecommunications will reach 16.25 million miles of fiber this year, a market worth about $6 billion. But production has been running behind demand and the shortfall could be more than 1.25 million miles this year. The demand has been fueled by the global telecommunications deregulation movement. In addition, the advent of competition from direct broadcast satellites is forcing US cable TV companies to undertake large scale infrastructure upgrades."
Finally, in the December 24 issue of the Wall Street Journal, Jared Sandberg wrote about a new company called Qwest. "Only a few years ago, many industry watchers believed there was more than enough network capacity to handle the nation's telephone calls. Well, that's all changed with the Internet," said Mr. Grubman [a telecom analyst at Salomon Brothers Inc.]. A year ago, he noted, telecommunications capacity requirements were doubling every 12 months; today they are doubling every 4 months.
Qwest intends to become a "carrier's carrier," leasing its network to other telecommunications companies. Its new fiber optic artery will be 13,000 miles long, reaching 100 cities nationwide when completed in 1998. It also will be about 16 times faster than today's most common high-speed networks."
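To make the analyst's figures concrete, here is a small sketch (ours, not Sandberg's or Grubman's) converting a doubling period into implied annual growth:

```python
def annual_growth(doubling_months: float) -> float:
    """Multiplicative growth over 12 months, given the doubling period."""
    return 2 ** (12.0 / doubling_months)

# A year ago: doubling every 12 months means 2x capacity demand per year.
print(annual_growth(12))  # 2.0
# Today: doubling every 4 months means 8x capacity demand per year.
print(annual_growth(4))   # 8.0
```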
Other industry sources believe that only the larger scope networks will see the fiber-bandwidth crunch directly. For example, they say that those who buy cell-relay services at 10Mbps will not see the same crunch as those who buy raw SONET. The cell relay networks will simply have their new 10Mbps PVCs added into an existing service cloud. Since they are sharing bandwidth on an existing cloud, they will just experience a greater traffic aggregation level. (However, if they have no service guarantees in their contracts, they could be in for some unpleasant surprises since their bandwidth can't be manufactured from nothing.)
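The aggregation effect described above can be sketched with hypothetical numbers (ours, not the sources'): the ratio of committed PVC bandwidth to trunk capacity rises with each new PVC, even though no individual customer's nominal rate changes.

```python
def oversubscription(trunk_mbps: float, pvc_rates_mbps: list) -> float:
    """Ratio of total committed PVC bandwidth to the shared trunk's capacity."""
    return sum(pvc_rates_mbps) / trunk_mbps

# Illustrative: nine 10 Mbps PVCs riding a 45 Mbps (DS3) trunk are
# oversubscribed 2:1 -- fine until too many customers burst at once.
print(oversubscription(45, [10] * 9))  # 2.0
```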
Those large networks building with raw SONET require that idle fiber strands first be located (and in some cases actually re-terminated for higher speed) and then attached to new SONET drop and insert interface cards. The actual shortage right now is not so much in fiber strands but manufacturing backlog on the interface cards and availability of spare slots in the shelves of existing SONET multiplexer chassis.
Some see the likely short-term crunch periods as the result of a failure to anticipate leased-line growth. The common practice among IXC carriers and manufacturers of avoiding excess shelf inventory also contributes to the shortage. With 60-day ordering cycles, one can expect any specific regional crunch to resolve itself within 60 to 90 days.
These crunches are expected to occur randomly in different places and with different providers over the next year. Therefore, expect to see more 90- and 120-day lead times such as are now being experienced in San Francisco and New York. Some claim that these temporary crunches are a bit artificial, because the transmission industry and the manufacturers could easily stock sufficient chassis and interface cards to meet the well-established upward trend in demand.
When we showed the preceding paragraphs on the additional details of the problems of fiber infrastructure to Scott Bradner, he had the following comment: "It is not just planning failure. I think that the shortage will be felt down the stack in the form of higher costs."
The problem is exacerbated by the fact that no one really knows the amount of available fiber. The FCC does publish annual fiber deployment reports each July, for the year ending the preceding December, at
http://www.fcc.gov/Bureaus/Common_Carrier/Reports/FCC-State_Link/infra.html.
The reports cover the telephone industry (IXCs, LECs, and CAPs) but fail to include utilities and the CATV industry. Furthermore, many local governments are not 100% sure what fiber has been laid in their communities and where.
The Backbone Building Rush
Even though it is not at all clear how peering and interconnect arrangements with the majors will work out, 1996 has been a year of backbone building for new companies who want to join the majors at the top of the Internet hierarchy. In the Silicon Valley area CRL and Geonet have gone T-3 from coast to coast. In Arizona Goodnet and Genuity have done the same. Genuity is interesting because it has Bechtel, the multi-billion-dollar construction firm, behind it and Jon Postel (the IANA) on its board. @Home has completed its backbone. On the East Coast ICONet has started from the New York City suburbs and built to the Pacific. DIGEX, on the strength of an IPO, has done the same from Washington, DC, while from the same area a Virginia suburban company named Netrail, backed by a wealthy Atlanta investor, is also completing a buildout. Worldnet Access, in alliance with Brooks Fiber, is yet one more effort, one which combines a new backbone with a chain of acquired ISPs.
When plans for most of these backbones were initiated at the beginning of 1996, it made sense to build them, because the rules were that a new player could expect to go to three of the five major public exchanges at 45 megabits or above and obtain peering with the majors there. In other words, it seemed that a start-up could, for the expenditure of a few million dollars, become a Tier One provider with peering partners and without anyone upstream of it in a position to raise the price of its network connections and thus its cost of doing business.
So it was thought. But while these build-outs were taking place, conditions changed again and the top five providers (MCI, Sprint, BBN, UUNET, and ANS) began to execute private exchanges and to say that they would re-evaluate their existing peering agreements. The effort to avoid being a customer of one of these five and hence subject to price increases was becoming very expensive and perhaps even impossible to accomplish. As a result, some may begin to wonder about the comparative economic security of connecting to one of the newer backbones as opposed to those of the top five. Why? Because the second tier national providers are at the mercy of any price increases that might be imposed by the big five. This is a subject that we shall watch closely in 1997.
Routing, Switching and Quality of Service
In 1997 we will see new router announcements from Cisco. Engineers there, and in other companies, are working hard to find ways to combine the benefits of routing and switching since the ISP's dream is to have a box that can do both. Tag switching, use of precedence bits in the headers of IP packets, and RSVP are some of the numerous efforts designed to produce something that the industry is beginning to call Quality of Service (QoS) for customers who demand real time, or close to real time, performance for business applications that are strategically important.
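As an aside, the "precedence bits" mentioned above are the top three bits of the IPv4 Type-of-Service byte defined in RFC 791, and an application can already request them through the socket API. A minimal, Linux-specific sketch (our illustration, not tied to any vendor's QoS scheme):

```python
import socket

# Precedence 5 in the top three bits of the ToS byte: 101 00000 = 0xA0.
PRECEDENCE_5 = 0xA0

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, PRECEDENCE_5)
# Outgoing datagrams on this socket now carry the requested precedence;
# whether any router honors it is a policy question, not an API one.
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 160 on Linux
s.close()
```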
Scott Bradner touched on QoS development in a significant way when he wrote in his December 2, 1996 Network World column: "Undifferentiated products of this type [best effort delivery] leave little room for ISPs to compete other than in price. If (or better, when) the network infrastructure can support multiple levels of quality of service, the ISPs can start to compete on a basis of quality of data delivery not just cost or how fast they answer the phone. It also may provide a way for the ISPs to moderate the growth in their bandwidth requirements enough so that the production of fiber and the installation of new facilities has a chance to catch up." (In both this report and our February 1997 newsletter we explore the quality of service issue in new interviews with Scott Bradner, Fred Baker, Noel Chiappa and Bob Moskowitz.)
Certainly, technologies for implementing Quality of Service (QoS) differentials are at the top of many agendas. Some seem to think this may well provide the Internet with its long-sought business model: being able to sell differentiated levels of service quality. But others disagree, saying that such a view can trap the holder into an unwarranted reliance solely on technology. In their opinion, the prerequisites of a sound business model are excellent technical performance, the ability to serve defined markets better than competitors, and distribution channels that allow penetration into those markets. For example, in a large corporation the principal buyer of integrated Internet services (such as security, network outsourcing, and advanced web hosting) is not the telecom manager but the general manager. This is a very different sale. Companies want integrated results, not one-stop shopping for voice, long distance and Internet. Integrated results mean providing high-speed access, integration, complex operation, application development, and transaction services.
End User Quality Monitoring
Efforts are underway to provide performance monitoring tools to Internet customers. In 1997 the Automotive Network eXchange will work through its "overseer" to certify National Service Providers that meet an acceptable level of quality and performance. Merit's Routing Arbiter has survived its transition. In addition to route servers it has developed a network outage monitoring system that over 30 large ISPs have agreed to use. At the San Diego Supercomputer Center a group called NLANR is working on the development of performance measurement tools. Kim Claffy is working on forming another group, called CAIDA, that will consolidate these tools for use by end users.
According to http://www.nlanr.net/COLL/caidance.html, "CAIDA (Cooperative Association for Internet Data Analysis) is a proposed effort to promote greater industry cooperation in architecting and managing the Internet. It seeks to address global engineering concerns that are highly dependent on cross-ISP coordination. This includes efforts to: (1) identify, develop and deploy measurement tools across the Internet; (2) work with commercial providers to provide them with a neutral, confidential vehicle for data sharing and analysis; (3) provide networking researchers and the general Internet community with current data on Internet traffic flow patterns; (4) assist in the introduction / deployment of emerging internet technologies such as multicast, IP v.6, web caching, bandwidth reservation protocols, etc.; and (5) enhance communications among commercial Internet service providers and the broader Internet communities."
Intel, meanwhile has developed a software test suite that it can use to measure how well a potential ISP can retrieve material from a few thousand web sites that its employees use. It is possible that this test suite might be used by other companies as a means of comparing service provider performance in advance of contract signing. By this time next year we shall probably see one or more systems for rating ISP performance.
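Intel's suite is not public, but the idea it embodies, timing retrievals of a fixed URL list through a candidate provider, is simple to sketch. Names and structure below are our own invention:

```python
import time
from urllib.request import urlopen

def timed_fetch(url, fetch=lambda u: urlopen(u, timeout=10).read()):
    """Return seconds taken to retrieve one URL.  The fetch callable is
    injectable so the harness can be exercised without live network access."""
    start = time.monotonic()
    fetch(url)
    return time.monotonic() - start

def mean_latency(urls, fetch=lambda u: urlopen(u, timeout=10).read()):
    """Average retrieval time over a URL list: one crude provider score."""
    return sum(timed_fetch(u, fetch) for u in urls) / len(urls)
```

Run against the same URL list from access circuits on two candidate providers, the lower mean is the crude winner; a real rating system would also track failures, variance and time of day.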
Beginning with our March 1997 issue, we shall cover these end-user-oriented evaluative tools very carefully. We shall also follow efforts, such as CAIDA, that are designed to provide mechanisms for inter-provider cooperation. We shall track the fiber shortage issue closely. Government may become an additional issue. Currently the FCC looks ISP-friendly; what may happen after Reed Hundt's departure is another matter. Bruce Lehman is making much mischief at the Patent Office with efforts that seem designed to let content providers rule Cyberspace. These efforts were also the focus of material delivered to a contentious December 1996 meeting of the World Intellectual Property Organization. In the meantime it is becoming clear that an Internet with differing service quality levels does not have to be a broken Internet -- not broken in the sense that customers paying for different levels of QoS should still be able to communicate in a seamless fashion.
Conclusion
The Internet seems poised for continued major growth in 1997. The biggest imponderables that we see are the question of access to SONET bandwidth at affordable prices and the ability to obtain interconnects with the top five on a basis that remains economically viable for those without their size and marketing muscle. On January 6 an announcement from MCI that it was beginning to deploy 40 gigabit per second technology on its network, two years ahead of schedule, further complicated the current situation. MCI's use of Hitachi and Pirelli technology will likely buy some time in dealing with the bandwidth crunch. But with capacity demand doubling every four months, avoiding problems entirely seems unlikely. Unfortunately, there are too many indications that prices will be increasing to permit us unrestrained optimism in evaluating the next stage of the Internet's growth.
January 7, 1997
---
CONTENTS AND SECTION SUMMARIES
Evolving Internet Infrastructure contains the full text of 46 articles appearing in the COOK Report between May 1996 and February 1997. They are organized into seven sections and presented in either chronological or subject-matter order within each section. They are also indexed (about 250 subjects) on pages 215-217 of the report. Each section is introduced by an executive summary that explains where and how the material contained in that section fits into the "bigger picture". The articles in each section are also preceded by their individual executive summaries. The report is structured to guide the reader through a series of gradual steps, starting with the summary and leading to increasing levels of detail about the way in which the complex systems that make up the Internet fit together and interoperate.
Evolving Internet Infrastructure
Editor's Introduction and a Summary of the Operational Environment in January 1997 p. 1
Part One: Evolving Architecture & Pricing
The Inter-relationship Between Internet Architecture and the Cost of Doing Business
In order to understand the derivation of market share and hence the source of an Internet venture's staying power, it is necessary to understand the topology of the Internet as a series of inter connected networks. An ISP's position within this topology determines the conditions under which the ISP does business.
As the commercial Internet continues to grow, its pricing model is still evolving. Position within the Internet's topology determines the position of an ISP within the Internet food chain. [Remainder of Section summary is found in full report.]
Spiraling Bandwidth Consumption and Flat Rate Pricing on Collision Course? Interview with Vint Cerf (Jan. 97) p. 7
Private Interconnects as New Internet Apex -- Network Engineering Demands Cause Five Largest Players to De-emphasize Public Exchange Points (Dec. 96) p. 12
How BBN Planet Handles Private Interconnects and Backbone Capacity Planning -- A Glimpse at How the Majors Keep Their Backbones Afloat (Jan. 97) p. 22
Part Two: Internet Architecture - Exchange Points, NAPs & MAEs, Peering and Transit
Summary - Understanding the Changing Relationships of the Largest Players
The Exchange Points are the "clover leafs" that tie together the backbones of the National Service Providers. They are increasing in number with some taking on regional functions while others are appearing in Europe and Asia. The five most important North American exchanges (MAE-East, MAE West, PacBell, Ameritech, and Sprint) have been populated by several dozen ISPs that either have built national backbones or thought that they could connect to the Internet by appearing at only one or two exchanges. [Remainder of summary in full report.]
Technology and Policy Complexity Behind Different Visions of Internet Architecture -- Huge Transit Backbones or a Fine Mesh of Smaller Links Connected by NAPs? (May 96) p. 27
Peering and Transit Issues -- Nanog Discussion Illuminates Some Current Practice and ISP Concerns (July 96) p. 29
Peering and the NAPs - An Update -- Rules for NAP Use and Top Tier Peering Evolving (Sept. 96) p. 33
Digital Enters Internet Exchange Business -- Innovative PAIX Combines Commercial Data Center Including Customer Web Servers with Level Two Exchange (Oct. 96) p. 38
Peering and Transit Issues -- Opportunity for Cheating May Be Negatively Impacting Spread of Level One Exchanges - Complex Issue Leaves Most Unhappy (Nov. 96) p. 42
Part Three: Internet from the Provider Side -- Business Model Analysis and Case Studies
Summary: Many Players in Search of Winning Strategies
As the commercial Internet matures everyone is getting "into the act," while the innovative, non-conformist talents of the original start-up companies are being eliminated in a process of 'corporatization' of an otherwise fiercely independent culture. At another level the process is being shaped by the entry of the largest telephone companies into the field. [Remainder of section summary found in full report.]
"Scaling" and "Corporatizing" the Internet -- For What It Does AT&T WorldNet Impressive as Big Three IXCs Commit to Nationwide Dial Up -- But Will Moves to Large Scale Operation Stifle Innovation and Service? (Oct. 96) p. 56
Some Thoughts on the Significance of the MFS Take Over of UUNET (July 96) p. 61
World Net Access -- A VC Led Effort to Build a National ISP Network (Oct. 96) p. 62
Teleglobe Enters Internet Business -- Canadian Based International IXC Wants To Provision Internet Backbones World Wide (Oct. 96) p. 63
ATMnet: Dynamics and Technology of a Backbone Startup Can a New Company Leverage Technology to Drive Down Cost? (Sept. 96) p. 68
FastNet: A Regional ISP Business Model -- PA and NJ Provider Uses Quadruple Homing and Dual Platform Local Loops for High Reliability -- Discusses Telecom Deregulation Impact on ISP Market, Describes What ISPs Must Do to Prosper (Nov. 96) p. 75
Frontier Internet: A Rural ISP Business Model -- Dial-up Customer Support Critical to Start Up But Dedicated Lines and Telecommunications Systems Integration For Area Businesses Are Keys to Future (Dec. 96) p. 83
Maintaining Credibility in the Secondary Tier of National Service Providers: AGIS and CAIS (Dec. 96) p. 90
The Small Telco as Technology Innovator Northern Arkansas Telephone Company Offers Cheapest ISDN in U.S., SONET, & Internet Access -- Issues Facing Small Telcos Under Deregulation (Jan. 97) p. 93
Part Four: Routers and Routing Tables
A Summary of Some of the Major Issues
Not surprisingly, given the Internet's continued unabated growth, routing technologies are very much an issue in keeping the network running. In 1996 the advocates of ATM and switched ATM or frame relay fabrics made a major mark. As the leading switch maker, Cascade prospered. [Remainder of summary to be found in full report.]
Design of Ipsilon's IP Switch May Offer Possible Solution to IP/ATM Problems and to Overtaxed Backbone Routers But Start Up's First Products Focus on Campus WANS (July 96) p. 100
Can Bay Networks Compete with Cisco in Major Internet Backbones? ANS Deal Boosts Image But Cisco Maintains Lead in Software, and New Product Announcements (June 96) p.109
Renumbering Texas Instrument's IP Net -- Current Processes Inadequate to Avoid Future Trauma (Sept. 96) p. 112
Route Advertisement Bulge -- After a Year of Relative Stability, Routes Advertised to Defaultless Core Grow by 25% Over Summer (Oct. 96) p. 117
Sean Doran - Talks about Bridges at Interconnect Points, New Router Technology, Big Telco - ISP Competition & Bandwidth Acquisition (Oct. 96) p. 119
Sean Doran Describes Sprint's Filtering Policies (May 96) p. 122
Part Five: Quality of Service
Summary: Multiple Perspectives on QoS
Quality of Service (QoS) is likely to be the single most important business issue for the Internet in 1997. While the QoS concept does not seem to have a single definition, it may be loosely construed as the desire to offer a predictable and dependable service that is more reliable than the current best-effort paradigm, which says: we try hard to deliver your packets, and should they be dropped we will continue to retransmit until they do arrive. [Remainder of summary found in full report.]
Sprint Executives Discuss ATM Plans, Backbone Upgrade, and Operating Structure Commitment To ATM Emphasized But Implementation Seems Cautious (May 96) p. 126
ATM: What Does the Customer Need? -- TeleStrategies Presentation by John Curran (Oct. 96) p. 130
BBN Pushing RSVP Commercial Availability for Early Next Year -- Success Would Deliver Strong Blow to Use of ATM for Bandwidth on Demand (May 96) p. 132
IP Provider Metrics Mail List Faces Difficult Topic -- How Can Performance of TCP/IP Internets Be Measured? (Oct. 96) p. 136
Quality of Service Issues Very Much Unresolved -- While Technology Questions for Packet Tagging, RSVP and ATM Not Solved, Administrative Barriers May Be Most Serious (Jan. 97) p. 138
Automotive Network Exchange Expected to Announce Certification Process Overseer (Feb. 97) p. 140
Fred Baker Explains Cisco's Use of Packet Labeling Algorithms Weighted RED Permits Specially Labeled Packets to be Shielded from Network Congestion (Feb. 97) p. 142
Single Provider Based VPNs May Be Best for Initial Use of Quality of Service in Public Internet --Scott Bradner Discusses Quality of Service Issues (Feb. 97) p. 146
Design Considerations for Quality of Service --Noel Chiappa Examines Scenarios for QoS Implementation & Emphasizes Difficulties Inherent in Cross Provider RSVP Use (Feb. 97) p. 150
Part Six: Who Will Rule? - Internet Law & Policy Forum, ISOC and the DNS Controversy
Summary: Will Money Destroy the Most Productive Qualities of Internet Governance?
The smell of money has created policy problems for the Internet in 1996. Some people have emerged who are ready to declare the cooperative nature of the Internet outmoded and who would question the open standards process of the IETF and the legitimacy of the Internet Society in conjunction with the IAB to govern the network. [Remainder of summary found in full report.]
Who Speaks for Internet Community? ISOC, IETF, IANA or Lawyers of ILPF ? -- Disputes Over Domain Names and Attacks on ISOC Overshadowed by Governance Aspirations of Lawyers Forum - ILPF Seeks to Make Policy for Net in Political Arena (Aug. 96) p. 157
Internet Governance: New ISOC Course Charted by Exec. Director Don Heath -- Local Chapters Grow and Foreign Members Now Outnumber US (Sept. 96) p. 184
Dave Farber Finds Role for ILPF if It Does Not Try To Assert Over-All Governing Authority -- Lauds ISOC's new Relationship with IETF, IAB, IANA (Sept. 96) p. 185
The Domain Name Service Wars -- We Point Our Readers to Two Web-Based Resources (Oct. 96) p. 187
Part Seven: The Local and State Level and the Future
Summary: Skeptical Looks at the Internet and Public Policy
We describe here some projects and points of view with which we are familiar. We heartily dislike what seems to be a national policy process to build a so called National Information Infrastructure (NII). Why? Because far too often the projects funded are those organized by a coalition of self-appointed community leaders, public officials, industry groups and major corporations. The process is fueled by government money and is more attuned to the special interest's definition of the public interest than anything most citizens would recognize as being in their interest. [Remainder of section summary found in full report.]
What's Wrong with National Information Infrastructure Policies? A Critique of the Alliance Between Public Officials, Community Gate Keepers and Industry by Jeff Michka (May 96) p. 190
Beware! An NTIA NII Award Could Be Coming to Your Backyard This Fall -- If It Does Not Come Consider Yourself Lucky (July 96) p. 193
Appropriate Technology and Public Policy for K-12 Education: Internet As Pork Barrel? -- An Examination of Some Recommendations of Universal Service Board & an Update on MercerNet as an Example of Our Concerns (Jan. 97) p. 199
Wired for Dollars -- Why is Maine Using its Schools and Libraries to Lure the Technology Giant? by Laura Conaway (Jan. 97) p. 203
The Lessons of Access Indiana -- State Policy Confounded by Top Down Planning for Centralized Purchase of Internet Services (May 96) p. 207
Academia Dissatisfied With Its Fate Under Commercial Internet: Internet II Solution Touted But Purpose and Focus of Internet II Is Blurred (Nov. 96) p. 210
Sorting out the Various New Internet Initiatives -- an IETF Interview with Scott Bradner (Feb. 97) p. 213
Index and Internet Topology Map p. 214
Contents p. 219
THE AUDIENCE FOR THE HANDBOOK
Within the national Internet service provider community, "Evolving Internet Infrastructure" is intended to educate strategists about the complexities facing their engineering and operations staffs. Among smaller ISPs it should serve as a tool to bring owner-operators, who are busy 18 hours a day ordering lines, installing them and servicing their customers, up to speed on the changes going on in the environment in which they must operate. LECs and other phone companies will find it useful. Finally, familiarity with the issues discussed within the Handbook will provide corporate MIS people with a valuable knowledge base from which to negotiate with their present or future Internet service providers.
However, since these infrastructure issues are also critical to the continued growth and success of the industry, this "Handbook" is expected to be a tool for use by those in the banking and investment community. If those in the financial community understand the changing technical and power relationships in the industry, they will be able to improve the quality of their investment decision making. It should also be useful to corporate strategic planners who will be advising their companies' decision making in Internet applications for vertical industry markets.
HOW TO ORDER THE FULL REPORT:
The Report is priced at $375.00 per copy. The standard form is double-sided xeroxed, GBC bound. A site-license copy is shipped single-sided laser printed.
Current site-license subscribers to the COOK Report may have site-license privileges for the desktop-published copy at the same $375 cost. Those who are not subscribers to the COOK Report may save 5% on the cost of a subscription if they order both at the same time. The report will be available in ASCII form for markup and posting on an employee-only intranet for $1000. Those with web-based site-license subscriptions may order the report for adding to their webs at $800. Single-use ASCII copies at less than $800 are not available.
Inquiries by email are welcome, but orders should be placed by phone and confirmed by fax.
cook@cookreport.com 609 882-2572
---
The COOK Report on Internet, 431 Greenway Ave, Ewing, NJ 08618 USA. (609) 882-2572 (phone & fax). Internet: cook@cookreport.com. For subscription pricing & more than ten megabytes of free material visit http://pobox.com/cook/ For a case study of MercerNet & TIIAP-induced harm to a local community see http://pobox.com/cook/mercernet.html
---