The University of Vermont vBNS Connection: Project Description

 

1. Introduction

The University of Vermont proposes a high performance connection to the vBNS. The University's participation in the NSF vBNS and its membership in coalitions such as Internet2 are undertaken to develop the University's research infrastructure and to increase its participation in and contribution to national research and educational goals.

The University of Vermont and State Agricultural College blends the academic heritage of a private university with service missions in the land grant tradition. UVM, from the Latin Universitas Viridis Montis, is located in Burlington, Vermont's largest city. During the fall 1997 semester, 7514 students were enrolled in the eight undergraduate colleges and schools, 1164 in the Graduate College, and 383 in the College of Medicine.

UVM is a Research II University according to the Carnegie criteria and ranks nationally among the 100 leading universities receiving sponsored research support from agencies such as the National Institutes of Health and the National Science Foundation. The University received over $38 million in sponsored research funding during fiscal year 1997. Research is viewed as central to the intellectual integrity and educational mission of the university.

Shortly after inauguration in 1997, President Judith Ramaley restated encouragement for research collaborations between faculty and students. "My hope is that every student at UVM will have the opportunity to participate in the generation of knowledge and understanding. ... In this way, our students can experience the challenges and joys of scholarship and can learn more deeply through their direct participation in the creation, interpretation and application of knowledge to issues they care about." Participation in vBNS is an initiative that supports these institutional goals.

 

1.1. Networking at the University of Vermont

Originally built in 1984 to support collaboration between the new Engineering and Mathematics Computing Facility (EM-CF) and the Academic Computing Center, the UVM campus network used internet standards over a CATV backbone. With the introduction of a 128 Kbps NSFnet sponsored connection to NEARnet in 1989, and a desire to provide network access to every department on campus, the campus backbone was converted to fiber beginning in 1990. By 1995, the backbone technology was upgraded to 100 Mbps FDDI. Although not a founding member, the University joined the Internet2 consortium in 1996.

The Division of Computing and Information Technology's (CIT) Network and Telecommunication Services (NTS) operates the university telephone system and the campus academic and residential video networks, as well as managing the underlying fiber networks. NTS manages the campus internet service, which is provided by two T1 circuits, with an upgrade to three T1s planned for the fall; a special T1 circuit connects to a local internet service provider for dial-in service to off-campus users. NTS also manages special services such as a connection to the Vermont Interactive Television (VIT) digital video-conferencing network. NTS staff perform all fiber splicing, fusing, and termination on campus.

The campus maintains its own fiber plant, which has grown significantly since its original installation in 1984. The most recent upgrade of the fiber plant occurred in the Spring and Summer of 1998 as part of an expansion of the residential video network. A new fiber core, consisting of 144 strands of single-mode fiber, connects three points on campus - the Waterman, Aiken, and Southwick Buildings. The remaining 90 or so buildings on campus are served by either 12 or 24 strands of multimode fiber, 12 or 24 strands of single mode fiber, or both.

Networking at UVM, however, is more than wiring. As on many campuses, networking is ubiquitous and is increasingly used for research, instruction, and university life in general. In 1998 the University of Vermont ranked 32nd in the Yahoo Internet Life survey highlighting "America's 100 Most Wired Colleges". [2]

 

1.2. Networking Expertise

The current campus IP network offers 100 megabit service to every building on campus. An FDDI backbone incorporates 4 Cisco 7500 and 2 Cisco 7000 routers. It supports over 100 subnets and over 6300 Ethernet connections. The FDDI service has been in place since summer 1996.

Intra-building service is based on Cabletron intelligent hubs. Most public computing facilities, such as those in the Arts and Science complex in Old Mill, the Lafayette Building with the Center for Distance Learning, the Pomeroy Building, and the Votey Engineering Building, as well as all of the residence halls, have been upgraded to Category 5 twisted pair. The bulk of office wiring is telephone twisted pair. Several local networks use internal FDDI rings to cluster high-end Unix servers. A digital media lab has an ATM based local area network to support a video server.

As part of this proposal, the campus backbone will be reengineered to support gigabit technologies, initially ethernet, with an option for migrating to IP over SONET. The campus network manager who will be responsible for the engineering design and day to day operation of this network is an active participant in national vBNS engineering meetings.

A schematic diagram of the existing network appears in figure 1.

 

Figure 1: The UVM Backbone Network (1998)

1.3. Planning Process

The planning process began in late 1996 when the University of Vermont applied for membership in the Internet2 consortium. This activity, led by Prof. Thomas Tritton, then UVM's Vice Provost for Research, now president of Haverford College, spawned a planning team containing members from CIT, the Department of Computer Science, and the University of Vermont's NSF EPSCoR program. In early 1997, teams from the University of Maine, the University of New Hampshire, and the University of Vermont met in Burlington with members of the Boston area gigaPOP proposal to discuss approaches to vBNS connectivity and the particular telecommunication infrastructure of rural New England.

At that meeting, a proposal writing team was identified at UVM, and discussions were begun with specific researchers and the general university community. The discussions with the larger community were often framed by related issues. The newly appointed President had begun a search for a Provost, and the retiring Vice President for Administration appointed an Information Technology Task Force (ITTF) charged with preparing a report for the new Provost, who was expected to arrive in early 1998.

In addition to on-campus discussions, the writing team members participated in the NCSA / EPSCoR Alliance sponsored workshop held at the National Center for Supercomputing Applications in September of 1997.

The ITTF report, which recommends that the University appoint a Chief Learning and Information Officer (CLIO) and recognizes the strategic nature of university participation in the vBNS and Internet2, has been enthusiastically accepted by our new Provost, Geoffrey Gamble. Members of this writing team have participated in two search processes for vice-provost level officers - the Vice Provost for Research and the CLIO - both of which are expected to be filled by the fall semester.

1.4. Planning Participants

The main participants in the planning effort have been

As is often the case at the University of Vermont, the vBNS planning process has been open and augmented by electronic mail and web sites. Meetings have been held with the Bailey Howe Library staff, the Computing and Information Technology staff, and the Department of Computer Science. In addition, we prepared an Internet2/vBNS booth for the Fall 1997 InfoFair.

Planning at UVM, and especially planning for information technology, aims for continuous improvement. This fall, the process will continue with the addition of both the CLIO and the Vice Provost for Research.

 

2. Meritorious Applications

The University of Vermont research community envisions a broad spectrum of vBNS applications. High performance networking provides a springboard to enhance ongoing activities as well as introduce new opportunities. The UVM network goal, moreover, is to make the resource available to every researcher - faculty, graduate student, undergraduate student, or staff member - whose activities fall within acceptable use policies.

Below is a list of applications requiring vBNS levels of connectivity divided into three application groups - Computer Science, Computational Science and Engineering, and Computer Mediated Collaboration and Education. These applications are chosen for their contribution to the national vBNS efforts as well as for their exemplary role at the University.

At the end of this section, we've included a brief summary of the identified vBNS requirements.

2.1 Applications in Computer Science

The Department of Computer Science plays a central role in the University's proposal. Networking serves as a focus for several researchers in the department, research that itself will benefit from access to the vBNS.

2.1.1 High Speed Network Performance and Reliability

Dr. Charles Colbourn, Dorothean Professor of Computer Science and Chair, Department of Computer Science:

Dr. Colbourn's research on algorithms to estimate network reliability and performability is computationally intensive, and can be parallelized over a network only with rapid sharing of current states between cooperating processes. His other research is on heuristic search for combinatorial objects such as error-correcting codes and erasure-resilient codes. Current successful algorithms for these problems distribute computation in a loosely coupled manner because of the difficulty of communicating large volumes of search status information; high bandwidth to multiple high-speed computer servers would permit very effective sequential techniques to be parallelized.

As part of his research use of vBNS services, Professor Colbourn plans to sponsor high speed data networking projects for both demonstration and undergraduate and graduate thesis research. The inclusion of an ATM application switch as part of this proposal is intended to further these projects. The vBNS connection will be part of this network testbed operating in conjunction with network experiments arranged with the University of New Hampshire Interoperability Lab.

Professor Colbourn's research requires access to ATM level protocols, low latency, and bandwidths ranging from 10 Mbps to 1000 Mbps. He and his research group will use these facilities several hours weekly.
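
For orientation, the sketch below illustrates the style of computation involved: a serial Monte Carlo estimate of all-terminal network reliability for a small, hard-coded example network. The topology, edge failure probability, and sample count are illustrative assumptions, not parameters of Dr. Colbourn's codes; a parallelized version would distribute the sampling across cooperating hosts and exchange intermediate state over the network, which is where high bandwidth and low latency matter.

    /* Serial Monte Carlo estimate of all-terminal network reliability.
       Each edge fails independently with probability Q; reliability is
       the probability that the surviving edges keep the graph connected. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 6                      /* illustrative number of nodes   */
    #define M 8                      /* illustrative number of edges   */
    #define Q 0.05                   /* illustrative edge failure prob */
    #define TRIALS 100000

    static const int edge[M][2] = {  /* a small example topology */
        {0,1},{1,2},{2,3},{3,4},{4,5},{5,0},{0,3},{1,4}
    };

    static int parent[N];

    static int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }
    static void unite(int a, int b) { parent[find(a)] = find(b); }

    int main(void)
    {
        long connected = 0;
        srand(12345);                              /* fixed seed for repeatability */
        for (long t = 0; t < TRIALS; t++) {
            for (int i = 0; i < N; i++) parent[i] = i;
            for (int e = 0; e < M; e++)
                if ((double)rand() / RAND_MAX >= Q)   /* edge survives */
                    unite(edge[e][0], edge[e][1]);
            int comps = 0;
            for (int i = 0; i < N; i++) if (find(i) == i) comps++;
            if (comps == 1) connected++;
        }
        printf("estimated reliability = %.4f\n", (double)connected / TRIALS);
        return 0;
    }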

 

2.1.2 Multidisciplinary Network of Workstations (NOW) Research Laboratory

Professor David E. Dougherty, Departments of Civil & Environmental Engineering and of Computer Science

Professor Guoliang Xue, Department of Computer Science

Professors Dougherty and Xue are pooling resources to construct a computational research laboratory based on MYRICOM's very high speed network (1.2 gigabits per second per connection, full duplex). Two eight-port crossbar switches will link 400 MHz Pentium II computers in a network of workstations (NOW), forming UVM's fastest and largest computing cluster. Each computer will also be connected by a fast ethernet switch to the EMBA network. The cluster hosts will be able to boot either the Red Hat Linux or the Windows NT operating system. Portland Group Fortran, C, and C++ compilers, HDF, and message passing libraries will be used. We are also following closely the OpenMP initiative. The new facility will initially be used for research on network optimization, three-dimensional data inversion, and global optimization methods for protein folding and environmental management.
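
As a minimal illustration of the message-passing style the cluster will support (not the group's actual codes), the following sketch splits a global sum across the cluster hosts and combines the partial results with a single reduction; the host count and problem size are arbitrary.

    /* Minimal MPI sketch: each host evaluates part of a global sum and
       the partial results are combined with one reduction.            */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process sums an illustrative slice of 1..1000000. */
        long n = 1000000, local = 0, total = 0;
        for (long i = rank + 1; i <= n; i += size)
            local += i;

        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %ld (computed on %d hosts)\n", total, size);

        MPI_Finalize();
        return 0;
    }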

The NOW Research Laboratory requires low latency network access at ethernet speeds and higher. Their work will use the network for short periods several times weekly for development, and for long, extended production runs several times a semester. This research is carried out in conjunction with the DOE National Laboratories at Argonne and Livermore.

 

2.1.3 Multicast Networks

Dr. Yuanyuan Yang, Department of Computer Science and Electrical Engineering

Dr. Yang's research efforts focus on multicast networks. Multicast, or one-to-many, communication is in high demand in broadband integrated services digital networks (BISDN) and scalable parallel computers. Examples include video conference calls and video-on-demand services in BISDN networks, and barrier synchronization and write update/invalidate in directory-based cache coherence protocols in parallel computers. It has become increasingly important to support multicast communication in parallel computers as well as in telecommunication environments. Her ongoing research effort, which has been supported by the U.S. Army Research Office and the National Science Foundation, has focused on designing efficient multicast networks which can support arbitrary multicast communication patterns among the network nodes. By utilizing a network with multicast capability in a parallel computer or a telecommunication network, substantial improvement in system performance can be achieved through significantly shortened data-transfer delays and simplified synchronization among network nodes.
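
Dr. Yang's work concerns the design of the multicast switching fabric itself rather than any particular socket interface, but as an application-level illustration of one-to-many delivery, the sketch below joins a standard IP multicast group and receives a single datagram. The group address and port are arbitrary examples.

    /* Application-level illustration of one-to-many delivery: join an
       IP multicast group and receive one datagram.                    */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);                  /* example port  */
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        struct ip_mreq mreq;                          /* example group */
        mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

        char buf[1500];
        ssize_t n = recv(s, buf, sizeof(buf), 0);     /* one datagram  */
        if (n > 0)
            printf("received %zd bytes from the group\n", n);

        close(s);
        return 0;
    }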

Dr. Yang's multicast work requires guaranteed delivery at bandwidths upwards of 20 Mbps. She and her collaborators at Johns Hopkins University would use vBNS facilities a few times a year for several hours.

 

2.2. Computational Science and Engineering

2.2.1 Biomedical Optical Tomography

Margaret J. Eppstein, Department of Computer Science

David E. Dougherty, Departments of Civil & Environmental Engineering and of Computer Science

Collaborators: Eva M. Sevick-Muraca, Photon Migration Laboratory, Purdue University

We are initiating an NSF-funded collaborative project (the project has been approved and funding is expected imminently) between UVM and the Photon Migration Laboratory at Purdue. The purpose of the project is to develop 2-D and 3-D imaging technologies for biological tissues, using highly scattered near-infrared light transmission, with and without the addition of exogenous fluorescing dyes. The data will be inverted using an approximate extended Kalman filter with data-driven zonation, a method originally devised by the UVM group for geophysical imaging. The instrumentation and data collection will be performed at the Purdue Photon Migration Laboratory, while the computational data inversion will be developed at UVM. This will necessitate frequent transmission of large data and image files between UVM and Purdue; hence, high bandwidth is required for guaranteed and timely transmissions. This emerging method of noninvasive investigation will be potentially useful in a variety of applications, including noninvasive breast cancer screening and real-time visualization of haemodynamic processes in the brain. Therefore, although the duration of this NSF project is 3 years, we have already started expanding this collaborative research effort. This project will require 2 - 20 Mbps bandwidth with guaranteed delivery 3 times a week for half-day work sessions.
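
For orientation, the standard extended Kalman filter measurement update is shown below; the approximate filter with data-driven zonation used in this project differs in its details (notably in how the error covariance and the zonation are handled), so these equations are illustrative only.

\[
\begin{aligned}
K_k &= P_k^{-} H_k^{T}\,(H_k P_k^{-} H_k^{T} + R_k)^{-1},\\
\hat{x}_k &= \hat{x}_k^{-} + K_k\,(z_k - h(\hat{x}_k^{-})),\\
P_k &= (I - K_k H_k)\,P_k^{-},
\end{aligned}
\]

where \(\hat{x}_k^{-}\) and \(P_k^{-}\) are the prior state estimate and error covariance, \(z_k\) is the measurement vector, \(h\) is the nonlinear measurement operator, \(H_k\) its Jacobian evaluated at the prior estimate, and \(R_k\) the measurement error covariance.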

 

 

2.2.2 Remote Observations in Radio Astronomy

Professor Joanna Rankin, Department of Physics

Dr. Joanna Rankin carries out research in observational radio astronomy with primary interests in the areas of the pulsar radio-frequency emission problem, pulsars as probes of the interstellar medium, and feminist studies of science. Her current research focuses on the polarization properties of pulsar emission, both of average profiles and of trains of individual pulses. She actively collaborates with astronomers throughout the world including India and Russia.

She regularly makes observations using the Arecibo Observatory in Puerto Rico and other instruments, and has most recently completed observations on the Vela pulsar using the NRAO Very Large Array at Socorro.

More than enhancing her own observations, however, the possibility of remote observing opens the horizon for graduate and undergraduate participation in research. The University of Massachusetts / Five College Astronomy Department, for example, allocates a significant fraction of the observing time at FCRAO to graduate and advanced undergraduate students and has been at the forefront of developing remote observing programs.

The availability of remote observing would greatly enhance Professor Rankin's research efforts. A typical observational stint would last 3 - 4 sessions, with each session using 3 - 5 hours of telescope time. Each session generally produces data at the rate of a megabit per second - a five hour session producing just over 2 gigabytes. Participating as a remote observer, however, would require additional bandwidth - for communication with the telescope operators, operating computer programs, and so on. The VLA itself provides observers with computer based stations running on a local area ethernet. Student observing programs would have similar requirements, except that a larger number of students, perhaps 2 or 3, would have fewer sessions.
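
Assuming the stated instrument data rate of one megabit per second, the volume for a five-hour session works out to

\[
1~\mathrm{Mbps} \times 5~\mathrm{h} \times 3600~\mathrm{s/h} = 18\,000~\mathrm{Mb} = 2250~\mathrm{MB} \approx 2.25~\mathrm{GB}.
\]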

 

2.2.3. Minimum Cost Interconnecting Network / Simulation of Particle Systems

Dr. Guoliang Xue, Department of Computer Science and Electrical Engineering

Minimum Cost Network Interconnecting a set of points: Dr. Xue and two graduate students are currently designing several efficient approximation algorithms and intend to run them on the supercomputers at the San Diego Supercomputer Center. With the current interconnection at UVM, it is impossible for the configurations of the interconnecting network to be displayed in real time over the internet; ideally, they would like to see the display at the UVM end at the rate of one frame per second. This work is part of a larger collaboration with Professor Dingzhu Du, Department of Computer Science, University of Minnesota.

Faster simulation of particle systems: Dr. Xue and another graduate student are interested in computing a stable state of a cluster of n particles. Traditional methods use O(n^2) time to compute the Newton direction and then move the particles along that direction; they are developing O(n) time simulations of the particles. As in the previous project, they will carry this work out on the supercomputers at the San Diego Supercomputer Center and would also like displays of the cluster updated every second. This project has been supported by NSF.
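
For scale, the one-frame-per-second display target translates into bandwidth on the order of tens of megabits per second. Assuming, purely for illustration, an uncompressed 1024 x 1024 pixel, 24-bit frame,

\[
1024 \times 1024~\mathrm{pixels} \times 24~\mathrm{bits/pixel} \times 1~\mathrm{frame/s} \approx 25~\mathrm{Mbps},
\]

which is consistent with the 20 - 30 Mbps requirement listed for this application in Table 2.1.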

Dr. Xue's access to the San Diego Supercomputer Center requires bandwidths comparable to or exceeding local area networking speeds for half day work sessions 2 or 3 times a week. The performance of the network should allow the remote access to appear local.

 

2.2.4 Protein Sequence and Structural Diversity

Jeffrey P. Bond, Dept. Microbiology and Molecular Genetics
Director, Molecular Modeling Facility of the Vermont Cancer Center

Dr. Bond conducts research on protein sequence and structural diversity. This research, as well as that of the Molecular Modeling Facility of the Vermont Cancer Center, requires an integrated set of tools for protein and nucleic acid sequence and structure analysis. The Biology Workbench at the National Computational Science Alliance (NCSA) provides many of the required tools in a single integrated web product, allowing users on different platforms to be trained on a common program interface without requiring them to learn a multitude of different application interfaces. UVM is an alliance partner with NCSA (via EPSCoR participation). vBNS connectivity could result in a dramatic expansion of the use of the NCSA Biology Workbench by the many biomedical researchers at UVM. This research also includes development of ways to describe and graphically display the structural diversity and correlations in large sets of protein structures. Collaborations with remote researchers generating large sets of protein structures using NMR spectroscopy would benefit from a collaborative link that would permit conferencing including 3D visualization. Low latency, moderate bandwidth (0.1 to 2 Mbps) connections would be used 4 to 8 hours daily, with a 1-hour 3-D-visualization-enhanced conference per week.

 

2.3 Summary of Application Networking Requirements

The vBNS networking requirements for the identified applications are summarized in the table below.

2.1.1 High Speed Network Performance and Reliability
    Bandwidth: 10 - 1000 Mbps
    QoS: Protocol level access; low latency
    Frequency: Several hours weekly
    Collaborators: University of New Hampshire

2.1.2 Multidisciplinary Network of Workstations (NOW) Research Laboratory
    Bandwidth: 10 Mbps and greater
    QoS: Low latency
    Frequency: Several times weekly for code development and testing; once or twice a semester for production runs of several days
    Collaborators: Livermore National Lab, Argonne National Lab

2.1.3 Multicast Networks
    Bandwidth: 20 Mbps
    QoS: Guaranteed delivery
    Frequency: Several hours a few times a week throughout the year
    Collaborators: Johns Hopkins University

2.2.1 Biomedical Optical Tomography
    Bandwidth: 2 - 20 Mbps
    QoS: Guaranteed delivery
    Frequency: Half-day sessions, 3 days per week
    Collaborators: Purdue University

2.2.2 Remote Observations in Radio Astronomy
    Bandwidth: 2 Mbps
    QoS: Guaranteed delivery
    Frequency: 2 - 3 day, 8 - 10 hour observing periods once or twice a year; 3 - 4 instances of a 6 - 8 hour observing session each semester
    Collaborators: National Radio Astronomy Observatory, University of Massachusetts

2.2.3 Minimum Cost Interconnecting Network / Simulation of Particle Systems
    Bandwidth: 20 - 30 Mbps
    QoS: Guaranteed delivery
    Frequency: Half-day sessions, 2 - 3 times per week
    Collaborators: San Diego Supercomputer Center, University of Minnesota

2.2.4 Protein Sequence and Structural Diversity
    Bandwidth: 0.1 - 2 Mbps
    QoS: Low latency
    Frequency: 4 to 8 hours daily; one 1-hour 3-D-visualization-enhanced conference per week
    Collaborators: National Center for Supercomputing Applications

Table 2.1 : Summary of vBNS Requirements of Meritorious Applications

 

3. Contribution to Network Infrastructure

UVM's participation in the vBNS and Internet2 projects enhances both the regional and national infrastructures. From the regional viewpoint, much of the effort is developmental - enhancing the local services to perform at national levels. Participation in the formation of a regional gigaPOP, as discussed in the next section, will result from this vBNS project.

From the national perspective, the University and the state contribute unique and often refreshing examples. Often these solutions are scalable. Our policy of providing email and web access to every university affiliate led us to national prominence, at least for a short time. In 1996, our Zoo cluster hosted the largest number of academic users in an IBM AIX / DCE environment. This experience, requiring close collaboration with IBM developers, was featured in an advertising campaign and became useful to other institutions considering this technology. Early experience supporting a statewide K-12 BBS system led to widespread internet access in Vermont schools backed by a strong professional and curriculum development program - a year before NetDay became popular.

 

4. Network Plan

The University of Vermont vBNS networking plan consists of several components:

4.1 vBNS Connectivity

4.1.1. DS-3

UVM proposes to meet the needs of these identified meritorious projects by installing a DS-3 connection to the vBNS operating over ATM connectivity to an appropriate gigaPOP. UVM's commodity internet service will be maintained as a separate service.

In planning the vBNS connection, we've discussed options with four carriers - AT&T, Bell-Atlantic, Interprise (a partnership of USWest and Adelphia/Hyperion Cable), and MCI. The discussions centered on providing DS-3 services to a University owned ATM switch from a variety of POPs. Each of the vendors discussed the solution on a pending availability basis. We've also explored with these vendors the possibilities of OC-3 services.

Three separate connectivity options emerged that should be available in the 1999 time frame - a "Northern New England Connector" located in Nashua, New Hampshire, the "New England gigaPOP" at Boston University, and an MCI POP in Charlton, Massachusetts. These three connections are roughly equal in cost (plus or minus 5%) but may not be equally available. In estimating the budget for this project, we've used a high average as a conservative planning tool.

4.1.2. Northern New England Connector

The original discussions between the Universities of Maine, New Hampshire, and Vermont have been strengthened by the creation of a University of New Hampshire System Network based on a Bell-Atlantic ATM service in New Hampshire. The discussion has expanded to include Dartmouth College. The UNH system network includes a node in Nashua, New Hampshire, adjacent to an MCI site. UNH is currently discussing connectivity options via MCI to either the New England gigaPOP in Boston or to MCI's site in Charlton, Massachusetts. One of the aims of the negotiation is to have OC-3 services available in Nashua.

UVM could join the connector in either Nashua or Hanover, New Hampshire. This connection has a slight cost advantage and a much larger advantage of building on long term working relationships between the participating institutions. This approach also has the advantage of encouraging carriers to think seriously about extending their OC-3 services into rural New England.

4.1.3. New England gigaPOP

Boston University has been leading an effort to create a New England gigaPOP. Despite several obstacles, progress is being made and a direct connection between the University of Vermont and Boston may be possible - especially if significant cost savings could be realized through, for example, a New England inter-lata ATM service.

4.1.4. Direct connection to MCI

Historically, vBNS connectivity has only been available in New England via the MCI POP in Charlton, Massachusetts. This service loses its unique position as the Boston gigaPOP and Northern Connector emerge from the planning stage.

 

4.2 Network Engineering for vBNS Connectivity

The University proposes to install a new campus gigabit backbone with ATM resources separately available. This backbone will use single mode fiber installed as part of recent construction activities in and around campus; three 144 strand fiber runs are available in the fiber core. The separate ATM services will use additional fiber.

As illustrated in Figure 2, an ATM switch in the Waterman Building will provide a gateway between a campus gigabit backbone router and the vBNS connection. This switch will also connect to an ATM switch in the Engineering and Mathematics Computing Facility in the Votey Building. The Votey application switch will provide an ATM testbed facility to researchers in the Department of Computer Science. The availability of fiber on the UVM campus allows other applications direct access to the gateway switch. Direct ATM access provides a route around the campus IP backbone.

The second component is a backbone network based on gigabit switched ethernet. This provides the campus research community with a high speed campus network and, when the AUP allows, ready access to the vBNS. The AUP will be enforced in the gateway router, which will direct vBNS packets to the ATM switch and all other packets to the commodity internet provider.

The ATM switches being proposed are comparable to FORE 5000 series switches; the actual switches used in the project, however, will be selected to accommodate telecommunication vendor requirements. The routers will be Cisco 8500 class routers capable of being operated up to the multi-gigabit range.

 

 

Figure 2: The Proposed Gigabit Addition to the UVM Backbone Network

 

 

5. Quality of Service Plan

Delivering the quality of service required by researchers' applications demands managing both the campus networking environment and the connection to the vBNS. The UVM strategy for ensuring quality of service is to evolve as policy and technical expertise develop. Initially, based on the identified applications, traffic will be essentially IP. Very soon thereafter, a demand for time sensitive interactions will arise - real-time instrumentation, video conferencing, distributed processing. That will likely be followed by a need to weigh cost of service (CoS) against QoS.

Our plan, then, is to ramp up QoS support in three phases:

First phase:
The first step to QoS is to provide bandwidth, both on and off campus. On campus, we segregate application classes. The older FDDI backbone will continue to support administrative applications running over non-IP LAN protocols, the gigabit network will be available for IP based services, and ATM services will be available for applications using native ATM protocols. Off campus, vBNS traffic will be separated from commodity internet traffic. During this startup phase, we will establish routine performance monitoring, analysis, evaluation, and review procedures to analyze and classify traffic. We will also develop a working knowledge of how these performance measurements relate to application needs and researchers' experiences.
Second phase:
Develop experience and expertise with "bandwidth on demand" mechanisms by participating in RSVP demonstrations, testbeds, and prototype management regimes. The "direct connect to ATM" option provided in this proposal is an ad-hoc means for meeting special bandwidth needs which will arise before more universal mechanisms are available.
Third phase - Beyond QoS:
Much of the work needed to develop QoS procedures remains to be explored. The testbed switch provided in this proposal allows students and faculty to participate in research activities that will develop the next generation of quality of service solutions.

 

 

6. Institutional Commitments

In keeping with its Internet2 membership, the University is committed to participating in a broad range of high performance networking activities. Specifically, the University of Vermont will -

This project will provide the University with a reliable, robust and scalable networking infrastructure to support advanced applications of telecommunications in research and education. As researchers across campus gain familiarity with these technologies, they will begin to use them in new arenas, strengthening the research and intellectual resources of both the University and the Nation. The vBNS connectivity will provide a platform upon which the University of Vermont can become an active and valuable partner in the national and global gigabit environment.