Abstract
The Department of Laboratory Medicine at the University of California, San Francisco (UCSF) has been split into widely separated facilities, leading to much time being spent traveling between facilities for meetings. We installed an open-source AccessGrid multi-media-conferencing system using (largely) consumer-grade equipment, connecting 6 sites at 5 separate facilities. The system was accepted rapidly and enthusiastically, and was inexpensive compared to alternative approaches. Security was addressed by aspects of the AG software and by local network administrative practices. The chief obstacles to deployment arose from security restrictions imposed by multiple independent network administration regimes, requiring a drastically reduced list of network ports employed by AG components.
Introduction
The University of California, San Francisco, is responsible for 694 patient beds at two sites in San Francisco (Moffitt-Long Hospitals, Mt. Zion Medical Center). In the 2006–7 academic year, it supported 679,341 outpatient visits, and the training of 603 medical students and 1,188 house officers. Faculty also staff two affiliated institutions: San Francisco General Hospital (SFGH, 639 beds), and the San Francisco Veterans Affairs Medical Center (SFVAMC, 124 beds).
The 39 faculty and 360 laboratory and administrative staff members of the Department of Laboratory Medicine are responsible for UCSF’s clinical laboratories (comprising sections in blood banking, clinical chemistry, cytogenetics, molecular diagnostics/molecular pathology, hematology, immunology, and microbiology) at 3 locations: UCSF Medical Center (Moffitt-Long Hospitals); UCSF Medical Center at Mt. Zion; and, the main Clinical Laboratories at China Basin. The Department trains 12 residents and 3 fellows (who rotate between UCSF Medical Center, China Basin, SFGH, and SFVAMC), and maintains a vigorous grant-supported research program.
With institutional growth and consolidations, UCSF personnel have become increasingly dispersed and must frequently take shuttle buses to travel to meetings; we decided to install a video-conferencing facility to address this problem.
Our initial needs assessment centered on the most demanding envisaged task for the system: infectious disease plate rounds, in which a microbiologist based at China Basin discusses patient cases and laboratory results with infectious disease fellows and attending physicians at Long Hospital. The microbiologist required face-to-face visual contact with remote participants, and the ability to share stored images and lecture materials as well as real-time images of microscope slides and macroscopic items (such as culture plates). Given that all sites had excellent Internet connectivity, and that all participants were comfortable with using the World Wide Web, we decided that an Internet-based multi-media conferencing system would be appropriate. Several commercial systems were evaluated, but their costs were considered prohibitive: approximately $50,000 per site (or $300,000+ for our planned 6-site facility). We decided to create a system using open-source software and (as far as possible) consumer-grade hardware.
The AccessGrid
The AccessGrid (AG) arose from the fusion of research in several independent areas: high-speed computing networks, immersive virtual reality environments, computerized multimedia data representation, and multicast technology [1]. Standards governing or inspiring AG technologies are maintained by: the Internet Engineering Task Force (IETF); the International Telecommunications Union Telecommunication Standardization Sector (ITU-T); and, the World Wide Web Consortium (W3C). Work on AG began in the late 1990s, the result of a distributed collaboration centered on the Argonne National Laboratory (ANL), arising from earlier work on Internet multicast technology.
Technical Underpinnings & Precedents
Familiar Internet applications (e.g. web browsing) rely upon unicast, in which data is transmitted from one computer to another. Less widely known is multicast technology, which facilitates one-to-many and many-to-many communication. Under Internet Protocol v4, the Class D address space (comprising over 268 million addresses) is set aside for multicast traffic (RFC 3171). Under unicast, an IP address represents a network interface on a computer; under multicast, an address represents an abstract entity, a “session,” which can include (in principle) any number of participants. Establishing and maintaining the routing of required control and media data for multicasting in a way that scales gracefully to large numbers of participants is an active area of research. The earliest effort appeared in the 1991 Stanford doctoral thesis of Deering [2], which introduced the first member of the family of “dense mode” routing protocols; more recent research has introduced more scalable “sparse mode” methods.
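To make the distinction concrete, the short sketch below (Python; the group address and port are illustrative choices, not those used by any AccessGrid venue) joins a Class D group and prints whatever datagrams arrive. Any number of hosts may join the same group and receive the same traffic, which is what makes a multicast address a “session” rather than a single endpoint.

```python
# Sketch: receive traffic sent to an IPv4 multicast group
# (illustrative address/port, not an actual AccessGrid venue).
import socket
import struct

GROUP = "224.2.127.254"   # a Class D (multicast) address
PORT = 9875

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group: the kernel issues an IGMP membership report so that
# multicast-capable routers forward the group's traffic to this host.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(65507)
    print(f"{len(data)} bytes from {sender[0]}:{sender[1]}")
```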
Early experimentation with multicast was performed on the Multicast Backbone (MBone), a virtual network [3]. Internet routers were then incapable of supporting multicast protocols, so multicast packets were encapsulated within unicast traffic flowing between “tunnel” machines that ran multicast routing software. Managing the topology of this virtual network to optimize it for various activities was an onerous manual task. The MBone was first used in March 1992, when an audio-only feed from an IETF meeting was broadcast to 28 locations, including sites in Sweden and Australia.
Software tools employed on the MBone were primarily contributed by two centers of excellence, University College London and Lawrence Berkeley Laboratories, culminating in the video tool vic [4] and the audio tool rat [5].
One of the present authors (RPCR) employed the MBone to produce live interactive videocasts from SIGWAIS/SIGNIDR III (NLM, 1993) and the International World Wide Web Conferences 2 through 6 (1994–1997); WWW5 (Paris, 1996) included the world's first distributed expert panel, with five of the ten panelists coming in live from four venues in California [6]. In 1997, in a pilot project at UCSF, he demonstrated the feasibility of using these tools to perform the microscopic evaluation of bone marrow smears by a remote pathologist. The need for the MBone withered as Internet routers became multicast-capable, and it had disappeared by 1997. However, the software tools vic and rat endured, and are still key components of current AG software.
Commonly used streaming media systems such as RealMedia and QuickTime buffer audio/video packets before playing them out, to allow time for lost packets to be re-sent. Real-time conferencing tools cannot tolerate the delay (“latency”) this entails, so tools such as vic and rat employ the simple UDP protocol, which neither acknowledges nor retransmits lost packets. By contrast, most unicast Internet traffic relies upon the error-correcting TCP protocol, in which lost or corrupted data packets are resent.
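This trade-off can be illustrated with a minimal sender sketch (Python). The addresses, ports, and header layout below are illustrative only, and this is not the actual RTP packet format used by vic or rat: each frame is sent as an independent UDP datagram carrying a sequence number and timestamp, and a lost datagram is simply never resent.

```python
# Sketch: low-latency media transmission over UDP
# (illustrative header layout, not the RTP format used by vic/rat).
import socket
import struct
import time

DEST = ("192.0.2.10", 5004)   # hypothetical receiver address and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(seq: int, payload: bytes) -> None:
    # 12-byte header: 32-bit sequence number + 64-bit millisecond timestamp,
    # letting the receiver detect loss and reordering without ever waiting
    # for a retransmission.
    header = struct.pack("!IQ", seq, int(time.time() * 1000))
    sock.sendto(header + payload, DEST)   # fire-and-forget: no ACK, no resend

for seq, frame in enumerate([b"audio-frame-0", b"audio-frame-1"]):
    send_frame(seq, frame)
```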
Hardware
An AG “node” is the collection of software and hardware employed to create a conferencing facility at a given site, which may support a single individual or a large conference room. Intel-based Dell XPS Gen 4 PCs were selected for most node workstations. Two types of cameras were employed: the Canon VC-C50i pan-tilt-zoom (PTZ) communication camera (operated by a Canon WL-V5 remote control, for venues with multiple participants), and the ClearOne FlexCam for sites with one participant. Video signals from these cameras are captured using an Osprey 100XP PCI card. The microbiology station includes a Canon RE-450X visualizer (an overhead video projector, for sharing culture plates and other macroscopic items), and an Olympus microscope fitted with an Olympus DP70 12.5-megapixel digital video capture system. The microbiology station and several of the group conference machines used ATI FireMV 2200 PCI Express cards to support dual monitors. An Intel-based Macintosh system was installed at the SFVAMC, employing an Apple iSight camera. Using a standard microphone with an AccessGrid node causes crippling echo effects for other participating sites, so the use of either a headset or a good echo-canceling microphone is critically important. We employ Plantronics DSP-400 headsets for single-participant nodes, the ClearOne Chat 50 microphone for sites with 1–3 participants, and the ClearOne AccuMic PC for larger groups. Several of the larger meeting rooms employ LCD projectors (NEC MT1065) in lieu of monitors, and basic desktop speakers (Dell A425).
Software
The Dell computers ran Microsoft Windows XP SP2; the Macintosh ran Mac OS X 10.4.8. The AG toolkit can be downloaded freely from a host at ANL [7], along with the required X11 windowing system, Python interpreter, and wxPython GUI toolkit. At the beginning of the project we employed AccessGrid version 2; midway we migrated to version 3, to which all further remarks apply. Several important software packages interoperate with the AG framework: WestGrid's “Shared Desktop” tool [8] (required for sharing the microscope images) requires installation of RealVNC (for the PCs) and “Chicken of the VNC” and OSXvnc (for the Macintosh); Microsoft PowerPoint (installed as part of Office 2003) enables the AG “Shared Presentation” tool. AG also supports shared web browsing. All of these tools are accessed and controlled from the AG client, which also provides shared text chat and access to vic and rat. We ran an AG server on a Dell PC and, to support nodes for which multicast traffic is not allowed, we installed the unicast bridge server bundled with the AG Toolkit. Most of the effort devoted to software went into configuration rather than installation.
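The role of the unicast bridge can be pictured with the conceptual sketch below (Python). This is not the AG Toolkit's bridge implementation, and the group address, port, and client list are hypothetical: the bridge sits on a multicast-capable network, joins the session's group, and simply copies traffic between the group and nodes that can only send and receive unicast.

```python
# Conceptual sketch of a multicast/unicast bridge (NOT the AG Toolkit bridge).
# Group address, port, and client addresses are hypothetical.
import socket
import struct

GROUP, PORT = "233.252.0.1", 50000            # hypothetical session group/port
CLIENTS = {("192.0.2.21", 50000),             # unicast-only nodes, assumed to
           ("192.0.2.22", 50000)}             # send from this same port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)   # leave the local subnet
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)   # don't hear our own re-injections
sock.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(65507)
    if sender in CLIENTS:
        # From a unicast-only node: re-inject into the multicast session
        # and copy to the other unicast-only nodes.
        sock.sendto(data, (GROUP, PORT))
        targets = CLIENTS - {sender}
    else:
        # From the multicast session: copy to every unicast-only node.
        targets = CLIENTS
    for client in targets:
        sock.sendto(data, client)
```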
Implementation
AccessGrid nodes were set up at six locations (moving from east to west across San Francisco): the conference room and microbiology laboratory at UCSF China Basin (the main clinical laboratory); SFGH; Mt. Zion Hospital clinical laboratory; UCSF Moffitt-Long Hospitals; and, SFVAMC. Deployment impediments fell into three categories:
Local network problems. Contrary to claims of local administrators, multicast traffic had been disabled on the China Basin network.
Inter-network problems. The networks at the UCSF Medical Center (China Basin, Moffitt-Long Hospitals, Mt. Zion), SFGH, and the SFVAMC are all independently administered. At UCSF, the Medical Center and the academic campus administer their networks independently, and access to SFGH is handled via the UCSF campus IT group. Considerable time was spent contacting local administrators and determining whether each site should use multicast or bridged unicast connectivity.
Patient data security. This was the single most difficult aspect of implementation.
Security Issues & Port Minimization
To address security, we relied upon the following:
Security intrinsic to our tightly managed participating networks. The UCSF Medical Center is protected by an aggressively managed firewall; all meetings involving patient care are currently confined to sites on the UCSF Medical Center network.
AG’s built-in encryption: all control and data connections for the AG 3 client, as well as for shared presentations, web browsing, and the application-sharing portion of the shared desktop, are encrypted using SSL; text chat is carried over an SSL connection to a Jabber server; and vic and rat can optionally be configured to employ AES/Rijndael encryption. However, the VNC application transfers screen images between server and clients over unencrypted out-of-band connections (we are evaluating SSH tunneling versus a virtual private network to address this issue).
Port usage minimization. Network services are reached through an operating-system abstraction known as a “port.” By default, AG 3 assumes the availability of tens of thousands of ports to support all of its activities. Network administrators strongly prefer to limit the number of ports that are open to the outside world, to minimize exposure to port scanners and other security risks; restrictions were particularly severe at SFVAMC. Based upon limited documentation, we devised a minimal list of 270 ports required to support our needed capabilities.
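As a concrete illustration of the kind of verification this negotiation entails, the sketch below (Python) checks which TCP ports on a venue or bridge server actually accept connections from a given node. The host name and port numbers are hypothetical examples, not our production list of 270 ports, and UDP reachability (which matters for vic and rat) requires an application-level echo and is not shown.

```python
# Sketch: verify that agreed-upon firewall openings are actually in place.
# Host and ports are hypothetical examples, not our production port list.
import socket

SERVER = "venueserver.example.edu"                      # hypothetical AG server
TCP_PORTS = [8000, 8006] + list(range(50000, 50010))    # illustrative subset

def tcp_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in TCP_PORTS:
    state = "open" if tcp_open(SERVER, port) else "blocked or closed"
    print(f"{SERVER}:{port}  {state}")
```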
Evaluation of Impact
The total cost of hardware and software for the system was approximately $50,000, versus roughly $300,000 to outfit 6 nodes with an acceptable commercial alternative. The majority of the cost involved outfitting the microbiology station; a typical conference room with a PC, LCD projector, echo-canceling microphone, and camera can be outfitted for as little as $5,000 to $8,000. The system was taken up as a production service immediately upon installation. In spite of our initial fears that user-interface complexity would pose a problem, the system was accepted and used enthusiastically. It is currently in use 6–12 hours per week, for 10 distinct scheduled and ad hoc meetings involving 60–80 individuals; 75% of meetings involve two sites and 25% involve three or more; 10% of meetings are devoted to clinical care, 60% to teaching, and 30% to research or administration. To date, 24% of personnel in the department have been involved in its use, including 25/39 faculty, 60/360 technical or administrative staff, and 15/15 trainees. The system also helped satisfy an accreditation requirement of the Accreditation Council for Graduate Medical Education (ACGME) by enabling infectious disease plate rounds.
Five months after deployment, department users were asked to complete an anonymous web-based survey. Responses were received from 17 faculty, 17 lab and administrative staff, and 9 trainees. Trainees have attended the largest number of video-conferences (range 1–50, median 25) followed by staff (range 2–26, median 6) and faculty (range 1–25, median 5). The median of the estimated number of virtual meetings per month was 5 for trainees (range 1–8), and 1 for faculty (range 1–5) and staff (range 0.5–4). Average hours of usage per week are: faculty (1–2 h), technical or administrative staff (1–4 h), and trainees (4–6 h). The 30 respondents to the question of travel time saved estimated a total of 103 man-hours per month (on average, 3.43h per individual). Responding “yes” to the question “has use of this system allowed you to attend meetings that you would have otherwise missed?” were 10/17 faculty, 14/17 staff, and 4/7 trainees. Those having exposure to other videoconferencing systems included 3/17 faculty, 3/17 staff, and 2/9 trainees; of those offering a comparison, 4/6 found our system comparable to the other systems they had seen, 1 found it better, and 1 found it much better.
Asked to rank their overall level of satisfaction with their virtual meeting experiences, faculty had the highest level of satisfaction and trainees the lowest (see Table 1). When asked “ignoring all other factors associated with the use of remote conferencing (such as time or money saved, the ability to record and archive, etc.), and focusing purely on comparing the experience of a virtual meeting to that of a face-to-face meeting, how would you say that virtual meetings compare to face-to-face meetings?” using a scale of 1 (“strongly prefer virtual meetings to face-to-face meetings”) to 5 (“strongly prefer face-to-face meetings to virtual meetings”), responses indicated an (unsurprising) preference for face-to-face meetings, as shown in Table 2. When asked to compare the two forms of meeting taking into consideration the ancillary factors, comparisons shifted in favor of virtual meetings by a median of 1 rank (range: -1 to 3 for faculty and staff, and 0 to 4 for trainees).
Table 1.
Satisfaction with System.
| Rating | Faculty n=17 | Staff n=16 | Trainees n=7 |
|---|---|---|---|
| 5:highly satisfied | 3 | 3 | 0 |
| 4 | 7 | 4 | 3 |
| 3:adequately satisfied | 7 | 8 | 3 |
| 2 | 0 | 1 | 0 |
| 1:not at all satisfied | 0 | 0 | 1 |
Table 2.
Effect of Ancillary Factors on Perceived Value of Face-to-Face vs. Virtual Meetings (numbers in parentheses are the corresponding counts when ancillary benefits are taken into account).
| Rating | Faculty n=17 | Staff n=17 | Trainees n=7 |
|---|---|---|---|
| 5:prefer face-to-face | 3(1) | 1(2) | 3(1) |
| 4 | 9(6) | 12(5) | 3(2) |
| 3:equivalent | 5(3) | 2(2) | 1(2) |
| 2 | 0(5) | 1(5) | 0(2) |
| 1:prefer virtual | 0(2) | 1(3) | 0(0) |
In separate survey questions, we solicited both positive and negative written comments about the system. Positive comments (with the number of times each was mentioned) included: the reduced need to travel and/or time saved (16), the ability to record and archive (3), the ability to attend meetings that would otherwise be missed (3), enabling meetings that otherwise would not be held (2), high-quality audio and/or video and/or low latency (2), low cost (2), the ability to present to a larger group than would otherwise be possible (2), expandability and flexibility (2), satisfaction of training license requirements (see above), and ease of use (1). Negative comments included: audio drop-outs (10), other audio problems that may reflect speaker location or microphone placement (5), problems due to insufficient camera resolution or poor placement (4), excessive time or complexity involved in setting up for a meeting (3), the need for larger projection screens (2), complexity of use (2), the need for better written operating instructions (1), inability to get into a conference room due to scheduling issues (1), and excessive shyness of participants (1). Others commented on the inability to safely show mycology/AFB specimens on the microbiology lab video equipment in its current location, and on various limitations in the AG user interface (including the inability to resize vic video windows, the difficulty of following the mouse as a pointer, occasional asynchrony of audio and video, and the lack of an “intuitive” user interface).
Discussion
The UCSF AG facility has been rapidly and successfully integrated into the daily operations of its host department. The single most difficult aspect of installation was paring down the network ports required, to satisfy network security concerns. The single most notable difficulty in operation has been the frequent (1–2 per meeting) audio drop-outs that require restarting rat (which is easily done with a single button on the AG client interface).
The most elaborate AccessGrid nodes, such as those at Lawrence Berkeley Laboratories, provide an immersive experience in a large meeting space, with entire walls devoted to rear-projection screens, multiple robotic video cameras, multiple echo-canceling microphones, and high-quality audio systems. Such facilities are expensive and complex to keep running properly, often requiring a full-time operator when in use. The AccessGrid nodes created here are modest by comparison, relying on at most two displays and depending upon participants to control the node equipment and software. Several user comments suggest that this facility should consider adding multiple projection screens to its conference rooms in the future.
An alternative approach to supporting sites such as SFVAMC would be to use virtual private network software such as OpenVPN [9], which reduces port usage and bypasses NAT and firewall issues by securely multiplexing all traffic through a single port. We are currently evaluating the performance of OpenVPN at the SFVAMC node.
The UCSF Department of Pathology has expressed interest in using the system. Independently, four collaborating universities, including UCSF, are preparing to deploy AG within their electronic Primary Care Research Network (ePCRN) [10]. The goals and implementation details of this project relating to AG are not publicly available as of this writing.
AG has been applied primarily to research or educational ends [11]; to the best of our knowledge, this is the first AG facility to be placed into routine operational clinical use within a major institution.
Conclusion
We have created a clinical conferencing system connecting 6 conferencing areas at 5 different sites within a single university system, employing the open-source AccessGrid software and (primarily) consumer-grade hardware. The positive impact of the system was gratifyingly rapid and has been quantitatively documented. Major impediments to implementation arose from the tight network management practices in place at the multiple participating facilities, and in particular at the SFVAMC; overcoming them required close collaboration with network administrators and minimization of the number of network “ports” required by the software.
Acknowledgments
We gratefully thank: MBone pioneers for aid with enabling work (Steve Casner, Steve Deering, Ron Frederic, Van Jacobson, Steven McCanne); UCL colleagues (Jon Crowcroft, Mark Handley, Orion Hodson, Piers O’Hanlon, Colin Perkins); AccessGrid colleagues (Hubert Daugherty [Rice Univ.], Eric Olson, Robert Olson, and Tom Uram [all of ANL]); NLM colleagues (Michael Ackerman, Wei-Li Liu, Craig Locatis); Lawrence Berkeley Labs personnel, for advice and access to their AccessGrid facility (Deb Agarwal, Mike Elmore); UCSF and SFVAMC colleagues (Joan Etzell, Howard Leong, Mark Lu, Binh Nguyen, Alvin Young).
References
- 1. Stevens R, Papka ME, Disz T. Prototyping the Workspaces of the Future. IEEE Internet Computing. 2003;7:51–58.
- 2. Deering SE. Multicast Routing in a Datagram Internetwork [Ph.D. dissertation]. Stanford, CA: Stanford University Department of Computer Science; 1991.
- 3. Kumar V. MBone: Interactive Multimedia on the Internet. Indianapolis: New Riders; 1996.
- 4. McCanne S, Jacobson V. vic: A flexible framework for packet video. ACM Multimedia 95. San Francisco: ACM; 1995. pp. 511–522.
- 5. Hodson O, Varakliotis S, Hardman V. A software platform for multiway audio distribution over the Internet. IEE Colloquium on Audio and Music Technology: The Challenge of Creative DSP (Ref. No. 1998/470). London; 1998. pp. 4/1–4/6.
- 6. Multicasting and Real Time Applications and the Future of the Web: a Network-Distributed Panel. Fifth International World Wide Web Conference. Paris, France; 6–10 May 1996. http://iw3c2.cs.ust.hk/WWW5/www5conf.inria.fr-webcast/panel3/Welcome.html
- 7. Argonne National Laboratory. AccessGrid Toolkit. 2007. http://www.accessgrid.org/
- 8. WestGrid. Access Grid Shared Desktop. 2007. http://www.westgrid.ca/collabvis/research-agshareddesktop.php
- 9. OpenVPN Solutions LLC. OpenVPN. 2007. http://openvpn.net/
- 10. electronic Primary Care Research Network (ePCRN). http://www.epcrn.org/
- 11. Kim H, Moore LA, Fox G, Whalin RW. An Experience on a Distance Education Course over the Access Grid Nodes. Proc. 4th Int. Conf. on Education and Information Systems, Technologies and Applications (EISTA 2006); Orlando; 20–23 July 2006.
