Elsevier

Computer Networks

Volume 61, 14 March 2014, Pages 51-74

The GpENI testbed: Network infrastructure, implementation experience, and experimentation

https://doi.org/10.1016/j.bjp.2013.12.027

Abstract

The Great Plains Environment for Network Innovation (GpENI) is an international programmable network testbed centered initially in the Midwest US with the goal to provide programmability across the entire protocol stack. In this paper, we present the overall GpENI framework and our implementation experience for the programmable routing environment and the dynamic circuit network (DCN). GpENI is built to provide a collaborative research infrastructure enabling the research community to conduct experiments in Future Internet architecture. We present illustrative examples of our experimentation in the GpENI platform.

Introduction

Deploying large-scale network testbeds brings significant benefits to researchers, enabling them to conduct scalable network experiments and evaluate performance. These testbeds provide fundamental capabilities such as high-speed infrastructure, programmable network nodes, and open access for registered researchers, who can use the network resources through the open APIs the testbeds provide. Two prominent network research programs in the past five years are GENI (pronounced 'genie') [12], [41] and FIRE [9]; both focus on Future Internet architecture design and the relevant technological development while supporting the creation of experimental testbeds. However, it is important to remember that the idea of large-scale testbeds on which to conduct networking research is not new. In this section, we summarize a few of the most relevant previous efforts on network research testbeds.

  • Gigabit Testbeds: A set of testbeds was constructed in the early 1990s to further the state of high-speed networking research, funded by the US NSF and DARPA (Defense Advanced Research Projects Agency), managed by CNRI (Corporation for National Research Initiatives). Five separate testbeds were constructed, Aurora, Blanca, Casa, Nectar, and Vistanet [49], later supplemented by MAGIC [20]. The Gigabit Testbeds were a platform for research in high-speed networking, new bandwidth-enabled applications [58], and networked supercomputing.

  • Active Network Testbeds: In the late 1990s, testbeds were constructed in the US and Europe to support active networks research. Active networks are programmable networks in which one of the programming modalities includes capsules of mobile code that can dynamically program network nodes. In the US, the ABone [42] was constructed as part of the DARPA-funded Active Networks Program [1], to permit experimentation on programmable network languages, management and control [54], node operating systems [65], and security mechanisms [50]. The ABone had the goal of open access to the research community. In Europe, the EU FP5 FAIN (Future Active IP Networks) [52] and related projects (e.g., LARA++ [47]) also investigated active and programmable networks, with testbeds constructed for experimentation. These active network architectures and testbeds permitted sharing infrastructure by the simultaneous execution of active applications (AAs) in execution environments (EEs) on a network node operating system (NodeOS). While not generally recognized in this manner, the active network testbeds had many goals similar to that of GENI, and should be considered a conceptual precursor.

  • Modern Large-Scale Testbeds: More recently, two large-scale testbed infrastructures have been constructed with the explicit goal of permitting open access for networking research. PlanetLab [26] is a worldwide infrastructure that permits users to run networked experiments at large scale. The infrastructure is shared using the slice paradigm. It is important to note that while PlanetLab permits experimentation in networked applications and end-to-end protocols, the network itself is not programmable, and experiments in lower-layer protocols can only be performed on overlays. VINI [29] provides a virtual network infrastructure built on PlanetLab. VINI allows researchers full control to create virtualized arbitrary network topologies in which routing software can be invoked for experimentation. Emulab [6] is a network testbed consisting of a cluster of computing nodes interconnected by flexible network infrastructure, which permits researchers to experiment with network protocols and applications with complete root access to the systems. A number of Emulab facilities are located throughout the world, some of which provide access to external researchers in addition to the main facility at the University of Utah. Both PlanetLab and Emulab are the basis for GENI control frameworks; GpENI uses the PlanetLab control framework.

  • Current Future Internet Initiatives: While a number of researchers proposed alternatives to the Internet architecture as early as the 1980s (including research programs such as DARPA Next Generation Internet – NGI), there is now a general consensus in the research community that the current architecture is limiting in scale and in support for emerging application paradigms such as mobile and nomadic computing and communications. Recent research initiatives include NSF FIND (Future Internet Design) [23] in the US, EU FP6 SAC (Situated and Autonomic Communications) [11], and the research component of FP7 FIRE (Future Internet Research and Experimentation) [9]. These research initiatives aim to investigate clean-slate (greenfield) as well as incremental (brownfield) architectures to evolve the Future Global Internet architecture. A key problem remains how to experiment with Future Internet architectures at a reasonable scale. For this reason, the NSF GENI (Global Environments for Network Innovation) program [12], the experimental component of the EU FP7 FIRE programme [9], and the Japanese JGN2plus [18] testbeds plan to deploy large-scale programmable testbeds for experimentation in Future Internet research.

The scope of this paper is to give a comprehensive presentation of the GpENI testbed from three aspects: network infrastructure, implementation experience, and experimentation. This comprehensive work builds on our earlier conference and workshop papers [36], [48], [60], [68]. The rest of the paper is organized as follows. In Section 2, we present the motivation for and an overview of the GpENI testbed. In Section 3, we present the physical topology of the GpENI testbed across the United States, Europe, and Asia, as well as the infrastructure design. In Section 4, we give a high-level description of the GpENI node cluster architecture. In Sections 5 and 6, we present a detailed discussion of the network-layer and optical-layer programmability of the GpENI testbed, including both the architecture design and preliminary results. We discuss our current federation status in Section 7. In Section 8, we discuss the experimentation work we have done with the GpENI testbed. A recent GpENI extension to KanREN-GENI is briefly described in Section 9. We summarize the paper in Section 10.

Section snippets

GpENI testbed: Motivations and overview

The Great Plains Environment for Network Innovation – GpENI (pronounced as ‘japini’, rhyming with GENI) is an international programmable network testbed centered on a regional optical network between The University of Kansas (KU) in Lawrence, the University of Missouri–Kansas City (UMKC), the University of Nebraska–Lincoln (UNL), and Kansas State University (KSU) in Manhattan associated with the Great Plains Network, in collaboration with the Kansas Research and Education Network (KanREN) and …

GpENI network infrastructure and topology

The core of GpENI is the regional optical backbone centered around Kansas City. This is extended by KanREN (Kansas Research and Education Network) to various GPN (Great Plains Network) institutions located in the Midwest region of the US. Connectivity in Kansas City to Internet2 provides tunneling access to the European GpENI infrastructure. GpENI is growing, currently with about 38 node clusters in 17 nations, including KanREN, G-Lab, and NorNet. Institutions may connect to GpENI if they are …

GpENI node cluster architecture

Each GpENI node cluster consists of several components, physically interconnected by a managed Netgear Gigabit-Ethernet switch to allow arbitrary and flexible experiments. GpENI uses a KanREN 198.248.240.0/21 IP address block within the gpeni.net domain; management access to the facility is via dual-homing of the Node Management and Experiment Control Processor. The node cluster is designed to be as flexible as possible at every layer of the protocol stack, and consists of the following …
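As an aside on the addressing above, the /21 management block quoted in the text leaves room for per-site subnets. A minimal sketch with Python's standard `ipaddress` module (the block is the one quoted above; the split into /24s is our illustrative assumption, not GpENI's documented subnetting plan):

```python
import ipaddress

# The KanREN-provided GpENI management block quoted in the text.
block = ipaddress.ip_network("198.248.240.0/21")

# A /21 spans 2048 addresses: 198.248.240.0 - 198.248.247.255.
print(block.num_addresses)        # 2048
print(block.broadcast_address)    # 198.248.247.255

# Hypothetical split into eight /24s, e.g. one per node cluster site.
site_subnets = list(block.subnets(new_prefix=24))
print(len(site_subnets))          # 8
print(site_subnets[0])            # 198.248.240.0/24
```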

GpENI-VINI: Architecture and implementation

In previous sections, we have discussed the GpENI network infrastructure from the backbone topology to the node cluster architecture. In this section, we focus on the details of network-layer programmability in the GpENI testbed through GpENI-VINI. In short, GpENI-VINI is a virtual network resource provisioning testbed that supports programmable routing experiments. The core architecture of GpENI-VINI is a customized private instance of VINI [29] extending the flexibility of conducting …

Dynamic circuit creation in the regional network testbed

The GpENI testbed also supports dynamic circuit creation at the optical layer. To enable dynamic circuit network (DCN) in a regional network testbed, we need to make necessary changes to the current infrastructure. In this section, we first introduce some background knowledge on the DCN, then discuss how to establish DCN across the GpENI testbed and how to establish DCN across two testbed domains.

GpENI federation deployment

The GpENI testbed has achieved federation on three sub-aggregates: the MyPLC sub-aggregate, the GpENI-VINI sub-aggregate, and the DCN sub-aggregate.

Currently, the federation of the MyPLC and GpENI-VINI sub-aggregates is running the PlanetLab implementation of the slice facility architecture (SFA). There are three interfaces: registry, slice manager (SM), and aggregate manager (AM). These GENI interfaces are accessible via the slice facility interface (SFI) implementing …
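For concreteness, the SFI is normally driven from SFA's `sfi.py` command-line client. The commands below are an illustrative sketch only: the HRN `gpeni.example_slice` and the RSpec filenames are hypothetical placeholders, and the exact subcommands available depend on the installed SFA version and site configuration.

```shell
# List records known to the registry interface (hypothetical authority HRN).
sfi.py list gpeni

# Ask an aggregate manager for its advertised resources as an RSpec.
sfi.py resources > advertisement.rspec

# Request a sliver through the slice manager from a request RSpec
# (gpeni.example_slice and request.rspec are placeholders).
sfi.py create gpeni.example_slice request.rspec

# Tear the sliver down when the experiment is finished.
sfi.py delete gpeni.example_slice
```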

Experimentations on GpENI Testbed

The GpENI infrastructure [68] is in the process of expanding to 38–40 clusters with 200 nodes worldwide, federated with the larger GENI PlanetLab control framework and interconnected with several ProtoGENI facilities. This enables users to perform resilience and survivability experiments at scale, both in terms of node count and with the geographic scope needed to emulate area-based challenges such as large-scale disasters.

GpENI extension: KanREN-GENI deployment plans and topology

KanREN-GENI (Fig. 18) is a GENI mesoscale OpenFlow deployment underway in KanREN (the Kansas Research and Education Network), with selected deployment into GPN (the Great Plains Network). This deployment builds heavily on the existing GpENI infrastructure. We are deploying Brocade OpenFlow-enabled switches co-located with the production KanREN switches that will provide full opt-in for any users accessing KanREN infrastructure at its PoPs (Kansas City, Lawrence, Manhattan, Ft. Hays, …

Summary

The GpENI testbed is an international Future Internet research testbed centered in the Midwest region of the United States and in Europe. We are now making an effort to expand to Asian countries as well. The main goal of the GpENI testbed is to provide all-layer programmability in the network. In particular, as a part of the GENI control framework cluster B, GpENI runs a private instance of PlanetLab for application-layer and transport-layer programmability, and runs a customized private …

Acknowledgments

This work is funded in part by the US National Science Foundation GENI program (GPO Contract No. 9500009441), by the EU FP7 FIRE programme ResumeNet project (Grant Agreement No. 224619), and in large part by the participating institutions. GpENI is made possible by the involvement of many people in the participating institutions. We particularly acknowledge the following people: Tricha Anjali, Torsten Braun, Richard Becker, Baek-Young Choi, Kent G. Christensen, Riddhiman Das, Joseph Evans, …

References (68)

  • Active Networks Program
  • Cacti
  • The Click Modular Router Project
  • CoDeploy
  • DRAGON – Dynamic Resource Allocation via GMPLS Optical Networks
  • Emulab: Network Emulation Testbed
  • ESnet’s On-Demand Secure Circuits and Advance Reservation System (OSCARS)
  • eXtensible Open Router Platform (XORP)
  • FIRE: Future Internet Research and Experimentation
  • Firestarter
  • FP6 Situated and Autonomic Communications
  • GENI: Global Environment for Network Innovations
  • GpENI-VINI
  • GpENI wiki
  • IIAS: Internet In A Slice
  • Internet2 DCN/ION Software Suite
  • Internet2 ION
  • JGN2plus Testbed
  • Linux-VServer
  • MAGIC Gigabit Testbed
  • Nagios
  • NetNS: Network Namespace
  • NSF NeTS FIND Initiative
  • OneLab: Future Internet Testbeds
  • OpenFlow Switch Consortium
  • PlanetLab
  • Quagga Routing Suite Software
  • Supercharged PlanetLab Platform (SPP) Hardware Components
  • VINI: Virtual Network Infrastructure
  • XORP.CT Branch
  • Zenoss
  • Tinc wiki, 2010
  • ANTP Visualizer, January 2011
  • M.J. Alenazi, E.K. Çetinkaya, J.P. Rohrer, J.P.G. Sterbenz, Implementation of the AeroRP and AeroNP protocols in …

    Deep Medhi is a Curators’ Professor in the Department of Computer Science & Electrical Engineering at the University of Missouri–Kansas City, USA, and an honorary professor in the Department of Computer Science & Engineering at the Indian Institute of Technology–Guwahati, India. He received B.Sc. in Mathematics from Cotton College, Gauhati University, India, M.Sc. in Mathematics from the University of Delhi, India, and his Ph.D. in Computer Sciences from the University of Wisconsin–Madison, USA. Prior to joining UMKC in 1989, he was a member of the technical staff at AT&T Bell Laboratories. He served as an invited visiting professor at the Technical University of Denmark, a visiting research fellow at Lund Institute of Technology, Sweden, and State University of Campinas, Brazil. As a Fulbright Senior Specialist, he was a visitor at Bilkent University, Turkey, and Kurukshetra University, India. He is the Editor-in-Chief of Springer’s Journal of Network and Systems Management, and is on the editorial board of IEEE/ACM Transactions on Networking, IEEE Transactions on Network and Service Management, and IEEE Communications Surveys & Tutorials. He has published over 125 papers, and is co-author of the books Routing, Flow, and Capacity Design in Communication and Computer Networks (2004) and Network Routing: Algorithms, Protocols, and Architectures (2007), both published by Morgan Kaufmann Publishers, an imprint of Elsevier Science. His research interests are multi-layer networking, network virtualization, data center optimization, and network routing, design, and survivability. His research has been funded by NSF, DARPA, and industries.

    Byrav Ramamurthy is currently a Professor and Graduate Chair in the Department of Computer Science and Engineering at the University of Nebraska–Lincoln (UNL). He has held visiting positions at the Indian Institute of Technology–Madras (IITM) in Chennai, India, and at AT&T Labs–Research, New Jersey, USA. He is author of the book Design of Optical WDM Networks – LAN, MAN and WAN Architectures and a co-author of the book Secure Group Communications over Data Networks, published by Springer in 2000 and 2004, respectively. He has authored over 125 peer-reviewed journal and conference publications. He serves as Editor-in-Chief for the Springer Photonic Network Communications journal. He was Chair of the IEEE ComSoc Optical Networking Technical Committee (ONTC) during 2009–2011. Dr. Ramamurthy served as the TPC Co-Chair for the IEEE INFOCOM 2011 conference held in Shanghai, China. He is a recipient of the College of Engineering Faculty Research Award for 2000 and the UNL CSE Dept. Student Choice Outstanding Teaching Award for Graduate-level Courses for 2002–2003 and 2006–2007. He has graduated 10 Ph.D. and 40 M.S. students under his research supervision. His research has been supported by the U.S. National Science Foundation (NSF), the U.S. Department of Energy (DOE), the U.S. Department of Agriculture (USDA), the National Aeronautics and Space Administration (NASA), AT&T Corp., Agilent Tech., HP, OPNET Inc., and the University of Nebraska–Lincoln (UNL).

    Caterina M. Scoglio is Professor of Electrical and Computer Engineering at Kansas State University. Her main research interests include modeling, analysis, and design of networked systems, with applications to epidemic spreading and power grids. Caterina received the Dr. Eng. degree from the “Sapienza” Rome University, Italy, in 1987. Before joining Kansas State University, she worked at the Fondazione Ugo Bordoni from 1987 to 2000, and at the Georgia Institute of Technology from 2000 to 2005.

    Justin P. Rohrer is currently a Research Associate of Computer Science at the Naval Postgraduate School (NPS) and an Adjunct Assistant Professor of Electrical Engineering and Computer Science at the KU Information & Telecommunication Technology Center (ITTC). He received his Ph.D. in Electrical Engineering from the University of Kansas in 2011 with honors. He received his B.S. degree in Electrical Engineering from Rensselaer Polytechnic Institute, Troy, NY, in 2004. From 1999 to 2004, he was with the Adirondack Area Network, Castleton, NY as a network engineer. He was also an ITTC Graduate Fellow from 2004 to 2006. He received the best paper award at the International Telemetering Conference in 2008 and the best graduate student paper award at the same conference in 2011. His research focus is on resilient and survivable transport and routing protocols. Interests also include highly-dynamic mobile networks, and simulating network disruptions. Previous research has included weather disruption-tolerant mesh networks and free-space optical metropolitan networks. He is a member of the IEEE Communications and Computer Societies, ACM SIGCOMM, Eta Kappa Nu, and was an officer of the Kansas City section of the IEEE Computer Society for several years.

    Egemen K. Çetinkaya is Assistant Professor of Electrical and Computer Engineering at Missouri University of Science and Technology (formerly known as University of Missouri–Rolla). He received the B.S. degree in Electronics Engineering from Uludağ University (Bursa, Turkey) in 1999, the M.S. degree in Electrical Engineering from University of Missouri–Rolla in 2001, and the Ph.D. degree in Electrical Engineering from the University of Kansas in 2013. He held various positions at Sprint as a support, system, and design engineer from 2001 until 2008. He was a graduate research assistant in the ResiliNets research group at the KU Information & Telecommunication Technology Center (ITTC). His research interests are in resilient networks. He is a member of the IEEE Communications Society, ACM SIGCOMM, and Sigma Xi.

    Ramkumar Cherukuri is a software engineer at CGI. He designs software for a living and specializes in the field of network systems. His research interests include network routing, protocol development, software design, and cloud computing. His outside interests include attending professional meetups, brainstorming new ideas with peers, and visiting foreign places. He obtained his M.S. in Computer Science from the University of Missouri–Kansas City and his B.E. in Electronics and Communication Engineering from Andhra University College of Engineering, Visakhapatnam, India.

    Xuan Liu is a Ph.D. student at the University of Missouri–Kansas City. She received B.S. in Communication Engineering from China University of Geosciences (CUG) in June 2007 and M.S. in Computer Science from the University of Missouri–Kansas City in December 2010. Her research interests include network virtualization, information-centric networking, and computer network modeling and optimization.

    Pragatheeswaran Angu is currently an R&D Software Developer at Epic. He received his M.S. degree in Computer Science from the University of Nebraska–Lincoln in May 2011. His research interests include optical networking, scheduling and optimization.

    Andy Bavier is a Research Scholar at Princeton University. He has been building research testbeds since 2002. He is a designer and core developer of the PlanetLab, VINI, and VICCI testbeds among others. He also actively participates in the NSF GENI project and serves on the GENI Architects board.

    Cort Buffington is the Executive Director of KanREN, Inc., the high-speed research and education network in Kansas. Cort joined the KanREN team in 1999 and served the organization in several different technical capacities before accepting the directorship in 2008. Cort was the principal architect and engineer of the current and previous generations of the KanREN network, and still takes an active role in engineering/architecture along with the administrative aspects of the organization. Cort is an active participant in the state, regional, and national R&E networking community, participating actively in Internet2, The Great Plains Network, The Quilt, and Kan-ed.

    James P.G. Sterbenz is Associate Professor of Electrical Engineering & Computer Science and on staff at the Information & Telecommunication Technology Center at The University of Kansas, and is a Visiting Professor of Computing in InfoLab 21 at Lancaster University in the UK. He received a doctorate in computer science from Washington University in St. Louis in 1991, with undergraduate degrees in electrical engineering, computer science, and economics. He is director of the ResiliNets research group at KU, PI for the NSF-funded FIND Postmodern Internet Architecture project, PI for the NSF Multilayer Network Resilience Analysis and Experimentation on GENI project, lead PI for the GpENI (Great Plains Environment for Network Innovation) international GENI and FIRE testbed, co-I in the EU-funded FIRE ResumeNet project, and PI for the US DoD-funded highly-mobile airborne networking project. He has previously held senior staff and research management positions at BBN Technologies, GTE Laboratories, and IBM Research, where he has led DARPA- and internally-funded research in mobile, wireless, active, and high-speed networks. He has been program chair for IEEE GI, GBN, and HotI; IFIP IWSOS, PfHSN, and IWAN; and is on the editorial board of IEEE Network. He has been active in Science and Engineering Fair organization and judging in Massachusetts and Kansas for middle and high-school students. He is principal author of the book High-Speed Networking: A Systematic Approach to High-Bandwidth Low-Latency Communication. He is a member of the IEEE, ACM, IET/IEE, and IEICE. His research interests include resilient, survivable, and disruption-tolerant networking, future Internet architectures, active and programmable networks, and high-speed networking and systems.
