
Visit the IU Booth, #2239, November 14-17!

The Data Superconductor: An HPC cloud using data-intensive scientific applications, Lustre-WAN, and OpenFlow over 100Gb Ethernet

Indiana University pioneered the use of the Lustre filesystem to distribute and process large volumes of data from scientific instruments and simulations across wide area networks. Building on that experience, IU has deployed two Lustre filesystems at the ends of a 100Gb network running between Indianapolis, Indiana, and Seattle, Washington (approximately 2,300 miles). IU will use compute and storage resources at both ends of the connection to execute real-world scientific applications. Throughout the demonstration, IU researchers will use Software Defined Networking to dynamically route application traffic over a separate shared network powered by OpenFlow, allowing traffic to be tuned based on need, priority, and capacity.
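
The path-selection idea can be sketched in a few lines. This is a hypothetical illustration of tuning traffic by need, priority, and capacity, not IU's actual controller logic; the link names, capacities, and the priority threshold are all invented for the example.

```python
# Hypothetical sketch of priority-aware path selection for application flows.
# Link names, capacities, and the priority cutoff are illustrative only.

def pick_path(flow, paths):
    """Choose the path with the most headroom that can satisfy the flow.

    flow:  dict with 'need_gbps' and 'priority' (higher = more important)
    paths: list of dicts with 'name', 'capacity_gbps', 'used_gbps'
    """
    candidates = [p for p in paths
                  if p["capacity_gbps"] - p["used_gbps"] >= flow["need_gbps"]]
    if not candidates:
        # High-priority traffic falls back to the fastest link even when busy.
        if flow["priority"] > 5:
            return max(paths, key=lambda p: p["capacity_gbps"])
        return None
    # Otherwise prefer the path with the most spare capacity.
    return max(candidates, key=lambda p: p["capacity_gbps"] - p["used_gbps"])

paths = [
    {"name": "100GbE", "capacity_gbps": 100, "used_gbps": 95},
    {"name": "shared-openflow", "capacity_gbps": 10, "used_gbps": 2},
]
print(pick_path({"need_gbps": 5, "priority": 3}, paths)["name"])  # shared-openflow
```

In a real OpenFlow deployment, the controller would express such a decision by installing flow-table entries on the switches rather than returning a path name.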

Demonstration times

Enabling Scientific Discovery with Application Driven Networks
Monday: 8 p.m.
Tuesday: 4 p.m. and 4:30 p.m.
Wednesday: 11:20 a.m., 11:50 a.m., 5:20 p.m., 5:50 p.m.
Thursday: 12:30 p.m.

Utilizing Centralized Storage for High Bandwidth Application Workflows
Monday: 8:20 p.m.
Tuesday: 4:20 p.m. and 4:50 p.m.
Wednesday: 11:20 a.m., 11:50 a.m., 5:20 p.m., 5:50 p.m.
Thursday: 12:50 p.m.

Wide Area Lustre: Cross-country file systems at 100 Gb/s
Monday: 8:10 p.m.
Tuesday: 4:10 p.m. and 4:40 p.m.
Wednesday: 11:10 a.m., 11:40 a.m., 5:10 p.m., 5:40 p.m.
Thursday: 12:40 p.m.

FutureGrid

The FutureGrid test-bed provides computing capabilities that enable researchers to tackle complex challenges related to grid, cloud, and high performance computing. The test-bed supports virtual machine-based environments and operating systems on native hardware for experiments aimed at minimizing overhead and maximizing performance. Features include:

  • Geographically-distributed set of heterogeneous computing systems (approx. 4,300 cores)
  • Data management system for metadata and a software image library
  • Dedicated network that allows isolated, secure experiments

This demonstration describes the FutureGrid architecture and surveys user projects on topics such as education, interoperability, computer science, domain science, and technology evaluation.

Performance study of a molecular dynamics code using the Vampir Toolchain

The Vampir toolchain analyzes the serial and parallel performance of a molecular dynamics code currently under development at Indiana University. The code runs on a Cray XT5m system that is part of the National Science Foundation FutureGrid project.

The Pervasive Technology Institute at Indiana University collaborated with the Center for Information Services and High Performance Computing at Technische Universität Dresden on this project.

InCommon Roadmap for NSF Cyberinfrastructure

The InCommon Roadmap provides guidance and practical how-to information for using the InCommon identity federation to give researchers access to National Science Foundation cyberinfrastructure and advance science and engineering research. It tackles the problem of federated identity from the perspectives of researchers, administrators and policy makers, and technologists, providing each group with a guide and the information relevant to implementing and using federated identities.

GlobalNOC WorldView

GlobalNOC WorldView is an interactive three-dimensional, real-time network visualization system that eases understanding of complex network topologies and operational statistics. WorldView can show any number of networks at the same time, and features:

  • Real-time network data on an intuitive multi-touch interface
  • Pan, tilt, and zoom controls
  • Simultaneous display of each network's topology and operational status
  • Earthquake and weather data to improve situational awareness during natural disasters
  • Historical data to observe trends and disruptions caused by natural disasters and cybersecurity incidents

HathiTrust Research Center: Enabling Computational Access to 10 Million Volume HathiTrust Repository

Algorithms running on high performance computing machines and noSQL stores perform secure, large-scale text mining of the HathiTrust corpus, a shared digital repository of more than 9.7 million public domain and copyrighted volumes from research libraries. A grant from the Alfred P. Sloan Foundation funds the HathiTrust Research Center's development of computational infrastructure for non-consumptive research.
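
The essence of non-consumptive research is that the analyst receives aggregate statistics derived from the text, never the copyrighted pages themselves. A toy sketch of that pattern, with an invented two-volume corpus standing in for HathiTrust data:

```python
from collections import Counter

# Illustrative sketch of a "non-consumptive" analysis: the researcher gets
# back aggregate term statistics rather than readable text. The toy corpus
# below is invented and stands in for HathiTrust volumes.

def term_frequencies(volumes):
    """Map each volume id to a Counter of lowercase word frequencies."""
    return {vid: Counter(text.lower().split()) for vid, text in volumes.items()}

corpus = {
    "vol1": "data mining of large corpora",
    "vol2": "large scale text mining",
}
freqs = term_frequencies(corpus)
print(freqs["vol2"]["mining"])  # 1
```

At HathiTrust scale, per-volume counts like these would be computed inside the secure environment and stored in the noSQL layer, with only the derived counts exposed to researchers.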

National Center for Genome Analysis Support

Indiana University partnered with the Texas Advanced Computing Center and the San Diego Supercomputer Center to found the National Center for Genome Analysis Support (NCGAS). Equipped with a core of experts, software tools, and a large-memory cluster, NCGAS will develop innovative solutions to current needs in genome assembly and analysis by exploring new ways of provisioning computational resources, providing experts to assist biologists with genome analysis, and improving software reliability.

RouteFlow: Virtualized IP routing services in OpenFlow networks

RouteFlow is a commodity routing architecture that combines open-source routing stacks with OpenFlow-enabled networking gear. This demonstration uses multiple vendors, controllers (Beacon and NOX), open-source routing stacks (Quagga and XORP), and virtualization environments (LXC and QEMU) to help attendees:

  • Visualize how the control plane converges in times comparable to and potentially better than traditional setups
  • See the interoperability between traditional L2/L3 switches and OpenFlow switches controlled by RouteFlow
  • Understand how software-defined networking allows for innovation in the provision of virtual IP routing and forwarding services

Metadata Capture, Metadata Query: The Value of the XMC Cat Metadata Catalog

Metadata for scientific data provides critical information necessary for decisions on archiving, sharing, and use. This demonstration complements the "M13: Big data means your metadata must work" tutorial by showcasing the functionality of the XMC Cat metadata catalog in the following applications:

  • Astronomy – a faceted search of an image repository from the One Degree Imager
  • Real-time atmospheric observational data – a repository that stores continuously-generated and refreshed data from atmospheric instruments
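
The faceted search idea above can be sketched generically: each facet narrows the result set to records whose metadata matches the chosen value. The field names (`filter`, `target`) and records below are invented for illustration and do not reflect XMC Cat's actual schema or API.

```python
# Hypothetical sketch of a faceted metadata search over an image repository,
# in the spirit of the One Degree Imager example above. Field names and
# records are invented; they are not XMC Cat's real schema.

def facet_search(records, **facets):
    """Return records whose metadata matches every requested facet value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in facets.items())]

images = [
    {"id": "img-001", "filter": "r", "target": "M31"},
    {"id": "img-002", "filter": "g", "target": "M31"},
    {"id": "img-003", "filter": "r", "target": "M51"},
]
hits = facet_search(images, filter="r", target="M31")
print([r["id"] for r in hits])  # ['img-001']
```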

Scalable storm surge forecasting with Windows Azure

One of the more immediate and dangerous impacts of climate change could be a change in the strength of storms that form over the oceans. There have already been indications that even modest changes in ocean surface temperature can have a disproportionate effect on hurricane strength and the damage inflicted by these storms. To understand these effects, modelers turn to predictions generated by hydrodynamic coastal ocean models such as the Sea, Lake and Overland Surges from Hurricanes (SLOSH) model. SLOSH is typically run as an ensemble of up to 15,000 instances. This high-throughput application with small data I/O is well suited to execution on Windows Azure. We discuss progress toward executing SLOSH on Windows Azure using the Trident Scientific Workflow Workbench and the Sigiri resource manager.
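
The structure of such an ensemble can be sketched with ordinary futures. This is an analogy only, assuming each SLOSH instance is an independent run over its own storm parameters; `run_instance` is an invented stand-in for launching one model run, and on Azure each run would map to a cloud worker task rather than a local thread.

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of a high-throughput ensemble: many independent runs,
# each with small input/output. run_instance is a placeholder, not SLOSH.

def run_instance(params):
    """Pretend model run: return a toy 'peak surge' for one storm track."""
    track_id, intensity = params
    return track_id, intensity * 1.5  # placeholder physics, not real surge math

# A 100-member stand-in for the up-to-15,000-instance SLOSH ensemble.
ensemble = [(i, 1.0 + 0.001 * i) for i in range(100)]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(run_instance, ensemble))

print(max(results.values()))
```

Because the runs share no state, throughput scales simply with the number of workers, which is what makes this workload a good fit for cloud execution.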

Computational photography: Real-time plenoptic rendering with GPU

Computational photography uses plenoptic cameras to capture and process samples of all the light rays in the 3D space of a scene. Processing and rendering this data requires significant computational power and bandwidth, because an infinite variety of images can be rendered from a single plenoptic capture.

This demonstration showcases a GPU-based (graphics processing unit) approach for lightfield processing and rendering that enables interactive performance for tasks such as refocusing and novel-view generation. This approach enables rendering of 39Mpixel plenoptic data to 2Mpixel images with rates exceeding 500 frames/second.
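
The refocusing task can be illustrated with a classic shift-and-add sketch, assuming the lightfield has already been decoded into per-aperture views; real plenoptic rendering works directly on raw microlens data and runs on the GPU, so this toy 1-D version shows only the core idea.

```python
# Toy shift-and-add refocusing sketch. Each sub-aperture view is shifted in
# proportion to its aperture offset and the focus parameter, then all views
# are averaged; features at the chosen depth reinforce, others blur.

def refocus(views, alpha):
    """views: dict mapping aperture offset u -> 1-D image (list of floats).
    alpha: focus parameter; each view is shifted by round(alpha * u) pixels
    (with wraparound), then all views are averaged."""
    width = len(next(iter(views.values())))
    out = [0.0] * width
    for u, img in views.items():
        shift = round(alpha * u)
        for x in range(width):
            out[x] += img[(x + shift) % width]
    return [v / len(views) for v in out]

# Two 1-D views of a bright point that appears shifted between apertures.
views = {-1: [0, 0, 1, 0, 0], 1: [0, 0, 0, 0, 1]}
print(refocus(views, 1.0))  # [0.0, 0.0, 0.0, 1.0, 0.0] -- point in focus
```

With `alpha = 1.0` the two copies of the point line up and come into focus; with `alpha = 0.0` they stay apart and the point appears as two half-intensity blurs. The per-pixel independence of this loop is what makes the algorithm map so well onto a GPU.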

ParalleX: Paradigm shifts in high performance computing

The ParalleX execution model is an experimental methodology that changes the fundamental model of parallel computation from communicating sequential processes to an innovative synthesis of concepts in a global address space.

This demonstration features a ParalleX-enabled adaptive mesh refinement algorithm running on the HPX-3 runtime system. The ParalleX model suggests solutions for parallel computing challenges such as efficiency, scalability, and energy use.
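
ParalleX itself is realized in the C++ HPX-3 runtime with a global address space, but as a loose analogy only, the core shift away from bulk-synchronous phases toward work that fires as soon as its inputs are ready can be sketched with ordinary futures:

```python
from concurrent.futures import ThreadPoolExecutor

# Loose analogy only, not the HPX API: independent tasks run concurrently,
# and a dependent task consumes their results when they resolve, with no
# explicit global barrier between the stages.

def add(a, b):
    return a + b

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(add, 1, 2)   # starts immediately
    f2 = pool.submit(add, 3, 4)   # runs concurrently with f1
    # The final task waits only on its own inputs, not on the whole program.
    f3 = pool.submit(add, f1.result(), f2.result())
    print(f3.result())  # 10
```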

Open Gateway Computing Environment

The Open Gateway Computing Environment (OGCE) project develops software to build science gateways, including several that use resources from the National Science Foundation (NSF) XSEDE program. The NSF-funded OGCE project also goes beyond open source to foster open community development through the Apache Software Foundation.

This demonstration reviews OGCE software for building OpenSocial-compatible science gateways and executing scientific workflows on XSEDE, and shows it in action with the following gateway applications:

  • Biophysics
  • Computational chemistry
  • Geospatial applications
  • Astronomy

Visitors will also see live demonstrations on:

  • Apache Rave: Building a simple web gateway
  • Apache Airavata: Reusable gadget components and grid workflows