Research

My general research interests include operating systems, compilers, and programming languages. I am especially interested in these topics as they relate to distributed and parallel computing. I direct several master’s and PhD students under the banner of the Operating System Research Group.

Several projects involve HPCC Systems. In 2011, LexisNexis Risk Solutions open sourced its data analytics supercomputer, known as HPCC Systems. It provides a complete stack, including a distributed file system, a data refinery cluster (called Thor), and an indexing engine (called Roxie). The data refinery transforms data in parallel at scales of hundreds of nodes. The indexing engine is a cluster that supports highly concurrent, low-latency queries using indexes created by the refinery.

HPCC clusters are similar in architecture to Hadoop (by which I mean the broader Hadoop ecosystem, especially MapReduce and Spark): both provide reliable, scalable, distributed computing on COTS machines. While at a high level the two solutions look similar, there are important differences in the details. In particular, HPCC applications are written in an implicitly parallel dataflow language (ECL), which provides a high-level solution for application development.

Projects

Periodic Thor Workloads in HPCC in AWS seeks to build an AWS CloudFormation template for workloads that are periodic and do not require frequent reconfiguration.
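
The deployment idea can be sketched in a few lines; this is a minimal illustration using boto3 to drive CloudFormation, not the project’s actual template. The stack name, template URL, and the SlaveNodes parameter are hypothetical placeholders.

    # Sketch: stand up a Thor stack from a CloudFormation template for one
    # periodic batch window, then tear it down when the run finishes.
    # Assumes AWS credentials are configured; the template URL, stack name,
    # and parameter names are hypothetical placeholders.
    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    def run_periodic_thor(stack_name="thor-periodic",
                          template_url="https://example.com/thor-template.json"):
        # Create the cluster for this period's run.
        cfn.create_stack(
            StackName=stack_name,
            TemplateURL=template_url,
            Parameters=[{"ParameterKey": "SlaveNodes", "ParameterValue": "10"}],
        )
        cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

        # ... submit the periodic Thor workload here ...

        # Tear the cluster down so nothing is billed between periods.
        cfn.delete_stack(StackName=stack_name)
        cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)

Because the workload is periodic and the configuration rarely changes, the same template can be launched and deleted on each cycle rather than keeping a cluster running continuously.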

Women’s basketball heart-rate monitoring. The student athletes in the NCSU women’s basketball program wear heart-rate monitors during practices, games, and training sessions. A large amount of information is collected. However, the existing tool set to store, manage, and analyze the data is primitive, which impedes sport scientists from leveraging this information; consequently, little value is extracted from the data collected. We are creating a software tool set that easily collects and validates the data from each of the athletes. The tool set will store, organize, and manage the information, allowing collation along any dimension, such as player, date, and session type. Lastly, it will provide sophisticated data analytics in order to improve outcomes for our student athletes.
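
As a hedged illustration of the validation and collation steps, the sketch below groups records along the dimensions mentioned above. The file layout and column names are hypothetical, not the tool set’s actual schema.

    # Sketch: load heart-rate records, apply a simple validity check, and
    # collate along player and session type. The CSV layout and column
    # names (player, date, session_type, avg_hr, peak_hr) are assumptions.
    import pandas as pd

    records = pd.read_csv("heart_rate_sessions.csv", parse_dates=["date"])

    # Simple validation: drop physiologically implausible readings.
    records = records[(records.avg_hr > 30) & (records.peak_hr < 230)]

    # Collate along any dimension, e.g. per player and session type.
    summary = (records
               .groupby(["player", "session_type"])
               .agg(sessions=("date", "count"),
                    mean_avg_hr=("avg_hr", "mean"),
                    mean_peak_hr=("peak_hr", "mean")))
    print(summary)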

Elastic Computing Support for HPCC. This research proposes to add elasticity to a Roxie cluster deployed in Amazon Web Services (AWS). Because the cluster capacity can grow and shrink with demand, an elastic cluster is properly sized at all times. Elasticity is preferable to a statically sized cluster, which underutilizes resources during light demand and fails to meet service levels during peak demand. This research creates mechanisms to efficiently expand and contract an active Roxie cluster. It tracks load to identify hot spots in order to support targeted replication that leads to greater efficiency. The project seeks to determine the predictability of demand, to quantify the cost/benefit of replication, and to optimize the AWS configuration.
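
The expand/contract decision can be sketched as a simple threshold policy. The thresholds, metric, and helper methods below are hypothetical illustrations, not the mechanisms developed in the project, which are considerably more involved.

    # Sketch of a threshold-based elasticity loop for a Roxie-like query cluster.
    # cluster.query_latency(), add_node(), remove_node(), hot_partitions(),
    # size(), and min_size() are hypothetical hooks.
    import time

    SCALE_UP_LATENCY_MS = 200    # grow when p95 latency exceeds this
    SCALE_DOWN_LATENCY_MS = 50   # shrink when p95 latency stays below this

    def elasticity_loop(cluster, period_s=60):
        while True:
            p95 = cluster.query_latency(percentile=95)
            if p95 > SCALE_UP_LATENCY_MS:
                # Targeted replication: copy only the hottest index partitions
                # to the new node instead of replicating the entire index.
                cluster.add_node(replicate=cluster.hot_partitions())
            elif p95 < SCALE_DOWN_LATENCY_MS and cluster.size() > cluster.min_size():
                cluster.remove_node()
            time.sleep(period_s)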

Older Projects

There are many previous projects, which I describe in no particular order. One is High-Performance, Power-Aware Computing. The project will develop a modified MPI runtime system that will allow for significant energy savings with little or no increase in execution time. In developing this system, we expect to develop novel algorithms and/or measurement techniques for dynamic analysis. The net result will be a publicly available MPI runtime that can be used in a nearly transparent way to significantly reduce energy consumption in supercomputing centers. A secondary aspect of this project is the development of a system for commercial servers: a general framework for boosting throughput at a local level while load-balancing the available aggregate power under a set of operating constraints. This work is funded by an IBM UPP award. David Lowenthal and I co-founded the Workshop on High-Performance, Power-Aware Computing.
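
The underlying idea, reducing processor frequency during periods of communication slack, can be sketched at the application level. This is only an illustration assuming Linux’s userspace cpufreq governor and mpi4py; the actual runtime system applies the technique transparently inside the MPI library, and the frequencies shown are assumed values.

    # Sketch: lower CPU frequency before a blocking MPI receive (a slack
    # period) and restore it afterwards. Assumes the userspace cpufreq
    # governor is active and that the frequencies below exist on the machine.
    from mpi4py import MPI
    import numpy as np

    LOW_KHZ, HIGH_KHZ = 1_200_000, 2_400_000

    def set_freq(khz, cpu=0):
        path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_setspeed"
        with open(path, "w") as f:
            f.write(str(khz))

    comm = MPI.COMM_WORLD
    buf = np.empty(1 << 20, dtype=np.float64)

    if comm.Get_rank() == 0:
        set_freq(LOW_KHZ)            # little useful work while waiting
        comm.Recv(buf, source=1)     # communication slack
        set_freq(HIGH_KHZ)           # back to full speed for computation
    else:
        comm.Send(np.ones(1 << 20), dest=0)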

Another project is Runtime/Operating System Synergy to Exploit Simultaneous Multithreading. This work takes a synergistic approach, combining runtime and operating system support to fully exploit the capabilities of SMT processors. This project is funded by an NSF DS award.

A third project is FreeLoader. A great many machines, from personal workstations to large clusters, are underutilized. Meanwhile, for fear of slowing down native tasks, resource scavenging systems hesitate to aggressively harness idle resources. We have developed a quantitative approach for fine-grained scavenging that effectively utilizes very small slack periods without adversely impacting the native workload and automatically adapts to changes in the native workload’s resource consumption. The fourth project is the Governor, a performance impact control framework that seamlessly and autonomically restricts a scavenging application’s resource consumption and adapts to the ever-changing native workload. The main idea is to characterize the performance impact of a given scavenging application on a set of micro-benchmarks, each of which intensively uses one type of system resource, such as CPU or network. The Governor throttles resource utilization by periodically inserting slack into the scavenging process, thereby reducing the time the scavenger competes with the native workload for resources.
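
The slack-insertion idea can be sketched as a small control loop. The impact budget and the structure of the loop are illustrative assumptions, not the Governor’s actual policy, which is driven by the micro-benchmark characterization described above.

    # Sketch: throttle a scavenging task by inserting slack (sleep) after each
    # small unit of work, so the task competes for resources only a bounded
    # fraction of the time. The 10% budget and the cap are assumptions.
    import time

    def governed(run_chunk, impact_budget=0.10, max_slack_s=10.0):
        """Repeatedly run small chunks of scavenging work, inserting enough
        slack after each chunk that the busy fraction stays near impact_budget."""
        while True:
            start = time.monotonic()
            run_chunk()                       # one small unit of scavenging work
            busy = time.monotonic() - start
            # Choose slack so that busy / (busy + slack) <= impact_budget.
            slack = busy * (1.0 - impact_budget) / impact_budget
            time.sleep(min(slack, max_slack_s))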

Profile-based optimization (PBO) is used to tune the production NetApp ONTAP build. The PBO build uses profile data from a training run; as a result, a problem with the training run can cause a PBO regression. Additionally, the current deployment infrastructure for ONTAP (which uses a single training benchmark and produces a single production build) limits the benefits of PBO in two ways. First, because the training run is SFS2008, the builds are biased towards such client workloads; a client workload that is dissimilar from SFS2008 is, in a sense, disadvantaged. Second, a single production build necessarily means that only a subset of client workloads will significantly benefit from PBO. We built a harness that automatically creates and executes code across multiple PBO profiles in order to determine the best optimizations. The research also evaluated selectively merging multiple profiles in order to extract benefits from each.
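
The multi-profile harness idea can be sketched as follows, assuming a GCC-style -fprofile-generate/-fprofile-use toolchain rather than the actual ONTAP build system; the workload names, paths, and build commands are placeholders.

    # Sketch: build one PBO'd binary per training profile, benchmark each
    # build against a validation workload, and keep the best. The profile
    # directories, make targets, and benchmark script are hypothetical.
    import subprocess, time

    WORKLOADS = ["sfs2008", "random_read", "metadata_heavy"]

    def build(profile_dir):
        subprocess.run(["make", "clean"], check=True)
        subprocess.run(["make", f"CFLAGS=-O2 -fprofile-use={profile_dir}"], check=True)

    def benchmark():
        start = time.monotonic()
        subprocess.run(["./run_validation_workload.sh"], check=True)
        return time.monotonic() - start

    results = {}
    for w in WORKLOADS:
        build(profile_dir=f"profiles/{w}")   # profiles collected earlier with -fprofile-generate
        results[w] = benchmark()

    best = min(results, key=results.get)
    print(f"best training profile: {best} ({results[best]:.1f}s)")

Selective merging follows the same pattern, except that several profile directories are combined before the -fprofile-use build rather than evaluated one at a time.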

I completed a project on Understanding Open Source Software Development. This multi-disciplinary project studied the open source software (OSS) phenomenon, developing empirical models and simulations of the interactions between developers and software projects. This project was funded by an NSF DST award.

I co-developed parasitic computing. See these press releases: one and two. I appeared on some radio shows: NPR’s All Things Considered (synopsis, ram) and A(ustralia)BC’s The Buzz. Here is a magazine article.

Professor David Lowenthal and I created the Filaments package for architecture-independent parallel programming, with a little help from our advisor, Professor Greg Andrews. Additionally, I was once a contributor to the SR Project. Today, I just have fond memories of when I had time to program. Here is a summary of my research as it stood in 2002.

Sponsors

HPCC Systems, IBM, NSF, NetApp, LexisNexis, Sandia National Laboratories, DOE, NOAA, DARPA, JPL, and Lockheed Martin.