Lustre User Group
OpenSFS - https://www.opensfs.org/
Tuesday, May 30
 

7:00am EDT

Transportation to Lustre Developers Day - Sponsored by UT-Battelle/ORNL
Developer Day Shuttle:
Leave Springhill at 7:15am
Leave Hyatt Place at 7:25am
Leave IMU (near stop sign at corner of 7th and IMU circle) at 7:40am
Drop-off at Innovation Center

Leave Springhill at 8:15am
Leave Hyatt Place at 8:25am
Leave IMU (near stop sign at corner of 7th and IMU circle) at 8:40am
Drop-off at Innovation Center

Parking at the Innovation Center (IC) and Cyberinfrastructure Building (CIB) is free in spaces marked EM-P, EM-S, and ST.  Do not park in visitor or handicap spaces.

8:00am EDT

Registration and Breakfast - Lustre Developers Day - Sponsored by UT-Battelle/ORNL
Pick up your conference badge and program.

Chairs
Robert Ping

Program Management Specialist, Indiana University
In 2024, he accepted program management responsibilities for Jetstream2 and the Midwest Research Computing and Data Consortium, where he will facilitate the success of the National Science Foundation-sponsored programs. As RDA-US Program Manager, he oversees the multiple projects within...


Tuesday May 30, 2017 8:00am - 9:00am EDT
Innovation Center Lobby 2719 E 10th St, Bloomington, IN, 47408

9:00am EDT

Lustre Developers Day 2017 - Sponsored by UT-Battelle/ORNL
This invite-only event is happening before the Opening Reception and the kick-off of LUG17.  Everyone is invited to take part in the hackathon starting at 5pm in the same space.

Tuesday May 30, 2017 9:00am - 5:00pm EDT
Innovation Center Rooms 105 and 120 2719 E 10th St, Bloomington, IN, 47408

10:30am EDT

Break - Lustre Developers Day - Sponsored by UT-Battelle/ORNL
Tuesday May 30, 2017 10:30am - 11:00am EDT
Innovation Center Rooms 105 and 120 2719 E 10th St, Bloomington, IN, 47408

12:30pm EDT

Lunch - Lustre Developers Day - Sponsored by UT-Battelle/ORNL
Tuesday May 30, 2017 12:30pm - 1:30pm EDT
Innovation Center Rooms 105 and 120 2719 E 10th St, Bloomington, IN, 47408

1:00pm EDT

Registration
Pick up your badge and program.

Chairs
Robert Ping

Program Management Specialist, Indiana University
In 2024, he accepted program management responsibilities for Jetstream2 and the Midwest Research Computing and Data Consortium, where he will facilitate the success of the National Science Foundation-sponsored programs. As RDA-US Program Manager, he oversees the multiple projects within...

Sponsors

Tuesday May 30, 2017 1:00pm - 7:00pm EDT
Indiana Memorial Union (IMU) 900 E 7th St, Bloomington, IN, 47405

3:00pm EDT

Break - Lustre Developers Day - Sponsored by UT-Battelle/ORNL
Tuesday May 30, 2017 3:00pm - 3:30pm EDT
Innovation Center Rooms 105 and 120 2719 E 10th St, Bloomington, IN, 47408

4:00pm EDT

Transportation to/from Hackathon and Opening Reception
Hackathon Shuttle
Leave Springhill at 4:30pm
Leave Hyatt Place at 4:45pm
Leave IMU (near stop sign at corner of 7th and IMU circle) at 5pm
Drop-off at Innovation Center

Parking at the Innovation Center (IC) and Cyberinfrastructure Building (CIB) is free in spaces marked EM-P, EM-S, and ST.  Do not park in visitor or handicap spaces.

5:00pm EDT

Hackathon - all welcome
Tuesday May 30, 2017 5:00pm - 7:30pm EDT
Innovation Center Rooms 105 and 120 2719 E 10th St, Bloomington, IN, 47408

6:30pm EDT

Hackathon Dinner
Pizza and beverages will be provided before heading across the street to the CIB for the Opening Reception, sponsored by DDN.

Tuesday May 30, 2017 6:30pm - 7:30pm EDT
Innovation Center Rooms 105 and 120 2719 E 10th St, Bloomington, IN, 47408

6:30pm EDT

Transportation to/from Opening Reception
Opening Reception Shuttle
Leave Springhill at 6:30pm
Leave Hyatt Place at 6:45pm
Leave IMU (near stop sign at corner of 7th and IMU circle) at 7:00pm

Shuttles will leave the CIB after the Opening Reception to return participants to their hotels.

Parking at the Innovation Center (IC) and Cyberinfrastructure Building (CIB) is free in spaces marked EM-P, EM-S, and ST.  Do not park in visitor or handicap spaces.

7:30pm EDT

Opening Reception - sponsored by DDN
Enjoy demonstrations, tours of the Indiana University Data Center, Innovation Center, and IU-enabled technology in the Cyberinfrastructure Building, including the IQ series.

Amazing hors d'oeuvres, local brews and wine, and entertainment in a large space conducive to networking with peers. Bring your GREEN drink tickets, located in your name badge, for two complimentary drinks.

Plan to enjoy these savory and sweet selections:
Fresh Fruits (GF, VN)
Crudités (VN, GF) & Pita Points (VN) with Hummus (VN, GF) & Tapenade (S, GF)
Lennie's Spinach Artichoke Torta with Crackers (D,VG)
Roasted Red Pepper-Ricotta-Herb Tartlets (D,VG)
Goat Cheese and Fig Jam Crostini (D,VG)
BBQ Bison Meatballs, Apple BBQ sauce and House Made Mexican Turkey Meatballs with Chipotle Aioli (GF)
Churrasco-style Steak Skewers with Chimichurri (GF)
Thai Chicken Satay, Peanut Sauce (N, GF)
Savory Stuffed Mushrooms - Vegetarian (VG, D)
Peruvian Black Bean & Quinoa Patties with Salsa Verde (GF)
Herbed Sausage Bites in Puff Pastry (N,D)
Savory Mushroom Pastry Bites (V)

Chocolate Dipped Strawberries
Assorted Cheesecake Bites - Chocolate, New York Style
Cream Pie Assortment
Mini Tiramisu
Mini Chocolate Mousse

Filtered Water
Iced Tea with Lemon
Guava Lemonade
Coca-Cola Products (Coke, Diet Coke & Sprite)

Bloomington Brewing Company Sixtel
Bloomington Brewing Company Specialty Sixtel
Black Ridge Pinot Grigio
Black Ridge Chardonnay
Black Ridge Shiraz
Black Ridge Pinot Noir

Sponsored by DDN.

Sponsors
UITS Research Technologies

A Pervasive Technology Institute Center and division of University Information Technology Services, Indiana University
The UITS Research Technologies team provides software, training, and systems in support of leading-edge research. We deliver high-performance computing resources, large-capacity data storage, and advanced visualization tools. Read Impact Highlights from UITS Research Technologies here...


Tuesday May 30, 2017 7:30pm - 9:30pm EDT
Cyberinfrastructure Building (CIB) 2709 E 10th St, Bloomington, IN, 47408

7:30pm EDT

Registration
Pick up your badge and program.

Chairs
Robert Ping

Program Management Specialist, Indiana University
In 2024, he accepted program management responsibilities for Jetstream2 and the Midwest Research Computing and Data Consortium, where he will facilitate the success of the National Science Foundation-sponsored programs. As RDA-US Program Manager, he oversees the multiple projects within...

Sponsors

Tuesday May 30, 2017 7:30pm - 9:30pm EDT
Cyberinfrastructure Building (CIB) 2709 E 10th St, Bloomington, IN, 47408
 
Wednesday, May 31
 

7:30am EDT

Breakfast
Network with your peers and visit the Platinum and Gold sponsor tables as you enjoy:
Scrambled Eggs
Sausage Links
Breakfast Potatoes
Fresh Cut Seasonal Fruit
Assorted Baked Goods
Oatmeal with toppings
Coffee, decaf, tea, juices, water


Sponsors
Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated...


Wednesday May 31, 2017 7:30am - 9:00am EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

7:30am EDT

Transportation to/from Conference Center (IMU)
Conference Shuttle
Leave Springhill at 7:30am
Leave Hyatt at 7:45am
Drop-off at IMU

Leave Springhill at 8:15am
Leave Hyatt at 8:30am
Drop-off at IMU

Leave Springhill at 9:00am
Leave Hyatt at 9:15am
Drop-off at IMU

Sponsors
Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated...


7:30am EDT

Registration
Pick up your badge and program.

Chairs
Robert Ping

Program Management Specialist, Indiana University
In 2024, he accepted program management responsibilities for Jetstream2 and the Midwest Research Computing and Data Consortium, where he will facilitate the success of the National Science Foundation-sponsored programs. As RDA-US Program Manager, he oversees the multiple projects within...

Sponsors

Wednesday May 31, 2017 7:30am - 5:40pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:00am EDT

LUG17 Day 1 Opening remarks
Presented by OpenSFS Board President, Stephen Simms.

Chairs

Stephen Simms

Indiana University Pervasive Technology Institute

Sponsors
Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated...


Wednesday May 31, 2017 9:00am - 9:10am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:10am EDT

Intel Lustre Update
Bryon leads the Technical Computing I/O (HPDD) team at Intel. This team develops the Lustre file system, DAOS, and the CORAL I/O software components. Bryon has been with the Lustre development team for over 10 years, and joined Intel through the acquisition of Whamcloud. He has also held software leadership positions at IBM, Sun Microsystems, and two startup companies. Bryon holds a BS in Computer Science from Michigan State University and an MS in Computer Science from Florida Atlantic University.

Presenter
Bryon Neitzel

Bryon leads the Technical Computing I/O (HPDD) team at Intel. This team develops the Lustre file system, DAOS, and the CORAL I/O software components. Bryon has been with the Lustre development team for over 10 years, and joined Intel through the acquisition of Whamcloud. He...

Sponsors


Wednesday May 31, 2017 9:10am - 9:25am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:30am EDT

Community release update
Community release update from Intel.

Presenter
Peter Jones

Intel
Peter runs the Lustre Engineering team at Intel. He has been working on the Lustre project for over a decade with positions at CFS, Sun, Oracle, Whamcloud and Intel. He served a term on the OpenSFS board and is presently a co-lead of the OpenSFS Lustre Working Group. People should...

Sponsors


Wednesday May 31, 2017 9:30am - 10:00am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

10:00am EDT

Status of the Lustre upstream client
The Lustre client in the mainline Linux kernel has gone from being hated to accepted, and is now looking to leave the staging area. We will reflect on the milestones reached as well as what has been accomplished in the last year. Over this last year we have witnessed a convergence of the OpenSFS and upstream clients. The upstream client has gone from Lustre version 2.4.92 to Lustre 2.8, with many patches from both 2.9 and the latest master merged in. By the time of LUG, the upstream client will be in sync with the OpenSFS branch. At the same time, the unique work present in the upstream client is being ported to the OpenSFS branch. At LUG, details of the progress of this work will be presented. The talk will also cover what is finally needed to leave the staging area. For the Lustre upstream client to move out of staging, we have to meet the requirements of Greg Kroah-Hartman, who maintains the staging tree, and then the requirements of Al Viro, who is maintainer of the VFS layer. The presentation will list those requirements and describe how far along that work is. Lastly, we will discuss what an out-of-staging Lustre client will mean for the future.

Presenter
JS

James Simmons

Storage Engineer, Oakridge National Labs


Wednesday May 31, 2017 10:00am - 10:30am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

10:30am EDT

Break
Wednesday May 31, 2017 10:30am - 10:50am EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

10:50am EDT

Scaling LDISKFS for the future. Again.
This is an update of the previous year's presentation about scaling LDISKFS. LDISKFS is a commonly used Lustre FS backend: OST and MDT targets can be formatted as LDISKFS to store Lustre objects. Total data storage size depends on target size and target count, so increasing capacity requires either adding new hardware or increasing target size. Target size can be increased by switching to hard drives with larger capacity. Last year, Seagate significantly upgraded the size of its hard drives. Current solutions have 8TB and 10TB hard drives aboard, and 12TB and 16TB drives are coming soon. LDISKFS, based on the EXT4 file system, is scalable, but keeping the size of the target device (~300TB for 10TB hard drives) in mind, some preparation is needed. There are some verification steps that were used in previous LDISKFS scaling phases, pointed out in last year's presentation, and the same work has been done for this iteration:
  • Issues with external inode xattrs are fixed. New tests are added;
  • Large memory structures are checked to be ready;
  • A public discussion of the inode count limitation has been started;
  • Large_dir support is added to ext4 and is ready to be added to Lustre FS.
Another important problem in this scaling iteration is the EXT4 (and LDISKFS) metadata limits. Without additional support, block groups can be allocated only for partitions < 256TB. There are two possible solutions to this problem. The "bigalloc" feature makes block sizes bigger, so there is no need to allocate many block groups. "Meta_bg" changes the filesystem structure so that enough block groups can be allocated. Both approaches have their own advantages and disadvantages. To make the right choice, Seagate performed functional and stress testing that shows the following results:
  • Bigalloc has some known issues. Fixes exist in ext4 and need to be ported to Lustre;
  • Some new issues in bigalloc have been found: quota and symbolic links are not ready;
  • Mount time with meta_bg is too long. Some patches were added to preload metadata, decreasing mount time dramatically;
  • mkfs.lustre requires a fix to exclude resize_inode if meta_bg is enabled (these options cannot be set at the same time).
At this moment, "meta_bg" looks more stable and more attractive as a solution. As a result of this work, the safe partition size limit has increased to 512TB. Hard drive capacity is continuously increasing, so in the future the Lustre FS community will face the need to create partitions > 512TB. Meanwhile, Seagate continues to work to extend LDISKFS capacity.
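
For illustration only, here is a minimal sketch of how a large target could be formatted with the meta_bg feature enabled and resize_inode excluded, assuming the mkfs.lustre fix described above is in place; the device path, fsname, and index are hypothetical placeholders, and the exact options for any given Lustre release should come from the manual rather than from this sketch:

    # Hedged sketch: format an OST with meta_bg enabled and resize_inode disabled.
    # The device, fsname, index, and MGS NID are hypothetical placeholders.
    import subprocess

    cmd = [
        "mkfs.lustre", "--ost",
        "--fsname=demo", "--index=0",
        "--mgsnode=mgs@o2ib",
        # pass ext4 feature flags straight through to mke2fs:
        # enable meta_bg, disable resize_inode (the two are mutually exclusive)
        "--mkfsoptions=-O meta_bg,^resize_inode",
        "/dev/mapper/ost0",
    ]
    subprocess.run(cmd, check=True)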

Presenter
Authors


Wednesday May 31, 2017 10:50am - 11:20am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

11:20am EDT

Roads to Zester
At Indiana University we are challenged to provide up-to-date metadata statistics for our site-wide Data Capacitor 2 Lustre filesystem. Our environment requires reporting of usage on a per-user/per-project basis on both a size and time basis. Presently, our Lustre filesystems are used for HPC scratch and project storage, with time horizons for both categories. This necessitates per-file stat() information on a regular basis, which informs our automated purge management cycle, as well as reporting requirements. Our current solution, which has been used for several years, processes a Lester dump of the MDT, and then performs a Lustre stat() for each entry to provide full metadata for every file in over 5 PB of storage.

We will soon be bringing up a new ZFS-based Lustre filesystem, and as the Lester tool is limited to working with ldiskfs-based filesystems, a new method for gathering this file metadata has proven necessary. Also, our current Lustre stat-based method to provide file size takes considerably longer than desired on our present system. With this in mind, we started a pilot project to provide Lester-like functionality plus file-size information on ZFS-based Lustre systems via the processing of ZDB dumps of the MDT and OST ZFS datasets. We refer to it as Zester, as an homage to Lester with a nod to compatibility with ZFS.

For this project, we’ve employed Python, SQL, and Lustre API functionality through C functions mapped into the Python namespace. The Zester tools we are developing provide queryable database functionality useful for both filesystem management as well as reporting functionality.
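
As a rough illustration of the general approach (not the actual Zester code), the sketch below walks a plain-text dump of paths, stat()s each entry, and loads the results into a queryable SQLite table; the dump format, file names, and schema are all hypothetical:

    # Hypothetical sketch of a Zester-like workflow: ingest a metadata dump,
    # stat() each path, and store the results in a queryable database.
    import os
    import sqlite3

    DUMP = "mdt_paths.txt"      # hypothetical: one file path per line
    DB = "zester.sqlite"        # hypothetical output database

    conn = sqlite3.connect(DB)
    conn.execute("""CREATE TABLE IF NOT EXISTS entries
                    (path TEXT PRIMARY KEY, uid INT, gid INT,
                     size INT, atime INT, mtime INT)""")

    with open(DUMP) as dump:
        for line in dump:
            path = line.strip()
            try:
                st = os.stat(path)
            except OSError:
                continue  # entry disappeared between dump and scan
            conn.execute("INSERT OR REPLACE INTO entries VALUES (?,?,?,?,?,?)",
                         (path, st.st_uid, st.st_gid, st.st_size,
                          int(st.st_atime), int(st.st_mtime)))
    conn.commit()

    # example report: total bytes per uid, useful for purge and reporting policies
    for uid, total in conn.execute("SELECT uid, SUM(size) FROM entries GROUP BY uid"):
        print(uid, total)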

Presenter

Kenrick Rawlings

Indiana University Pervasive Technology Institute

Authors

Shawn Slavin

Indiana University Pervasive Technology Institute



Wednesday May 31, 2017 11:20am - 11:50am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

11:50am EDT

OST data migration using ZFS snapshots/send/receive
Data migrations can be time-consuming and tedious, often requiring large maintenance windows of downtime. Some common reasons for data migrations include aging and/or failing hardware, increases in capacity, and greater performance. Traditional file- and block-based "copy tools" each have pros and cons, but the time to complete the migration is often the core issue. Some file-based tools are feature-rich, allowing quick comparisons of date/time stamps or changed blocks inside a file. However, examining multiple millions, or even billions, of files takes time. Even when there is little to no data churn, a final "sync" may take hours or even days to complete, with little data movement. Block-based tools have fairly predictable transfer speeds when the block device is otherwise "idle"; however, many block-based tools do not allow "delta" transfers. The entire block device needs to be read, and then written out to another block device to complete the migration.

ZFS-backed OSTs can be migrated to new hardware or to existing reconfigured hardware by leveraging ZFS snapshots and ZFS send/receive operations. The ZFS snapshot/send/receive migration method leverages incremental data transfers, allowing an initial data copy to be "caught up" with subsequent incremental changes. This migration method preserves all the ZFS Lustre properties (mgsnode, fsname, network, index, etc.), but allows the underlying zpool geometry to be changed on the destination. The rolling ZFS snapshot/send/receive operations can be maintained on a per-OST basis, allowing granular migrations.

This migration method greatly reduces the final "sync" downtime, as rolling snapshot/send/receive operations can be continuously run, thereby pairing down the delta's to the smallest possible amount. There is no overhead to examine all the changed data, as the snapshot "is" the changed data. Additionally, the final sync can be estimated from previous snapshot/send/receive operations, which supports a more accurate downtime window.

This presentation will overview how Indiana University is leveraging ZFS snapshots and ZFS send/receive to migrate OST data.
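
A hedged sketch of the rolling send/receive loop described above; the pool and dataset names are placeholders, and the real procedure also includes stopping the OST before the final incremental pass:

    # Illustrative sketch of an incremental OST migration with zfs send/receive.
    # Pool/dataset names are hypothetical; error handling and the final
    # freeze/cutover step are omitted.
    import subprocess

    SRC = "oldpool/ost0"   # source OST dataset (hypothetical)
    DST = "newpool/ost0"   # destination dataset (hypothetical)

    def snap_and_send(snap, prev=None):
        subprocess.run(["zfs", "snapshot", f"{SRC}@{snap}"], check=True)
        if prev is None:
            send_cmd = ["zfs", "send", f"{SRC}@{snap}"]            # full stream
        else:
            send_cmd = ["zfs", "send", "-i", f"@{prev}", f"{SRC}@{snap}"]  # incremental
        send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
        subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
        send.wait()

    snap_and_send("migrate0")                    # initial full copy
    snap_and_send("migrate1", prev="migrate0")   # incremental catch-up; repeat as needed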

Presenter

Tom Crowe

Indiana University Pervasive Technology Institute


Wednesday May 31, 2017 11:50am - 12:20pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

12:20pm EDT

Lunch
Network with your peers and visit the Platinum and Gold sponsor tables as you enjoy:
Taco Bar with: South of the border-style salad, hard and soft tortillas, seasoned taco meat, bandits beans, lettuce, tomatoes, onions, black olives, sour cream, cheddar cheese, salsa, vegetarian three bean chili, tortilla chips

Caesar Bar with: Marinated Grilled Chicken, Seasoned Beef Strips and Grilled Shrimp, Fresh Romaine, Shredded Parmesan Cheese, Seasoned Croutons, and Caesar Dressing

Chocolate cake and cheesecake with toppings

Beverage service

Sponsors
Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated...


Wednesday May 31, 2017 12:20pm - 1:20pm EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

1:20pm EDT

The effects of fragmentation and capacity on Lustre Filesystem performance
After a Lustre file system is put into production, and the usage model fluctuates with the variability of applications and users creating and deleting files over time, available storage capacity tends to decrease while file system fragmentation increases. This affects overall performance (throughput and bandwidth) compared to a pristine file system, or a file system that has just been fully populated. Seagate will discuss a study of both capacity and fragmentation, including the methodologies, nomenclature, and tools used to introduce fragmentation at different capacity points and analyze the overall throughput impact. The talk will illustrate how, at various percentages of capacity, the largest impact comes from the amount of fragmentation that exists, and not just from the utilized capacity of the file system.

The overall determination is that, despite some popular belief, performance impacts can occur at very low percentages of utilization, depending on the patterns of utilization and fragmentation generated by a file system's overall operational production use. As such, modeling behaviors, RFIs, and RFPs should be modified to consider factors beyond a pristine file system, to more properly reflect real-world operational requirements for customers. Customers can thus consider including such factors in their solicitations for file system proposals. Vendors, in turn, will be encouraged to perform product benchmarks that treat both fragmentation and capacity realistically as major constraints in Lustre file system utilization when responding to those requirements, irrespective of whether the requirements are aligned to the more traditional Lustre on-disk filesystem, LDISKFS, or to alternate technologies such as ZFS.

Presenter
JK

John Kaitschuck

Seagate CSSG


Wednesday May 31, 2017 1:20pm - 1:50pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

1:50pm EDT

Providing QoS-mechanisms for Lustre through centralized control applying the TBF-NRS
Parallel applications access shared storage by successively sending requests as fast as the storage system can handle them, while HPC file systems today work in a best-effort manner. Individual applications can therefore flood the file system with requests, effectively leading to a denial of service for all other tasks. The need to allocate I/O bandwidth reasonably among different clients, jobs, or operations and to provide high Quality of Service (QoS) levels to applications and users is therefore becoming an increasingly strong demand in such settings.

QoS mechanisms for distributed storage have become popular in cloud environments with upcoming Software-Defined Storage (SDS) implementations. Lustre has already taken a first step to support QoS by integrating a Token Bucket Filter (TBF) into its Network Request Scheduler (NRS). This enables administrators to set upper bandwidth limits for certain clients, HPC jobs, or I/O operation types. However, setting QoS policies manually is unfeasible in bigger environments, so additional support is required to automate their usage.

We will present the current state of the NRS-TBF policies and their coupling with a centralized controller, which sets QoS policies and provides interfaces to batch environments and applications. The basic policies are set by the batch manager, while additional dynamic policies can be enforced using a negotiation protocol with the checkpointing mechanisms of parallel applications, helping to decouple bandwidth-intensive data streams.
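
For readers unfamiliar with the mechanism, a minimal token-bucket model (purely illustrative, not the Lustre NRS-TBF implementation) that admits requests only up to a configured RPC rate looks roughly like this:

    # Minimal token-bucket model: requests are admitted only while tokens remain;
    # tokens refill at the configured rate (RPCs per second). Illustrative only.
    import time

    class TokenBucket:
        def __init__(self, rate, burst):
            self.rate = rate          # tokens added per second (the RPC rate limit)
            self.tokens = burst       # current tokens, capped at the burst size
            self.burst = burst
            self.last = time.monotonic()

        def admit(self):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True       # request may be serviced now
            return False          # request must wait in the queue

    bucket = TokenBucket(rate=100, burst=10)   # e.g. limit a client to ~100 RPCs/s
    print(bucket.admit())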

Presenter

Tim Süß

Johannes Gutenberg University Mainz

Authors

André Brinkmann

Johannes Gutenberg University Mainz

Jürgen Kaiser

Johannes Gutenberg University Mainz

Lingfang Zeng

Johannes Gutenberg University Mainz



Wednesday May 31, 2017 1:50pm - 2:20pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

2:20pm EDT

Lustre Filesystem - Online patching
Online patching, also known as live or dynamic patching, is a technology that allows one to patch a running Linux kernel without any downtime or affecting the running applications. Online patching is desirable as it eliminates the need for scheduled maintenance due to patches and allows for patching in a timelier manner.

With these goals in mind, this talk will introduce our current efforts investigating online patches for the Lustre filesystem. The current technology of choice is "kpatch," a popular dynamic kernel patching system from Red Hat. Kpatch works by registering an "ftrace" handler on the original function so that calls are redirected to the new function; the return address is modified to point at the new function, effectively bypassing the older one.

Kpatch works very well on a single node/machine.  However, when attempting to patch multiple nodes/machines, care must be exercised to ensure that all of them are indeed running the modified patch. If a patch fails on a node/server, that failure must be identified and the patch must be tried again. Our current approach to solving this issue is a wrapper program around kpatch that makes sure not only that everything is upgraded, but that it may be listed, verified, and rolled back if necessary. This talk intends to include a demonstration of the proposed technique, showing a system that needs to be patched without downtime, the commands used to patch and verify the patch on all nodes/machines, and the system running applications that are ultimately able to keep running despite the patching.
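
A hypothetical sketch of such a wrapper; the node names, patch module path, module naming, and ssh fan-out are assumptions, and only the basic kpatch load/list/unload subcommands are relied on:

    # Hypothetical sketch of a fleet-wide kpatch wrapper: push a live patch to
    # every node over ssh, then verify it is loaded; roll back nodes that fail.
    import subprocess

    NODES = ["oss01", "oss02", "mds01"]          # hypothetical node names
    PATCH = "/root/livepatch-lu-xxxx.ko"         # hypothetical patch module

    def run(node, *args):
        return subprocess.run(["ssh", node, *args],
                              capture_output=True, text=True)

    failed = []
    for node in NODES:
        if run(node, "kpatch", "load", PATCH).returncode != 0:
            failed.append(node)
            continue
        # verify the patch shows up as loaded before declaring success
        if "livepatch-lu-xxxx" not in run(node, "kpatch", "list").stdout:
            run(node, "kpatch", "unload", PATCH)   # roll back on verification failure
            failed.append(node)

    print("patch failed on:", failed or "no nodes")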

Presenter
Arshad Hussain

Seagate
Arshad Hussain is a Linux kernel programmer. He joined the Seagate Lustre team at the end of 2015 and works on Lustre bugs and improvements.

Authors


Wednesday May 31, 2017 2:20pm - 2:50pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

2:50pm EDT

Client IO parallelization
In the HPC landscape, the number of cores in client machines is continually increasing. However, even in parallel applications, single-thread Input/Output (I/O) remains very common. While Lustre was originally optimized for multiple I/O operations at the same time, single-threaded applications cannot utilize this optimization if a single core is slow. Therefore, the time cost of single thread I/O in parallel applications cannot be reduced by simply adding more compute nodes.

There are optimizations that can be made in the Lustre client to provide a significant performance gain for single-threaded applications, or for parallel applications containing large amounts of single-threaded I/O. Each stage of the Lustre I/O flow has been analyzed, and an overview of the potential solution will be presented; it will be a critical improvement for utilizing the many-core and multiple-network-interface architectures that we see in clusters today.

A proof-of-concept solution has been developed and tested with a real-world Hybrid Coordinate Ocean Model (HYCOM) application to demonstrate the significant performance gains that can be realized on many-core architectures. The HYCOM application performs a large amount of data reads when launched, before beginning intensive compute operations to analyze the data. If the I/O process on a many-core architecture is slow, it extends the total run time and is a primary bottleneck to increasing application throughput. With the proof-of-concept solution that will be presented, the I/O time for this application was significantly improved and the application’s performance was no longer restricted by I/O operations.

This development is being targeted for the community 2.10 release and full details can be read in the JIRA ticket - https://jira.hpdd.intel.com/browse/LU-8964

Presenter
Dmitry Eremin

Senior Software Engineer, Intel
Dmitry is on the support and development team of the High Performance Data Division at Intel. He is mostly focused on adapting Lustre to new Intel hardware such as Xeon Phi, Omni-Path, etc.


Wednesday May 31, 2017 2:50pm - 3:20pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

3:20pm EDT

Break
Wednesday May 31, 2017 3:20pm - 3:40pm EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

3:40pm EDT

CSCS site update
CSCS site update on the two Lustre file systems: one designed for performance and the other focused on capacity.

CSCS will provide a description of the environment and the main changes after last year's upgrade.

Presenter
Authors


Wednesday May 31, 2017 3:40pm - 4:10pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

4:10pm EDT

Scaling parallel modeling of agroecosystems with Lustre
Agro-IBIS is a traditional, serial simulation code written in Fortran, which falls into the category of Dynamic Global Ecosystem Models. The code simulates coupled water, energy, carbon, and nitrogen cycles and their interactions on a discretized landscape. The code was developed to decompose the land surface into a grid and process each cell in serial. As data resolution and the precision of specifying model forcing (e.g., land management decisions by farmers) increase, the model run-times become prohibitively long. To date, it has not been possible to efficiently conduct sensitivity analyses or ensemble forecasts, both of which are needed to improve resource management under present and future conditions.

In an attempt to scale the Agro-IBIS code to much larger problem sizes and higher resolution than previously attempted, this research team has used a straightforward domain decomposition on the grid problem, to allow the Agro-IBIS code to solve models on subsets of the complete problem space, and developed a parallel, C++ post-processing code to manage the results of each independent simulation and combine those into a coherent output.

The I/O model in Agro-IBIS was initially designed to run in serial, with multiple output streams to account for the time-evolving solutions of the many variables representing the physical, chemical, and biological processes occurring in each simulation. The I/O pattern of file access and management required significant tuning to implement Agro-IBIS as an effective parallel application on the BigRed II Cray XE6/XK7 supercomputer and Lustre file systems at Indiana University. Initial runs on BigRed II, computing directly against the 5 PB Data Capacitor 2 (DC2) Lustre file system, were capable of driving IOPS and write throughput to approximately 25,000 and 24 GB/s, respectively, in isolated testing. These initial runs were slowing the system to unacceptable levels of responsiveness while it was in production, and the I/O was proving to be the single biggest bottleneck to performance.

With the newly parallelized Agro-IBIS, even with refined efficiency to reduce read and write operations in the code using on-node memory, the filesystem performance of DC2 was the limiting factor. We needed a system that would support development of a working solution and that could handle the many-file I/O of this parallelized application. To that end, we constructed DCRAM, an SSD-based Lustre (v2.8) filesystem with 35 TB of storage, two MDS nodes, and six OSS nodes. The 2 MDTs and 12 OSTs each consist of four 800 GB Intel SSDs in a striped RAID-0 configuration for the highest possible performance. Previous testing had shown that sets of 4-drive OSTs in pairs on each OSS gave the best performance. DCRAM, like DC2, is connected via InfiniBand to the BR2 Gemini interconnect through our LNET routing setup. The current approach exclusively uses DCRAM for I/O, with intermediate writes happening on compute nodes and aggressive read-buffer caching via the netCDF library for input/boundary condition data.

In summary, our parallelization of the code exposed excessive I/O operations that were not important when being run in serial. Even with optimization of code and memory management to reduce I/O, the DC2 filesystem was not capable of supporting the I/O demands of the parallelized code without significant performance losses. To that end, the DCRAM SSD configuration was developed. The combined hardware and software modification reduced runtime for a 60-yr simulation across the Mississippi River Basin from about 10 days (single node) to 6 hours (512 compute nodes on BigRed II).
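
The row-block decomposition used to parallelize the grid can be sketched in a few lines; the grid size and worker count below are illustrative, not the actual Agro-IBIS configuration:

    # Split a latitude/longitude grid into contiguous row blocks, one per worker,
    # so each worker runs the serial Agro-IBIS code on its own subdomain.
    def decompose(nrows, nworkers):
        base, extra = divmod(nrows, nworkers)
        blocks, start = [], 0
        for w in range(nworkers):
            size = base + (1 if w < extra else 0)   # spread the remainder evenly
            blocks.append((start, start + size))    # [start, stop) rows for worker w
            start += size
        return blocks

    print(decompose(nrows=1000, nworkers=8))   # e.g. [(0, 125), (125, 250), ...]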

Presenter

Shawn Slavin

Indiana University Pervasive Technology Institute

Authors
H.E. Cicada Dennis

Research Software Developer, Indiana University

Robert Henschel

Director, Pervasive Technology Institute at Indiana University

Stephen Simms

Indiana University Pervasive Technology Institute

Tyler Balson

Indiana University Pervasive Technology Institute

Adam Ward

Indiana University Pervasive Technology Institute

Yuwei Li

Indiana University Pervasive Technology Institute



Wednesday May 31, 2017 4:10pm - 4:40pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

4:40pm EDT

Intel Enterprise Edition 3.0 for Lustre on Sonexion 3000
The Argonne Leadership Computing Facility (ALCF) installed and accepted a Cray XC40 named Theta at the end of 2016. Along with Theta, ALCF also installed a Cray Sonexion 3000 system that consists of 28 Scalable Storage Units (SSU) in 4 cabinets with an aggregate capacity of 10.8 PB and a peak bandwidth of 240 GB/s as measured via IOR. The Sonexion is connected to the Theta LNET routers via InfiniBand FDR using a Mellanox SX6536 switch. The Sonexion was deployed with the Cray/Seagate variant of Lustre version 2.5.1. In April 2017, we will upgrade our Sonexion storage to Intel Lustre Enterprise Edition 3.0. In this presentation, we will describe the process of upgrading from the Cray/Seagate Lustre to the Intel Lustre version. We will describe any issues that arose and the steps that were taken to correct problems. Intel Enterprise Edition 3.0 is based on Lustre 2.7 with OpenZFS, but our deployment will continue to use ldiskfs on top of GridRAID. We will compare and contrast the performance between the different versions using several workloads based on IOR, mdtest, and HACC-IO. The features that are new or have been changed will be examined. Any relevant differences in the support model will also be reviewed.

Presenter

Kevin Harms

Argonne National Laboratory

Authors

Ben Allen

Argonne National Laboratory

Gordon McPheeters

Argonne National Laboratory

Mark Fahey

Argonne National Laboratory



Wednesday May 31, 2017 4:40pm - 5:10pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

5:10pm EDT

OpenStack Cinder drive for Lustre
Higher storage capacity and throughput needs in High Performance Computing demand utilization of the Lustre file system not only as primary (scratch) storage, but also as a long-term, permanent shared filesystem and archiving system, requiring features and capabilities beyond traditional HPC storage.

The industry's needs require compute, network, and storage resources in a converged datacenter with multiple virtualization and security techniques that simplify the overall architecture while reducing operational costs. Many organizations have started building these private clouds on HPC systems leveraging OpenStack, providing a fusion of HPC and Cloud environments.

OpenStack is an open source software stack used to build and manage virtualized Cloud, HPC, and BigData infrastructure, both private and public. OpenStack consists of many components that manage compute, network, and storage resources via an integrated set of APIs and tools such as GUI dashboards. On the storage layer, OpenStack supports two types of implementations: Swift, used for object storage, and Cinder for block storage access.

Cinder drivers vary according to the storage layer to be used, and multiple implementations are available that rely on the storage hardware, local filesystems, or network filesystems. However, there is no Cinder driver for Lustre yet.

DDN has ported the OpenStack Cinder driver for Lustre and submitted all the patches to the upstream OpenStack branch, contributing to both the Lustre and OpenStack communities. The Lustre Cinder driver provides a block device storage service for OpenStack. OpenStack administrators are now able to create block devices for virtual machines using the Lustre Cinder driver, thus storing VM images on a parallel, high-performance file system.

In this talk we will present some use cases of OpenStack with the Lustre Cinder driver and demonstrate how it works and what the current functionality is. The presentation will also cover the design of the Lustre Cinder driver and some preliminary benchmark results.

Presenter
Authors


Wednesday May 31, 2017 5:10pm - 5:40pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

6:00pm EDT

Dinner and a Movie - sponsored by Intel

Sponsored by Intel.

Dinner at 6pm to include:
Seasonal Roasted Vegetable Antipasto (VN,GF)
Artisanal Cheeses and Crackers (D,VG)

House Salad with Carrot Ribbons, Grape Tomatoes, and Cucumbers (VN,GF)
Traditional Corn Muffins (VG,D)

BBQ Smoked Beef Brisket (Sandwich portion with bun) with Sauce
Pretenderloin (vegan pork tenderloin) with bun and condiments
Macaroni and Cheese (VG, D)
Vegan Baked Beans - no pork (VN,GF,S)

Baker's Choice Mini Cupcakes

Beverages:
Filtered Water
Sparkling Fruit Punch
Ginger, Mint & Local Honey Tisane

Bloomington Brewing Company Sixtel beers
Bloomington Brewing Company Specialty Sixtel beers
Black Ridge Pinot Grigio
Black Ridge Chardonnay
Black Ridge Pinot Noir
Black Ridge Shiraz

Movie at 7:30PM:
Breaking Away is a 1979 American coming-of-age comedy-drama film produced and directed by Peter Yates and written by Steve Tesich. It follows a group of four male teenagers in Bloomington, Indiana, who have recently graduated from high school. The film stars Dennis Christopher, Dennis Quaid, Daniel Stern, Jackie Earle Haley, Barbara Barrie, Paul Dooley, and Robyn Douglass.

Breaking Away won the 1979 Academy Award for Best Original Screenplay for Tesich, and received nominations in four other categories, including Best Picture. It also won the 1979 Golden Globe Award for Best Film (Comedy or Musical), and received nominations in three other Golden Globe categories.

As the film's young lead, Christopher won the 1979 BAFTA Award for Most Promising Newcomer and the 1979 Young Artist Award for Best Juvenile Actor, as well as getting a Golden Globe nomination as New Star of the Year.

The film is ranked eighth on the list of America's 100 Most Inspiring Movies compiled by the American Film Institute (AFI) in 2006. In June 2008, AFI announced its "Ten Top Ten," the best ten films in ten classic American film genres, after polling over 1,500 people from the creative community. Breaking Away ranked as the eighth best film in the sports genre.

Tesich was an alumnus of Indiana University Bloomington. The film was shot in and around Bloomington and on the university's campus.


Sponsors


Wednesday May 31, 2017 6:00pm - 9:00pm EDT
IU Auditorium 1211 E 7th St, Bloomington, IN, 47405
 
Thursday, June 1
 

7:30am EDT

Breakfast
Network with your peers and visit the Platinum and Gold sponsor tables as you enjoy:
Biscuits and Gravy
Scrambled Eggs
Shredded Cheese
Breakfast Potatoes
Maple Sausage Links
Smoked Bacon
French Toast with syrup and fruit toppings
Seasonal Fruit
Coffee, decaf, tea, assorted juices, water

Sponsors
Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated...


Thursday June 1, 2017 7:30am - 9:00am EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

7:30am EDT

Transportation to/from Conference Center (IMU)
Conference Shuttle
Leave Springhill at 7:30am
Leave Hyatt at 7:45am
Drop-off at IMU

Leave Springhill at 8:15am
Leave Hyatt at 8:30am
Drop-off at IMU

Leave Springhill at 9:00am
Leave Hyatt at 9:15am
Drop-off at IMU

Sponsors
Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated...


7:30am EDT

Registration
Pick up your badge and program.

Chairs
Robert Ping

Program Management Specialist, Indiana University
In 2024, he accepted program management responsibilities for Jetstream2 and the Midwest Research Computing and Data Consortium, where he will facilitate the success of the National Science Foundation-sponsored programs. As RDA-US Program Manager, he oversees the multiple projects within...

Sponsors

Thursday June 1, 2017 7:30am - 5:30pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:00am EDT

LUG17 Day 2 Opening remarks
Sponsors
Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated...


Thursday June 1, 2017 9:00am - 9:10am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:10am EDT

Growing up with Lustre
DDN has been involved with Lustre almost since the start. Over these 15 years, DDN has come a long way from supplying storage hardware components to Lustre users to providing a full range of Lustre-related solutions. LUG is a technical conference and marketing presentations from sponsors are always boring, so rather than telling you how great the DDN Lustre offering is, I will use this time to review our experiences growing up with Lustre and what we learned along the way. I will also discuss whether we should measure the age of Lustre in kangaroo, whale, elephant, human, or tortoise years, which is an important consideration for our future engagement with the Lustre file system. I will conclude with the observation that building, deploying, and supporting globally-consistent-open-source-distributed-file-systems is hard, and while we are doing whatever we can to make it easier, this is unlikely to change fundamentally in the near future.

Presenter
Sponsors


Thursday June 1, 2017 9:10am - 9:25am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:30am EDT

OpenSFS Board Elections

Thursday June 1, 2017 9:30am - 10:30am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

10:30am EDT

Break
Thursday June 1, 2017 10:30am - 10:50am EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

10:50am EDT

LNet network health
LNet Multi-Rail - which is included in the upcoming community Lustre 2.10 release - has implemented the ability for multiple interfaces to be used on the same LNet network or across multiple LNet networks utilizing the underlying homogeneous or heterogeneous fabrics. The LNet Health feature, which is targeted for the community Lustre 2.11 release, will add the ability to resend messages across different interfaces when interface or network failures are detected.

The implementation of this feature at the LNet layer allows LNet to mitigate communication failures before passing the failures to upper layers for further error handling. To accomplish this, LNet Network Health depends on health information reported by the underlying fabrics such as MLX and OPA, as well as monitoring the transmit timeouts maintained by the LND.

This implementation also provides the ability for LNet to retransmit messages across different types of interfaces. For example, if a peer has both MLX and OPA interfaces and a transmit error is detected on one of them then LNet can retransmit the message on the other available interface.

LNet Network Health will monitor three different types of failures, each dealt with separately at the LNet layer:
  • Local interface failures as reported by the underlying fabric to the LND.
- LND will notify LNet of the failure and LNet will mark the health of the interface. Future LNet messages will not be sent over that interface. The interface will be added on a queue and will be pinged periodically to attempt recovery.
- LNet will attempt to resend the message on a different local interface if one is available. If no interfaces are available for that peer, then the message fails and the failure is reported to PtlRPC, which will commence its failure and recovery operations.
  • Remote interface failures as reported by the remote fabric.
- LNet will demerit the health of the remote interface, thereby reducing its overall selection priority. If a remote interface is consistently down, it will be marked as down and will not be selected. It will be added to the recovery queue and pinged on regular intervals to determine if it can continue to be used.
- LNet will attempt to resend the message to a different interface for the same peer. If no interfaces are available for that peer, then the message fails and the failure is reported to PtlRPC, which will commence its failure and recovery operations.
  • Network timeouts.
- LNet will demerit the health of both the local and remote interfaces, since it’s not deterministic where the problem is.
- LNet will attempt to resend the message over a new pathway altogether. If none are available then the message fails and the failure is reported to PtlRPC, which will commence its failure and recovery operations.

In all failure cases, LNet will continue attempting to retransmit the
message up until the peer_timeout expires. If the peer_timeout expires
and a message has not been successfully sent to the next-hop, then
the message fails and LNet reports the failure to PtlRPC.
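
The selection logic can be pictured with a toy model (illustrative only, not the actual LNet code): each interface carries a health value that is demerited on failure, and sends prefer the healthiest available interface until the peer timeout expires.

    # Toy model of the health-based selection described above (not LNet itself):
    # demerit an interface on failure and retry on the healthiest remaining one.
    import time

    class Interface:
        def __init__(self, name, health=100):
            self.name, self.health = name, health

    def send(message, interfaces, transmit, peer_timeout=10.0):
        deadline = time.monotonic() + peer_timeout
        while time.monotonic() < deadline:
            usable = [i for i in interfaces if i.health > 0]
            if not usable:
                break
            best = max(usable, key=lambda i: i.health)   # prefer the healthiest interface
            if transmit(message, best):                  # attempt the send on that interface
                return True
            best.health -= 25                            # demerit the interface on failure
        return False   # give up; in Lustre the failure would be reported to PtlRPC

    # usage: pretend the first interface always fails and the second succeeds
    ifaces = [Interface("o2ib0"), Interface("tcp0")]
    print(send("msg", ifaces, transmit=lambda m, i: i.name == "tcp0"))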

Presenter
Amir Shehata

Intel
Amir has been working in HPDD at Intel on the Lustre Networking module (LNet) since March 2013. He worked on the Dynamic LNet Configuration, Multi-Rail, and LNet Health/Resiliency features.


Thursday June 1, 2017 10:50am - 11:20am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

11:20am EDT

How to use perf/tracepoint with Lustre
When Lustre was first started, it developed its own debugging infrastructure, since at that time Lustre was multi-platform and Linux was lacking in its debugging infrastructure. Over time, Linux has created a robust tracing and performance accounting system that now exceeds what Lustre provides. Lustre has also evolved into a Linux-only product, which has eliminated the barrier to adopting Linux tracepoints.

The secondary reason for this work is that the Linux kernel maintainers do not like the current Lustre debugging infrastructure. While one of the goals of bringing tracepoint support to Lustre is to enable the lctl utility to work seamlessly with tracepoints, so as to minimize the transition for the user, I would like to present how and why using perf is a better option. Instructions will be given on how to set up an environment to maximize the usage of perf, along with suggestions for other utilities, such as the dwarf package, to help enhance the debugging material available to the user. With this it will be shown that not only can Lustre information be collected, but a complete stack analysis can be done. The discussion will also cover pitfalls in setting it up. Advanced features such as flame graphs will be shown, along with how they can be used to help diagnose problems.
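
As a minimal, hedged example of the workflow: the event glob below is a placeholder, since the real tracepoint names depend on the patches discussed in this talk and can be discovered with perf list.

    # Record system-wide tracepoint events with perf for a short window, then
    # print the raw trace. The "lustre:*" event group is a hypothetical placeholder.
    import subprocess

    EVENTS = "lustre:*"   # placeholder; run `perf list` to see what is actually exposed
    subprocess.run(["perf", "record", "-a", "-e", EVENTS, "--", "sleep", "10"], check=True)
    subprocess.run(["perf", "script"], check=True)   # dump the recorded events for analysis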

Presenter

James Simmons

Storage Engineer, Oakridge National Labs


Thursday June 1, 2017 11:20am - 11:50am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

11:50am EDT

Lustre 2.11 and beyond
Andreas will provide an overview of planned upcoming features in the Lustre 2.11 and 2.12 releases. Specific attention will be given to those features that do not have their own presentations at LUG and/or are of significant importance to a wider audience. Some features that are being investigated for the 2.13 release will also be described.

Presenter
Andreas Dilger

Principal Lustre Architect, Intel Corporation
Andreas has been involved in the development of Lustre since its inception. From early prototypes in 2000, before the founding of Cluster File Systems, through Sun Microsystems, Oracle, and Whamcloud over the next fifteen years, Andreas was one of the lead Lustre developers. After...


Thursday June 1, 2017 11:50am - 12:20pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

12:20pm EDT

Lunch
Network with your peers and visit the Platinum and Gold sponsor tables as you enjoy:
Texas-style BBQ with:  BBQ Chicken breast and pulled pork, Macaroni and Cheese, Baked Beans, Coleslaw and Potato Salad

Caesar Bar with: Marinated Grilled Chicken, Seasoned Beef Strips and Grilled Shrimp, Fresh Romaine, Shredded Parmesan Cheese, Seasoned Croutons, and Caesar Dressing

Marble Pound Cake

Beverage service

Sponsors
Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated...


Thursday June 1, 2017 12:20pm - 1:30pm EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

1:30pm EDT

Lustre management with Ansible
The CADES project at Oak Ridge National Lab deploys several Lustre file systems across multiple compute platforms. To simplify system administration, diskless Lustre servers are used with root file system images being managed by the GeDI software. Configuration changes to the root file system images are tracked using Ansible. Ansible's support for running playbooks in chroot directories is convenient for managing the Lustre server root images. In addition, Ansible playbooks can be used for common Lustre administration tasks such as formatting the file system and starting/stopping storage targets.

This presentation will provide a brief overview of the CADES project and the Lustre file systems that are deployed. We will discuss how Ansible is used within the CADES project, as well as the advantages and disadvantages of using Ansible to manage chroot directories. Example playbooks will be shown to illustrate how they are used in managing the Lustre server root file system images. We will also discuss how Ansible can be used as a day-to-day Lustre administration tool by using playbooks to format a file system or manage OST failover.
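
A hedged sketch of the chroot workflow follows; the image paths and playbook name are hypothetical, while ansible_connection=chroot is Ansible's standard chroot connection plugin:

    # Hypothetical sketch: point Ansible's chroot connection plugin at each
    # Lustre server root image and apply one common playbook to all of them.
    import os, subprocess, tempfile

    IMAGE_ROOTS = ["/diskless/images/lustre-oss", "/diskless/images/lustre-mds"]  # placeholders

    lines = ["[lustre_images]"]
    for root in IMAGE_ROOTS:
        # ansible_connection=chroot makes tasks run inside the image, not over SSH
        lines.append(f"{os.path.basename(root)} ansible_host={root} ansible_connection=chroot")

    with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as inv:
        inv.write("\n".join(lines) + "\n")
        inventory = inv.name

    subprocess.run(["ansible-playbook", "-i", inventory, "lustre-server-image.yml"], check=True)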

Presenter
Rick Mohr

Senior HPC System Administrator, National Institute for Computational Sciences
Rick is a senior HPC systems administrator and storage team lead at the University of Tennessee’s National Institute for Computational Sciences (NICS). Rick has worked in the HPC field for over 16 years and has worked primarily with Lustre during the last 7 years. He currently...

Authors

Christopher Layton

Oak Ridge National Laboratory
Nathan Grodowitz

HPC Admin, Oak Ridge National Labs
Nathan has worked in HPC for 10 years, starting at Mississippi State's HPC^2 laboratory, then working for Cray serving the DOD, and now works for the CADES project at Oak Ridge National Labs. His main duties involve designing and managing the CADES Scalable HPC cluster, CADES HPC resources...



Thursday June 1, 2017 1:30pm - 2:00pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

2:00pm EDT

Scalable high availability for Lustre with Pacemaker
Pacemaker, together with Corosync, is often used to launch Lustre services
and coordinate Lustre failover. However the common approaches to employing
Pacemaker/Corosync presented LLNL with two main challenges: scalability, and
limited compatibility with stateless server nodes. Corosync has algorithmic
limitations that constrain the normal Pacemaker/Corosync cluster size to
sixteen nodes or less. Pacemaker's use of its main configuration file
as an active store for system state means that it assumes that every node
has its own persistent storage, which is not the case with stateless servers.

The common solution in the Lustre world to the scalability issue is to have
many Pacemaker/Corosync "clusters" within one Lustre cluster. For instance,
each OSS failover pair would form an independent Pacemaker/Corosync cluster.
This works, but means that monitoring the Lustre cluster state requires looking
at many Pacemaker/Corosync systems rather than a single installation. It also
precludes more advanced Pacemaker abilities such as global ordering of Lustre
service startup (i.e. start the MGS before MDS and OSS).

We will present a solution for employing the lesser-known pacemaker-remote
functionality. This solution works well with stateless servers, and has
allowed LLNL to field a production cluster of 54 nodes controlled by a single
Pacemaker/Corosync instance.

Presenter

Christopher Morrone

Computer Scientist, Lawrence Livermore National Laboratory


Thursday June 1, 2017 2:00pm - 2:30pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

2:30pm EDT

Lustre integrated policy engine
Lustre file system features, like HSM and OST pools based on SSD, have enabled
multiple new use cases, which makes data management of the Lustre file system a new
daily task. The Robinhood Policy Engine is able to do several kinds of data
management based on pre-configured rules and has been confirmed as a versatile
tool to manage large Lustre file systems. However, using Robinhood requires an
external server with high-end CPUs, memory, and adequate storage for the
back-end RDBMS. It also relies on the Lustre Changelogs feature, adding additional
administration effort. We (DDN) want to propose a different, more integrated
approach, by developing a new policy engine named LIPE (Lustre Integrated
Policy Engine).

LIPE scans the MDTs/OSTs directly and maps the required information in memory,
avoiding the need for an external persistent data storage. The implemented
algorithms don’t rely on Lustre Changelog, simplifying the overall
administration.

The core component of the Policy Engine is an arithmetic unit that can calculate
the values of arithmetic expressions. An expression is either 1) a 64-bit
integer, 2) a constant name, 3) a system attribute name, 4) an object
attribute name, 5) special function with configurable argument, 6) two
expressions that are combined together by an operator. By defining expressions
with different attributes, constants and operators, users have a flexible and
powerful way to define rules to match objects in MDTs/OSTs.

When LIPE runs against an MDT or OST device, it scans all of the inodes on the
device and attempts to match each inode against the rules. A rule matches if
the value of its expression is non-zero. When a rule matches, the action
defined in that rule is triggered against the inode.

Different types of actions can be defined in rules, e.g. HSM actions,
counter-increment actions, remove actions, copy actions, etc. Except for
counter increments, actions are handled by agents, which can be implemented
as LIPE plugins. Thus, new types of actions can easily be added for new
purposes.

In order to provide clearer functional classification, multiple rules can be
grouped together into a rule group, organized as a sorted list. When scanning
an inode, as soon as one rule in a group matches that inode, evaluation of the
entire group for that inode is finished; the remaining rules lower in the list
are not evaluated against that inode.
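
A minimal sketch of this first-match-wins evaluation of rule groups (illustrative only; the rule, group, and counter names are hypothetical, not LIPE's interfaces):

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    expression: Callable[[dict], int]   # returns non-zero when the rule matches
    action: Callable[[dict], None]      # e.g. an HSM, remove, or counter action

@dataclass
class RuleGroup:
    name: str
    rules: List[Rule] = field(default_factory=list)  # sorted by priority

def scan_inode(inode: dict, groups: List[RuleGroup]) -> None:
    """Evaluate every rule group against one inode.

    Within a group, evaluation stops at the first matching rule; rules lower
    in the sorted list are not evaluated for this inode.
    """
    for group in groups:
        for rule in group.rules:
            if rule.expression(inode):
                rule.action(inode)
                break  # remaining rules in this group are skipped

# Example: a size-distribution group whose first matching bucket wins.
counters: Dict[str, int] = {"small": 0, "large": 0}
size_group = RuleGroup("size_distribution", [
    Rule("small", lambda i: int(i["size"] < (1 << 20)),
         lambda i: counters.__setitem__("small", counters["small"] + 1)),
    Rule("large", lambda i: 1,   # catch-all rule at the bottom of the list
         lambda i: counters.__setitem__("large", counters["large"] + 1)),
])

for inode in ({"size": 4096}, {"size": 1 << 30}):
    scan_inode(inode, [size_group])
print(counters)   # {'small': 1, 'large': 1}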

Multiple rule groups can be defined in a LIPE configuration set. Usually, these
rule groups focus on different aspects; for example, one rule group for HSM,
another for size distribution. Other examples of rule groups are type
distribution, access time distribution, modification time distribution,
stripe count distribution, temporary files, location on OSTs, location on OST
pools, etc.

Editing LIPE’s configuration file can be challenging, since hundreds of rules
may be defined in a single configuration file. A web-based GUI has been
developed to simplify the configuration and usage of LIPE.

Rule evaluation and LIPE’s device scanning mechanism are implemented
efficiently enough to drastically improve the MDT/OST scanning process.
Un-cached scanning of a single SSD can exceed a rate of 1 million inodes per
second, and if the server has enough memory to cache this data, the scanning
rate can reach 50+ million inodes per second.

During the presentation, we'd like to provide more details on the design and
implementation, as well as some preliminary benchmark results. Additionally,
possible LIPE use cases will be introduced.

Presenter


Thursday June 1, 2017 2:30pm - 3:00pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

3:00pm EDT

Break
Thursday June 1, 2017 3:00pm - 3:20pm EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

3:20pm EDT

Take back control with Robinhood v3
Robinhood Policy Engine is a file system management tool developed at CEA. Its original purpose was to manage purges in scratch file systems, a task it can still perform powerfully, and later Lustre/HSM, where it plays a crucial role. Robinhood maintains a database (MySQL/MariaDB, for now) containing information about all entries of the file system. Each entry (file, directory, symlink, etc.) is associated with numerous (flexible) tags and values. Complex state machines can be defined to manage the entries' lifecycle and automatically run policies at large scale. As a result, the tool can now handle extremely diverse management and reporting tasks on large file systems.

After recalling the basic concepts and features of the tool, we will present v3 in more depth. This latest version, which we released last year, introduced "generic policies" and a plugin mechanism. Combined, these features allow innovative usages, some of which we have implemented and many more that remain to be designed and shared. We will illustrate how system administrators can use "generic policies" for fine-grained reporting or to define new workflows using configuration only, and how developers, vendors in particular, can enrich their solutions with new plugins.

Robinhood is driven by a vibrant community, which works exceptionally well. Robinhood developers have a strong Lustre background, either as sysadmins or core developers. We will present the numerous changes that we contributed back to Lustre to improve the efficiency and flexibility of Robinhood. The last section of the talk will focus on ongoing development and what to expect in future versions of the Robinhood Policy Engine.

Presenter

Thursday June 1, 2017 3:20pm - 3:50pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

3:50pm EDT

Improving overall Robinhood performance for use on large scale deployments
We present the details of real-world deployments of the Robinhood Policy Engine for large-scale HPC systems. We note what changes, improvements, and additions have been required in the areas of changelog ingest, code stability, and database interaction to allow for successful at-scale deployments. As part of this effort we introduced new features, rewrote large parts of the core to allow for a plugin-style architecture, built a preprocessor plugin, and improved the stability of the code.

Presenter
C

Colin Faber

Sr. Staff Engineer, Seagate
* Original CFS engineer working on Lustre * Over a decade of community involvement * Helped construct some of the largest file systems on the planet * Current role as Product architect for the A200 policy engine system * Current focus is on policy engine technology, specifically... Read More →


Thursday June 1, 2017 3:50pm - 4:20pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

4:20pm EDT

An extensible and scalable Lustre HSM Agent and Data Movers
This talk will present a technical overview of and early user experience from Cambridge University's use of the new Lustre HSM Agent and Data Movers, codenamed Lemur. Lemur is an open source project available at https://github.com/intel-hpdd/lemur and can be used in conjunction with any recent Lustre release (2.6+).

Lustre HSM (Hierarchical Storage Management) was developed primarily by CEA and was first introduced in the 2.5 series of Lustre releases. HSM gives filesystem administrators better control over storage utilization by allowing infrequently used file data to be moved off expensive Lustre storage onto less-expensive secondary storage.
Unlike the in-tree POSIX copytool, which combines the Lustre HSM agent and data movement functionality in a single binary, the Lemur approach separates these concerns with a modular design that allows new data movement targets to be added more easily. The goal is to enable and encourage a larger ecosystem of Lustre HSM storage tiers (e.g. Amazon S3, Scality, HPSS, etc.) and secondary data movement possibilities.
Using modern technologies (the Go programming language, gRPC, etc.), highly performant data movers can be developed quickly to focus on the specifics of new HSM storage tiers without requiring an in-depth understanding and reimplementation of the Lustre HSM agent functionality.

The current releases of the Lemur project include a common Lustre HSM Agent, which communicates with modular data mover plugins using gRPC, and two data mover plugin implementations for POSIX and AWS S3-compatible HSM storage tiers.
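
For orientation only, the following Python sketch illustrates the agent/plugin split described above. Lemur itself is written in Go and its agent talks to mover plugins over gRPC; the class and method names here are hypothetical analogues, not Lemur's actual API.

from abc import ABC, abstractmethod

class DataMover(ABC):
    """Interface a mover plugin implements for one HSM storage tier."""

    @abstractmethod
    def archive(self, fid: str, archive_id: int) -> None: ...

    @abstractmethod
    def restore(self, fid: str, archive_id: int) -> None: ...

    @abstractmethod
    def remove(self, fid: str, archive_id: int) -> None: ...

class PosixMover(DataMover):
    """Copies file data to a POSIX directory tier (e.g. an NFS or tape cache)."""
    def __init__(self, root: str):
        self.root = root
    def archive(self, fid, archive_id):
        print(f"copy {fid} -> {self.root}")   # a real mover streams the file data
    def restore(self, fid, archive_id):
        print(f"copy {self.root}/{fid} -> Lustre")
    def remove(self, fid, archive_id):
        print(f"delete {self.root}/{fid}")

class Agent:
    """Receives HSM action requests from Lustre and dispatches them to movers."""
    def __init__(self):
        self.movers = {}            # archive_id -> DataMover
    def register(self, archive_id: int, mover: DataMover):
        self.movers[archive_id] = mover
    def handle(self, action: str, fid: str, archive_id: int):
        getattr(self.movers[archive_id], action)(fid, archive_id)

agent = Agent()
agent.register(1, PosixMover("/mnt/hsm_tier"))
agent.handle("archive", "0x200000401:0x1:0x0", 1)
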
The University of Cambridge has been using Lustre for many years as the primary high-performance storage for their research computing clusters. We are currently building a new storage platform for the wider University based around Intel Enterprise Edition Lustre and HSM as a general-purpose research storage area that is continually being archived for disaster recovery purposes. We have been using the Lemur copytool from an early stage in the project in conjunction with the Robinhood Policy Engine and have been impressed by its speed and stability over the in-tree POSIX copytool. Our design currently involves a 1.2PB Lustre filesystem built on Dell storage hardware that is accessed through dedicated gateway machines by users, and a HSM backend tier composed of a Tape archive and a 300TB disk cache that is combined into a single unified POSIX filesystem by QStar Archive Manager.

Presenter
M

Matt Rásó-Barnett

Cambridge University
JH

John Hammond

Software Engineer, Intel Corporation
John Hammond is a negative eighth level Lustre* morlock at Intel HPDD. His interests include reviewing, deleting, breaking, and occasionally fixing your code. This is his sixth time speaking at LUG.

Authors
W

Wojciech Turek

Cambridge University



Thursday June 1, 2017 4:20pm - 4:50pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

4:50pm EDT

High speed distributed data mover for Lustre HSM
The Lustre HSM data mover, known as the copytool, facilitates the transfer (i.e., archival) of file data to secondary storage. Multiple copytools can be registered with Lustre; however, the transfer of a single Lustre file is limited to only one copytool (which is currently single-threaded). The current copytool implementation works well for small files but has performance and scalability limitations for large files. It may pose challenges when moving very large files or a significantly large number of files. This talk introduces the Distributed Data Mover, a tool being developed at Seagate to address these challenges.

For a large file, the data migration can happen over multiple nodes and in multiple threads, all in parallel. We present the architecture and design of the new, improved data mover, which enables multiple coordinated copytools to share a file copy job and perform the data copy in parallel. The distributed data mover offers a faster, more scalable copytool for Lustre.

The architecture of the Distributed Data Mover includes a common request queue for all copy jobs. When a copytool receives a copy request from Lustre, it does not process it immediately as-is. Instead, the request is split into parts depending on its size and forwarded to a shared queue. In this way a single copy request may be represented as several smaller copy jobs on the shared queue. All of the copytool instances, which run on different nodes, are linked together through connections to the shared queue. Each copytool has a set of threads for processing the copy jobs pulled from the queue. When a copytool has one or more threads available for work, it simply pulls requests from the shared queue and removes them from it. In this way, jobs are submitted to worker threads to execute in parallel.

Completed copy jobs are placed on another shared queue, the response queue. The copytool that split the original copy request from Lustre watches for the results of the copy jobs (i.e., the parts of the copy request) and reports progress back to Lustre as they complete.

The overall mechanism provides parallel, high-speed data transfer of either a single large file or a large number of files by distributing the copy job across several copytools running on different nodes. It also maintains the integrity of the copy request by keeping responsibility for the copy with the original copytool that received the request from Lustre, thus providing a channel for recovery in case a copy job fails.
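
The split / shared-queue / response-queue flow described above can be sketched roughly as follows (illustrative only; in-process Python queues and threads stand in for the distributed shared queues, and the chunk size and job fields are assumptions, not Seagate's design):

import queue
import threading

CHUNK = 64 << 20                           # split copy requests into 64 MiB parts
request_q: "queue.Queue" = queue.Queue()   # shared queue of copy jobs
response_q: "queue.Queue" = queue.Queue()  # shared queue of completed jobs

def split_request(fid: str, size: int) -> int:
    """Split one Lustre copy request into smaller jobs on the shared queue."""
    nparts = max(1, (size + CHUNK - 1) // CHUNK)
    for part in range(nparts):
        offset = part * CHUNK
        length = min(CHUNK, size - offset)
        request_q.put({"fid": fid, "offset": offset, "length": length})
    return nparts

def worker():
    """A copytool worker thread: pull jobs, copy the byte range, report back."""
    while True:
        job = request_q.get()
        if job is None:
            break
        # ... copy job["length"] bytes at job["offset"] to secondary storage ...
        response_q.put({"fid": job["fid"], "offset": job["offset"], "ok": True})
        request_q.task_done()

threads = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
for t in threads:
    t.start()

# The copytool that received the original request splits it, then watches the
# response queue and reports progress back to Lustre as parts complete.
nparts = split_request("0x200000401:0x2:0x0", 300 << 20)
done = [response_q.get() for _ in range(nparts)]
print(f"completed {len(done)}/{nparts} parts")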

Presenter
avatar for Ujjwal	Lanjewar

Ujjwal Lanjewar

Architect, Seagate Technology
A veteran of the storage industry with 18+ years in the development of distributed file system products. Has also architected and developed technologies around NAS, such as Global Namespace with Federated FS, Caching Solutions, File Data Protection with Replication, etc. Relatively new... Read More →

Authors
B

Bikrant Singh

Seagate Technology



Thursday June 1, 2017 4:50pm - 5:20pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

5:30pm EDT

Transportation to/from Conference Center (IMU)
Conference Shuttle
Leave IMU at 5:30pm
Drop-off at Springhill and Hyatt

Leave IMU at 6:15pm
Drop-off at Springhill and Hyatt

Sponsors
avatar for Research Technologies - Pervasive Technology Institute

Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated... Read More →


7:45pm EDT

Transportation to/from Pub Crawl
Pub Crawl Shuttle Stop 1:
  Bus1
Leave IMU at 7:45pm
  Bus2
Leave Springhill at 7:35pm
Leave Hyatt at 7:45pm
Arrive at Nick’s English Hut at 8pm

Pub Crawl Shuttle Stop 2:
2 buses leave Nick’s English Hut at 9pm
Arrive at Upland Brewing Company at 9:10pm

Pub Crawl Shuttle Home:
2 buses, leave Upland Brewing Company at 10:15pm
Arrive at Springhill/Hyatt/IMU


8:00pm EDT

Pub Crawl - sponsored by HGST+WARP
Network with others in the Lustre community as we tour a few of the brewpubs in Bloomington.  We'll start at 8pm at Nick's English Hut on Kirkwood (5th St).  Nick's was established in 1927 and is a staple among IU alumni and the Bloomington community.  Typical pub fare (chicken fingers, chicken wings, pizza, mushrooms, and fries) along with two drink tickets will be provided.  https://www.nicksenglishhut.com/

At 9pm we'll move to Upland Brewing Company.  Upland is anything but typical and was established in 1988.  We'll enjoy local brews and wine, as well as upscale food (Asian chicken skewers, vegetable and chicken chimichurri skewers, cheese bites, fresh fruit bites, sausage bites, chocolate porter bites, carrot cake bites, strawberry cheesecake bites, and mini-cheesecake bites). http://www.uplandbeer.com/about/

Sponsored by HGST+WARP.

Pub Crawl Shuttle Stop 1:
  Bus1
Leave IMU at 7:45pm
  Bus2
Leave Springhill at 7:35pm
Leave Hyatt at 7:45pm
Arrive at Nick’s English Hut at 8pm

Pub Crawl Shuttle Stop 2:
2 buses leave Nick’s English Hut at 9pm
Arrive at Upland Brewing Company at 9:10pm

Pub Crawl Shuttle Home:
2 buses, leave Upland Brewing Company at 10:15pm
Arrive at Springhill/Hyatt/IMU


Thursday June 1, 2017 8:00pm - 10:00pm EDT
Nick's English Hut and Upland Brewing Company
 
Friday, June 2
 

7:30am EDT

Breakfast

Network with your peers and visit the Platinum and Gold sponsor tables as you enjoy:
Scrambled Eggs with fresh herbs
Roasted Red Potato Home Fries with Peppers and Onions
Maple Sausage Links and Smoked Bacon
Smoked salmon Eggs Benedict
Homemade granola and yogurt parfaits
Strawberry and Ricotta Stuffed French Toast
Fresh Breakfast Breads and Pastries
Seasonal Fruit
Coffee, decaf, tea, assorted juices, water

Sponsors
avatar for Research Technologies - Pervasive Technology Institute

Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated... Read More →


Friday June 2, 2017 7:30am - 9:00am EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

7:30am EDT

Transportation to/from Conference Center (IMU)
Bring your bags to the IMU if you are checking out of your hotel today!  If you need to return to your hotel after 1pm, contact Robert Ping (robping at iu dot edu).

Conference Shuttle
Leave Springhill at 7:30am
Leave Hyatt at 7:45am
Drop-off at IMU

Leave Springhill at 8:15am
Leave Hyatt at 8:30am
Drop-off at IMU

Leave Springhill at 9:00am
Leave Hyatt at 9:15am
Drop-off at IMU

Sponsors
avatar for Research Technologies - Pervasive Technology Institute

Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated... Read More →


7:30am EDT

Registration
Pick up your badge and program.

Chairs
avatar for Robert Ping

Robert Ping

Program Management Specialist, Indiana University
In 2024, he accepted program management responsibilities for Jetstream2 and the Midwest Research Computing and Data Consortium where he will facilitate the success of the National Science Foundation-sponsored programs.As RDA-US Program Manager, he oversees the multiple projects within... Read More →

Sponsors

Friday June 2, 2017 7:30am - 1:00pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:00am EDT

LUG17 Day 3 Opening remarks
Sponsors
avatar for Research Technologies - Pervasive Technology Institute

Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated... Read More →


Friday June 2, 2017 9:00am - 9:10am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:10am EDT

Managing self-encrypting HDDs with Lustre/ZFS
Encrypting data at rest for HPC systems is almost always desirable, but doing so in an open Lustre system involves cost and performance trade-offs. Software encryption imposes performance penalties undesirable in HPC systems, whereas hardware systems have traditionally required expensive, vendor-locked proprietary approaches. This talk will discuss how WARP Mechanics architected a simple software solution for managing COTS self-encrypting HDDs, striking a balance between high-performance hardware encryption and low-cost open software.

Presenter
Sponsors


Friday June 2, 2017 9:10am - 9:25am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

9:30am EDT

Profiling application IO patterns with Lustre
This talk presents a related, but non-typical, use case for Lustre: profiling application IO patterns. Our motivation comes from a set of common questions among the HPC storage community: What does the IO pattern of my application look like? Is my application IO bound or CPU bound? Which processes are consuming the most IO? How can I optimize the storage to make my application run faster? What advice would you give to optimize the IO on the application side?

Lustre provides a comprehensive set of statistics (rpc stats, brw stats, extents stats, etc.) that can help demystify what is really happening on the IO side. We can treat applications as black boxes and use these built-in statistics to observe their behavior and how the file system reacts to their IO requests. This methodology can therefore be used to profile both proprietary applications and open source software.

In this talk we will start with an analysis of how different IO patterns impact IO performance on common storage media. Then we discuss the metrics that need to be collected in order to understand application IO patterns, and present the probe points for each of these metrics in the context of Lustre. We will use some common synthetic benchmarks to show the correspondence between the IO patterns and the IO statistics on the Lustre side.

To demonstrate the effectiveness of this methodology, we will present a case study on the Nemo Ocean Model (http://www.nemo-ocean.eu/), an open source scientific application common in HPC research. We will show how to track down which Nemo Ocean processes were generating IO and what their IO patterns were and, based on our analysis, how to tune the compute nodes and Lustre storage to improve the Nemo Ocean run time. Because our methodology uses only Lustre's built-in statistics and generic Linux tools (nothing proprietary), anyone from the Lustre community can take advantage of this approach. Furthermore, we hope this talk will inspire the community to consider using Lustre in non-traditional areas.
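
As a taste of the kind of client-side statistics involved, the Python sketch below collects RPC statistics via lctl get_param osc.*.rpc_stats. Parameter paths and output layout vary by Lustre version, so treat the parsing as an assumption about the common "pages per rpc" histogram format rather than as part of the talk's tooling.

import re
import subprocess

def read_rpc_stats():
    """Return the raw rpc_stats text for every OSC device on this client."""
    out = subprocess.run(
        ["lctl", "get_param", "osc.*.rpc_stats"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def pages_per_rpc_histogram(text):
    """Extract '<pages>: <read rpcs> ... | <write rpcs> ...' histogram rows."""
    hist = {}
    in_hist = False
    for line in text.splitlines():
        if line.startswith("pages per rpc"):
            in_hist = True
            continue
        m = re.match(r"\s*(\d+):\s+(\d+).*\|\s+(\d+)", line)
        if in_hist and m:
            pages, reads, writes = (int(g) for g in m.groups())
            prev = hist.get(pages, (0, 0))
            hist[pages] = (prev[0] + reads, prev[1] + writes)
        elif in_hist and not m:
            in_hist = False          # histogram section ended
    return hist

if __name__ == "__main__":
    # Large page counts per RPC generally indicate well-formed streaming I/O;
    # a histogram dominated by 1-page RPCs points at small or random I/O.
    for pages, (reads, writes) in sorted(pages_per_rpc_histogram(read_rpc_stats()).items()):
        print(f"{pages:5d} pages/RPC: {reads} read RPCs, {writes} write RPCs")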

Presenter
JN

James Nunez

Intel Corporation
James works in the HPDD at Intel Corporation and spends his days monitoring, fixing and improving Lustre testing and the Lustre test suites. Talk to him about your ideas on how to improve and expand Lustre testing.

Authors
Sponsors


Friday June 2, 2017 9:30am - 10:00am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

10:00am EDT

Analyzing I/O performance on a NEXTGenIO class system
I/O-intensive applications are a significant bottleneck for current HPC systems, and they can have a noticeable impact on the performance of an entire system's workload. On general-purpose HPC systems, the I/O nodes and the network that provisions them are resources shared by all jobs on the system; applications thus have to compete for them.
The overall objective of the Next Generation I/O Project (NEXTGenIO) is to design and prototype a new, scalable, high-performance, energy-efficient computing platform, designed to address the challenge of delivering the necessary scalable I/O performance to applications at the Exascale. On the I/O side, the architecture covers a novel multi-tier storage hierarchy ranging from a speed-focused, limited-capacity NVRAM tier, over I/O fabrics, to a high-capacity storage tier built from one or more solid-state drive tiers and conventional storage such as hard disk drives and tape.

This presentation is about performance analysis techniques that will enable applications to take full advantage of the new memory and I/O models afforded by the new non-volatile memory technology that we expect to see soon in many systems. We will discuss the performance analysis opportunities and necessities for NVRAM-based I/O in combination with parallel file system I/O provided by a Lustre file system and high-level I/O libraries such as NetCDF or HDF5. The overall goal of this presentation is to show the comprehensive performance analysis of a complex I/O stack, using the NEXTGenIO system as an example.

Presenter
H

Holger Brunst

TU Dresden

Authors


Friday June 2, 2017 10:00am - 10:30am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

10:30am EDT

Break
Friday June 2, 2017 10:30am - 10:50am EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

10:50am EDT

Unraveling "Burst Buffer" tiers: A survey of the various instantiations
An ever-increasing portfolio of storage media (NVDIMM, SSD, SMR, etc.) has storage developers excited and storage users burdened. For storage developers, these storage media create exciting opportunities to build exotic tiered architectures and to brush off the old concepts we learned in school about prefetch, scheduling, victimization algorithms, starvation, etc. For storage users who would prefer to focus on actual computational science, these storage tiers are pure frustration. Lustre users can either ignore these emerging tiered storage systems at their own performance peril, or they can attempt to learn how to use them. Unfortunately for users, these storage tiers are entering a period of rapid flux in which the multiple vendors appear nowhere near converging on a standard.

In this talk, we will present an overview of the various vendor offerings for tiered storage systems (colloquially called burst buffers). These include DDN's IME, Cray's DataWarp, IBM's eponymously named Burst Buffers, Seagate's NXD, and the EMC research prototype 2Tiers. The talk will provide an analysis of areas in which the various systems have general commonality with only slight terminology differences, such as staging mechanisms, which are offered under many different names: promote, persist, transfer, stage, and migrate. It will also discuss several interesting features unique to a single system, such as attaching key-values to a staging operation or querying paths to provide unique access to metadata services.

At the end of the presentation, the audience will be familiar with the general similarities and differences among the various tiered storage offerings. They will understand their functionality, usability, and terminology. Although the presentation will absolutely not be an evaluation of the respective value of the various offerings, the audience will have the information needed to begin their own evaluation of which functions and features are most appropriate for their Lustre systems and workloads. This information will also be useful to the Lustre community as these concepts can be considered in the Lustre user requirement gathering exercise.

Presenter
J

John Bent

Seagate Government Solutions

Authors


Friday June 2, 2017 10:50am - 11:20am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

11:20am EDT

Responsive storage
It is not uncommon now for research data to flow through pipelines composed of dozens of different management, organization, and analysis processes, while simultaneously being distributed across a number of different storage systems. To alleviate the resulting data management burden, we propose Ripple, a system that enables storage systems to become "responsive." Ripple allows users to express data management tasks using intuitive, high-level, if-trigger-then-action style rules. It monitors storage systems for file system events, evaluates rules, and uses serverless computing techniques to execute actions in response to these events.

We have developed a prototype implementation of Ripple that leverages inotify and Lustre ChangeLogs to reliably detect file system events. Events are filtered based on active rules, and actions are invoked using Amazon Lambda functions. Supported actions include transferring data using Globus or executing operations on the local storage system (e.g., job submission or container execution). In this talk we will describe Ripple, focusing specifically on its integration with Lustre, and outline its use in several real-world scientific applications. We show that Ripple's capabilities can be applied to almost any Lustre store, at very large scale, thereby helping researchers automate common data management processes.
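
To make the if-trigger-then-action idea concrete, here is a minimal Python sketch of rules evaluated over Lustre ChangeLog records. This is not Ripple's implementation: it assumes a changelog user is already registered on the MDT, the MDT name "lustre-MDT0000" is illustrative, and the actions are placeholders for what Ripple would hand to serverless functions (e.g. a Globus transfer).

import subprocess
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    trigger: Callable[[Dict[str, str]], bool]   # "if" condition on one record
    action: Callable[[Dict[str, str]], None]    # "then" action to run

def read_changelog(mdt: str) -> List[Dict[str, str]]:
    """Very rough parse of `lfs changelog`; real records carry more fields."""
    out = subprocess.run(["lfs", "changelog", mdt],
                         capture_output=True, text=True, check=True)
    records = []
    for line in out.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            # field layout is version-dependent; keep only the type and raw line
            records.append({"type": fields[1], "raw": line})
    return records

def run_rules(records: List[Dict[str, str]], rules: List[Rule]) -> None:
    for rec in records:
        for rule in rules:
            if rule.trigger(rec):
                rule.action(rec)

rules = [
    # e.g. on close events, kick off an (illustrative) data transfer action
    Rule(trigger=lambda r: "CLOSE" in r["type"],
         action=lambda r: print("would trigger transfer for:", r["raw"])),
]

if __name__ == "__main__":
    run_rules(read_changelog("lustre-MDT0000"), rules)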

Presenter
Authors
R

Ryan Chard

Argonne National Laboratory



Friday June 2, 2017 11:20am - 11:50am EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

11:50am EDT

LUG17 Closing Remarks
Chairs
S

Stephen Simms

Indiana University Pervasive Technology Institute

Sponsors
avatar for Research Technologies - Pervasive Technology Institute

Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated... Read More →


Friday June 2, 2017 11:50am - 12:00pm EDT
Alumni Hall (IMU - 1st Floor) 900 E 7th St, Bloomington, IN, 47405

12:00pm EDT

Lunch
Network with your peers and visit the Platinum and Gold sponsor tables as you enjoy:
Salad and Baked Potato Bar with:  Three bean chili, colossal baked potatoes, romaine lettuce, grilled chicken, beef strips, blackened shrimp, cheddar cheese, tomatoes, bacon bits, broccoli, green onions, sour cream, butter, parmesan cheese and croutons, dressings

Executive Boxed lunches to include:  Pork Banh, Roast Beef, Spiced Falafel each with seasonal fruit, antipasto salad, condiments, potato chips, and high-end desserts

Assorted Fruit Pies

Fresh baked rolls and butter
Beverage service

Sponsors
avatar for Research Technologies - Pervasive Technology Institute

Research Technologies - Pervasive Technology Institute

Indiana University
UITS Research Technologies (RT) develops, delivers, and supports advanced technology solutions that enable new possibilities in research, scholarly endeavors, and creative activity at Indiana University and beyond. RT is also a cyberinfrastructure and service center affiliated... Read More →


Friday June 2, 2017 12:00pm - 1:00pm EDT
Solarium (IMU, enter through Alumni Hall - 1st Floor) 900 E 7th St, Bloomington, IN, 47405
 