2026 Program

  • 111 South Wacker Conference Center, 29th floor
    111 South Wacker Drive, Chicago, IL 60606
  • Please be prepared to show your ID and check in at the lobby for access to the elevators and the conference.
  • Agenda and times subject to change.
Tuesday, April 21, 2026
8:00—17:00 registration desk open
29th floor lobby
8:00—9:00 breakfast
 
9:00—12:00

We will present advanced topics for administrators of Globus Connect Server (GCS) and Globus Compute multi-user deployments. We will focus on newer GCS features such as ACL expiration and policies for restricting shared access to specific domains, as well as configuration and use of the Globus streaming service. We will also discuss how to set up Globus Compute multi-user endpoints in common environments such as Slurm clusters. This session will include hands-on exercises for experimenting with these advanced Globus capabilities. Time will be reserved at the end of the session to address questions and provide guidance tailored to your specific Globus deployment requirements.
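For reference, a multi-user Globus Compute endpoint on a Slurm cluster is typically driven by a per-user configuration template along these lines. This is a sketch only; the partition, account, and sizing values are placeholders to be replaced with your site's settings:

```yaml
# config_template.yaml — sketch of a user-endpoint template for a
# multi-user Globus Compute deployment on Slurm (values are placeholders)
engine:
  type: GlobusComputeEngine
  provider:
    type: SlurmProvider
    partition: compute        # site-specific partition name
    account: my-allocation    # site-specific Slurm account
    nodes_per_block: 1
    init_blocks: 0
    max_blocks: 2
    walltime: "01:00:00"
```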

12:00—13:00 lunch 
13:00—14:45 Rachana Ananthakrishnan, Kyle Chard, Globus

We will review notable events in the evolution of the Globus service over the past year, and provide an update on future product direction and sustainability.

14:45—15:15 break
 
15:15—16:45 Charles Christoffer, Computational Life Sciences Lead, Purdue University

The Rosen Center for Advanced Computing (RCAC) at Purdue University facilitates research computing and data workflows both for campus users and for participants in the national ACCESS program. Globus is a key part of the cyberinfrastructure underpinning this work, via both Standard and High Assurance services.

Multiple Purdue core facilities operating instruments for life sciences research integrate directly with Globus to support reliable, orderly, high-throughput data movement from instrument systems to campus storage and computing resources. Beyond shared facilities, Globus is also adopted by labs at large and for individual research projects, particularly in the life sciences, AI, astronomy, and earth science domains. Close integration with the capabilities of Multi-User Globus Compute Endpoints further supports automated workflows and our shared access model.

Evolving regulatory requirements, including the 48 CFR CMMC final rule and NIH NOT-OD-25-159, have added requirements for technical and administrative controls to entities carrying out covered research projects, and in particular to the computing and storage systems they use to do so. Aligning relevant systems with NIST SP 800-171 controls has thus become a necessity for RCAC and any center facilitating such research. Globus High Assurance collections at RCAC are enabling workflows that are both compliant with applicable regulations and convenient. The recent release of High Assurance Flows creates an opportunity to streamline pipeline workflows that otherwise entail out-of-band coordination and leave room for deviations.

Ohinoyi Moiza, Research Assistant, Texas Tech

Epilepsy affects 3.4 million Americans, with 30% developing drug-resistant epilepsy requiring surgical intervention. Despite decades of research, surgical success rates remain at 50-60%, largely due to reliance on generic brain models that fail to capture patient-specific seizure propagation patterns. We developed a computational epilepsy modeling pipeline integrating real intracranial EEG (iEEG) data with fractional-order dynamics to create patient-specific seizure simulations. Our workflow processes stereo-EEG recordings from the University of Pennsylvania OpenNeuro dataset, extracting precise 3D electrode coordinates and mapping them to anatomical brain regions.

We use Globus Transfer for secure, verified data sharing across our three-person research team. Patient electrode mapping files (CSV, 3-10KB) and simulation results (HTML visualizations, 4-5MB) are transferred between personal workstations and shared research storage, replacing insecure email attachments with audit-logged, integrity-verified transfers.
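The transfer workflow described above can be sketched as the raw task document submitted to the Globus Transfer API. The collection UUIDs and paths below are placeholders, not the team's actual deployment:

```python
def transfer_document(source_collection, destination_collection, items):
    """Build a Globus Transfer task document with checksum verification
    enabled (collection IDs and paths are hypothetical placeholders)."""
    return {
        "DATA_TYPE": "transfer",
        "source_endpoint": source_collection,
        "destination_endpoint": destination_collection,
        "verify_checksum": True,  # integrity-verified, per the workflow above
        "DATA": [
            {
                "DATA_TYPE": "transfer_item",
                "source_path": src,
                "destination_path": dst,
            }
            for src, dst in items
        ],
    }

doc = transfer_document(
    "SOURCE-COLLECTION-UUID",       # hypothetical workstation collection
    "DESTINATION-COLLECTION-UUID",  # hypothetical shared research storage
    [("/electrodes/sub-01.csv", "/shared/electrodes/sub-01.csv")],
)
```

In practice such a document would be submitted via the globus-sdk `TransferClient`, which also records the audit trail mentioned above.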

We recently implemented Globus Compute on Texas Tech's HPC cluster to automate seizure simulation workflows. Python scripts submit patient-specific electrode network simulations to HPC compute nodes, enabling parallel processing of 20+ patients for validation studies. This automated pipeline eliminates manual job submission and accelerates our research timeline from weeks to days.
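The per-patient fan-out described above can be sketched with the Globus Compute SDK's `Executor`. The endpoint ID and the simulation function here are illustrative placeholders, not the actual pipeline code:

```python
def simulate_patient(patient_id, n_electrodes, alpha=0.8):
    """Toy stand-in for a patient-specific electrode-network simulation;
    returns a summary dict rather than running fractional-order dynamics."""
    return {"patient": patient_id, "electrodes": n_electrodes,
            "fractional_order": alpha}

def submit_cohort(patients, endpoint_id):
    """Submit one simulation task per patient to an HPC endpoint.
    Requires the globus-compute-sdk package and a registered endpoint."""
    from globus_compute_sdk import Executor
    with Executor(endpoint_id=endpoint_id) as ex:
        futures = [ex.submit(simulate_patient, pid, n) for pid, n in patients]
        return [f.result() for f in futures]

# Local smoke test of the task function (no endpoint or SDK needed):
summary = simulate_patient("sub-01", 52)
```

With a registered endpoint, `submit_cohort([...], "HYPOTHETICAL-ENDPOINT-UUID")` would dispatch all patients in parallel to the cluster's compute nodes.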

Results: We successfully processed 5 patients with 262 total electrodes mapped across brain regions, generating interactive 3D visualizations of patient-specific seizure propagation. Scaling to 20+ patients through Globus Compute-enabled HPC parallelization will validate whether individualized models improve surgical outcome predictions compared to generic approaches, potentially benefiting 1 million drug-resistant epilepsy patients nationwide.

Jim Sesser, Director of System Services, Mississippi State University

We run Globus as a core service at MSU HPC², supporting several storage systems, multiple computational resources, and a wide variety of research. Over the past decade, it’s become the primary way data moves on and off our systems. I’ll talk about how our use has evolved over the years.

Jeferson Souza, Computer Engineer, RNP

At RNP (the Brazilian NREN) we are building the eScience network to support data-intensive science and collaboration. This network will be data-transfer-native, with Globus planned as a layer on top of the solution. In this talk, I will give a brief overview of the RNP eScience network, our initial evaluation of Globus, lessons learned, and challenges.

Lee Liming, Director of Professional Services, Globus

The Professional Services team at Globus specializes in integrating Globus data management capabilities with research and education applications. In this talk, we share our latest adventures, including highlights from the Earth Systems Modeling community and DOE's Genesis Mission.

Nicholas Schwarz, Principal Computer Scientist and Group Leader, Argonne National Laboratory

The upgraded Advanced Photon Source (APS) represents a monumental shift for light source science, delivering increased source brightness by a factor of 500 or more. While this enables finer resolution and complex multi-modal measurements, the resulting X-ray detectors generate massive data volumes at challenging new rates. To address this explosion in data, a robust model has been developed to seamlessly couple large-scale experimental facilities with supercomputing centers.

This presentation details the integration of the APS and the Argonne Leadership Computing Facility (ALCF) to enable on-demand experimental workflows. A ubiquitous fabric of Globus Auth, Transfer, Compute, Flows, and Portal Services gives experimental instruments immediate, automated access to computing resources on the Polaris supercomputer. We examine the application of these capabilities at the APS X-ray Photon Correlation Spectroscopy beamline, and conclude with a discussion of future directions, including Globus services as an enabling technology for the American Science Cloud, the adoption of Globus Streaming for real-time data processing, and plans to integrate with additional large-scale computing centers.

JP Navarro, Globus Professional Services Team Member and ACCESS Operations Co-PI, Argonne National Laboratory

We will present pilot and production examples that leverage the highly scalable, reliable, professionally operated Globus Search software as a service (SaaS).

17:00—18:30 Reception 
Wednesday, April 22, 2026
8:00—17:00 registration desk open
29th floor lobby
8:00—9:00 breakfast
 
9:00—12:00

Motivated by the growing data sharing and collaboration needs in the research enterprise, we will describe how Globus services may be used to dramatically increase the value of your research. In this session we will demonstrate how to easily generate and deploy a data portal for a guest collection—and automatically generate and ingest metadata into a Globus Search index—enabling data publication that meets FAIRness mandates while ensuring compliance with security and privacy regimes. We will also demonstrate how you can integrate Globus Compute into your portal, transforming it into a true science gateway that facilitates analysis of your data by collaborators.
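The metadata ingest step mentioned above can be sketched as the document shape accepted by the Globus Search ingest API. The subject URI, index fields, and visibility below are illustrative assumptions, not a specific deployment:

```python
def gingest_document(subject, metadata, visible_to=("public",)):
    """Wrap one record as a GIngest/GMetaList document for the Globus
    Search ingest API (subject and content fields are placeholders)."""
    return {
        "ingest_type": "GMetaList",
        "ingest_data": {
            "gmeta": [
                {
                    "id": "metadata",
                    "subject": subject,
                    "visible_to": list(visible_to),
                    "content": metadata,
                }
            ]
        },
    }

doc = gingest_document(
    "globus://HYPOTHETICAL-COLLECTION-UUID/dataset-001",
    {"title": "Example dataset", "keywords": ["FAIR", "demo"]},
)
```

A portal would submit such documents to its Search index via the globus-sdk `SearchClient`, making the described FAIR publication step queryable by collaborators.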

12:00—13:00 lunch 
13:00—14:00 Brian Roland, Data Management Specialist, Northwestern University
Llewellyn Fernandes, Data Management Specialist, Northwestern University

Northwestern University offers researchers a Google Cloud-hosted Secure Data Enclave (SDE) aligned with NIST SP 800-171 compliance standards to support controlled research computing. While this model provides the needed technical controls, securely transferring data in and out of the environment introduces compliance-driven complexities that must be carefully managed.

In this session, we will explore our deployment of a Globus Connect Server (GCS) endpoint inside the SDE boundary, using a High Assurance Storage Gateway to interface with Google Cloud Storage while preserving enclave compliance. We will walk through mapping the SDE project structure to Globus Collections, enabling predictable and efficient data transfer workflows without weakening controls.

The talk is based on the Northwestern SDE deployment strategy and focuses on how cloud administrators could replicate a similar approach, including:

  • Infrastructure configuration: sizing and securing the GCS node in the Google Cloud VPC, including firewall rules and egress restrictions
  • Identity and registration: endpoint identity in a federated environment and High Assurance requirements
  • GCS configuration: storage gateway setup for Google Cloud Storage and mapping distinct Ingress/Egress buckets to segregate behaviors
  • Access control and guardrails: configuring collection roles and access policies to enforce least-privilege permissions and reduce accidental exposure
  • Operational workflows: how researchers move data through Ingress and Egress while working within their Workspace and without bypassing SDE technical controls

Attendees will leave with a set of best practices to follow, key design decisions to make early, and common pitfalls that can slow deployment or create support issues if not handled up front.

Geoffrey Lentner, Principal AI Scientist, Purdue University

Large language models have evolved from curiosity to co-pilot in under four years. With the emergence of agentic AI (systems that reason, plan, and execute multi-step tasks autonomously), HPC centers face a new category of user need: researchers want these tools integrated into their workflows, not merely tolerated. This talk offers our perspective from Purdue’s Rosen Center for Advanced Computing (RCAC), where we've begun deploying system-wide configurations, custom MCP servers, and user guidance for agentic tools. We’ve developed a dedicated Globus MCP server implementation that allows AI agents to interact with Globus-connected storage and compute, with 18 discrete "tools" and growing. We share our thoughts on balancing enablement with caution, along with a live demo (assuming favorable conditions).

John Conklin, Senior Research Cloud Architect and Engineer, The Salk Institute for Biological Studies

The Salk Institute for Biological Studies has been using Globus for over two years to support the data movement needs of labs and shared cores with laboratory instruments that include high-throughput microscopes and sequencers. As both the volume of data and the number of labs generating it have increased, Globus and its automation features have transformed from an ad hoc lab tool into an institutional platform. This brief presentation will share an overview of the Institute's Globus deployment and its integration into the workflows of wet-bench scientists.

Alok Kamatar, Globus Labs

Realizing a shared responsibility between providers and consumers is critical to managing the sustainability of HPC. However, while cost may motivate efficiency improvements by infrastructure operators, broader progress is impeded by a lack of user incentives. We conduct a survey of HPC users that reveals fewer than 30 percent are aware of their energy consumption, and that energy efficiency is among users' lowest-priority concerns. One explanation is that existing pricing models may encourage users to prioritize performance over energy efficiency. We propose two transparent multi-resource pricing schemes, Energy- and Carbon-Based Accounting, that seek to change this paradigm by incentivizing more efficient user behavior. These two schemes charge for computations based on their energy consumption or carbon footprint, respectively, rewarding users who leverage efficient hardware and software. We evaluate these two pricing schemes via simulation, in a prototype, and in a user study.
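The two proposed schemes reduce to simple charging formulas. As a rough illustration only (the rates below are invented placeholders, not figures from the study):

```python
def energy_based_charge(energy_kwh, price_per_kwh=0.12):
    """Energy-Based Accounting: charge proportional to energy consumed
    (the rate is an invented placeholder)."""
    return energy_kwh * price_per_kwh

def carbon_based_charge(energy_kwh, grid_intensity_kg_per_kwh=0.4,
                        price_per_kg_co2=0.25):
    """Carbon-Based Accounting: charge proportional to the carbon footprint,
    rewarding jobs run on cleaner power (rates are invented placeholders)."""
    return energy_kwh * grid_intensity_kg_per_kwh * price_per_kg_co2

# The same 100 kWh job costs less under carbon accounting when the
# grid is cleaner, which is the user incentive the schemes aim for:
dirty = carbon_based_charge(100, grid_intensity_kg_per_kwh=0.6)
clean = carbon_based_charge(100, grid_intensity_kg_per_kwh=0.1)
```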

14:00—14:30 break
 
14:30—17:00

The Customer Forum is an opportunity for Globus subscribers to discuss their experiences with the service, to learn about our product development plans, and to provide input on future product directions. Attendance at the customer forum is by invitation only. If you would like to represent your institution/community please contact us for an invitation.

Gold Sponsor

  • Dell Technologies
  • Spectra Logic

Bronze Sponsors

  • Open OnDemand
  • iRODS

Media Partners

  • AI Wire
  • Big Data Wire
  • HPC Wire
