Monday, April 20 · Nebraska Hall 213 · 20 participants
A full-day, hands-on workshop from UNL's Holland Computing Center on running AI and ML workflows across campus high-performance computing resources.
Instructor: Caughlin Bohn (Holland Computing Center, UNL)
| Time | Session | Focus |
|---|---|---|
| 10:00 - 10:15 AM | Setup and Support | Account setup, DUO, troubleshooting |
| 10:15 - 10:30 AM | Introduction to PLUMAGE | HCC's new NSF-funded GPU environment |
| 10:30 - 10:45 AM | Break | |
| 10:45 - 11:30 AM | Introduction to GPU Resources on HCC | Accessing GPU resources on Swan |
| 11:30 AM - 12:30 PM | Lunch | |
| 12:30 - 1:15 PM | AI and ML Workflows | Running AI/ML workflows on Swan |
| 1:15 - 2:00 PM | Introduction to PyTorch | GPU PyTorch code on Swan |
| 2:00 - 2:10 PM | Break | |
| 2:10 - 3:00 PM | Introduction to LLMs | LLM workflows on Swan |
| 3:00 - 3:10 PM | Break | |
| 3:10 - 4:20 PM | Introduction to the National Research Platform (NRP) | External free GPU provider |
| 4:20 - 4:30 PM | Open Questions | |
Hands-on · Intermediate · Bring your laptop · Full-day commitment
Week 2 opened without a podium. Day 6 was a workshop in the literal sense. Twenty participants came to Nebraska Hall 213 to log in, set up accounts, work through errors, and by the close of the day, run GPU-backed PyTorch code across three different compute environments. Caughlin Bohn led from the front.
The day covered more ground than the title suggested. Participants moved through three distinct compute stacks. Swan is HCC's production cluster, accessed through JupyterLab on Open OnDemand and scheduled through SLURM. PLUMAGE (Promoting Learning Using Mixed Advanced GPU Environments) is HCC's new NSF-funded GPU resource, introduced here in its first workshop setting. The National Research Platform (NRP) rounded out the afternoon as a free external option for work that outgrows campus compute. Seeing all three in a single day gave participants a clear map of which resource fits which workload.
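For readers who weren't in the room, a SLURM GPU submission on a cluster like Swan looks roughly like the sketch below. The partition name, module name, and script name are placeholders, not Swan-specific values; check HCC's documentation or `sinfo` for the real ones.

```shell
#!/bin/bash
#SBATCH --job-name=gpu-demo
#SBATCH --partition=gpu        # partition name is an assumption; verify with `sinfo`
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --mem=16G
#SBATCH --time=00:30:00
#SBATCH --output=gpu-demo-%j.out

# Module names vary by cluster; this line is illustrative only.
module load anaconda

python train.py                # train.py is a hypothetical user script
```

Submitted with `sbatch job.sh`, the scheduler queues the job and writes stdout to the `--output` file once a GPU node is available.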
The afternoon pushed through practical workflows: PyTorch on Swan came first, then an LLM-focused session building on the same stack. By the final session, participants had run real GPU jobs on real infrastructure with code they had touched themselves.
The clearest feedback from the day was about HCC's accessibility. Participants remarked again and again that they had no idea HCC offered this much, or that they could get up and running on it this easily. For campus researchers on the edge of bringing ML into their work, the day substantially lowered the perceived cost of entry.