
Introduction to the PyTorch Lightning Ecosystem


Publish date: Jul 19, 2022

Duration: 33 min

Difficulty: Intermediate


Case details

Abstract: This talk introduces the PyTorch Lightning ecosystem one step at a time. We'll look into the PyTorch Lightning framework's core components, discuss the fundamentals of distributed training, and show how scaling works in ML model training. Using PyTorch Lightning's components, we demonstrate how to implement a simple model and scale it with different distributed strategies and accelerators with ease, without worrying about the hassles of engineering.

Outline of the Session

Part 1: An Introduction to the PyTorch Lightning Core Components
This section will go through PyTorch Lightning's core building blocks and how they fit into the typical research/data-scientist development pipeline. We'll show you how to organize your PyTorch research code in a LightningModule and go through the feature-rich PyTorch Lightning Trainer to help you supercharge your ML pipeline (see the first sketch after this abstract).

Part 2: Fundamentals of Distributed Training
We will discuss the core principles of distributed training in machine learning, why we need it, and why it is so complicated. Then we'll go over two fundamental approaches to distributed training in depth: data parallelism and model parallelism.

Part 3: Using PyTorch Lightning at Scale
An accelerator is the hardware PyTorch Lightning uses for training and inference. Currently, PyTorch Lightning supports several accelerators: CPUs, GPUs, TPUs, IPUs, and HPUs. We will go over some of these accelerators in depth. As an ML practitioner, you'd like to focus more on research than on engineering logic around hardware. We'll show you how to easily scale your training on a large dataset across several accelerators, such as GPUs and TPUs (see the second sketch below). We'll also go through the essential API internals of how PyTorch Lightning abstracts the accelerator logic from users, with support for distributed strategies, allowing them to focus on writing accelerator-agnostic code.

Part 4: Overview of Lightning Projects
We will briefly go through other Lightning projects, such as Lightning Flash and TorchMetrics, and how you could leverage them in your machine learning projects (see the third sketch below).

Who is it aimed at?
Data scientists and ML engineers who may or may not have used PyTorch Lightning in the past and wish to use distributed training for their models.

What will the audience learn by attending the session?
- Get started with PyTorch Lightning
- Get an overview of distributed training and several ML accelerators
- Train a model with PyTorch Lightning using different accelerators and strategies

Background Knowledge: Some familiarity with Python, deep learning terminology, and the basics of neural networks.
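As a concrete reference for Part 1, here is a minimal sketch of organizing research code in a LightningModule and running it with the Trainer. The architecture, hyperparameters, and random data are illustrative stand-ins, not taken from the talk.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    """A toy classifier: the research code (model, loss, optimizer) lives here."""

    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()  # makes lr available as self.hparams.lr
        self.model = nn.Sequential(
            nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
        )

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self(x), y)
        self.log("train_loss", loss)  # logged automatically by the Trainer
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)


# Random tensors stand in for a real dataset so the sketch is self-contained.
x = torch.randn(256, 28 * 28)
y = torch.randint(0, 10, (256,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=32)

# The Trainer owns the engineering: loops, devices, checkpointing, logging.
trainer = pl.Trainer(max_epochs=1)
trainer.fit(LitClassifier(), train_loader)
```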
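For Parts 2 and 3, scaling the same model across accelerators and distributed strategies comes down to changing Trainer arguments rather than the model code. The device counts below are hypothetical examples:

```python
import pytorch_lightning as pl

# Data parallelism across 4 GPUs with DistributedDataParallel ("ddp"):
# each process trains on a shard of every batch and gradients are all-reduced.
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp", max_epochs=1)

# The same LightningModule on 8 TPU cores:
trainer = pl.Trainer(accelerator="tpu", devices=8, max_epochs=1)

# Accelerator-agnostic: let Lightning pick whatever hardware is available.
trainer = pl.Trainer(accelerator="auto", devices="auto", max_epochs=1)

# trainer.fit(LitClassifier(), train_loader)  # unchanged from the CPU version
```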
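For Part 4, TorchMetrics provides stateful metric objects that accumulate statistics across batches (and, in distributed runs, across processes). A minimal sketch, assuming a recent torchmetrics release that accepts the task argument:

```python
import torch
import torchmetrics

# update() accumulates per-batch statistics; compute() returns the
# aggregated (e.g. epoch-level) result.
metric = torchmetrics.Accuracy(task="multiclass", num_classes=10)

for _ in range(3):                        # pretend these are batches
    logits = torch.randn(32, 10)          # fake model outputs
    target = torch.randint(0, 10, (32,))  # fake labels
    metric.update(logits, target)

print(metric.compute())  # accuracy over all 96 samples
metric.reset()
```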
