Workshop: Generative AI with Diffusion Models
Friday, 18 October 2024, 10:00 - 16:25
10:00 - 10:15
Introduction: Meet the instructor; create an account at courses.nvidia.com/join
10:15 - 11:15
From U-Nets to Diffusion: Build a U-Net, an encoder-decoder architecture for images; learn how transposed convolutions upsample an image; learn about non-sequential neural networks and residual (skip) connections; experiment with feeding noise through the U-Net to generate new images
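As background for the upsampling step in the U-Net session: a transposed convolution produces an output larger than its input. This is an illustrative pure-Python sketch of the 1-D case (the helper name and values are made up for the example), not the workshop's code.

```python
# Minimal 1-D transposed convolution (stride 2): each input value "stamps"
# a scaled copy of the kernel onto the output, spaced `stride` apart, and
# overlapping contributions are summed. Hypothetical helper for illustration.

def conv_transpose_1d(x, kernel, stride=2):
    out_len = (len(x) - 1) * stride + len(kernel)
    out = [0.0] * out_len
    for i, v in enumerate(x):
        for j, k in enumerate(kernel):
            out[i * stride + j] += v * k
    return out

x = [1.0, 2.0, 3.0]
y = conv_transpose_1d(x, [1.0, 0.5])
print(len(x), "->", len(y))  # 3 -> 6: the output is longer than the input
```

In the workshop itself this role is played by 2-D transposed convolution layers inside the U-Net's decoder, but the size-increasing mechanism is the same.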
11:15 - 11:25
Break
11:25 - 12:25
Control with Context: Learn how to alter the output of the diffusion process by adding context embeddings; add further model optimizations such as sinusoidal position embeddings, the GELU activation function, and attention
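One of the optimizations named above, sinusoidal position embeddings, can be sketched in a few lines: a timestep is mapped to alternating sine/cosine values at geometrically spaced frequencies. This is a minimal stdlib-only illustration (function name and dimensions are assumptions for the example), not the workshop's implementation.

```python
import math

# Sinusoidal embedding for an integer timestep `t`: pairs of sin/cos values
# at frequencies 1 / 10000^(2i/dim), giving a smooth, unique code per step.
def sinusoidal_embedding(t, dim):
    emb = []
    for i in range(dim // 2):
        freq = 1.0 / (10000 ** (2 * i / dim))
        emb.append(math.sin(t * freq))
        emb.append(math.cos(t * freq))
    return emb

print(len(sinusoidal_embedding(5, 8)))  # 8
```

In a diffusion model, an embedding like this lets the network condition on which noise step it is currently denoising.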
12:25 - 13:25
Text-to-Image with CLIP: Walk through the CLIP architecture to learn how it associates image embeddings with text embeddings; use CLIP to train a text-to-image diffusion model
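The core idea of the CLIP session is that an image and a caption are "associated" when their embeddings point in similar directions, typically measured by cosine similarity. This toy sketch uses hand-made 3-D vectors in place of real CLIP embeddings; the vectors and caption strings are invented for illustration.

```python
import math

# Cosine similarity between two embedding vectors.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

image_emb = [0.9, 0.1, 0.0]  # hypothetical image embedding
captions = {                 # hypothetical text embeddings
    "a photo of a cat": [1.0, 0.0, 0.0],
    "a photo of a dog": [0.0, 1.0, 0.0],
}
# Pick the caption whose embedding best matches the image.
best = max(captions, key=lambda c: cosine_similarity(image_emb, captions[c]))
print(best)  # a photo of a cat
```

Real CLIP learns the two embedding spaces jointly with a contrastive objective; the matching step at the end is essentially this similarity comparison at scale.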
13:25 - 14:25
Break
14:25 - 15:25
State-of-the-Art Models: Review various state-of-the-art generative AI models and connect them to the concepts learned in class; discuss prompt engineering and how to better influence the output of generative AI models; learn about content authenticity and how to build trustworthy models
15:25 - 16:25
Final Review: Review key learnings and answer questions; complete the assessment to earn a certificate; complete the workshop survey; learn how to set up your own AI application development environment