This seminar focuses on recent advances in unsupervised learning, an increasingly important field within machine learning. In unsupervised learning, the training objective is defined by the data itself rather than by additional output labels, for example by completing a given text sequence or filling in a masked image region. In this way, we can learn powerful representations and generative models. We will discuss powerful and versatile unsupervised methods for pre-training state-of-the-art models (e.g., GPT-3), classics like VAEs, and less mainstream model designs such as invertible Normalizing Flows.
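To make the idea of data-derived objectives concrete, here is a minimal sketch (an illustrative example, not part of the seminar materials) of a next-token prediction setup: the "label" for each position is simply the token that follows it in the sequence, so no external annotation is needed.

```python
# Sketch of a self-supervised objective: the data supervises itself.
# The hypothetical helper below turns a raw token sequence into
# (context, target) training pairs, where the target at position i
# is just the next token in the sequence.

def make_training_pairs(tokens):
    """Build (context, next_token) pairs from a raw sequence."""
    return [(tokens[:i + 1], tokens[i + 1]) for i in range(len(tokens) - 1)]

corpus = ["the", "cat", "sat", "on", "the", "mat"]
pairs = make_training_pairs(corpus)
for context, target in pairs:
    print(context, "->", target)
# e.g., the first pair is (["the"], "cat")
```

A model trained to predict these targets (as GPT-style models are, at vastly larger scale) never sees a human-provided label; the objective falls out of the raw text itself.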