The symposium will focus on the latest developments in generative text, image, and graph models. The aim is to cover the technical building blocks behind models like ChatGPT and Stable Diffusion, which have seen tremendous recent interest. Topics include transformers and attention, with methods such as BERT, GPT, ViT, and Graph Transformers; the foundations of variational autoencoders and diffusion models; and generative modeling for graphs and energy-based models. Beyond the fundamentals, the symposium will also touch on practice, with examples in PyTorch.
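To give a flavor of the kind of PyTorch example the sessions will touch upon, below is a minimal, illustrative sketch of the scaled dot-product attention building block at the heart of transformers. It is not taken from the symposium material; the function name and tensor dimensions are placeholders.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: tensors of shape (batch, seq_len, d_model)
    d_k = q.size(-1)
    # similarity scores between queries and keys, scaled by sqrt(d_k)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (batch, seq_len, seq_len)
    if mask is not None:
        # positions where mask == 0 are excluded from attention
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)             # attention weights
    return weights @ v                              # weighted sum of values

# toy self-attention: batch of 2 sequences, length 4, dimension 8
x = torch.randn(2, 4, 8)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # torch.Size([2, 4, 8])
```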
Expert Talk | Speaker | Timing | Overview |
---|---|---|---|
ET1: Transformers and attention | M. J. Zaki | 09:45 AM to 11:15 AM | This session will cover the fundamentals of attention and recent advances for text, vision, and graphs. |
ET2: Generative pretraining for text | M. J. Zaki | 11:45 AM to 01:15 PM | This session will cover GPT and recent methods like ChatGPT. |
ET3: Autoencoders and diffusion | M. J. Zaki | 02:15 PM to 03:45 PM | This session will cover variational autoencoders and Stable Diffusion for images. |
ET4: Multi-modality and graphs | M. J. Zaki | 04:00 PM to 05:00 PM | This session will cover generative models that combine text and images, as well as generative models for graphs. |
ET5: Right! Wrong! Maybe! - Are you sure, dear LLM? | L. Dey | 05:00 PM to 06:30 PM | Right! Wrong! Maybe! - Are you sure, dear LLM? |