CVPR 2023 Tutorial on Prompting in Vision

19 June 2023, 9AM-12PM

West 223-224, Vancouver Convention Centre


Tutorial lecturers

Kaiyang Zhou, Ziwei Liu, Denny Zhou, Hyojin Bahng, Phillip Isola, Sarah Pratt, Ludwig Schmidt

Originating in natural language processing, the prompting paradigm has recently swept through the computer vision community, bringing disruptive changes to applications such as image recognition and image generation. Compared with a traditional fixed-once-trained architecture, such as a linear classifier trained to recognize a fixed set of categories, prompting offers greater flexibility and opens up novel applications: a model can perform new tasks, such as recognizing new categories, by following tuned textual instructions or by updating a small number of parameters in its input space while the majority of the pre-trained parameters remain untouched. This paradigm has also been central to recent advances in conversational human-AI interaction. Within a short period, the effectiveness of prompting has been demonstrated across a wide range of problem domains, including image classification, object detection, image generation and editing, video analytics, and robot control. This tutorial aims to provide a comprehensive background on prompting by drawing connections between research in computer vision and natural language processing, and to review the latest advances in using prompting to tackle computer vision problems.
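To make the input-space idea concrete, below is a minimal, illustrative PyTorch sketch of visual prompting: a learnable frame of pixels is added to each image, and only those pixels are optimized while the pretrained backbone stays frozen. The class name VisualPrompt, the ResNet-18 backbone, and all hyperparameters are our own illustrative choices, not code from the tutorial.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class VisualPrompt(nn.Module):
        """Learnable frame of pixels added to the input image (input-space prompting)."""
        def __init__(self, image_size=224, pad=16):
            super().__init__()
            self.pad = pad
            self.image_size = image_size
            self.top = nn.Parameter(torch.zeros(3, pad, image_size))
            self.bottom = nn.Parameter(torch.zeros(3, pad, image_size))
            self.left = nn.Parameter(torch.zeros(3, image_size - 2 * pad, pad))
            self.right = nn.Parameter(torch.zeros(3, image_size - 2 * pad, pad))

        def forward(self, x):
            # Assemble the full-size prompt from the four frame pieces and add it to x.
            interior = torch.zeros(3, self.image_size - 2 * self.pad,
                                   self.image_size - 2 * self.pad, device=x.device)
            middle = torch.cat([self.left, interior, self.right], dim=2)
            prompt = torch.cat([self.top, middle, self.bottom], dim=1)
            return x + prompt

    # Stand-in backbone; in practice load pretrained weights, e.g. weights="IMAGENET1K_V1".
    backbone = resnet18(weights=None)
    backbone.eval()  # keep the batch-norm statistics of the pretrained model fixed
    for p in backbone.parameters():
        p.requires_grad = False  # the backbone is never updated

    prompt = VisualPrompt()
    optimizer = torch.optim.Adam(prompt.parameters(), lr=1e-3)

    # One illustrative training step on dummy data.
    images = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, 1000, (4,))
    loss = nn.functional.cross_entropy(backbone(prompt(images)), labels)
    loss.backward()
    optimizer.step()

Because only the frame pixels receive gradients, a single frozen model can be adapted to many downstream tasks by swapping in different prompts.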


09:00 am - 09:10 am Opening remarks, by Kaiyang Zhou
09:10 am - 09:50 am Prompting in visual intelligence and generation, by Kaiyang Zhou & Ziwei Liu [slides-a & video-a, slides-b & video-b]
09:50 am - 10:30 am Teaching language models to reason, by Denny Zhou [slides]
10:30 am - 10:40 am Coffee break
10:40 am - 11:20 am Visual prompting, by Hyojin Bahng & Phillip Isola [slides]
11:20 am - 12:00 pm Improved model adaptation via weight interpolation and prompt generation, by Sarah Pratt & Ludwig Schmidt [slides-a & video-a, video-b]


Big thanks to everyone who joined our tutorial, in person and online!


Please contact Kaiyang Zhou for general inquiries.