Workshop at ACL 2022

May 27th, 2022

About

Two years after the appearance of GPT-3, large language models seem to have taken over NLP. Their capabilities, limitations, societal impact, and the potential new applications they have unlocked have been discussed and debated at length. A handful of replication studies have been published since then, confirming some of the initial findings and uncovering new limitations.

This workshop aims to gather researchers and practitioners involved in the creation of these models in order to:

  1. Share ideas on the next directions of research in this field, including – but not limited to – grounding, multi-modal models, continuous updates and reasoning capabilities.
  2. Share best practices, brainstorm solutions to identified limitations, and discuss open challenges.

This workshop is organized by the BigScience initiative and will also serve as the closing session of this year-long effort to develop a multilingual large language model. BigScience gathers 1,000+ researchers from more than 60 countries and 250 institutions and research labs, with the goal of investigating the creation of a large-scale dataset and model from a very wide diversity of angles.

Call for Papers

We call for relevant contributions in either long (8 pages) or short (4 pages) format. Accepted papers will be presented during a poster session. Submissions can be archival or non-archival. Submission opens on February 1st, 2022, and should be made through ARR via OpenReview. For more information about templates, guidelines, and instructions, see the ARR CFP guidelines.

Dates

Programme

Panels (pre-recorded)

Scaling

Myle Ott, Connor Leahy, Shruti Bhosale

Moderator: Matthias Gallé

Ethical & Legal Considerations