Invited Speakers

Schedule

8:30-8:40 Opening Remarks (Workshop Organizers)
8:40-9:15 Invited Talk
9:15-9:50 Invited Talk
9:50-10:00 Video-guided Machine Translation Challenge
10:00-10:10 Challenge Talk [runner up]
10:10-10:20 Challenge Talk [winner]
10:20-10:50 Coffee Break and Poster Session
10:50-11:25 Invited Talk
11:25-12:00 Invited Talk
12:00-2:00 Lunch
2:00-2:35 Invited Talk
2:35-2:45 REVERIE Challenge
2:45-2:55 Challenge Talk [runner up]
2:55-3:05 Challenge Talk [winner]
3:05-3:30 Poster Highlights
3:30-4:00 Coffee Break and Poster Session
4:00-4:35 Invited Talk
4:35-5:10 Invited Talk
5:10-5:40 Panel Discussion

Overview and Call For Papers

Language and vision research has attracted great attention from both natural language processing (NLP) and computer vision (CV) researchers. The area is gradually shifting from passive perception, templated language, and synthetic imagery/environments to active perception, natural language, and photo-realistic simulation or real-world deployment. Thus far, few workshops on language and vision research have been organized by groups from the NLP community. We propose the first workshop on Advances in Language and Vision Research (ALVR) to advance the frontier of language and vision research and to bring interested researchers together to discuss how best to tackle and solve real-world problems in this area.

This workshop covers (but is not limited to) the following topics:

Video-guided Machine Translation (VMT) Challenge

We also hold the first Video-guided Machine Translation (VMT) Challenge. The challenge aims to benchmark progress towards models that translate a source-language sentence into the target language, using video information as additional spatiotemporal context. It is based on the recently released large-scale multilingual video description dataset VATEX, which contains over 41,250 videos and 825,000 high-quality captions in both English and Chinese, half of which are English-Chinese translation pairs.
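To make the task setup concrete, below is a minimal Python sketch of a video-guided translation interface. The class and field names (VMTExample, video_features, VideoGuidedTranslator) and the toy exact-match scorer are illustrative assumptions, not the official VATEX data schema or the challenge's evaluation protocol.

    # Illustrative sketch only; field names do not reflect the official VATEX schema.
    from dataclasses import dataclass
    from typing import List, Sequence


    @dataclass
    class VMTExample:
        """One English-Chinese translation pair grounded in a video."""
        video_id: str
        en_caption: str                             # source-language sentence
        zh_caption: str                             # reference translation
        video_features: Sequence[Sequence[float]]   # per-segment spatiotemporal features


    class VideoGuidedTranslator:
        """Placeholder model: a real system conditions the decoder on both
        the source sentence and the video's spatiotemporal features."""

        def translate(self, source: str,
                      video_features: Sequence[Sequence[float]]) -> str:
            # Toy behaviour: echo the source; a real model returns a target-language sentence.
            return source


    def evaluate(model: VideoGuidedTranslator, data: List[VMTExample]) -> float:
        """Toy exact-match score; a real evaluation would use standard MT metrics such as BLEU."""
        hits = sum(
            model.translate(ex.en_caption, ex.video_features).strip() == ex.zh_caption.strip()
            for ex in data
        )
        return hits / max(len(data), 1)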

Winners will be announced and awarded at the workshop.

Please refer to the VMT Challenge website for additional details on participation, important dates, and starter code!

REVERIE Challenge

The objective of the REVERIE Challenge is to benchmark the state of the art on the remote object grounding task defined in our CVPR paper, in the hope of driving progress towards more flexible and powerful human-robot interaction.

The REVERIE task requires an intelligent agent to correctly localise a remote target object (one that cannot be observed from the starting location) specified by a concise, high-level natural language instruction, such as 'bring me the blue cushion from the sofa in the living room'. Since the target object is in a different room from the starting one, the agent first needs to navigate to the goal location. When the agent decides to stop, it must select one object from a list of candidates provided by the simulator. The agent may attempt to localise the target at any step; when to do so is entirely up to the algorithm design. However, the agent is allowed to output a prediction only once per episode, i.e., it gets a single guess per run.
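To illustrate this episode protocol, here is a minimal Python sketch of a navigate-then-select loop. The Simulator and Agent interfaces, their method names, and the step budget are hypothetical stand-ins, not the official REVERIE API; the sketch only shows that the agent navigates until it stops and is then granted exactly one object prediction.

    # Hypothetical interfaces for illustration; not the official REVERIE API.
    from typing import List, Optional, Protocol


    class Simulator(Protocol):
        def observation(self) -> dict: ...             # current view, pose, etc.
        def step(self, action: str) -> None: ...       # execute a navigation action
        def candidate_objects(self) -> List[str]: ...  # object candidates at the stop location


    class Agent(Protocol):
        def act(self, instruction: str, observation: dict) -> str: ...  # e.g. 'forward', 'stop'
        def select_object(self, instruction: str, candidates: List[str]) -> str: ...


    def run_episode(agent: Agent, sim: Simulator, instruction: str,
                    max_steps: int = 50) -> Optional[str]:
        """Navigate until the agent stops, then allow exactly one object guess."""
        for _ in range(max_steps):
            action = agent.act(instruction, sim.observation())
            if action == "stop":
                # The agent may choose to stop (and thus guess) at any step,
                # but each episode accepts only this single final prediction.
                return agent.select_object(instruction, sim.candidate_objects())
            sim.step(action)
        return None  # no prediction made within the step budget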

Please visit the REVERIE Challenge website for more details!

Important Dates

Submission

The workshop includes an archival and a non-archival track on topics related to language-and-vision research. For both tracks, the reviewing process is single-blind: reviewers will know the authors' identities, but not the other way around. Submission is electronic, using the Softconf START conference management system. The submission site will be available at https://www.softconf.com/acl2020/alvr/.

If you are interested in taking a more active part in the workshop, we also encourage you to apply to join the program committee and participate in reviewing submissions via this link: https://forms.gle/voyxjQLFb8duYM5e7. Qualified reviewers will be selected based on prior reviewing experience and publication record.

Archival Track

The archival track follows the ACL short paper format. Submissions may consist of up to 4 pages of content (excluding references) in ACL format (style sheets are available below), plus unlimited references. Accepted papers will be given 5 content pages for the camera-ready version; authors are encouraged to use the additional page to address reviewers' comments in their final versions. Papers accepted to the archival track will be included in the ACL 2020 Workshop proceedings. The archival track does not accept double submissions, i.e., previously published papers or concurrent submissions to other conferences or workshops.

Submissions to the archival track must follow the ACL Author Guidelines. Style sheets (LaTeX, Word) are available here, and an Overleaf template is also available here.

Non-archival Track

The workshop also includes a non-archival track to allow submission of previously published papers and double submission to ALVR and other conferences or journals. Accepted non-archival papers can still be presented as posters at the workshop.

There are no formatting or page restrictions for non-archival submissions. Papers accepted to the non-archival track will be displayed on the workshop website but will NOT be included in the ACL 2020 Workshop proceedings or otherwise archived.

Proceedings

TBD

Organizers and PCs

Organizers

  • Xin Wang, UC Santa Barbara (xwang@cs.ucsb.edu)
  • Jesse Thomason, University of Washington (thomason.jesse@gmail.com)
  • Ronghang Hu, UC Berkeley (ronghang@berkeley.edu)
  • Xinlei Chen, Facebook AI Research (xinleic@fb.com)
  • Peter Anderson, Georgia Tech (peter.anderson@gatech.edu)
  • Qi Wu, Adelaide University (qi.wu01@adelaide.edu.au)
  • Asli Celikyilmaz, Microsoft Research (asli.ca@live.com)
  • Jason Baldridge, Google Research (jasonbaldridge@google.com)
  • William Yang Wang, UC Santa Barbara (william@cs.ucsb.edu)

Contact the Organizing Committee: alvr-2020@googlegroups.com

Program Committee

  • Jacob Andreas, MIT
  • Angel Chang, Simon Fraser University
  • Devendra Chaplot, CMU
  • Abhishek Das, Georgia Tech
  • Daniel Fried, UC Berkeley
  • Zhe Gan, Microsoft
  • Christopher Kanan, Rochester Institute of Technology
  • Jiasen Lu, Georgia Tech
  • Ray Mooney, University of Texas at Austin
  • Khanh Nguyen, University of Maryland
  • Aishwarya Padmakumar, University of Texas at Austin
  • Hamid Palangi, Microsoft Research
  • Alessandro Suglia, Heriot-Watt University
  • Vikas Raunak, CMU
  • Volkan Cirik, CMU
  • Parminder Bhatia, Amazon
  • Khyathi Raghavi Chandu, CMU
  • Asma Ben Abacha, NIH/NLM
  • Thoudam Doren Singh, National Institute of Technology, Silchar, India
  • Dhivya Chinnappa, Thomson Reuters
  • Shailza Jolly, TU Kaiserslautern
  • Alok Singh, National Institute of Technology, Silchar, India
  • Mohamed Elhoseiny, KAUST
  • Marimuthu Kalimuthu, Saarland University
  • Simon Dobnik, University of Gothenburg
  • Shruti Palaskar, CMU

If you are interested in taking a more active part in the workshop, apply to join the program committee.