ECCV 2020
Phil’s Thoughts
Due to Covid-19, this year's European Conference on Computer Vision was streamed online. This probably hurt the networking and mingling that normally come with such conferences, but it also meant we could attend from the comfort of our own homes. There were some of the usual hiccups, and time-zone differences made some presentations hard to attend live, but it was still a very fruitful and interesting experience.
Personally, I focused mostly on the presentations because they were the most convenient to watch. I took a haphazard, spontaneous approach to the conference, attending talks that seemed interesting and leaving ones that did not.
A good place to start is with the presentations for the top three papers:
- RAFT: Recurrent All-Pairs Field Transforms for Optical Flow by Zachary Teed and Jia Deng
- Towards Streaming Perception by Mengtian Li, Yu-Xiong Wang, and Deva Ramanan
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by Ben Mildenhall et al.
Having watched them all, I can say they each deserve their awards.
Phil’s Presentation Top-Pick
Rewriting a Deep Generative Model by David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba (website)
Considering that the main activity at this conference is watching the presentations, I think the organizers were remiss not to have an award for best presentation, so I am making my own. David Bau deserves it for how clearly he presented his ideas; his passion for his research, and for teaching others about it, really shows.
Bau’s talk was about using a generative model to replicate a photograph and then modifying its neurons to change the output in some way, such as adding a door to a building. Interestingly, these changes are fairly localized and do not alter the rest of the image. The purpose was mainly to investigate how generative neural networks "think", and he demonstrated the capability by generalizing an edit across photographs, adding trees to all of the towers in a set of images. Most interesting of all, the model produced different trees on different towers, and different doors on different buildings.
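To make the neuron-editing idea concrete, here is a toy sketch (entirely my own illustration, not the authors' actual method or code): a tiny two-layer "generator" with made-up weights, where zeroing out ("ablating") one hidden unit changes only the part of the output that depends on it, while the rest is untouched.

```python
# Toy illustration of ablating a mid-network neuron. The "generator"
# and its weights W1/W2 are hypothetical, chosen only to show that
# the edit is localized to outputs downstream of the ablated unit.

# Hypothetical fixed weights so the example is deterministic.
W1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # input -> 3 hidden units
W2 = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]    # hidden -> 2 outputs

def forward(z, ablated=frozenset()):
    # Hidden layer: each unit is a weighted sum of the input.
    hidden = [sum(w * x for w, x in zip(row, z)) for row in W1]
    # Ablation: zero the chosen hidden units, "turning off" neurons.
    hidden = [0.0 if i in ablated else h for i, h in enumerate(hidden)]
    # Output layer.
    return [sum(w * h for w, h in zip(row, hidden)) for row in W2]

z = [2.0, 3.0]
full = forward(z)                  # all neurons active -> [2.0, 8.0]
edited = forward(z, ablated={2})   # hidden unit 2 off  -> [2.0, 3.0]
```

Only the second output, which reads from hidden unit 2, changes; the first output is identical in both runs, mirroring how Bau's edits leave the rest of the image intact.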
His Q&A session mainly covered how he made the discovery: while simply fiddling around to remove a watermark from a photograph, he found that turning off some neurons in the middle of the network got rid of it completely. He also discussed some limitations of the method; for example, trying to create a door in the middle of the sky produces unexpected results, because the generative network has no idea how to make a door in the sky. It goes to show how much you can learn by just fooling around and having fun.