Octave Convolution & Cyclical Visual Grounding (June 2019)

Two new technical reports are on arXiv.

Numerous implementations of OctConv can be found on Papers With Code.

Papers accepted at CVPR 2019 (February 2019)

Delighted to announce that four papers were accepted at CVPR 2019!

Computer Vision for Global Challenges Workshop @ CVPR 2019 (January 2019)

Our first Workshop on Computer Vision for Global Challenges (CV4GC) was accepted at CVPR this year! Really excited about organizing an initiative to bring the computer vision community closer to socially impactful tasks, datasets and applications for the whole world. Check out the CV4GC website!

Paper accepted in TPAMI (January 2019)

Our paper Focal Visual-Text Attention for Memex Question Answering was accepted for publication in the IEEE Transactions on Pattern Analysis and Machine Intelligence (impact factor: 9.455). It introduces our MemexQA Dataset, the first publicly available multimodal question answering dataset consisting of real personal photo albums.

Large-scale Visual Relationship Detection @ AAAI 2019 (November 2018)

Our paper Large-scale Visual Relationship Detection was accepted at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019, acceptance rate 16.2%).
[Update Feb 2019]: Code is now on GitHub.

Double Attention Networks @ NIPS 2018 (September 2018)

Our paper A^2-Nets: Double Attention Networks was accepted at NIPS 2018. See you in Montreal! Joint work with awesome collaborators from the National University of Singapore.

Multi-Fiber Networks @ ECCV 2018 (July 2018)

Our paper Multi-Fiber Networks for Video Recognition was accepted at the European Conference on Computer Vision (ECCV) 2018. Joint work with awesome collaborators from the National University of Singapore. Code on GitHub.

Do Silhouettes Dream? (May 2017)

Our interactive art installation entitled Do Silhouettes Dream? will be on display from July 26th till August 2nd 2017 at the ArtScience Museum in Singapore. More information at this page. You may watch a short interview with Cheng and me here.

Similarity Search at Flickr (March 2017)

Our work on visual similarity search over the whole Flickr corpus just launched! Try it yourselves by clicking on the magnifying glass icon at the top right corner of any photo page! Story covered in The Verge, Engadget, Petapixel, Digital Trends and Venture Beat.

New chapter: Facebook Research (February 2017)

After two amazing years at Yahoo Research, joined the Computer Vision Group at Facebook Research in Menlo Park.

Best paper at LSCVS Workshop in NIPS 2016 (December 2016)

Our paper "Tag Prediction in Flickr: A view from the darkroom" on large scale image classification with noisy training data received the best paper award at the 1st Workshop on Large Scale Computer Vision Systems at NIPS 2016.

Paper accepted at CHI 2017 (December 2016)

Our paper "Multimodal Classification of Moderated Online Pro-Eating Disorder Content" was accepted at the ACM CHI 2017 conference (25% acceptance rate).

Guest Lecturer at Stanford CS-131 (October 2016)

Will be a guest lecturer at Fei-Fei's and Juan Carlos' CS 131 Computer Vision: Foundations and Applications course at Stanford during the 2016-2017 Fall Semester.

Paper accepted at WSDM 2017 (October 2016)

Our paper "Delving Deep into Personal Photo and Video Search" was accepted for publication at WSDM 2017 (16% acceptance rate).


About Me

Grew up and lived in Greece until 2015, with brief breaks in Sweden, Spain and the United States. Lived in San Francisco from 2015 till 2017 and currently live in Oakland.

Got my PhD in late 2014 from the National Technical University of Athens under the supervision of Prof. Stefanos Kollias and Yannis Avrithis, working closely with my research brother Giorgos Tolias.

The large-scale visual similarity search work of my PhD came to a nice closure while I was a researcher at Yahoo Research, as it was applied at a truly web scale, powering the visual search feature on Flickr. At the same time, my interests expanded towards modeling of vision and language, and I collaborated with Stanford on the Visual Genome project.

Currently conducting research and development on video understanding, temporal segmentation, learning image & video representations, multi-modal classification and large-scale vision and language.

Contact Details

ykalant(at) yannisk(at)

Development projects and Demos

Selected Publications

Full list at my Google Scholar profile.
