Excited to announce that our paper Decoupling Representation and Classifier for Long-Tailed Recognition was accepted at the International Conference on Learning Representations (ICLR) 2020!
Moreover, the 1st workshop on Computer Vision for Agriculture (CV4A), the second workshop of the Computer Vision for Global Challenges initiative, will be held in April 2020 in conjunction with ICLR, in Addis Ababa, Ethiopia. Find more information at the CV4A Workshop webpage.
Excited to teach a short tutorial on "Image representations and fine-grained recognition" at Data Science Africa, in Accra, Ghana, on October 22nd. The tutorial slides can be found here in normal resolution (~6.2MB) or lower resolution (~1.6MB).
Our paper Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution was accepted at ICCV 2019! Work with awesome collaborators from the National University of Singapore.
Delighted to announce that four papers (3 posters and 1 oral) were accepted at CVPR 2019!
The first Workshop on Computer Vision for Global Challenges (CV4GC) was accepted at CVPR this year! Really excited about organizing an initiative to bring the computer vision community closer to socially impactful tasks, datasets and applications for the whole world.
- The CV4GC website
- Computer vision and global challenges: New research and applications
- Computer Vision for Global Challenges research award winners
Our paper Focal Visual-Text Attention for Memex Question Answering was accepted for publication in the IEEE Transactions on Pattern Analysis and Machine Intelligence (impact factor: 9.455). It introduces our MemexQA Dataset, the first publicly available multimodal question answering dataset consisting of real personal photo albums.
Our paper Large-scale Visual Relationship Detection was accepted at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019, acceptance rate 16.2%). [Update Feb 2019]: Code is now on GitHub.
Our paper A^2-Nets: Double Attention Networks was accepted at NeurIPS 2018. See you in Montreal! Work with awesome collaborators from the National University of Singapore.
Our work on visual similarity search over the whole Flickr corpus just launched! Try it yourselves by clicking on the magnifying glass icon at the top right corner of any photo page! Story covered in The Verge, Engadget, PetaPixel, Digital Trends and VentureBeat.
After two amazing years at Yahoo Research, I joined the Computer Vision Group at Facebook Research in Menlo Park.
Our paper "Tag Prediction in Flickr: A View from the Darkroom" on large-scale image classification with noisy training data received the best paper award at the 1st Workshop on Large Scale Computer Vision Systems at NeurIPS 2016.
Our paper "Multimodal Classification of Moderated Online Pro-Eating Disorder Content" was accepted at the ACM CHI 2017 conference (25% acceptance rate).
I will be a guest lecturer for Fei-Fei's and Juan Carlos' CS 131 Computer Vision: Foundations and Applications course at Stanford during the 2016-2017 Fall Semester.
I grew up and lived in Greece until 2015, with brief breaks in Sweden, Spain and the United States. I lived in San Francisco from 2015 till 2017 and am currently in Oakland.
From 2015 to 2017 I was a research scientist at Yahoo Research. The large-scale visual similarity search work of my PhD came to a nice closure when we applied it to a truly web-scale, real-time application, powering the visual search feature on Flickr. At the same time, my interests expanded towards modeling of vision and language, and I collaborated with Stanford on the Visual Genome project.
From February 2017 to October 2019, I was a research scientist at Facebook AI. During this time my interests expanded to video understanding and deep learning architecture modeling. Currently, my research interests include representation learning, video understanding, multi-modal classification, and large-scale vision and language.
Full list at my Google Scholar profile.