Projects
Recognition in the Open World
We teach computer vision models to say "Sorry, I don't know" to the images they actually don't know.
Current models usually adopt a closed-world assumption, which makes them overconfident: a dog classifier, shown a handful of aliens, will firmly assign them to a dog breed with high confidence scores. We instead study image classification under an open-world assumption, where models can reject inputs that fall outside the known classes.
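A common baseline for this rejection behavior (a minimal sketch of maximum-softmax-probability thresholding, not the method of the papers below; the 0.5 threshold is a hypothetical choice):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_reject(logits, threshold=0.5):
    """Return the predicted class, or -1 ('Sorry, I don't know')
    when the maximum softmax probability falls below the threshold."""
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=-1)
    preds = probs.argmax(axis=-1)
    return np.where(conf >= threshold, preds, -1)

logits = np.array([[5.0, 0.1, 0.2],    # clearly class 0: accepted
                   [1.0, 1.1, 0.9]])   # near-uniform logits: rejected
print(msp_reject(logits, threshold=0.5))  # prints [ 0 -1]
```

Raising the threshold trades away in-distribution accuracy for a lower risk of confidently mislabeling unknown inputs.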
Papers on the topic:
- Semantically Coherent Out-of-Distribution Detection (accepted to ICCV-21)
- Generalized Out-of-Distribution Detection: A Survey (submitted to TPAMI)
Webly and Semi-Supervised Learning
We learn powerful classification backbones from limited/noisy supervision.
Deploying an industrial image classifier classically requires a large-scale, well-annotated dataset. Our goal is to reduce the reliance on expensive human labeling by using weaker/cheaper annotations (webly supervised learning) or fewer annotations (semi-supervised learning).
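One standard way to exploit unlabeled data is self-training with pseudo-labels: fit on the labeled set, predict on the unlabeled set, and keep only confident predictions as new training labels. A toy sketch with a nearest-centroid classifier and a crude distance-based confidence (all of it hypothetical and illustrative, not the method of the papers below):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit one centroid per class on labeled data."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    """Predict the nearest class and a crude confidence proxy."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    conf = 1.0 / (1.0 + d.min(axis=1))  # closer to a centroid -> higher confidence
    return classes[d.argmin(axis=1)], conf

def pseudo_label_round(X_lab, y_lab, X_unlab, conf_thresh=0.4):
    """One self-training round: predict on unlabeled data and keep
    only confident pseudo-labels for the next training set."""
    classes, centroids = nearest_centroid_fit(X_lab, y_lab)
    y_hat, conf = nearest_centroid_predict(X_unlab, classes, centroids)
    keep = conf >= conf_thresh
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, y_hat[keep]])
    return X_new, y_new

X_lab = np.array([[0.0, 0.0], [10.0, 10.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.5, 0.5], [9.5, 9.5], [5.0, 5.0]])  # last point is ambiguous
X_new, y_new = pseudo_label_round(X_lab, y_lab, X_unlab)
print(len(y_new))  # prints 4: the ambiguous point is left out
```

The confidence threshold plays the same role as label-noise filtering: ambiguous unlabeled samples are excluded rather than pseudo-labeled incorrectly.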
Papers on the topic:
- Webly supervised image classification with metadata: Automatic noisy label correction via visual-semantic graph. (accepted to MM-20, Oral)
- Webly supervised image classification with self-contained confidence. (accepted to ECCV-20)
- First prize in the Semi-Supervised Challenge at the CVPR-20 FGVC7 workshop. report
Graph Neural Networks
We extensively explore GNN models to obtain better performance on large-scale graphs.
The oversmoothing effect in deeper GNN models hinders further performance improvement. We address the problem from two perspectives:
- From a model point of view, we instead train overparameterized wide GNNs through a distributed training framework.
- From a data point of view, we apply deconvolution preprocessing to the graph signals to neutralize the oversmoothing effect in later layers.
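The oversmoothing effect itself is easy to reproduce: repeatedly applying a normalized graph filter drives all node representations toward the same vector, erasing the discriminative information between nodes. A minimal sketch on a hypothetical 4-node path graph (illustrative only, not the experimental setup of the papers below):

```python
import numpy as np

# Toy 4-node path graph (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                  # add self-loops
S = A_hat / A_hat.sum(axis=1)[:, None]  # row-normalized (random-walk) filter

X = np.random.default_rng(0).normal(size=(4, 3))  # random node features
for k in [1, 2, 8, 32]:
    Xk = np.linalg.matrix_power(S, k) @ X
    # Feature spread across nodes shrinks as k grows: oversmoothing.
    print(k, Xk.std(axis=0).mean())
```

By k = 32 the per-feature standard deviation across nodes is essentially zero: every node carries the same representation, which is exactly what deeper stacks of graph convolutions suffer from, and what a deconvolution preprocessing step aims to counteract.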
Papers on the topic:
- GIST: Distributed training for large-scale graph convolutional networks. (Preprint)
- Enhancing geometric deep learning via graph filter deconvolution. (accepted to GlobalSIP-18)
Causal Inference in Bioinformatics
Correlation is not causation. Correlated genes are not necessarily pathogenic genes.
We screen out a large number of merely correlated genes to locate pathogenic ones, first via linear mixed models and subsequently via non-linear mixed models.
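The core idea can be illustrated with a toy sketch (not the actual linear-mixed-model pipeline): a gene that is associated with the phenotype only through a shared confounder loses its association once the confounder is regressed out, while the causal gene does not. All variable names and effect sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
c = rng.normal(size=n)                        # confounder (e.g., population structure)
g_causal = rng.normal(size=n)                 # truly pathogenic gene signal
g_related = c + 0.1 * rng.normal(size=n)      # merely correlated gene
y = g_causal + c + 0.1 * rng.normal(size=n)   # phenotype

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def residualize(v, c):
    """Remove the component of v explained by confounder c (simple OLS)."""
    beta = np.dot(c, v) / np.dot(c, c)
    return v - beta * c

# Marginal correlation flags BOTH genes as strongly associated...
print(corr(y, g_causal), corr(y, g_related))
# ...but after adjusting for the confounder, only the causal gene survives.
print(corr(residualize(y, c), residualize(g_causal, c)),
      corr(residualize(y, c), residualize(g_related, c)))
```

Linear mixed models generalize this adjustment by modeling relatedness among samples as a random effect instead of a single observed covariate.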
Paper on the topic: