Recognition in the Open World

We teach computer vision models to say 'Sorry, I don't know...' when shown images they genuinely cannot recognize.

Current models usually adopt the closed-world assumption, which makes them overconfident: a dog classifier, for example, will firmly assign a handful of alien images to some dog breed with high confidence scores. We study image classification under alternative open-world assumptions.
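The contrast between the two assumptions can be sketched with a toy rejection rule. This is a minimal illustration, not the method studied here: the classifier, the class names, and the confidence threshold of 0.5 are all illustrative assumptions.

```python
import numpy as np

# Closed-world: always pick one of the known classes.
# Open-world sketch: additionally reject inputs whose top softmax
# confidence is low, answering "Sorry, I don't know..." instead.
# The 0.5 threshold is an illustrative assumption.

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_open_world(logits, classes, threshold=0.5):
    probs = softmax(logits)
    top = probs.argmax(axis=-1)
    conf = probs.max(axis=-1)
    return [classes[i] if c >= threshold else "Sorry, I don't know..."
            for i, c in zip(top, conf)]

classes = ["husky", "poodle", "beagle"]
logits = np.array([
    [6.0, 1.0, 0.5],   # one logit dominates: a confident, known dog
    [1.1, 1.0, 0.9],   # flat logits (e.g. an alien image): reject
])
preds = predict_open_world(logits, classes)
```

A closed-world classifier would return a dog breed for both rows; the thresholded version rejects the second.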

Paper on the topic:

Webly- and Semi-Supervised Learning

We learn powerful classification backbones from limited/noisy supervision.

Deploying an industrial-grade image classifier classically requires a large-scale, well-annotated dataset. Our goal is to reduce the reliance on expensive human labeling by using weaker/cheaper annotations (webly supervised learning) and fewer annotations (semi-supervised learning).
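One common semi-supervised recipe that uses fewer annotations is pseudo-labeling: train on the few labeled examples, then pull in confident predictions on unlabeled data as extra supervision. The sketch below is illustrative only; a nearest-centroid classifier stands in for a deep backbone, and the margin-based confidence rule is an assumption, not the method referenced here.

```python
import numpy as np

# Pseudo-labeling sketch: fit on scarce labels, predict on unlabeled
# data, and keep only confident predictions (clear margin between the
# two nearest class centroids) as additional training labels.

def centroids(X, y, n_classes):
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def pseudo_label(X_lab, y_lab, X_unlab, n_classes, margin=1.0):
    C = centroids(X_lab, y_lab, n_classes)
    d = np.linalg.norm(X_unlab[:, None, :] - C[None], axis=-1)
    order = np.sort(d, axis=1)
    confident = (order[:, 1] - order[:, 0]) > margin  # clear winner only
    y_hat = d.argmin(axis=1)
    X_new = np.concatenate([X_lab, X_unlab[confident]])
    y_new = np.concatenate([y_lab, y_hat[confident]])
    return centroids(X_new, y_new, n_classes)  # refit on enlarged set

# Two well-separated clusters, but only one labeled point per class.
rng = np.random.default_rng(0)
X_unlab = np.concatenate([rng.normal(0.0, 0.3, (50, 2)),
                          rng.normal(5.0, 0.3, (50, 2))])
X_lab = np.array([[0.5, 0.5], [4.5, 4.5]])
y_lab = np.array([0, 1])
C = pseudo_label(X_lab, y_lab, X_unlab, n_classes=2)
```

After pseudo-labeling, the refitted centroids sit near the true cluster centers rather than on the two off-center labeled points.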

Paper on the topic:

Graph Neural Networks

We extensively explore GNN models to obtain better performance on large-scale graphs.

The oversmoothing effect of deeper GNN models hinders further performance improvements. We address the problem from two perspectives:

  1. From the model point of view, we instead train overparameterized wide GNNs through a distributed training framework.
  2. From the data point of view, we apply deconvolution preprocessing to the graph signals to neutralize the oversmoothing effect in the later stages.

Paper on the topic:

Causal Inference in Bioinformatics

Correlation is not causation: correlated genes are not necessarily pathogenic genes.

We rule out large numbers of merely correlated genes to locate the pathogenic ones, first via linear mixed models and then via non-linear mixed models.
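A toy example shows why a linear mixed model can discard correlated but non-pathogenic genes. The setup is entirely synthetic and the variance components are assumed known for simplicity (real LMM tools estimate them, e.g. by REML); the weighted-regression form below is a textbook generalized-least-squares sketch, not the specific method referenced here.

```python
import numpy as np

# A phenotype y depends on a causal gene plus a random effect shared
# within each population; a second gene merely tracks the population
# structure. Naive OLS flags both genes as associated, while generalized
# least squares with the structure covariance V (the LMM fixed-effect
# test with known variance components) suppresses the spurious one.

rng = np.random.default_rng(1)
n = 400
pop = np.repeat([0.0, 1.0], n // 2)            # two populations
causal = rng.normal(size=n)                    # truly pathogenic gene
correlated = pop + 0.3 * rng.normal(size=n)    # tracks structure only
u = 2.0 * pop - 1.0                            # shared population effect
y = 0.5 * causal + u + 0.3 * rng.normal(size=n)

K = (pop[:, None] == pop[None, :]).astype(float)  # block "kinship"
Vinv = np.linalg.inv(K + 0.09 * np.eye(n))        # assumed-known cov^-1

def beta(x, y, W):
    # Slope of y on x under weighted least squares with weight W = V^-1;
    # W = I recovers naive OLS.
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)[1]

def ols(x, y):
    return beta(x, y, np.eye(len(x)))

naive = ols(correlated, y)          # large: spurious association
mixed = beta(correlated, y, Vinv)   # near zero: structure explained
causal_hat = beta(causal, y, Vinv)  # survives the correction
```

Accounting for the within-population covariance absorbs the structure-driven signal, so only the causal gene keeps a substantial effect size.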

Paper on the topic: