Xudong Lin

Ph.D. Student at Columbia University
xudong.lin@columbia.edu

Resume | GitHub | Google Scholar

About Me

I am Xudong Lin (in Chinese, 蔺旭东), a Ph.D. student in Computer Science at Columbia University. I do research in the DVMM lab, advised by Prof. Shih-Fu Chang, and I also work closely with Prof. Heng Ji and Prof. Zheng Shou. My research interests lie in multimedia content understanding, representation learning, and video analysis. My research vision is upgradable multimodal AI.

As an undergraduate in Mathematics and Physics, I spent four wonderful years at Tsinghua University, where I began my research under the supervision of Prof. Jiwen Lu and Prof. Jie Zhou.

I was also fortunate to work with Prof. B.S. Manjunath at UCSB, and with Dr. Lin Ma and Dr. Wei Liu at Tencent AI Lab. I was likewise happy to spend two wonderful summers at Facebook AI Research with Prof. Lorenzo Torresani, Prof. Devi Parikh, Prof. Gedas Bertasius, Dr. Fabio Petroni, Dr. Marcus Rohrbach, and Dr. Jue Wang.

I am on the job market. Send me an email if you are interested!

Recent Activities

New Trends in Multimodal Human Action Perception, Understanding and Generation (MANGO), Workshop to appear at CVPR 2024.

Knowledge-Driven Vision-Language Encoding, Tutorial at CVPR 2023. June 2023.

Knowledge-Driven Vision-Language Pretraining, Tutorial at AAAI 2023. February 2023.

Selected Papers

Full list

Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval
Xudong Lin, Simran Tiwari, Shiyuan Huang, Manling Li, Mike Zheng Shou, Heng Ji, Shih-Fu Chang
Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2023
arXiv | Project Page

Learning To Recognize Procedural Activities with Distant Supervision
Xudong Lin, Fabio Petroni, Gedas Bertasius, Marcus Rohrbach, Shih-Fu Chang, Lorenzo Torresani
Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2022
arXiv | Project Page

CLIP-Event: Connecting Text and Images with Event Structures
Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, Shih-Fu Chang
Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2022
arXiv | Project Page

Joint Multimedia Event Extraction from Video and Article
Brian Chen*, Xudong Lin*, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, Shih-Fu Chang (* indicates equal contribution)
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP) Findings
arXiv | Project Page

VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs
Xudong Lin, Gedas Bertasius, Jue Wang, Shih-Fu Chang, Devi Parikh, Lorenzo Torresani
Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021
arXiv | Patent

Co-Grounding Networks with Semantic Attention for Referring Expression Comprehension in Videos
Sijie Song, Xudong Lin, Jiaying Liu, Zongming Guo, Shih-Fu Chang
Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021
arXiv | Project Page

Context-Gated Convolution
Xudong Lin, Lin Ma, Wei Liu, Shih-Fu Chang
Accepted by European Conference on Computer Vision (ECCV) 2020
arXiv | Project Page

DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition
Zheng Shou, Xudong Lin, Yannis Kalantidis, Laura Sevilla-Lara, Marcus Rohrbach, Shih-Fu Chang, Zhicheng Yan
Accepted by IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019
Paper | Project Page

Deep Variational Metric Learning
Xudong Lin, Y. Duan, Q. Dong, J. Lu, J. Zhou
Accepted by European Conference on Computer Vision (ECCV) 2018
Paper

Academic Service

Expert Reviewer: Transactions on Machine Learning Research

Reviewer: CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, EMNLP, ARR, AAAI, IJCAI, ACM MM, WACV

Reviewer: IEEE Transactions on Pattern Analysis and Machine Intelligence, International Journal of Computer Vision, Neurocomputing, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Circuits and Systems for Video Technology

“More is different.”