(NIPS2013_DeViSE) DeViSE: A Deep Visual-Semantic Embedding Model.
Andrea Frome, Greg S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc’Aurelio Ranzato, Tomas Mikolov.
[paper]
(TACL2014_SDT-RNN) Grounded Compositional Semantics for Finding and Describing Images with Sentences.
Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, Andrew Y. Ng.
[paper]
(NIPSws2014_UVSE) Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models.
Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel.
[paper]
[code]
[demo]
(NIPS2014_DeFrag) Deep Fragment Embeddings for Bidirectional Image Sentence Mapping.
Andrej Karpathy, Armand Joulin, Li Fei-Fei.
[paper]
(ICCV2015_m-CNN) Multimodal Convolutional Neural Networks for Matching Image and Sentence.
Lin Ma, Zhengdong Lu, Lifeng Shang, Hang Li.
[paper]
(CVPR2015_DCCA) Deep Correlation for Matching Images and Text.
Fei Yan, Krystian Mikolajczyk.
[paper]
(CVPR2015_FV) Associating Neural Word Embeddings with Deep Image Representations using Fisher Vectors.
Benjamin Klein, Guy Lev, Gil Sadeh, Lior Wolf.
[paper]
(CVPR2015_DVSA) Deep Visual-Semantic Alignments for Generating Image Descriptions.
Andrej Karpathy, Li Fei-Fei.
[paper]
(NIPS2015_STV) Skip-Thought Vectors.
Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, Sanja Fidler.
[paper]
(CVPR2016_SPE) Learning Deep Structure-Preserving Image-Text Embeddings.
Liwei Wang, Yin Li, Svetlana Lazebnik.
[paper]
(ICCV2017_HM-LSTM) Hierarchical Multimodal LSTM for Dense Visual-Semantic Embedding.
Zhenxing Niu, Mo Zhou, Le Wang, Xinbo Gao, Gang Hua.
[paper]
(ICCV2017_RRF-Net) Learning a Recurrent Residual Fusion Network for Multimodal Matching.
Yu Liu, Yanming Guo, Erwin M. Bakker, Michael S. Lew.
[paper]
(CVPR2017_2WayNet) Linking Image and Text with 2-Way Nets.
Aviv Eisenschtat, Lior Wolf.
[paper]
(MM2018_WSJE) Webly Supervised Joint Embedding for Cross-Modal Image-Text Retrieval.
Niluthpol Chowdhury Mithun, Rameswar Panda, Evangelos E. Papalexakis, Amit K. Roy-Chowdhury.
[paper]
(WACV2018_SEAM) Fast Self-Attentive Multimodal Retrieval.
Jônatas Wehrmann, Maurício Armani Lopes, Martin D. More, Rodrigo C. Barros.
[paper]
[code]
(CVPR2018_CSE) End-to-end Convolutional Semantic Embeddings.
Quanzeng You, Zhengyou Zhang, Jiebo Luo.
[paper]
(CVPR2018_CHAIN-VSE) Bidirectional Retrieval Made Simple.
Jonatas Wehrmann, Rodrigo C. Barros.
[paper]
[code]
(CVPR2018_SCO) Learning Semantic Concepts and Order for Image and Sentence Matching.
Yan Huang, Qi Wu, Liang Wang.
[paper]
(NC2019_MDM) Bidirectional Image-Sentence Retrieval by Local and Global Deep Matching.
Lin Ma, Wenhao Jiang, Zequn Jie, Xu Wang.
[paper]
(MM2019_SAEM) Learning Fragment Self-Attention Embeddings for Image-Text Matching.
Yiling Wu, Shuhui Wang, Guoli Song, Qingming Huang.
[paper]
[code]
(ICCV2019_VSRN) Visual Semantic Reasoning for Image-Text Matching.
Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, Yun Fu.
[paper]
[code]
(ICCV2019_LIWE) Language-Agnostic Visual-Semantic Embeddings.
Jonatas Wehrmann, Maurício Armani Lopes, Douglas Souza, Rodrigo Barros.
[paper]
[code]
[demo]
(CVPR2019_Personality) Engaging Image Captioning via Personality.
Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, Jason Weston.
[paper]
(CVPR2019_PVSE) Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval.
Yale Song, Mohammad Soleymani.
[paper]
[code]
(Access2020_GSLS) Combining Global and Local Similarity for Cross-Media Retrieval.
Zhixin Li, Feng Ling, Canlong Zhang, Huifang Ma.
[paper]
(ICPR2020_TERN) Transformer Reasoning Network for Image-Text Matching and Retrieval.
Nicola Messina, Fabrizio Falchi, Andrea Esuli, Giuseppe Amato.
[paper]
[code]
(TOMM2020_TERAN) Fine-grained Visual Textual Alignment for Cross-Modal Retrieval using Transformer Encoders.
Nicola Messina, Giuseppe Amato, Andrea Esuli, Fabrizio Falchi, Claudio Gennaro, Stéphane Marchand-Maillet.
[paper]
[code]
(TOMM2020_NIS) Upgrading the Newsroom: An Automated Image Selection System for News Articles.
Fangyu Liu, Rémi Lebret, Didier Orel, Philippe Sordet, Karl Aberer.
[paper]
[slides]
[demo]
(TCSVT2020_MFM) Matching Image and Sentence With Multi-Faceted Representations.
Lin Ma, Wenhao Jiang, Zequn Jie, Yu-Gang Jiang, Wei Liu.
[paper]
(TCSVT2020_DSRAN) Learning Dual Semantic Relations with Graph Attention for Image-Text Matching.
Keyu Wen, Xiaodong Gu, Qingrong Cheng.
[paper]
[code]
(WACV2020_SGM) Cross-modal Scene Graph Matching for Relationship-aware Image-Text Retrieval.
Sijin Wang, Ruiping Wang, Ziwei Yao, Shiguang Shan, Xilin Chen.
[paper]
(arXiv2014_NIC) Show and Tell: A Neural Image Caption Generator.
Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan.
[paper]
(ICLR2015_m-RNN) Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN).
Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille.
[paper]
[code]
(CVPR2015_LRCN) Long-term Recurrent Convolutional Networks for Visual Recognition and Description.
Jeff Donahue, Lisa Anne Hendricks, Marcus Rohrbach, Subhashini Venugopalan, Sergio Guadarrama, Kate Saenko, Trevor Darrell.
[paper]
(CVPR2017_DAN) Dual Attention Networks for Multimodal Reasoning and Matching.
Hyeonseob Nam, Jung-Woo Ha, Jeonghee Kim.
[paper]
(CVPR2017_sm-LSTM) Instance-aware Image and Sentence Matching with Selective Multimodal LSTM.
Yan Huang, Wei Wang, Liang Wang.
[paper]
(ECCV2018_CITE) Conditional Image-Text Embedding Networks.
Bryan A. Plummer, Paige Kordas, M. Hadi Kiapour, Shuai Zheng, Robinson Piramuthu, Svetlana Lazebnik.
[paper]
(ECCV2018_SCAN) Stacked Cross Attention for Image-Text Matching.
Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, Xiaodong He.
[paper]
[code]
(CVPR2018_DSVE-Loc) Finding beans in burgers: Deep semantic-visual embedding with localization.
Martin Engilberge, Louis Chevallier, Patrick Pérez, Matthieu Cord.
[paper]
(arXiv2019_R-SCAN) Learning Visual Relation Priors for Image-Text Matching and Image Captioning with Neural Scene Graph Generators.
Kuang-Huei Lee, Hamid Palangi, Xi Chen, Houdong Hu, Jianfeng Gao.
[paper]
(arXiv2019_ParNet) ParNet: Position-aware Aggregated Relation Network for Image-Text matching.
Yaxian Xia, Lun Huang, Wenmin Wang, Xiaoyong Wei, Jie Chen.
[paper]
(arXiv2019_TOD-Net) Target-Oriented Deformation of Visual-Semantic Embedding Space.
Takashi Matsubara.
[paper]
(ACML2019_SAVE) Multi-Scale Visual Semantics Aggregation with Self-Attention for End-to-End Image-Text Matching.
Zhuobin Zheng, Youcheng Ben, Chun Yuan.
[paper]
(ICMR2019_OAN) Improving What Cross-Modal Retrieval Models Learn through Object-Oriented Inter- and Intra-Modal Attention Networks.
Po-Yao Huang, Vaibhav, Xiaojun Chang, Alexander Georg Hauptmann.
[paper]
[code]
(MM2019_BFAN) Focus Your Attention: A Bidirectional Focal Attention Network for Image-Text Matching.
Chunxiao Liu, Zhendong Mao, An-An Liu, Tianzhu Zhang, Bin Wang, Yongdong Zhang.
[paper]
[code]
(MM2019_MTFN) Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking.
Tan Wang, Xing Xu, Yang Yang, Alan Hanjalic, Heng Tao Shen, Jingkuan Song.
[paper]
[code]
(IJCAI2019_RDAN) Multi-Level Visual-Semantic Alignments with Relation-Wise Dual Attention Network for Image and Text Matching.
Zhibin Hu, Yongsheng Luo, Jiong Lin, Yan Yan, Jian Chen.
[paper]
(IJCAI2019_PFAN) Position Focused Attention Network for Image-Text Matching.
Yaxiong Wang, Hao Yang, Xueming Qian, Lin Ma, Jing Lu, Biao Li, Xin Fan.
[paper]
[code]
(ICCV2019_CAMP) CAMP: Cross-Modal Adaptive Message Passing for Text-Image Retrieval.
Zihao Wang, Xihui Liu, Hongsheng Li, Lu Sheng, Junjie Yan, Xiaogang Wang, Jing Shao.
[paper]
[code]
(ICCV2019_SAN) Saliency-Guided Attention Network for Image-Sentence Matching.
Zhong Ji, Haoran Wang, Jungong Han, Yanwei Pang.
[paper]
[code]
(TC2020_SMAN) SMAN: Stacked Multimodal Attention Network for Cross-Modal Image-Text Retrieval.
Zhong Ji, Haoran Wang, Jungong Han, Yanwei Pang.
[paper]
(TMM2020_PFAN++) PFAN++: Bi-Directional Image-Text Retrieval with Position Focused Attention Network.
Yaxiong Wang, Hao Yang, Xiuxiu Bai, Xueming Qian, Lin Ma, Jing Lu, Biao Li, Xin Fan.
[paper]
[code]
(TNNLS2020_CASC) Cross-Modal Attention With Semantic Consistence for Image-Text Matching.
Xing Xu, Tan Wang, Yang Yang, Lin Zuo, Fumin Shen, Heng Tao Shen.
[paper]
[code]
(AAAI2020_DP-RNN) Expressing Objects just like Words: Recurrent Visual Embedding for Image-Text Matching.
Tianlang Chen, Jiebo Luo.
[paper]
(AAAI2020_ADAPT) Adaptive Cross-modal Embeddings for Image-Text Alignment.
Jonatas Wehrmann, Camila Kolling, Rodrigo C Barros.
[paper]
[code]
(CVPR2020_CAAN) Context-Aware Attention Network for Image-Text Retrieval.
Qi Zhang, Zhen Lei, Zhaoxiang Zhang, Stan Z. Li.
[paper]
(CVPR2020_MMCA) Multi-Modality Cross Attention Network for Image and Sentence Matching.
Xi Wei, Tianzhu Zhang, Yan Li, Yongdong Zhang, Feng Wu.
[paper]
(CVPR2020_IMRAM) IMRAM: Iterative Matching with Recurrent Attention Memory for Cross-Modal Image-Text Retrieval.
Hui Chen, Guiguang Ding, Xudong Liu, Zijia Lin, Ji Liu, Jungong Han.
[paper]
[code]
(ICLR2016_Order-emb) Order-Embeddings of Images and Language.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, Raquel Urtasun.
[paper]
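The core of order-embeddings is to replace a symmetric similarity with an asymmetric order-violation penalty E(x, y) = ||max(0, y − x)||², which is zero exactly when y ⪯ x coordinate-wise; similarity is then taken as −E. A minimal NumPy sketch (the function name and vector shapes are illustrative, not from the paper's code):

```python
import numpy as np

def order_violation(x, y):
    """Order-embedding penalty E(x, y) = ||max(0, y - x)||^2.

    Zero exactly when y_i <= x_i for every coordinate, i.e. when the
    pair satisfies the partial order; positive otherwise. Retrieval
    scores pairs by -E(x, y) instead of a symmetric distance.
    """
    return np.square(np.maximum(0.0, y - x)).sum()
```

Because E is asymmetric, swapping the arguments generally changes the score, which is what lets the embedding encode entailment-like structure between captions and images.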
(CVPR2020_HOAD) Visual-Semantic Matching by Exploring High-Order Attention and Distraction.
Yongzhi Li, Duo Zhang, Yadong Mu.
[paper]
(CVPR2020_GSMN) Graph Structured Network for Image-Text Matching.
Chunxiao Liu, Zhendong Mao, Tianzhu Zhang, Hongtao Xie, Bin Wang, Yongdong Zhang.
[paper]
[code]
(ICML2020_GOT) Graph Optimal Transport for Cross-Domain Alignment.
Liqun Chen, Zhe Gan, Yu Cheng, Linjie Li, Lawrence Carin, Jingjing Liu.
[paper]
[code]
(EMNLP2020_WD-Match) Wasserstein Distance Regularized Sequence Representation for Text Matching in Asymmetrical Domains.
Weijie Yu, Chen Xu, Jun Xu, Liang Pang, Xiaopeng Gao, Xiaozhao Wang, Ji-Rong Wen.
[paper]
[code]
(AAAI2021_SGRAF) Similarity Reasoning and Filtration for Image-Text Matching.
Haiwen Diao, Ying Zhang, Lin Ma, Huchuan Lu.
[paper]
[code]
(KSEM2019_SCKR) Semantic Modeling of Textual Relationships in Cross-Modal Retrieval.
Jing Yu, Chenghao Yang, Zengchang Qin, Zhuoqian Yang, Yue Hu, Weifeng Zhang.
[paper]
[code]
(IJCAI2019_SCG) Knowledge Aware Semantic Concept Expansion for Image-Text Matching.
Botian Shi, Lei Ji, Pan Lu, Zhendong Niu, Nan Duan.
[paper]
(ECCV2020_CVSE) Consensus-Aware Visual-Semantic Embedding for Image-Text Matching.
Haoran Wang, Ying Zhang, Zhong Ji, Yanwei Pang, Lin Ma.
[paper]
[code] (corrected version)
(MM2017_ACMR) Adversarial Cross-Modal Retrieval.
Bokun Wang, Yang Yang, Xing Xu, Alan Hanjalic, Heng Tao Shen.
[paper]
[code]
(COLING2018_CAS) Learning Visually-Grounded Semantics from Contrastive Adversarial Samples.
Haoyue Shi, Jiayuan Mao, Tete Xiao, Yuning Jiang, Jian Sun.
[paper]
[code]
(CVPR2018_GXN) Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models.
Jiuxiang Gu, Jianfei Cai, Shafiq Joty, Li Niu, Gang Wang.
[paper]
(ICCV2019_TIMAM) Adversarial Representation Learning for Text-to-Image Matching.
Nikolaos Sarafianos, Xiang Xu, Ioannis A. Kakadiaris.
[paper]
(CVPR2019_UniVSE) Unified Visual-Semantic Embeddings: Bridging Vision and Language with Structured Meaning Representations.
Hao Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei Li, Weiwei Sun, Wei-Ying Ma.
[paper]
(arXiv2020_ADDR) Beyond the Deep Metric Learning: Enhance the Cross-Modal Matching with Adversarial Discriminative Domain Regularization.
Li Ren, Kai Li, LiQiang Wang, Kien Hua.
[paper]
(TPAMI2018_TBNN) Learning Two-Branch Neural Networks for Image-Text Matching Tasks.
Liwei Wang, Yin Li, Jing Huang, Svetlana Lazebnik.
[paper]
[code]
(BMVC2018_VSE++) VSE++: Improving Visual-Semantic Embeddings with Hard Negatives.
Fartash Faghri, David J. Fleet, Jamie Ryan Kiros, Sanja Fidler.
[paper]
[code]
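VSE++'s contribution is a small change to the standard triplet ranking loss: instead of summing the hinge cost over all in-batch negatives, it keeps only the hardest one per direction. A minimal NumPy sketch of that max-of-hinges loss (the margin value and batch layout are illustrative assumptions, not the paper's exact training setup):

```python
import numpy as np

def vsepp_loss(sim, margin=0.2):
    """Max-of-hinges triplet loss with in-batch hard negatives.

    sim: (B, B) similarity matrix where sim[i, j] = s(image_i, caption_j)
    and matching pairs sit on the diagonal.
    """
    B = sim.shape[0]
    pos = np.diag(sim)                # s(i, c) for the matching pairs
    mask = np.eye(B, dtype=bool)
    # hinge cost of every negative caption (rows) / negative image (cols)
    cost_c = np.maximum(0.0, margin + sim - pos[:, None])
    cost_i = np.maximum(0.0, margin + sim - pos[None, :])
    cost_c[mask] = 0.0
    cost_i[mask] = 0.0
    # "++" variant: keep only the hardest negative instead of summing
    return (cost_c.max(axis=1) + cost_i.max(axis=0)).mean()
```

Summing `cost_c` and `cost_i` over all negatives instead of taking the max recovers the earlier sum-of-hinges baseline the paper compares against.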
(ECCV2018_CMPL) Deep Cross-Modal Projection Learning for Image-Text Matching.
Ying Zhang, Huchuan Lu.
[paper]
[code]
(ACLws2019_kNN-loss) A Strong and Robust Baseline for Text-Image Matching.
Fangyu Liu, Rongtian Ye.
[paper]
(ICASSP2019_NAA) A Neighbor-aware Approach for Image-text Matching.
Chunxiao Liu, Zhendong Mao, Wenyu Zang, Bin Wang.
[paper]
(CVPR2019_PVSE) Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval.
Yale Song, Mohammad Soleymani.
[paper]
[code]
(CVPR2019_SoDeep) SoDeep: a Sorting Deep net to learn ranking loss surrogates.
Martin Engilberge, Louis Chevallier, Patrick Pérez, Matthieu Cord.
[paper]
(TOMM2020_Dual-Path) Dual-path Convolutional Image-Text Embeddings with Instance Loss.
Zhedong Zheng, Liang Zheng, Michael Garrett, Yi Yang, Mingliang Xu, YiDong Shen.
[paper]
[code]
(AAAI2020_HAL) HAL: Improved Text-Image Matching by Mitigating Visual Semantic Hubs.
Fangyu Liu, Rongtian Ye, Xun Wang, Shuaipeng Li.
[paper]
[code]
(AAAI2020_CVSE++) Ladder Loss for Coherent Visual-Semantic Embedding.
Mo Zhou, Zhenxing Niu, Le Wang, Zhanning Gao, Qilin Zhang, Gang Hua.
[paper]
(CVPR2020_MPL) Universal Weighting Metric Learning for Cross-Modal Matching.
Jiwei Wei, Xing Xu, Yang Yang, Yanli Ji, Zheng Wang, Heng Tao Shen.
[paper]
(ECCV2020_PSN) Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval.
Christopher Thomas, Adriana Kovashka.
[paper]
[code]
(ECCV2020_AOQ) Adaptive Offline Quintuplet Loss for Image-Text Matching.
Tianlang Chen, Jiajun Deng, Jiebo Luo.
[paper]
[code]
(ECCV2018_VSA-AE-MMD) Visual-Semantic Alignment Across Domains Using a Semi-Supervised Approach.
Angelo Carraggi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara.
[paper]
(MM2019_A3VSE) Annotation Efficient Cross-Modal Retrieval with Adversarial Attentive Alignment.
Po-Yao Huang, Guoliang Kang, Wenhe Liu, Xiaojun Chang, Alexander G Hauptmann.
[paper]
(CVPR2017_DEM) Learning a Deep Embedding Model for Zero-Shot Learning.
Li Zhang, Tao Xiang, Shaogang Gong.
[paper]
[code]
(AAAI2019_GVSE) Few-shot image and sentence matching via gated visual-semantic matching.
Yan Huang, Yang Long, Liang Wang.
[paper]
(ICCV2019_ACMM) ACMM: Aligned Cross-Modal Memory for Few-Shot Image and Sentence Matching.
Yan Huang, Liang Wang.
[paper]
(ICCV2015_LSTM-Q+I) VQA: Visual Question Answering.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh.
[paper]
(CVPR2016_Word-NN) Learning Deep Representations of Fine-grained Visual Descriptions.
Scott Reed, Zeynep Akata, Bernt Schiele, Honglak Lee.
[paper]
(CVPR2017_GNA-RNN) Person Search with Natural Language Description.
Shuang Li, Tong Xiao, Hongsheng Li, Bolei Zhou, Dayu Yue, Xiaogang Wang.
[paper]
[code]
(ICCV2017_IATV) Identity-Aware Textual-Visual Matching with Latent Co-Attention.
Shuang Li, Tong Xiao, Hongsheng Li, Wei Yang, Xiaogang Wang.
[paper]
(WACV2018_PWM-ATH) Improving Text-Based Person Search by Spatial Matching and Adaptive Threshold.
Tianlang Chen, Chenliang Xu, Jiebo Luo.
[paper]
(ECCV2018_GLA) Improving Deep Visual Representation for Person Re-identification by Global and Local Image-Language Association.
Dapeng Chen, Hongsheng Li, Xihui Liu, Yantao Shen, Jing Shao, Zejian Yuan, Xiaogang Wang.
[paper]
(CVPR2019_DSCMR) Deep Supervised Cross-modal Retrieval.
Liangli Zhen, Peng Hu, Xu Wang, Dezhong Peng.
[paper]
[code]
(AAAI2020_PMA) Pose-Guided Multi-Granularity Attention Network for Text-Based Person Search.
Ya Jing, Chenyang Si, Junbo Wang, Wei Wang, Liang Wang, Tieniu Tan.
[paper]
(ECCV2018_SS) Single Shot Scene Text Retrieval.
Lluís Gómez, Andrés Mafla, Marçal Rusiñol, Dimosthenis Karatzas.
[paper]
[code_TensorFlow] [code_PyTorch]
(WACV2020_PHOC) Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features.
Andres Mafla, Sounak Dey, Ali Furkan Biten, Lluis Gomez, Dimosthenis Karatzas.
[paper]
[code]
(WACV2021_MMRG) Multi-Modal Reasoning Graph for Scene-Text Based Fine-Grained Image Classification and Retrieval.
Andres Mafla, Sounak Dey, Ali Furkan Biten, Lluis Gomez, Dimosthenis Karatzas.
[paper]
[code]
(WACV2021_StacMR) StacMR: Scene-Text Aware Cross-Modal Retrieval.
Andrés Mafla, Rafael Sampaio de Rezende, Lluís Gómez, Diane Larlus, Dimosthenis Karatzas.
[paper]
[code]
(ML2010_WSABIE) Large Scale Image Annotation: Learning to Rank with Joint Word-Image Embeddings.
Jason Weston, Samy Bengio, Nicolas Usunier.
[paper]
(NIPS2013_Word2Vec) Distributed Representations of Words and Phrases and their Compositionality.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean.
[paper]
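Word2Vec's skip-gram with negative sampling trains each center word to score its observed context above k randomly sampled "noise" contexts, maximizing log σ(v_c·v_w) + Σ log σ(−v_n·v_w). A hedged per-pair sketch (the function name, shapes, and the assumption that negatives are pre-sampled are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(center, context, negatives):
    """Skip-gram negative-sampling loss for one (word, context) pair.

    center:    (d,)   center-word vector
    context:   (d,)   observed (positive) context vector
    negatives: (k, d) sampled negative context vectors
    Returns the negative log-likelihood to be minimized.
    """
    pos = np.log(sigmoid(center @ context))
    neg = np.log(sigmoid(-(negatives @ center))).sum()
    return -(pos + neg)
```

In practice the negatives are drawn from a smoothed unigram distribution and the loss is averaged over many (word, context) pairs per batch.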
(CVPR2017_DVSQ) Deep Visual-Semantic Quantization for Efficient Image Retrieval.
Yue Cao, Mingsheng Long, Jianmin Wang, Shichen Liu.
[paper]
(ACL2018_ILU) Illustrative Language Understanding: Large-Scale Visual Grounding with Image Search.
Jamie Kiros, William Chan, Geoffrey Hinton.
[paper]
(AAAI2018_VSE-ens) VSE-ens: Visual-Semantic Embeddings with Efficient Negative Sampling.
Guibing Guo, Songlin Zhai, Fajie Yuan, Yuan Liu, Xingwei Wang.
[paper]
(ECCV2018_HTG) An Adversarial Approach to Hard Triplet Generation.
Yiru Zhao, Zhongming Jin, Guo-jun Qi, Hongtao Lu, Xian-sheng Hua.
[paper]
(ECCV2018_WebNet) CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images.
Sheng Guo, Weilin Huang, Haozhi Zhang, Chenfan Zhuang, Dengke Dong, Matthew R. Scott, Dinglong Huang.
[paper]
[code]
(CVPR2018_BUTD) Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, Lei Zhang.
[paper]
[code]
(EMNLP2019_GMMR) Multi-Head Attention with Diversity for Learning Grounded Multilingual Multimodal Representations.
Po-Yao Huang, Xiaojun Chang, Alexander Hauptmann.
[paper]
(EMNLP2019_MIMSD) Unsupervised Discovery of Multimodal Links in Multi-Image, Multi-Sentence Documents.
Jack Hessel, Lillian Lee, David Mimno.
[paper]
[code]
(ICCV2019_DRNet) Fashion Retrieval via Graph Reasoning Networks on a Similarity Pyramid.
Zhanghui Kuang, Yiming Gao, Guanbin Li, Ping Luo, Yimin Chen, Liang Lin, Wayne Zhang.
[paper]
(ICCV2019_Align2Ground) Align2Ground: Weakly Supervised Phrase Grounding Guided by Image-Caption Alignment.
Samyak Datta, Karan Sikka, Anirban Roy, Karuna Ahuja, Devi Parikh, Ajay Divakaran.
[paper]
(CVPR2019_TIRG) Composing Text and Image for Image Retrieval - An Empirical Odyssey.
Nam Vo, Lu Jiang, Chen Sun, Kevin Murphy, Li-Jia Li, Li Fei-Fei, James Hays.
[paper]
(SIGIR2019_PAICM) Prototype-guided Attribute-wise Interpretable Scheme for Clothing Matching.
Xianjing Han, Xuemeng Song, Jianhua Yin, Yinglong Wang, Liqiang Nie.
[paper]
(SIGIR2019_NCR) Neural Compatibility Ranking for Text-based Fashion Matching.
Suthee Chaidaroon, Mix Xie, Yi Fang, Alessandro Magnani.
[paper]
(arXiv2020_Tweets) Deep Multimodal Image-Text Embeddings for Automatic Cross-Media Retrieval.
Hadi Abdi Khojasteh, Ebrahim Ansari, Parvin Razzaghi, Akbar Karimi.
[paper]
(arXiv2020_TIMNet) Weakly-Supervised Feature Learning via Text and Image Matching.
Gongbo Liang, Connor Greenwell, Yu Zhang, Xiaoqin Wang, Ramakanth Kavuluru, Nathan Jacobs.
[paper]
[code]
(ECCV2020_InfoNCE) Contrastive Learning for Weakly Supervised Phrase Grounding.
Tanmay Gupta, Arash Vahdat, Gal Chechik, Xiaodong Yang, Jan Kautz, Derek Hoiem.
[paper]
[code]
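InfoNCE-style contrastive learning treats a batch of B paired embeddings as a B-way classification problem in each direction, with the matching pair as the correct class. A minimal symmetric NumPy sketch (the temperature value and the assumption of a precomputed similarity matrix are illustrative):

```python
import numpy as np

def info_nce(sim, tau=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    sim: (B, B) similarity matrix with positives on the diagonal.
    Each row (and each column) is a B-way softmax classification
    whose correct class is the matching pair.
    """
    logits = sim / tau
    # log-softmax along each axis, evaluated at the diagonal positives
    log_p_rows = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_cols = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    diag = np.arange(sim.shape[0])
    return -(log_p_rows[diag, diag].mean() + log_p_cols[diag, diag].mean()) / 2
```

With uninformative similarities the loss sits at log B; it approaches zero as the diagonal dominates, which is the signal the phrase-grounding objective exploits.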
(ECCV2020_JVSM) Learning Joint Visual Semantic Matching Embeddings for Language-guided Retrieval.
Yanbei Chen, Loris Bazzani.
[paper]
(CVPR2020_POS-SCAN) More Grounded Image Captioning by Distilling Image-Text Matching Model.
Yuanen Zhou, Meng Wang, Daqing Liu, Zhenzhen Hu, Hanwang Zhang.
[paper]
[code]
(COLING2020_VSE-Probing) Probing Multimodal Embeddings for Linguistic Properties: the Visual-Semantic Case.
Adam Dahlgren Lindström, Suna Bensch, Johanna Björklund, Frank Drewes.
[paper]
[code]
(arXiv2021_PCME) Probabilistic Embeddings for Cross-Modal Retrieval.
Sanghyuk Chun, Seong Joon Oh, Rafael Sampaio de Rezende, Yannis Kalantidis, Diane Larlus.
[paper]