Title | Authors | Venue | Cited by | Year
Modality to modality translation: An adversarial representation learning and graph fusion network for multimodal fusion | S Mai, H Hu, S Xing | Proceedings of the AAAI Conference on Artificial Intelligence 34 (01), 164-172 | 160 | 2020
Divide, conquer and combine: Hierarchical feature fusion network with local and global perspectives for multimodal affective computing | S Mai, H Hu, S Xing | Proceedings of the 57th Annual Meeting of the Association for Computational … | 108 | 2019
Hybrid contrastive learning of tri-modal representation for multimodal sentiment analysis | S Mai, Y Zeng, S Zheng, H Hu | IEEE Transactions on Affective Computing | 81 | 2022
Communicative message passing for inductive relation reasoning | S Mai, S Zheng, Y Yang, H Hu | Proceedings of the AAAI Conference on Artificial Intelligence 35 (5), 4294-4302 | 69 | 2021
Locally confined modality fusion network with a global perspective for multimodal human affective computing | S Mai, S Xing, H Hu | IEEE Transactions on Multimedia 22 (1), 122-137 | 67 | 2019
Analyzing multimodal sentiment via acoustic- and visual-LSTM with channel-aware temporal convolution network | S Mai, S Xing, H Hu | IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 1424-1437 | 56 | 2021
Adapted dynamic memory network for emotion recognition in conversation | S Xing, S Mai, H Hu | IEEE Transactions on Affective Computing 13 (3), 1426-1439 | 50 | 2020
Multi-fusion residual memory network for multimodal human sentiment comprehension | S Mai, H Hu, J Xu, S Xing | IEEE Transactions on Affective Computing 13 (1), 320-334 | 44 | 2020
Multimodal information bottleneck: Learning minimal sufficient unimodal and multimodal representations | S Mai, Y Zeng, H Hu | IEEE Transactions on Multimedia | 35 | 2022
Attentive matching network for few-shot learning | S Mai, H Hu, J Xu | Computer Vision and Image Understanding 187, 102781 | 27 | 2019
Learning to balance the learning rates between various modalities via adaptive tracking factor | Y Sun, S Mai, H Hu | IEEE Signal Processing Letters 28, 1650-1654 | 26 | 2021
Subgraph-aware few-shot inductive link prediction via meta-learning | S Zheng, S Mai, Y Sun, H Hu, Y Yang | IEEE Transactions on Knowledge and Data Engineering | 25 | 2022
Analyzing unaligned multimodal sequence via graph convolution and graph pooling fusion | S Mai, S Xing, J He, Y Zeng, H Hu | arXiv preprint arXiv:2011.13572 | 24 | 2020
A unimodal reinforced transformer with time squeeze fusion for multimodal sentiment analysis | J He, S Mai, H Hu | IEEE Signal Processing Letters 28, 992-996 | 23 | 2021
Graph capsule aggregation for unaligned multimodal sequences | J Wu, S Mai, H Hu | Proceedings of the 2021 International Conference on Multimodal Interaction … | 19 | 2021
Which is making the contribution: Modulating unimodal and cross-modal dynamics for multimodal sentiment analysis | Y Zeng, S Mai, H Hu | arXiv preprint arXiv:2111.08451 | 17 | 2021
Excavating multimodal correlation for representation learning | S Mai, Y Sun, Y Zeng, H Hu | Information Fusion 91, 542-555 | 15 | 2023
A unimodal representation learning and recurrent decomposition fusion structure for utterance-level multimodal embedding learning | S Mai, H Hu, S Xing | IEEE Transactions on Multimedia 24, 2488-2501 | 14 | 2021
Learning to learn better unimodal representations via adaptive multimodal meta-learning | Y Sun, S Mai, H Hu | IEEE Transactions on Affective Computing | 12 | 2022
Communicative subgraph representation learning for multi-relational inductive drug-gene interaction prediction | J Rao, S Zheng, S Mai, Y Yang | arXiv preprint arXiv:2205.05957 | 9 | 2022