The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process. H Mei, J Eisner. arXiv, 2016. Cited by 568.
What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment. H Mei, M Bansal, MR Walter. NAACL, 2016. Cited by 312.
Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. H Mei, M Bansal, MR Walter. AAAI, 2016. Cited by 259.
Coherent Dialogue with Attention-based Language Models. H Mei, M Bansal, MR Walter. AAAI, 2017. Cited by 114.
Imputing missing events in continuous-time event streams. H Mei, G Qin, J Eisner. International Conference on Machine Learning, 4475-4485, 2019. Cited by 34.
Neural Datalog through time: Informed temporal modeling via logical specification. H Mei, G Qin, M Xu, J Eisner. International Conference on Machine Learning, 6808-6819, 2020. Cited by 18.
Noise-contrastive estimation for multivariate point processes. H Mei, T Wan, J Eisner. Advances in Neural Information Processing Systems 33, 5204-5214, 2020. Cited by 17.
Accurate Vision-based Vehicle Localization using Satellite Imagery. H Chu, H Mei, M Bansal, MR Walter. NIPS 2015 Transfer and Multi-Task Learning Workshop, 2015. Cited by 13.
Personalized dynamic treatment regimes in continuous time: a Bayesian approach for optimizing clinical decisions with timing. W Hua, H Mei, S Zohar, M Giral, Y Xu. Bayesian Analysis 17 (3), 849-878, 2022. Cited by 12.
Transformer embeddings of irregularly spaced events and their participants. C Yang, H Mei, J Eisner. arXiv preprint arXiv:2201.00044, 2021. Cited by 11.
Hidden state variability of pretrained language models can guide computation reduction for transfer learning. S Xie, J Qiu, A Pasad, L Du, Q Qu, H Mei. arXiv preprint arXiv:2210.10041, 2022. Cited by 7.
HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences. S Xue, X Shi, J Zhang, H Mei. Advances in Neural Information Processing Systems 35, 34641-34650, 2022. Cited by 6.
Tiny-attention adapter: Contexts are more important than the number of parameters. H Zhao, H Tan, H Mei. arXiv preprint arXiv:2211.01979, 2022. Cited by 5.
Statler: State-maintaining language models for embodied reasoning. T Yoneda, J Fang, P Li, H Zhang, T Jiang, S Lin, B Picker, D Yunis, H Mei, ... arXiv preprint arXiv:2306.17840, 2023. Cited by 4.
Transformer embeddings of irregularly spaced events and their participants. H Mei, C Yang, J Eisner. International Conference on Learning Representations, 2021. Cited by 4.
On the idiosyncrasies of the Mandarin Chinese classifier system. S Liu, H Mei, A Williams, R Cotterell. arXiv preprint arXiv:1902.10193, 2019. Cited by 4.
EasyTPP: Towards Open Benchmarking Temporal Point Processes. S Xue, X Shi, Z Chu, Y Wang, F Zhou, H Hao, C Jiang, C Pan, Y Xu, ... arXiv preprint arXiv:2307.08097, 2023. Cited by 3.
Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning. X Shi, S Xue, K Wang, F Zhou, JY Zhang, J Zhou, C Tan, H Mei. arXiv preprint arXiv:2305.16646, 2023. Cited by 3.
Explicit Planning Helps Language Models in Logical Reasoning. H Zhao, K Wang, M Yu, H Mei. arXiv preprint arXiv:2303.15714, 2023. Cited by 3.
Halo: Learning semantics-aware representations for cross-lingual information extraction. H Mei, S Zhang, K Duh, B Van Durme. arXiv preprint arXiv:1805.08271, 2018. Cited by 3.