Preksha Nema
Verified email at google.com - Homepage
Diversity driven attention model for query-based abstractive summarization
P Nema, M Khapra, A Laha, B Ravindran
The 55th Annual Meeting of the Association for Computational Linguistics, 2017
Cited by: 215
Towards a better metric for evaluating question generation systems
P Nema, MM Khapra
Conference on Empirical Methods in Natural Language Processing, 2018
Cited by: 129
Towards Transparent and Explainable Attention Models
AK Mohankumar, P Nema, S Narasimhan, MM Khapra, BV Srinivasan, ...
The 58th Annual Meeting of the Association for Computational Linguistics, 2020
Cited by: 97
Let's Ask Again: Refine Network for Automatic Question Generation
P Nema, AK Mohankumar, MM Khapra, BV Srinivasan, B Ravindran
Conference on Empirical Methods in Natural Language Processing, 2019
Cited by: 61
Generating descriptions from structured data using a bifocal attention mechanism and gated orthogonalization
P Nema, S Shetty, P Jain, A Laha, K Sankaranarayanan, MM Khapra
The 16th Annual Conference of the North American Chapter of the Association …, 2018
Cited by: 40
Analyzing user perspectives on mobile app privacy at scale
P Nema, P Anthonysamy, N Taft, ST Peddinti
Proceedings of the 44th International Conference on Software Engineering …, 2022
Cited by: 39
Disentangling Preference Representations for Recommendation Critiquing with β-VAE
P Nema, A Karatzoglou, F Radlinski
30th ACM International Conference on Information and Knowledge Management, 9, 2021
Cited by: 33*
ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions
S Parikh, A Sai, P Nema, M Khapra
Proceedings of the Twenty-Seventh International Joint Conference on …, 2018
Cited by: 32*
A mixed hierarchical attention based encoder-decoder approach for standard table summarization
P Jain, A Laha, K Sankaranarayanan, P Nema, MM Khapra, S Shetty
The 16th Annual Conference of the North American Chapter of the Association …, 2018
Cited by: 32
Towards Interpreting BERT for Reading Comprehension Based QA
S Ramnath, P Nema, D Sahni, MM Khapra
EMNLP, 4 pages, 2020
Cited by: 25
The heads hypothesis: A unifying statistical approach towards understanding multi-headed attention in BERT
M Pande, A Budhraja, P Nema, P Kumar, MM Khapra
Proceedings of the AAAI conference on artificial intelligence 35 (15), 13613 …, 2021
Cited by: 18
On the weak link between importance and prunability of attention heads
A Budhraja, M Pande, P Nema, P Kumar, MM Khapra
EMNLP, 6, 2020
Cited by: 9
T-STAR: Truthful style transfer using AMR graph as intermediate representation
A Jangra, P Nema, A Raghuveer
arXiv preprint arXiv:2212.01667, 2022
Cited by: 6
On the importance of local information in transformer based models
M Pande, A Budhraja, P Nema, P Kumar, MM Khapra
arXiv preprint arXiv:2008.05828, 2020
Cited by: 3
Untangle: Critiquing Disentangled Recommendations
P Nema, A Karatzoglou, F Radlinski
2021
Cited by: 2
ReTAG: Reasoning Aware Table to Analytic Text Generation
D Ghosal, P Nema, A Raghuveer
arXiv preprint arXiv:2305.11826, 2023
Cited by: 1
STOAT: Structured Data to Analytical Text With Controls.
D Ghosal, P Nema, A Raghuveer
CoRR, 2023
A Framework for Rationale Extraction for Deep QA models
S Ramnath, P Nema, D Sahni, MM Khapra
arXiv preprint arXiv:2110.04620, 2021
Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples
S Parikh, AB Sai, P Nema, MM Khapra
arXiv preprint arXiv:1904.02665, 2019
Encode Attend Refine Decode: Enriching Contextual Representations for Natural Language Generation
P Nema
Chennai
Articles 1–20