
王晓智 (Xiaozhi Wang)

Published: 2025-08-27

For the complete list, see https://bakser.github.io/publications


[1] Wang X, Gao T, Zhu Z, et al. KEPLER: A unified model for knowledge embedding and pre-trained language representation[J]. Transactions of the Association for Computational Linguistics (TACL), 2021, 9: 176-194.


[2] Wang X, Wang Z, Han X, et al. MAVEN: A Massive General Domain Event Detection Dataset[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020: 1652-1671.


[3] Wang X, Peng H, Guan Y, et al. MAVEN-ARG: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation[C]//Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). 2024: 4072-4091.


[4] Wang X*, Chen Y*, Ding N, et al. MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 2022: 926-941.


[5] Wang X*, Wen K*, Zhang Z, et al. Finding Skill Neurons in Pre-trained Transformer-based Language Models[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 2022: 11132-11152.


[6] Wang Z*, Wang X*, Han X, et al. CLEVE: Contrastive Pre-training for Event Extraction[C]//Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). 2021: 6283-6297.


[7] Su Y*, Wang X*, Qin Y, et al. On Transferability of Prompt Tuning for Natural Language Processing[C]//Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). 2022: 3949-3969.


[8] Peng H*, Wang X*, Hu S, et al. COPEN: Probing Conceptual Knowledge in Pretrained Language Models[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 2022: 5015-5035.


[9] Peng H*, Wang X*, Yao F, et al. OmniEvent: A Comprehensive, Fair, and Easy-to-Use Toolkit for Event Understanding[C]//Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP): System Demonstrations. 2023: 508-517.


[10] Peng H*, Wang X*, Yao F*, et al. The Devil is in the Details: On the Pitfalls of Event Extraction Evaluation[C]//Findings of the Association for Computational Linguistics: ACL 2023. 2023: 9206-9227.


[11] Yu J*, Wang X*, Tu S*, et al. KoLA: Carefully Benchmarking World Knowledge of Large Language Models[C]//Proceedings of the International Conference on Learning Representations (ICLR). 2024.


[12] Qin Y*, Wang X*, Su Y, et al. Exploring Universal Intrinsic Task Subspace for Few-Shot Learning via Prompt Tuning[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024, 32: 3631-3643.


Copyright © Tsinghua Shenzhen International Graduate School, Tsinghua University. 京ICP备15006448号 京公网安备 110402430053 号