Please write your name in the "Presenter" and "Reviewer" columns for the papers that you want to present and review. You will need to choose one paper to present and one paper to review. You may nominate your own papers, but please talk with me first.

Papers

| Paper ID | Paper | URL | Presenter | Reviewer |
|---|---|---|---|---|
| 1 | UL2: Unifying Language Learning Paradigms | https://arxiv.org/abs/2205.05131 | | |
| 2 | The Geometry of Multilingual Language Model Representations | https://aclanthology.org/2022.emnlp-main.9/ | Zayd | x |
| 3 | Interpreting Language Models with Contrastive Explanations | https://aclanthology.org/2022.emnlp-main.14/ | Hieu Man | x |
| 4 | Structured Prompting: Scaling In-Context Learning to 1,000 Examples | https://arxiv.org/abs/2212.06713 | Viet | x |
| 5 | Parallel Context Windows Improve In-Context Learning of Large Language Models | https://arxiv.org/abs/2212.10947 | Paul | x |
| 6 | A Length-Extrapolatable Transformer | https://arxiv.org/abs/2212.10554 | Gabriel | x |
| 7 | A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity | https://arxiv.org/abs/2302.04023 | Gabriel | x |
| 8 | RankGen: Improving Text Generation with Large Ranking Models | https://aclanthology.org/2022.emnlp-main.15/ | Timmy | x |
| 9 | Entity Extraction in Low Resource Domains with Selective Pre-training of Large Language Models | https://aclanthology.org/2022.emnlp-main.61/ | Navya | x |
| 10 | Gradient-based Constrained Sampling from Language Models | https://aclanthology.org/2022.emnlp-main.144/ | Timmy | x |
| 11 | Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good is It and How Does It Affect Transfer? | https://aclanthology.org/2022.emnlp-main.552/ | Hakyung | x |
| 12 | PromptBERT: Improving BERT Sentence Embeddings with Prompts | https://aclanthology.org/2022.emnlp-main.603/ | | |
| 13 | Active Example Selection for In-Context Learning | https://aclanthology.org/2022.emnlp-main.622/ | Amir | |
| 14 | Prompt-based Distribution Alignment for Domain Generalization in Text Classification | https://aclanthology.org/2022.emnlp-main.690/ | Hieu Man | x |
| 15 | Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text Generation via Concentrating Attention | https://aclanthology.org/2022.emnlp-main.745/ | Paul | |
| 16 | Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt | https://aclanthology.org/2022.emnlp-main.790/ | Navya | x |
| 17 | Transforming Sequence Tagging Into A Seq2Seq Task | https://aclanthology.org/2022.emnlp-main.813/ | Amir | x |
| 18 | Efficient Pre-training of Masked Language Model via Concept-based Curriculum Masking | https://aclanthology.org/2022.emnlp-main.502/ | Hakyung | x |
| 19 | Don’t Prompt, Search! Mining-based Zero-Shot Learning with Language Models | https://aclanthology.org/2022.emnlp-main.509/ | Viet | x |
| 20 | Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets | https://aclanthology.org/2022.tacl-1.4/ | | |
| 21 | Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy | https://arxiv.org/pdf/2210.17546.pdf | Zayd | x |

Schedule

| Date | Paper | URL | Presenter |
|---|---|---|---|
| 04/11 | Transforming Sequence Tagging Into A Seq2Seq Task | https://aclanthology.org/2022.emnlp-main.813/ | Amir |
| 04/11 | Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy | https://arxiv.org/pdf/2210.17546.pdf | Zayd |
| 04/18 | Don’t Prompt, Search! Mining-based Zero-Shot Learning with Language Models | https://aclanthology.org/2022.emnlp-main.509/ | Viet |
| 04/18 | A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity | https://arxiv.org/abs/2302.04023 | Gabriel |
| 04/25 | Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good is It and How Does It Affect Transfer? | https://aclanthology.org/2022.emnlp-main.552/ | Hakyung |
| 04/25 | Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt | https://aclanthology.org/2022.emnlp-main.790/ | Navya |
| 05/02 | Interpreting Language Models with Contrastive Explanations | https://aclanthology.org/2022.emnlp-main.14/ | Hieu Man |
| 05/02 | RankGen: Improving Text Generation with Large Ranking Models | https://aclanthology.org/2022.emnlp-main.15/ | Timmy |
| 05/09 | Parallel Context Windows Improve In-Context Learning of Large Language Models | https://arxiv.org/abs/2212.10947 | Paul |
| 05/09 | A Length-Extrapolatable Transformer | https://arxiv.org/abs/2212.10554 | Gabriel |
| 05/16 | The Geometry of Multilingual Language Model Representations | https://aclanthology.org/2022.emnlp-main.9/ | Zayd |
| 05/16 | Prompt-based Distribution Alignment for Domain Generalization in Text Classification | https://aclanthology.org/2022.emnlp-main.690/ | Hieu Man |
| 05/23 | Structured Prompting: Scaling In-Context Learning to 1,000 Examples | https://arxiv.org/abs/2212.06713 | Viet |
| 05/23 | Gradient-based Constrained Sampling from Language Models | https://aclanthology.org/2022.emnlp-main.144/ | Timmy |
| 05/30 | Efficient Pre-training of Masked Language Model via Concept-based Curriculum Masking | https://aclanthology.org/2022.emnlp-main.502/ | Hakyung |
| 05/30 | Entity Extraction in Low Resource Domains with Selective Pre-training of Large Language Models | https://aclanthology.org/2022.emnlp-main.61/ | Navya |
| 06/06 | | | |