Chinese Computational Linguistics
ISBN 978-3-030-84185-0, 978-3-030-84186-7; Series ISSN 0302-9743; Series E-ISSN 1611-3349
https://doi.org/10.1007/978-3-030-84186-7
Keywords: computational linguistics; computer systems; computer vision; databases; education; image processing; info…
Reducing Length Bias in Scoring Neural Machine Translation via a Causal Inference Method
…se of the beam size often suffers from plenty of short translations, resulting in a dramatic decrease in translation quality. In this paper, we handle the length bias problem from the perspective of causal inference. Specifically, we regard the model-generated translation score as a degraded true … which is adaptive to any NMT model and test dataset. We conduct experiments on three translation tasks with datasets of different scales. Experimental results and further analyses show that our approaches achieve performance comparable to the empirical baseline methods.
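The chapter's causal-inference correction is not spelled out in this fragment. As a rough, generic illustration of the length-bias problem it targets, the sketch below rescores beam candidates with a simple GNMT-style length penalty; the candidate strings and the alpha value are illustrative assumptions, not the paper's method.

    def length_normalized_score(log_prob: float, length: int, alpha: float = 0.6) -> float:
        """GNMT-style length penalty: divide the summed log-probability by a
        length-dependent factor so that short candidates are not favoured."""
        penalty = ((5 + length) / 6) ** alpha
        return log_prob / penalty

    # Toy beam candidates: (translation, sum of token log-probabilities).
    candidates = [
        ("a short output", -4.0),                      # few tokens, high per-token score
        ("a much longer candidate output", -7.5),
    ]

    rescored = [
        (text, length_normalized_score(lp, len(text.split())))
        for text, lp in candidates
    ]
    best = max(rescored, key=lambda pair: pair[1])
    print(best)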
Category-Based Strategy-Driven Question Generator for Visual Dialogue
… by asking several Yes/No questions which are answered by an oracle. How to ask proper questions is crucial to achieving the final goal of the whole task. Previous methods generally use a word-level generator, which makes it hard to grasp a dialogue-level questioning strategy, and they often generate repeated … Then the question is generated with the help of a category-based dialogue strategy as well as encodings of both the image and the dialogue history. The evaluation on the large-scale visual dialogue dataset GuessWhat?! shows that our method can help the guesser achieve a 51.71% success rate, which is the state of the art among supervised training methods.
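The fragment only outlines the architecture, so the following is a hypothetical sketch of a category-conditioned question generator: a GRU decoder initialized from a question-category embedding, a pre-extracted image feature, and a dialogue-history encoding. All layer sizes and names are assumptions for illustration, not the authors' model.

    import torch
    import torch.nn as nn

    class CategoryConditionedQuestionGenerator(nn.Module):
        """Sketch: condition a GRU question decoder on a question category,
        an image feature, and an encoding of the dialogue history."""
        def __init__(self, vocab_size=1000, num_categories=8, img_dim=512, hid=256):
            super().__init__()
            self.cat_emb = nn.Embedding(num_categories, 64)
            self.tok_emb = nn.Embedding(vocab_size, hid)
            self.history_enc = nn.GRU(hid, hid, batch_first=True)
            self.init_proj = nn.Linear(64 + img_dim + hid, hid)
            self.decoder = nn.GRU(hid, hid, batch_first=True)
            self.out = nn.Linear(hid, vocab_size)

        def forward(self, category, img_feat, history_tokens, question_tokens):
            _, h_hist = self.history_enc(self.tok_emb(history_tokens))    # (1, B, hid)
            cond = torch.cat([self.cat_emb(category), img_feat, h_hist.squeeze(0)], dim=-1)
            h0 = torch.tanh(self.init_proj(cond)).unsqueeze(0)            # decoder init state
            dec_out, _ = self.decoder(self.tok_emb(question_tokens), h0)
            return self.out(dec_out)                                      # (B, T, vocab)

    # Dummy forward pass with batch size 2.
    model = CategoryConditionedQuestionGenerator()
    logits = model(
        category=torch.tensor([1, 3]),
        img_feat=torch.randn(2, 512),
        history_tokens=torch.randint(0, 1000, (2, 12)),
        question_tokens=torch.randint(0, 1000, (2, 7)),
    )
    print(logits.shape)  # torch.Size([2, 7, 1000])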
[Chapter title missing from the source listing]
… models generally fall into two separate parts: evidence extraction and answer prediction, where the former extracts the key evidence corresponding to the question, and the latter predicts the answer based on those sentences. However, such pipeline paradigms tend to accumulate errors, i.e., extracting …
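To make the criticized pipeline concrete, here is a minimal two-stage sketch (evidence selection by word overlap, then a placeholder answer step); any mistake in stage one is inherited by stage two, which is the error accumulation the fragment describes. The data and scoring are toy assumptions.

    def select_evidence(question, sentences, k=2):
        """Stage 1: rank sentences by crude word overlap with the question."""
        q = set(question.lower().split())
        scored = sorted(sentences, key=lambda s: len(q & set(s.lower().split())), reverse=True)
        return scored[:k]

    def predict_answer(question, evidence):
        """Stage 2 (placeholder): a real reader model would run here; this
        sketch simply returns the top-ranked evidence sentence."""
        return evidence[0] if evidence else ""

    doc = [
        "The CCL 2021 proceedings contain 31 full papers.",
        "The papers were selected from 90 submissions.",
        "The volume covers machine translation and question answering.",
    ]
    question = "How many papers were selected?"
    evidence = select_evidence(question, doc)
    print(predict_answer(question, evidence))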
Low-Resource Machine Translation Based on Asynchronous Dynamic Programming
… reinforcement learning affect the performance of the model. The reward for generating a translation is determined by the scalability and iteration of the sampling strategy, so it is difficult for the model to achieve a bias-variance trade-off. Therefore, given the model's poor ability to analyze the …
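The asynchronous dynamic-programming method itself is not reproduced here. As a generic illustration of the reward and variance issue the fragment raises, the sketch below centres a toy sentence-level reward with a running-mean baseline, the usual way to trade a little bias for lower variance in policy-gradient training; the reward function and sample pairs are assumptions.

    from collections import deque

    class RunningBaseline:
        """Running mean of recent rewards, used to centre the reward signal."""
        def __init__(self, window=100):
            self.buffer = deque(maxlen=window)

        def value(self):
            return sum(self.buffer) / len(self.buffer) if self.buffer else 0.0

        def update(self, reward):
            self.buffer.append(reward)

    def toy_reward(hypothesis, reference):
        """Unigram-overlap stand-in for a sentence-level translation reward."""
        hyp, ref = hypothesis.split(), reference.split()
        return len(set(hyp) & set(ref)) / len(hyp) if hyp else 0.0

    baseline = RunningBaseline()
    samples = [
        ("the cat sat on mat", "the cat sat on the mat"),
        ("a dog runs", "the cat sat on the mat"),
    ]
    for hyp, ref in samples:
        r = toy_reward(hyp, ref)
        advantage = r - baseline.value()   # centred reward scales the log-prob gradient
        baseline.update(r)
        print(f"reward={r:.2f} advantage={advantage:+.2f}")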
Incorporating Translation Quality Estimation into Chinese-Korean Neural Machine Translation
… teacher-forcing strategy for training NMT models. Moreover, NMT models usually require a large-scale, high-quality parallel corpus. However, Korean is a low-resource language, and there is no large-scale parallel corpus between Chinese and Korean, which is challenging for researchers … model. In addition, we alleviate the lack of Korean corpus resources by adding training data. In our experiments, we introduce a monolingual corpus of a certain scale to construct pseudo-parallel data. At the same time, we also preprocess the Korean corpus at different granularities to overcome the …
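One simple way to incorporate a quality-estimation signal, not necessarily the authors' design, is to weight each sentence pair's training loss by its estimated quality so that noisy pseudo-parallel pairs contribute less. A minimal sketch with toy numbers:

    def qe_weighted_loss(per_sentence_losses, qe_scores, min_weight=0.1):
        """Weight each sentence-level loss by its estimated quality in [0, 1],
        so low-quality (likely noisy pseudo-parallel) pairs contribute less."""
        total, norm = 0.0, 0.0
        for loss, qe in zip(per_sentence_losses, qe_scores):
            w = max(qe, min_weight)   # small floor so no pair is ignored entirely
            total += w * loss
            norm += w
        return total / norm

    # Toy values: the second pair is pseudo-parallel data with a low QE score.
    losses = [2.3, 4.1, 1.8]
    qe = [0.9, 0.2, 0.75]
    print(round(qe_weighted_loss(losses, qe), 3))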
Emotion Classification of COVID-19 Chinese Microblogs Based on the Emotion Category Description
… features of the microblog itself, without combining the semantics of the emotion categories in the modeling. Emotion classification of microblogs is a process of reading the content of a microblog and combining the semantics of the emotion categories to understand whether it contains a certain emotion. Inspired by …
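A minimal sketch of the general idea of matching a microblog against natural-language category descriptions; the categories, descriptions, and token-overlap scorer below are illustrative assumptions (a real system would embed both with a pretrained encoder).

    def score_against_description(text, description):
        """Toy similarity: fraction of description tokens that appear in the text."""
        text_tokens = set(text.lower().split())
        desc_tokens = description.lower().split()
        return sum(tok in text_tokens for tok in desc_tokens) / len(desc_tokens)

    # Hypothetical category descriptions (illustrative, not from the paper).
    categories = {
        "fear": "worried scared afraid of infection and risk",
        "gratitude": "thankful grateful to doctors nurses and volunteers",
    }
    post = "so grateful to the doctors and nurses working through the outbreak"
    best = max(categories, key=lambda c: score_against_description(post, categories[c]))
    print(best)  # gratitude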
Multi-level Emotion Cause Analysis by Multi-head Attention Based Multi-task Learning
… the emotion cause at the clause level. However, in many scenarios, only extracting the cause clause is ambiguous. To ease this problem, in this paper we introduce multi-level emotion cause analysis, which focuses on identifying the emotion cause clause (ECC) and emotion cause keywords (ECK) simultaneously …
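A hypothetical PyTorch sketch of the multi-task setup the fragment describes: a shared multi-head-attention encoder feeding a clause-level ECC head and a token-level ECK head. Dimensions, head counts, and vocabulary size are arbitrary assumptions.

    import torch
    import torch.nn as nn

    class MultiLevelCauseModel(nn.Module):
        """Shared self-attention encoder with one head for clause-level cause
        detection (ECC) and one token-level head for cause-keyword tagging (ECK)."""
        def __init__(self, vocab_size=5000, dim=128, heads=4):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.clause_head = nn.Linear(dim, 2)   # is this clause a cause clause?
            self.keyword_head = nn.Linear(dim, 2)  # is each token a cause keyword?

        def forward(self, token_ids):
            x = self.emb(token_ids)                           # (B, T, dim)
            h, _ = self.attn(x, x, x)                         # shared representation
            clause_logits = self.clause_head(h.mean(dim=1))   # (B, 2)
            keyword_logits = self.keyword_head(h)             # (B, T, 2)
            return clause_logits, keyword_logits

    model = MultiLevelCauseModel()
    clause_logits, keyword_logits = model(torch.randint(0, 5000, (3, 20)))
    print(clause_logits.shape, keyword_logits.shape)  # (3, 2) (3, 20, 2)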
Using Query Expansion in Manifold Ranking for Query-Oriented Multi-document Summarization
… sentences, but also the relationships between the given query and the sentences. However, the information in the original query is often insufficient, so we present a query expansion method, which is combined with manifold ranking to resolve this problem. Our method not only utilizes the information of …
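Manifold ranking itself has a standard iterative form, f ← α·S·f + (1 − α)·y, where S is the normalized sentence-similarity graph and y carries the (expanded) query prior. The sketch below implements that generic iteration on toy data; it does not reproduce the paper's particular expansion scheme.

    import numpy as np

    def manifold_ranking(W, y, alpha=0.85, iters=100):
        """Generic manifold ranking: W is a symmetric sentence-similarity matrix,
        y marks the prior relevance of the query (and expanded query terms)."""
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        S = D_inv_sqrt @ W @ D_inv_sqrt          # symmetric normalization
        f = np.zeros_like(y, dtype=float)
        for _ in range(iters):
            f = alpha * S @ f + (1 - alpha) * y  # propagate relevance over the graph
        return f

    # Toy graph of 4 sentences; sentence 0 carries the query prior.
    W = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    y = np.array([1.0, 0.0, 0.0, 0.0])
    scores = manifold_ranking(W, y)
    print(np.argsort(-scores))   # sentences ranked by propagated relevance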
Incorporating Commonsense Knowledge into Abstractive Dialogue Summarization via Heterogeneous Graph Networks
… present a novel multi-speaker dialogue summarizer to demonstrate how large-scale commonsense knowledge can facilitate dialogue understanding and summary generation. In detail, we consider utterances and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network … Experiments on the SAMSum dataset show that our model can outperform various methods. We also conduct zero-shot experiments on the Argumentative Dialogue Summary Corpus; the results show that our model generalizes better to the new domain.
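A minimal sketch of the two-node-type idea, assuming utterance nodes, retrieved commonsense nodes, and typed edges; the retrieval step and the example knowledge strings are mocked for illustration and are not from the paper.

    import networkx as nx

    graph = nx.Graph()

    utterances = [
        "Amanda: I baked cookies. Do you want some?",
        "Jerry: Sure! I'll come by tonight.",
    ]
    retrieved_knowledge = {
        0: ["bake cookies -> used for -> sharing food"],
        1: ["come by -> implies -> visiting someone"],
    }

    # Utterance nodes and knowledge nodes are different node types, linked
    # whenever a commonsense triple is retrieved for an utterance.
    for i, utt in enumerate(utterances):
        graph.add_node(("utt", i), text=utt, node_type="utterance")
        for triple in retrieved_knowledge.get(i, []):
            graph.add_node(("know", triple), node_type="knowledge")
            graph.add_edge(("utt", i), ("know", triple), edge_type="utt-knowledge")

    # Consecutive utterances are also connected to keep the dialogue flow.
    graph.add_edge(("utt", 0), ("utt", 1), edge_type="utt-utt")

    print(graph.number_of_nodes(), graph.number_of_edges())  # 4 3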
Topic Knowledge Acquisition and Utilization for Machine Reading Comprehension in Social Media Domain
…ers have specific background knowledge. Therefore, those messages are usually short and lacking in background information, which is different from text in other domains. Thus, it is difficult for a machine to understand the messages comprehensively. Fortunately, a key nature of social media is clustering: a group of people tend to express their opinions or report news around one topic. Having realized this, we propose a novel method that utilizes the topic knowledge implied by the clustered messages to aid the comprehension of those short messages. The experiments on the TweetQA dataset demonstrate the effectiveness of our method.
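A generic illustration, not the paper's acquisition method: cluster short messages with TF-IDF and k-means so that messages sharing a cluster can supply topic context for one another. The example messages and cluster count are assumptions.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    messages = [
        "vaccine rollout starts in the city today",
        "long queues reported at the vaccine centre",
        "the championship final goes to extra time",
        "what a match, extra time and penalties!",
    ]

    X = TfidfVectorizer().fit_transform(messages)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Messages sharing a cluster label can serve as topic context for each other.
    for msg, label in zip(messages, labels):
        print(label, msg)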
From Learning-to-Match to Learning-to-Discriminate: Global Prototype Learning for Few-shot Relation Classification
… relation classification. Most previous works on few-shot relation classification are based on learning-to-match paradigms, which focus on learning an effective universal matcher between the query and the target class prototype built from the inner-class support sets. However, the learning-to-match paradigm focuses …
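The learning-to-match baseline the fragment refers to can be sketched as prototype matching: average the support embeddings of each relation and assign the query to the nearest prototype. The embeddings below are random stand-ins for encoder outputs.

    import numpy as np

    def prototype_classify(query, support_embeddings):
        """Build one prototype per relation as the mean of its support
        embeddings, then pick the prototype nearest to the query."""
        prototypes = {rel: np.mean(vecs, axis=0) for rel, vecs in support_embeddings.items()}
        distances = {rel: np.linalg.norm(query - proto) for rel, proto in prototypes.items()}
        return min(distances, key=distances.get)

    # Toy 3-way, 2-shot episode with random 8-dimensional "encoder outputs".
    rng = np.random.default_rng(0)
    support = {
        rel: rng.normal(loc=i, size=(2, 8))
        for i, rel in enumerate(["founder_of", "born_in", "capital_of"])
    }
    query = rng.normal(loc=1, size=8)   # drawn near the "born_in" prototype
    print(prototype_classify(query, support))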
Conference proceedings 2021
… in August 2021. The 31 full papers presented in this volume were carefully reviewed and selected from 90 submissions. The conference papers cover topics such as Machine Translation and Multilingual Information Processing, Minority Language Information Processing, Social Computing and Sentiment Analysis, Text Generation and Summarization, Information Retrieval, Dialogue and Question Answering, Linguistics and Cognitive Science, Language Resource and Evaluation, Knowledge Graph and Information Extraction, and NLP Applications.