
Title: Titlebook: Chinese Computational Linguistics; 18th China National Conference; Maosong Sun, Xuanjing Huang, Yang Liu; Conference proceedings 2019; Springer Nature Switzerland

Author: 壓縮    Time: 2025-3-21 19:41
Bibliographic metric charts for Chinese Computational Linguistics: impact factor (and subject ranking), online visibility (and subject ranking), citation frequency (and subject ranking), annual citations (and subject ranking), reader feedback (and subject ranking).

Author: DEMUR    Time: 2025-3-22 00:16
…minority language processing, language resource and evaluation, social computing and sentiment analysis, NLP applications. ISBN 978-3-030-32380-6, eBook ISBN 978-3-030-32381-3. Series ISSN 0302-9743, Series E-ISSN 1611-3349.
Author: 大看臺    Time: 2025-3-22 06:02
…just the dependency arcs to optimize the construction of the GCN. Experiments show that the model combined with the graph convolutional network outperforms the original model and effectively improves performance on the Google sentence compression dataset.
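As a rough illustration of this kind of graph-convolution step over dependency arcs, here is a minimal sketch; the shapes, the degree normalisation, and all variable names are illustrative assumptions, not taken from the paper.

import numpy as np

def gcn_layer(H, A, W):
    # One graph-convolution step: aggregate each token's neighbours along
    # dependency arcs (plus a self-loop), normalise by degree, then project.
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(0.0, D_inv @ A_hat @ H @ W)

# Toy example: 4 tokens with 8-dimensional vectors and a few dependency arcs.
H = np.random.randn(4, 8)
A = np.zeros((4, 4))
A[1, 0] = A[1, 2] = A[2, 3] = 1.0
A = A + A.T                      # treat arcs as undirected for aggregation
W = np.random.randn(8, 8)
print(gcn_layer(H, A, W).shape)  # (4, 8)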
Author: 青少年    Time: 2025-3-22 16:26
Reconstructed Option Rereading Network for Opinion Questions Reading Comprehension …options to generate the representation of options. Finally, we feed these representations into a max-pooling layer to obtain the ranking score for each opinion. Experiments show that our proposed model achieves state-of-the-art performance on the Chinese opinion-question machine reading comprehension dataset from the AI Challenger competition.
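A small sketch of the max-pooling scoring step described above, assuming each option has already been encoded as a sequence of hidden vectors; the scoring weights and all names are hypothetical.

import numpy as np

def option_score(option_hidden, w):
    # Max-pool the option's token vectors over time, then project to a scalar.
    pooled = option_hidden.max(axis=0)
    return float(pooled @ w)

hidden = 16
w = np.random.randn(hidden)                                # hypothetical scoring weights
options = [np.random.randn(n, hidden) for n in (3, 4, 2)]  # encoded candidate options
scores = [option_score(h, w) for h in options]
print(int(np.argmax(scores)))                              # index of the top-ranked option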
Author: Veneer    Time: 2025-3-22 22:18
Syntax-Aware Attention for Natural Language Inference with Phrase-Level Matching …ing the context from the syntactic tree. Experimental results on the SNLI and SciTail datasets demonstrate that our model is able to model NLI more precisely and significantly improves performance.
Author: Infect    Time: 2025-3-23 01:25
Conference proceedings 2019. This volume constitutes the proceedings of the 18th China National Conference on Computational Linguistics, CCL 2019, held in Kunming, China, in October 2019. The 56 full papers presented in this volume were carefully reviewed and selected from 134 submissions. They were organized in topical sections named: linguistics and cognitive science, fundamental theory and methods of computational linguistics, information retrieval and question answering,
Author: diabetes    Time: 2025-3-23 16:17
BB-KBQA: BERT-Based Knowledge Base Question Answering …linguistic knowledge to obtain deep contextualized representations. Experimental results demonstrate that our model achieves state-of-the-art performance on the NLPCC-ICCPOL 2016 KBQA dataset, with an 84.12% averaged F1 score (a 1.65% absolute improvement).
Author: Original    Time: 2025-3-24 18:30
Testing the Reasoning Power for NLI Models with Annotated Multi-perspective Entailment Dataset …used to explain the recognition ability of four NN-based models at a fine-grained level. The experimental results show that all the models perform worse on commonsense reasoning than on the other entailment categories. The highest accuracy difference is 13.22%.
Author: Enteropathic    Time: 2025-3-24 22:59
ERCNN: Enhanced Recurrent Convolutional Neural Networks for Learning Sentence Similarity …and the interactive effects of keypoints in two sentences to learn sentence similarity. With less computational complexity, our model yields state-of-the-art results, improving over other baseline models on the paraphrase identification task on the Ant Financial competition dataset.
Author: Absenteeism    Time: 2025-3-25 02:18
Comparative Investigation of Deep Learning Components for End-to-end Implicit Discourse Relationship …Experimental results show that, due to different linguistic features, the neural components have different effects in English and Chinese. Besides, our models achieve state-of-the-art performance on the CoNLL-2016 English and Chinese datasets.
Author: Blazon    Time: 2025-3-25 06:52
Sharing Pre-trained BERT Decoder for a Hybrid Summarization …selected sentence by an abstractive decoder. Moreover, we apply the pre-trained BERT model as the document encoder, sharing the context representations with both decoders. Experiments on the CNN/DailyMail dataset show that the proposed framework outperforms both state-of-the-art extractive and abstractive models.
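A very rough sketch of the shared-encoder idea (one document encoder whose outputs feed both an extractive scoring head and an abstractive decoder head); the class, parameters, and greedy decoding here are invented for illustration and do not reproduce the paper's actual architecture.

import numpy as np

class SharedEncoderSummarizer:
    # Toy hybrid summarizer: a single document encoder whose outputs are
    # shared by an extractive scoring head and an abstractive decoder head.
    def __init__(self, hidden=8, vocab=100, seed=0):
        rng = np.random.default_rng(seed)
        self.enc_W = rng.normal(size=(hidden, hidden))  # stand-in for a BERT encoder
        self.ext_w = rng.normal(size=hidden)            # extractive head
        self.dec_W = rng.normal(size=(hidden, vocab))   # abstractive head

    def encode(self, sent_vecs):
        return np.tanh(sent_vecs @ self.enc_W)          # shared context representations

    def extract(self, ctx):
        return ctx @ self.ext_w                         # one salience score per sentence

    def rewrite(self, ctx, selected):
        return (ctx[selected] @ self.dec_W).argmax(axis=-1)  # greedy token ids

model = SharedEncoderSummarizer()
ctx = model.encode(np.random.randn(5, 8))               # 5 sentence vectors
best = int(model.extract(ctx).argmax())
print(best, model.rewrite(ctx, [best]).shape)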
Author: CRP743    Time: 2025-3-25 09:34
Conference proceedings 2019 …text classification and summarization, knowledge graph and information extraction, machine translation and multilingual information processing, minority language processing, language resource and evaluation, social computing and sentiment analysis, NLP applications.
Author: 嬉耍    Time: 2025-3-25 19:01
Testing the Reasoning Power for NLI Models with Annotated Multi-perspective Entailment Dataset …sed) models have achieved prominent success. However, few models are interpretable. In this paper, we propose a Multi-perspective Entailment Category Labeling System (METALs). It consists of three categories and ten sub-categories. We manually annotate 3,368 entailment items. The annotated data is use…
Author: mortuary    Time: 2025-3-25 23:47
Enhancing Chinese Word Embeddings from Relevant Derivative Meanings of Main-Components in Characters …basic unit, or directly use the internal structure of words. However, these models still neglect the rich relevant derivative meanings in the internal structure of Chinese characters. Based on our observations, the relevant derivative meanings of the main-components in Chinese characters are very h…
Author: 構(gòu)成    Time: 2025-3-26 03:46
Association Relationship Analyses of Stylistic Syntactic Structures …relationships of linguistic features, such as the collocation of morphemes, words, or phrases. Although they have drawn many useful conclusions, some summarized linguistic rules lack verification against large-scale data. Due to the development of machine learning theories, we are now able to use compu…
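To make the idea of mining association relationships concrete, here is a minimal, generic support/confidence sketch over co-occurring part-of-speech and structure tags; the thresholds, the tag names, and the rule-mining recipe itself are illustrative assumptions rather than the paper's method.

from collections import Counter
from itertools import combinations

def association_rules(transactions, min_support=0.3, min_confidence=0.6):
    # Mine simple X -> Y rules from features that co-occur in one sentence,
    # e.g. a part-of-speech tag together with a syntactic-structure tag.
    n = len(transactions)
    item_count = Counter(i for t in transactions for i in set(t))
    pair_count = Counter(p for t in transactions
                         for p in combinations(sorted(set(t)), 2))
    rules = []
    for (x, y), c in pair_count.items():
        if c / n < min_support:
            continue
        for a, b in ((x, y), (y, x)):
            if c / item_count[a] >= min_confidence:
                rules.append((a, b, round(c / n, 2), round(c / item_count[a], 2)))
    return rules

sents = [{"noun", "attributive"}, {"noun", "attributive", "verb"},
         {"verb", "serial"}, {"noun", "attributive"}]
print(association_rules(sents))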
Author: SPER    Time: 2025-3-26 13:00
BB-KBQA: BERT-Based Knowledge Base Question Answering …real-world systems. Most existing methods are template-based or train BiLSTMs or CNNs on the task-specific dataset. However, hand-crafted templates are time-consuming to design and lack generalization ability. At the same time, BiLSTMs and CNNs require large-scale…
Author: cauda-equina    Time: 2025-3-26 20:45
Reconstructed Option Rereading Network for Opinion Questions Reading Comprehension …question referring to a related passage. Previous work focuses on factoid-based questions but ignores opinion-based questions. The options of opinion-based questions are usually sentiment phrases, such as “Good” or “Bad”. As a result, previous work fails to model the interactive information among passa…
Author: perjury    Time: 2025-3-27 12:24
Comparative Investigation of Deep Learning Components for End-to-end Implicit Discourse Relationship …systematic work to investigate the influence of neural components on the performance of implicit discourse relation recognition. To address this, in this work we compare many different components and build two implicit discourse parsers based on the sequence and the structure of the sentence, respectively. Experimenta…
Author: 開頭    Time: 2025-3-27 20:58
Sharing Pre-trained BERT Decoder for a Hybrid Summarization …as two separate subtasks. In this paper, we propose a novel extractive-and-abstractive hybrid framework for the single-document summarization task that jointly learns to select sentences and rewrite the summary. It first selects sentences with an extractive decoder and then generates the summary according to each s…
Author: relieve    Time: 2025-3-27 22:24
Title-Aware Neural News Topic Prediction …et user interests and make personalized recommendations. However, massive numbers of news articles are generated every day, and it is too expensive and time-consuming to manually categorize all of them. The news bodies usually convey the detailed information of the news, and the news titles usually contain summarized and…
Author: Mutter    Time: 2025-3-28 03:10
Colligational Patterns in China English: The Case of the Verbs of Communication …t in BNC. They are . and . for the verb ., . for the verb ., and . for the verb .. (3) Some colligational patterns occur less frequently in CCE than those in BNC, such as the patterns . and . for the verb . and . for the verb ., and . for the verb .. (4) No new colligational patterns have been found
Author: Interdict    Time: 2025-3-28 08:27
Enhancing Chinese Word Embeddings from Relevant Derivative Meanings of Main-Components in Characters …the attention mechanism. Our models can enhance the precision of word embeddings at a fine-grained level without generating additional vectors. Experiments on word similarity and syntactic analogy tasks are conducted to validate the feasibility of our models. Furthermore, the results show that our models have a…
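One plausible reading of the mechanism sketched above is an attention-weighted mixture of a word vector with the derivative-meaning vectors of its characters' main components; the sketch below is a guess at that general shape, with invented names and a mixing weight that is not from the paper.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def enhance_word_embedding(word_vec, component_vecs, alpha=0.5):
    # Attend from the word vector over the derivative-meaning vectors of its
    # characters' main components, then mix the attended meaning back in.
    weights = softmax(component_vecs @ word_vec)
    attended = weights @ component_vecs
    return (1.0 - alpha) * word_vec + alpha * attended

dim = 8
word = np.random.randn(dim)
components = np.random.randn(4, dim)      # hypothetical component-meaning vectors
print(enhance_word_embedding(word, components).shape)  # (8,)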
Author: 抒情短詩    Time: 2025-3-28 10:32
Association Relationship Analyses of Stylistic Syntactic Structures …studied before. Combined with linguistic theory, detailed analyses show that the associations between parts of speech and syntactic structures mined by the machine learning method have an excellent stylistic explanatory effect.
Author: 作嘔    Time: 2025-3-28 14:37
Adversarial Domain Adaptation for Chinese Semantic Dependency Graph Parsing …component we proposed, the model effectively improves performance in the target domain. On the CCSD dataset, our model achieves state-of-the-art performance, with a significant improvement over the strong baseline model.
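For readers unfamiliar with adversarial domain adaptation in general, a toy gradient-reversal-style update is sketched below; it shows only the generic idea (a domain classifier trained normally while the shared encoder receives the reversed gradient) and is not the parser or the component described in this paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_step(W_enc, w_dom, x, domain, lr=0.1, lam=0.5):
    # One toy update: the domain classifier learns to predict the domain,
    # while the shared encoder receives the reversed gradient so that its
    # features become hard to tell apart across domains.
    h = np.tanh(W_enc @ x)                       # shared features
    p = sigmoid(w_dom @ h)                       # P(example is from target domain)
    d_logit = p - domain                         # grad of cross-entropy w.r.t. logit
    grad_w_dom = d_logit * h
    grad_pre = d_logit * w_dom * (1.0 - h ** 2)  # back-prop through tanh
    w_dom = w_dom - lr * grad_w_dom              # classifier minimises domain loss
    W_enc = W_enc - lr * (-lam) * np.outer(grad_pre, x)  # encoder: reversed gradient
    return W_enc, w_dom

rng = np.random.default_rng(0)
W_enc, w_dom = rng.normal(size=(4, 6)), rng.normal(size=4)
W_enc, w_dom = adversarial_step(W_enc, w_dom, rng.normal(size=6), domain=1.0)
print(W_enc.shape, w_dom.shape)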
Author: dissolution    Time: 2025-3-29 01:43
Title-Aware Neural News Topic Prediction …news to learn unified news representations. In the title view, we learn title representations from words via a long short-term memory (LSTM) network, and use an attention mechanism to select important words according to their contextual representations. In the body view, we propose to use a hierarchica…
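A compact sketch of the word-level LSTM step that produces the per-word contextual states mentioned above; the dimensions, gate packing, and initialisation are arbitrary choices made for illustration, not the paper's configuration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b, hidden):
    # One LSTM step over a title word: input, forget and output gates plus a
    # candidate cell state, all computed from the word embedding x and the
    # previous hidden state h.
    z = W @ x + U @ h + b
    i = sigmoid(z[:hidden])
    f = sigmoid(z[hidden:2 * hidden])
    o = sigmoid(z[2 * hidden:3 * hidden])
    g = np.tanh(z[3 * hidden:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

emb, hidden = 6, 8
rng = np.random.default_rng(1)
W = rng.normal(size=(4 * hidden, emb))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for word_vec in rng.normal(size=(5, emb)):   # five words of a news title
    h, c = lstm_step(word_vec, h, c, W, U, b, hidden)
print(h.shape)                               # final title-side state, (8,)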
Author: FILTH    Time: 2025-3-29 22:26
We evaluate our model on two tasks: Answer Selection and Textual Entailment. Experimental results show the effectiveness of our model, which achieves state-of-the-art performance on the WikiQA dataset.
Author: 苦惱    Time: 2025-3-30 06:42
https://doi.org/10.1007/978-3-030-32381-3
Keywords: artificial intelligence; classification; information extraction; language resources; machine translation
Author: 刻苦讀書    Time: 2025-3-30 08:12
978-3-030-32380-6 © Springer Nature Switzerland AG 2019



