派博傳思國際中心

Title: Machine Translation: 18th China Conference, CCMT 2022; Tong Xiao, Juan Pino (eds.); conference proceedings, 2022; The Editor(s) (if applicable) and The Author(s)

CCMT 2022 Translation Quality Estimation Task: … found that pre-training the predictor with the semantic textual similarity (STS) task on the parallel corpus, and using augmented training data constructed by different machine translation (MT) engines, can improve prediction of the Human-targeted Translation Edit Rate (HTER) in both the Chinese-English and English-Chinese tasks.
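HTER, the quantity predicted above, is the edit distance between an MT output and its human post-edit, normalized by post-edit length. A minimal word-level sketch follows; it uses plain Levenshtein distance (full TER additionally counts block shifts), and the function name `hter` is our choice, not the paper's:

```python
def hter(hypothesis: str, post_edit: str) -> float:
    """Approximate HTER: word-level edit distance from MT output to its human
    post-edit, divided by post-edit length. Lower means less editing effort."""
    hyp, ref = hypothesis.split(), post_edit.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)
```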
Multi-strategy Enhanced Neural Machine Translation for Chinese Minority Languages: …and Ensemble. Our enhancement experiments have demonstrated the effectiveness of the above strategies. We submit the enhanced systems as primary systems for the three tracks. In addition, we train contrast models using additional bilingual data and submit results generated by these contrast models.
An Improved Multi-task Approach to Pre-trained Model Based MT Quality Estimation: … We show that the post-editing sub-task is much more informative and that mBART is superior to other pre-trained models. Experiments on the WMT2021 English-German and English-Chinese QE datasets show that the proposed method achieves 1.2%–2.1% improvements over a strong sentence-level QE baseline.
Effective Data Augmentation Methods for CCMT 2022: …Transformer model with several effective data augmentation strategies adopted to improve translation quality. Experiments show that the data augmentation methods improve the baseline system and enhance the robustness of the model.
NJUNLP’s Submission for CCMT 2022 Quality Estimation Task: …which achieves outstanding success in many NLP tasks, in order to improve performance. To make better use of parallel data, several types of pseudo data are also employed in our method. In addition, we ensemble several models to improve the final results.
Dynamic Mask Curriculum Learning for Non-Autoregressive Neural Machine Translation: …adjust the amount of information input at any time by way of curriculum learning. The fine-tuning and inference phases disable the module in the same way as a normal NAT model. In this paper, we experiment on two WMT16 translation datasets, and the BLEU improvement reaches 4.4 without speed reduction.
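The abstract only summarizes the dynamic-mask module. As an illustration of the general idea, a curriculum that controls how much target-side information the NAT decoder input reveals, a sketch might look like the following; the linear schedule and the names `mask_ratio` and `apply_dynamic_mask` are our assumptions, not the paper's implementation:

```python
import random

def mask_ratio(step: int, total_steps: int,
               start: float = 0.3, end: float = 1.0) -> float:
    """Linear curriculum: reveal many target tokens early in training,
    and mask (almost) everything by the end."""
    frac = min(step / total_steps, 1.0)
    return start + (end - start) * frac

def apply_dynamic_mask(target_tokens, step, total_steps,
                       mask_token="<mask>", seed=None):
    """Replace a curriculum-controlled fraction of decoder inputs with <mask>."""
    if not target_tokens:
        return []
    rng = random.Random(seed)
    ratio = mask_ratio(step, total_steps)
    n_mask = max(1, round(ratio * len(target_tokens)))
    idx = set(rng.sample(range(len(target_tokens)), n_mask))
    return [mask_token if i in idx else t for i, t in enumerate(target_tokens)]
```

At inference the module is simply disabled: all decoder inputs are masked, as in a standard NAT model.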
…Post-editing Advancement Cookbook: …progress since 2015; however, whether APE models really perform well on domain samples remains an open question, and achieving this is still a hard task. This paper provides a mobile-domain APE corpus with 50.1 TER / 37.4 BLEU for the En-Zh language pair. This corpus is much more practical…
Hot-Start Transfer Learning Combined with Approximate Distillation for Mongolian-Chinese Neural Machine Translation: …is very important, and the use of a pre-trained model can also alleviate the shortage of data. However, the good performance of common cold-start transfer learning methods is limited to cognate languages, realized by sharing a vocabulary. Moreover, when using the pre-trained model, the combination…
Review-Based Curriculum Learning for Neural Machine Translation: …from simple to difficult, to adapt a general NMT model to a specific domain. However, previous curriculum learning methods suffer from catastrophic forgetting and learning inefficiency. In this paper, we introduce a review-based curriculum learning method, selecting the curriculum in a targeted way according to…
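The excerpt names the two ingredients: easy-to-hard ordering, plus a "review" mechanism against forgetting. A toy scheduling sketch of that combination follows; the periodic re-insertion of the easiest example seen so far is our simplification, not the paper's selection criterion:

```python
def review_curriculum(examples, difficulty, review_every=3):
    """Order examples from easy to hard, re-inserting an earlier (easy)
    example every `review_every` steps so the model revisits old material
    instead of catastrophically forgetting it."""
    ordered = sorted(examples, key=difficulty)
    schedule, seen = [], []
    for k, ex in enumerate(ordered, 1):
        schedule.append(ex)
        seen.append(ex)
        if k % review_every == 0:
            schedule.append(seen[0])  # review the easiest example seen so far
    return schedule
```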
Multi-strategy Enhanced Neural Machine Translation for Chinese Minority Languages: …Mongolian-Chinese Daily Conversation Translation, Tibetan-Chinese Government Document Translation, and Uighur-Chinese News Translation. We train our models using the Deep Transformer architecture and adopt enhancement strategies such as Regularized Dropout, Tagged Back-Translation, Alternated Training, and…
Target-Side Language Model for Reference-Free Machine Translation Evaluation: …evaluation, where source texts are directly compared with system translations. In this paper, we design a reference-free metric based only on a target-side language model, for segment-level and system-level machine translation evaluation respectively, and find that promising results…
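As a toy illustration of reference-free scoring with only a target-side language model, hypotheses can be ranked by their mean token log-probability under an LM estimated from target-side monolingual text. The add-one-smoothed unigram model below is our minimal stand-in; the paper's LM and scoring are certainly more sophisticated:

```python
import math
from collections import Counter

def train_unigram_lm(corpus):
    """Estimate an add-one-smoothed unigram LM from target-side text.
    Returns a log-probability function over tokens."""
    counts = Counter(tok for sent in corpus for tok in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen tokens
    def logprob(token):
        return math.log((counts[token] + 1) / (total + vocab))
    return logprob

def lm_score(logprob, hypothesis):
    """Reference-free segment score: mean token log-probability
    (higher = judged more fluent by the target-side LM)."""
    toks = hypothesis.split()
    return sum(logprob(t) for t in toks) / len(toks)
```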
Life Is Short, Train It Less: Neural Machine Tibetan-Chinese Translation Based on mRASP and Dataset…: …selection. The multilingual pre-trained model is designed to increase the performance of low-resource machine translation by bringing in more common information. Instead of repeatedly training several checkpoints from scratch, this study proposes a checkpoint selection strategy that uses a cl…
Improving the Robustness of Low-Resource Neural Machine Translation with Adversarial Examples: …added to the input sentence, the model will produce a completely different translation with high confidence. Adversarial examples are currently a major tool for improving model robustness, and how to generate adversarial examples that can degrade the performance of the model while ensuring semantic consistency…
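One of the simplest perturbations used in robustness work is swapping adjacent tokens, a small change that often leaves the meaning recoverable while degrading a brittle model. The sketch below is only a toy baseline of that idea, not the paper's generation method:

```python
import random

def perturb(sentence: str, n_swaps: int = 1, seed=None) -> str:
    """Toy adversarial perturbation: swap n_swaps pairs of adjacent tokens.
    The token multiset is preserved, so the content words all survive."""
    rng = random.Random(seed)
    toks = sentence.split()
    for _ in range(n_swaps):
        if len(toks) < 2:
            break
        i = rng.randrange(len(toks) - 1)
        toks[i], toks[i + 1] = toks[i + 1], toks[i]
    return " ".join(toks)
```

A robust NMT model should translate the perturbed sentence nearly as well as the original; a large BLEU drop signals brittleness.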
Optimizing Deep Transformers for Chinese-Thai Low-Resource Translation: …explore the experiment settings (including the number of BPE merge operations, dropout probability, embedding size, etc.) for the low-resource scenario with a 6-layer Transformer. Considering that increasing the number of layers also increases the regularization on new model parameters (dropout mod…
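The "number of BPE merge operations" tuned above is the length of the learned merge list: fewer merges give smaller, more fragmented subword units, which often helps in low-resource settings. A compact, simplified version of the standard BPE learning loop (after Sennrich et al.) shows what that knob controls:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn `num_merges` BPE merge operations from a word list.
    Each word is treated as a symbol sequence ending in '</w>'."""
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for syms, freq in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break  # every word is a single symbol already
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge everywhere.
        new_vocab = Counter()
        for syms, freq in vocab.items():
            out, i = [], 0
            while i < len(syms):
                if i + 1 < len(syms) and (syms[i], syms[i + 1]) == best:
                    out.append(syms[i] + syms[i + 1])
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges
```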
CCMT 2022 Translation Quality Estimation Task: …effort estimation in the 18th China Conference on Machine Translation (CCMT) 2022. This method is based on a predictor-estimator model. The predictor is an XLM-RoBERTa model pre-trained on a large-scale parallel corpus and extracts features from the source-language text and the machine-translated text. The…
Effective Data Augmentation Methods for CCMT 2022: …Translation (CCMT 2022) evaluation tasks. We submitted results for two bilingual machine translation (MT) evaluation tasks in CCMT 2022: a Chinese-English MT task in the news domain, and a Chinese-Thai MT task for low-resource languages. Our system is based on the Transformer…
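The excerpt does not name the specific augmentation strategies; one common choice in CCMT systems (and listed elsewhere in this volume) is tagged back-translation, which marks synthetic source sentences with a special token so the model can distinguish them from genuine parallel data. A sketch, where the tag token and helper names are our choices:

```python
def tag_back_translated(pairs, tag="<BT>"):
    """Prefix each synthetic source sentence (produced by a target-to-source
    MT model) with a tag, so the NMT model can tell synthetic from genuine."""
    return [(f"{tag} {src}", tgt) for src, tgt in pairs]

def build_training_data(parallel, synthetic, tag="<BT>"):
    """Mix genuine parallel pairs with tagged back-translated pairs."""
    return list(parallel) + tag_back_translated(synthetic, tag)
```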
NJUNLP’s Submission for CCMT 2022 Quality Estimation Task: …the CCMT 2022 quality estimation sentence-level task for English-to-Chinese (EN-ZH). We follow the DirectQE framework, whose target is bridging the gap between pre-training on parallel data and fine-tuning on QE data. We further combine DirectQE with the pre-trained language model XLM-RoBERTa (XLM-R), which…
ISTIC’s Thai-to-Chinese Neural Machine Translation System for CCMT 2022: …China (ISTIC) for the 18th China Conference on Machine Translation (CCMT 2022). ISTIC participated in a low-resource evaluation task, the Thai-to-Chinese MT task. The paper mainly describes the system framework based on Transformer, the data preprocessing methods, and some strategies adopted in this system…
Pengcong Wang, Hongxu Hou, Shuo Sun, Nier Wu, Weichen Jian, Zongheng Yang, Yisong Wang
Zhanglin Wu, Daimeng Wei, Xiaoyu Chen, Ming Zhu, Zongyao Li, Hengchao Shang, Jinlong Yang, Zhengzhe Yu, Zhiq…
Min Zhang, Xiaosong Qiao, Hao Yang, Shimin Tao, Yanqing Zhao, Yinlu Li, Chang Su, Minghan Wang, Jiaxin Guo, Y…
Shuo Sun, Hongxu Hou, Nier Wu, Zongheng Yang, Yisong Wang, Pengcong Wang, Weichen Jian
Yisong Wang, Hongxu Hou, Shuo Sun, Nier Wu, Weichen Jian, Zongheng Yang, Pengcong Wang
Zongheng Yang, Hongxu Hou, Shuo Sun, Nier Wu, Yisong Wang, Weichen Jian, Pengcong Wang
Binhuan Yuan, Yueyang Li, Kehai Chen, Hao Lu, Muyun Yang, Hailong Cao
Jing Wang, Lina Yang
Yu Zhang, Xiang Geng, Shujian Huang, Jiajun Chen
Shuao Guo, Hangcheng Guo, Yanqing He, Tian Lan
Shuo Sun, Hongxu Hou, Nier Wu, Zongheng Yang, Yisong Wang, Pengcong Wang, Weichen Jian
Zongheng Yang, Hongxu Hou, Shuo Sun, Nier Wu, Yisong Wang, Weichen Jian, Pengcong Wang
Bin Li, Yixuan Weng, Bin Sun, Shutao Li
A Multi-tasking and Multi-stage Chinese Minority Pre-trained Language Model: …pre-training tasks and two-stage strategies are adopted during pre-training for better results. Experiments show that our model outperforms the baseline method on Chinese minority language translation. At the same time, we release the first generative pre-trained language model for the Chinese minority…



