
Titlebook: Man-Machine Speech Communication: 17th National Conference. Ling Zhenhua, Gao Jianqing, Jia Jia (eds.). Conference proceedings, 2023.

[復(fù)制鏈接]
樓主: Forbidding
31#
Posted on 2025-3-26 22:07:24
Adversarial Training Based on Meta-Learning in Unseen Domains for Speaker Verification

Speaker verification systems suffer from poor performance when applied to unseen data with domain shift, caused by differences between training and testing data such as scene noise and speaking style. To address these issues, the proposed model includes a backbone and an extra domain attention module, which are optimized …
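The abstract's "domain attention module" is not specified beyond re-weighting backbone features. A minimal sketch of one plausible reading, attention pooling of frame-level embeddings against a learned domain query vector (the names `frames` and `query` are illustrative, not from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def domain_attention(frames, query):
    """Attention-pool frame embeddings with a domain query vector.

    frames: (T, D) frame-level features from the backbone
    query:  (D,)   learned domain query (hypothetical parameter)
    Returns a (D,) utterance embedding that up-weights frames
    matching the domain query, a common way to make a pooled
    speaker embedding adapt to domain-specific conditions.
    """
    scores = frames @ query / np.sqrt(frames.shape[1])  # (T,) scaled dot products
    weights = softmax(scores)                           # (T,) sums to 1
    return weights @ frames                             # (D,) weighted pooling
```

With a zero query the weights are uniform and the module reduces to plain average pooling, which makes the baseline behavior easy to check.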
32#
Posted on 2025-3-27 01:34:55
Multi-speaker Multi-style Speech Synthesis with Timbre and Style Disentanglement

With the disentanglement of timbres and styles, TTS systems can synthesize expressive speech for a given speaker in any style seen in the training corpus. However, current research on timbre and style disentanglement still has shortcomings: current methods either …
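The practical payoff of disentanglement described above is free recombination: once timbre and style live in separate embeddings, any speaker can be paired with any style from the corpus. A toy sketch of that cross-pairing (the dictionary layout and concatenation-based conditioning are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def cross_pairs(timbres, styles):
    """Enumerate every speaker-style conditioning vector.

    timbres: {speaker_name: (Dt,) timbre embedding}
    styles:  {style_name:   (Ds,) style embedding}
    Returns {(speaker, style): (Dt+Ds,) vector} covering all
    combinations, including speaker-style pairs never seen
    together in training, which is the point of disentangling.
    """
    return {(spk, sty): np.concatenate([t, s])
            for spk, t in timbres.items()
            for sty, s in styles.items()}
```

With N speakers and M styles this yields N x M conditioning vectors from only N + M embeddings.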
33#
Posted on 2025-3-27 05:29:34
Multiple Confidence Gates for Joint Training of SE and ASR

Speech enhancement (SE) focuses on improving the auditory quality of speech, but it changes the distribution of the enhanced features, which is uncertain and detrimental to ASR. To tackle this challenge, an approach with multiple confidence gates for joint training of SE and ASR is proposed. A speech confidence gate prediction module …
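One common form a confidence gate takes, and a plausible reading of the mechanism above, is a sigmoid-weighted blend of enhanced and noisy features, so the ASR front end falls back to the raw input where the enhancer is unreliable. A minimal sketch, assuming per-frame, per-bin gating (the function and argument names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def confidence_gate(noisy, enhanced, logits):
    """Blend noisy and enhanced features with a learned confidence.

    noisy, enhanced: (T, F) feature matrices for T frames, F bins
    logits: (T, F) raw scores from a (hypothetical) confidence
            prediction network
    High confidence passes the enhanced feature through; low
    confidence falls back to the noisy input, limiting the
    distribution shift that hurts the downstream ASR.
    """
    c = sigmoid(logits)                    # gate in (0, 1)
    return c * enhanced + (1.0 - c) * noisy
```

At the extremes the gate recovers either input exactly, which makes the behavior easy to sanity-check.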
34#
Posted on 2025-3-27 11:22:32
35#
Posted on 2025-3-27 14:37:13
36#
Posted on 2025-3-27 20:47:45
37#
Posted on 2025-3-27 21:56:41
Interplay Between Prosody and Syntax-Semantics: Evidence from the Prosodic Features of Mandarin Tag Questions

The study of Mandarin sentence prosody, on the other hand, is still limited. To bridge this gap, this study probed the prosodic features of Mandarin tag questions in comparison with those of their declarative counterparts. The aim was to verify the hypothesis that the statement parts of the tag questions would …
38#
Posted on 2025-3-28 05:41:22
Improving Fine-Grained Emotion Control and Transfer with Gated Emotion Representations in Speech Synthesis

Due to the lack of fine-grained emotion strength labelling data, an emotion or style strength extractor is usually learned at the whole-utterance scale through a ranking function. However, such an utterance-based extractor is then used to provide fine-grained emotion strength labels, conditioned on which a fine- …
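The "ranking function" mentioned above sidesteps the need for absolute strength labels: the extractor only needs pairs where one utterance is known to be more emotional than the other (e.g. an emotional utterance versus a neutral one). A minimal sketch of the margin-ranking objective such extractors are commonly trained with (the scalar-score framing is an assumption about the setup, not the paper's exact loss):

```python
def pairwise_rank_loss(s_strong, s_weak, margin=1.0):
    """Margin ranking loss for an emotion-strength extractor.

    s_strong, s_weak: scalar strength scores for an utterance pair
    where the first is known to carry the stronger emotion.
    The loss is zero once s_strong exceeds s_weak by `margin`,
    so training only needs relative ordering, never absolute
    strength labels.
    """
    return max(0.0, margin - (s_strong - s_weak))
```

Summed over many such pairs, this pushes the extractor's scalar output to order utterances by emotion strength, after which the same score can serve as a fine-grained conditioning signal.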
39#
Posted on 2025-3-28 09:32:44
Violence Detection Through Fusing Visual Information into the Auditory Scene

To solve the present lack of violent audio datasets, we first created our own violent audio dataset, named VioAudio. We then proposed a CNN-ConvLSTM network model for audio violence detection, which obtained an accuracy of 91.5% on VioAudio and a MAP of 16.47% on the MediaEval 2015 dataset …
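The MAP figure quoted above is mean average precision, the standard retrieval metric for MediaEval-style violence detection: average precision (AP) is computed per query over a ranked list of clips, then averaged across queries. A sketch of AP over binary relevance labels, assuming the list is already sorted by model score:

```python
def average_precision(labels):
    """AP for a ranked list of binary relevance labels.

    labels: list of 0/1, in descending model-score order
            (1 = the retrieved clip really is violent).
    AP averages precision at each rank where a relevant item
    appears; MAP then averages AP over queries.
    """
    hits, ap = 0, 0.0
    total = sum(labels)
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            ap += hits / rank          # precision at this hit
    return ap / total if total else 0.0
```

A low MAP with high accuracy, as reported, is typical when the positive class is rare: classification accuracy stays high while ranking all true positives near the top remains hard.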
40#
Posted on 2025-3-28 11:04:49
Source-Filter-Based Generative Adversarial Neural Vocoder for High-Fidelity Speech Synthesis

… at various temporal resolutions and finally reconstructs the raw waveform. The experimental results show that the proposed SF-GAN vocoder outperforms the state-of-the-art HiFi-GAN and Fre-GAN in both analysis-synthesis (AS) and text-to-speech (TTS) tasks, and that the synthesized speech quality of SF-GAN is comparable to the ground-truth audio.
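The source-filter decomposition the vocoder's name refers to models voiced speech as a periodic excitation (source) shaped by a spectral envelope (filter); in SF-GAN the filter side is learned by neural networks. A deliberately simplified numpy sketch of the classical decomposition only, with FIR taps standing in for the learned filter (all names and the harmonic source construction are illustrative, not the paper's architecture):

```python
import numpy as np

def source_filter_frame(f0, sr, n, filt):
    """Synthesize one voiced frame as excitation convolved with a filter.

    f0:   fundamental frequency in Hz
    sr:   sample rate in Hz
    n:    frame length in samples
    filt: FIR filter taps standing in for the learned
          spectral-envelope network
    The excitation is a sum of harmonics of f0 below Nyquist
    (a crude glottal source); convolution imposes the envelope.
    """
    t = np.arange(n) / sr
    n_harm = int((sr / 2) // f0)                     # harmonics below Nyquist
    source = sum(np.sin(2 * np.pi * f0 * k * t)
                 for k in range(1, n_harm + 1))
    return np.convolve(source, filt, mode="same")    # filter the excitation
```

Conditioning such an excitation on predicted F0 is what gives source-filter vocoders their explicit pitch control, while the GAN training in SF-GAN is what pushes the filtered output toward natural speech quality.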
 關(guān)于派博傳思  派博傳思旗下網(wǎng)站  友情鏈接
派博傳思介紹 公司地理位置 論文服務(wù)流程 影響因子官網(wǎng) 吾愛論文網(wǎng) 大講堂 北京大學(xué) Oxford Uni. Harvard Uni.
發(fā)展歷史沿革 期刊點評 投稿經(jīng)驗總結(jié) SCIENCEGARD IMPACTFACTOR 派博系數(shù) 清華大學(xué) Yale Uni. Stanford Uni.
QQ|Archiver|手機版|小黑屋| 派博傳思國際 ( 京公網(wǎng)安備110108008328) GMT+8, 2025-10-20 14:19
Copyright © 2001-2015 派博傳思   京公網(wǎng)安備110108008328 版權(quán)所有 All rights reserved
快速回復(fù) 返回頂部 返回列表
卫辉市| 穆棱市| 溧水县| 辽源市| 富蕴县| 奉节县| 宁城县| 香港| 萨迦县| 容城县| 昌宁县| 灌南县| 和顺县| 丹凤县| 社旗县| 大悟县| 新绛县| 张家界市| 赞皇县| 和平县| 大方县| 平凉市| 双鸭山市| 保德县| 丰镇市| 青川县| 准格尔旗| 贵溪市| 宁远县| 贵德县| 唐山市| 林甸县| 长兴县| 分宜县| 郎溪县| 霍邱县| 林西县| 黔西| 新竹市| 雅安市| 铅山县|