Title: Advances in Robot Learning; 8th European Workshop. Jeremy Wyatt, John Demiris. Conference proceedings, Springer-Verlag Berlin Heidelberg, 2000. Author: 日月等 Time: 2025-3-21 18:08
Bibliometric indicators listed for Advances in Robot Learning: impact factor, online visibility, citation count, annual citations, and reader feedback, each with its subject ranking.
Advances in Robot Learning. Overview: Includes supplementary material. ISBNs 978-3-540-41162-8 and 978-3-540-40044-8. Series ISSN 0302-9743, Series E-ISSN 1611-3349.
Kwan-Hee Lee, Hyo-Jung Bae, Sung-Je Hong
A Planning Map for Mobile Robots: Speed Control and Paths Finding in a Changing Environment. …cognitive map builds up a graph linking together reachable places. We first demonstrate that this map can be used to control the robot's speed, ensuring convergence to the goal. We then show that the model makes it possible to select between different goals, first in a static environment and finally in a changing environment.
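To make the planning-map idea concrete, here is a minimal sketch (not the chapter's implementation; the place graph, place names, and the speed rule are invented for illustration) of a graph of reachable places used both to pick the nearer of two goals and to scale the speed command down as the robot approaches the chosen goal:

```python
from collections import deque

# Hypothetical planning map: nodes are learned "places", edges link places
# the robot has found to be directly reachable from one another.
planning_map = {
    "dock":    ["hall"],
    "hall":    ["dock", "lab", "storage"],
    "lab":     ["hall", "window"],
    "storage": ["hall"],
    "window":  ["lab"],
}

def distances_to(goal, graph):
    """Breadth-first search: number of hops from every place to the goal."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def speed_command(place, goal, graph, v_max=0.5):
    """Speed proportional to remaining graph distance, so the robot slows
    down and converges as it nears the goal (illustrative rule only)."""
    dist = distances_to(goal, graph)
    horizon = max(dist.values()) or 1
    return v_max * dist.get(place, horizon) / horizon

def choose_goal(place, goals, graph):
    """Among several candidate goals, pick the one with the fewest hops."""
    return min(goals, key=lambda g: distances_to(g, graph).get(place, float("inf")))

if __name__ == "__main__":
    here = "storage"
    goal = choose_goal(here, ["window", "dock"], planning_map)
    print(goal, speed_command(here, goal, planning_map))
```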
I.V. Bychkov, A.D. Kitov, E.A. Cherkashin
…obstacle avoidance). The simplest method of achieving navigation in a mobile robot is path integration. However, because this method suffers from drift errors, it is not robust enough for navigation over middle- and large-scale distances. This paper gives an overview of research in mobile …
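The drift problem is easy to reproduce numerically. A small sketch, assuming made-up noise levels on the odometry increments, shows the position estimate from pure path integration wandering further from ground truth as the path length grows:

```python
import math
import random

def integrate_path(steps, step_len=0.05, noise_dist=0.002, noise_head=0.01):
    """Dead reckoning: sum up small noisy displacement/heading increments.
    Returns (true position, estimated position) so drift can be compared."""
    x = y = theta = 0.0          # ground truth
    xe = ye = thetae = 0.0       # odometry estimate
    for _ in range(steps):
        turn = random.uniform(-0.05, 0.05)
        theta += turn
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
        # The robot only sees noisy versions of its own motion.
        thetae += turn + random.gauss(0.0, noise_head)
        d = step_len + random.gauss(0.0, noise_dist)
        xe += d * math.cos(thetae)
        ye += d * math.sin(thetae)
    return (x, y), (xe, ye)

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        (x, y), (xe, ye) = integrate_path(n)
        print(n, "drift =", math.hypot(x - xe, y - ye))
```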
M. Dumbser, T. Schwartzkopff, C.-D. Munz
…in a controlled environment and to collect objects using its gripper. Our aim is to build a control system that enables the robot to learn incrementally and to adapt to changes in the environment. The former is known as multi-task learning; the latter is usually referred to as continual ‘lifelong’ learning…
Egon Krause, Yurii I. Shokin, Nina Shokina
…properties such as the ability to generalize or to be noise-tolerant. Since the process of evolving such controllers in the real world is very time-consuming, simulators are usually used to speed up the evolutionary process. By doing so, a new problem arises: the controllers evolved in the simulator…
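One commonly cited way of narrowing this simulator-to-reality gap is to evaluate each controller under randomised noise during evolution, so that only noise-tolerant controllers survive. The toy sketch below illustrates that idea on an invented one-gain controller and plant; it is not the approach taken in the chapter:

```python
import random

def simulate(gain, noise):
    """Toy plant: drive state x towards 0 with a proportional controller
    whose sensor reading is corrupted by the given noise level."""
    x, cost = 1.0, 0.0
    for _ in range(50):
        reading = x + random.gauss(0.0, noise)
        x += -gain * reading * 0.1
        cost += abs(x)
    return cost

def evolve(pop_size=20, generations=40):
    """Each genome is a single gain; fitness is averaged over several noisy
    simulator runs so noise-tolerant controllers are favoured."""
    pop = [random.uniform(0.0, 3.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: sum(simulate(g, noise=0.1)
                                               for _ in range(5)))
        parents = scored[:pop_size // 2]
        pop = parents + [max(0.0, p + random.gauss(0.0, 0.2)) for p in parents]
    return scored[0]

if __name__ == "__main__":
    best = evolve()
    print("best gain:", best,
          "cost on a noisier 'real' plant:", simulate(best, noise=0.3))
```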
Alexey Androsov, Jörn Behrens, Sergey Danilov
…inductive logic programming (ILP). The method repeatedly applies induction to examples collected by using previously induced results. This method is effective in situations where only an inaccurate teacher is available. We examined the method by applying it to robot learning, which resulted in increasing the …
Kwangcheol Shin, Ajith Abraham, Sang Yong Han
Learning a Navigation Task in Changing Environments by Multi-task Reinforcement Learning. …of the robot. Finally, we investigate the capabilities of the learning algorithm with respect to the transfer of information between related reinforcement learning tasks, such as navigation tasks in different environments. It is hoped that this method will lead to a speed-up in reinforcement learning…
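As an illustration of this kind of transfer, the hedged sketch below runs tabular Q-learning on a toy grid world and then warm-starts a related task from the first task's Q-table; the environment, rewards, and parameters are all invented rather than taken from the chapter:

```python
import random

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def q_learn(goal, size=5, episodes=300, q=None, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy grid world. Passing in a Q-table learned
    on a related task ('transfer') gives the new task a warm start."""
    if q is None:
        q = {((r, c), a): 0.0 for r in range(size) for c in range(size)
             for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s = (0, 0)
        while s != goal:
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else max(range(len(ACTIONS)), key=lambda a_: q[(s, a_)]))
            dr, dc = ACTIONS[a]
            s2 = (min(max(s[0] + dr, 0), size - 1),
                  min(max(s[1] + dc, 0), size - 1))
            r = 1.0 if s2 == goal else -0.01
            best_next = max(q[(s2, a_)] for a_ in range(len(ACTIONS)))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    q_first = q_learn(goal=(4, 4))                       # task A from scratch
    q_transfer = q_learn(goal=(4, 3), q=dict(q_first))   # warm-start related task B
```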
Reinforcement Learning in Situated Agents: Theoretical Problems and Practical Solutions. …dynamically updated as information comes to hand during the learning process. Excessive variance in these estimators can be problematic, resulting in uneven or unstable learning, or even making effective learning impossible. Estimator variance is usually managed only indirectly, by selecting global learning …
How Does a Robot Find Redundancy by Itself? …with respect to the given task dynamically. The controller is derived using a least-mean-square method. A simulation of a camera-manipulator system is shown to demonstrate that the proposed method can find redundancy automatically.
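The chapter's controller is derived with a least-mean-square method; purely as a generic illustration of what "redundancy with respect to a task" means, the sketch below uses the standard pseudo-inverse / null-space decomposition for a made-up 3-joint arm with a 2-D task, leaving one degree of freedom free for a secondary objective:

```python
import numpy as np

def joint_velocity(J, task_vel, posture_grad):
    """Resolved-rate control: the pseudo-inverse term serves the task, while
    the null-space projector uses the leftover redundancy to follow a
    secondary objective (here, an arbitrary posture gradient)."""
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ task_vel + null_proj @ posture_grad

if __name__ == "__main__":
    # Hypothetical 2x3 task Jacobian: 3 joints, 2-D task (e.g. an image feature).
    J = np.array([[1.0, 0.5, 0.2],
                  [0.0, 1.0, 0.7]])
    dq = joint_velocity(J, task_vel=np.array([0.1, 0.0]),
                        posture_grad=np.array([0.0, 0.0, 0.3]))
    print("joint velocities:", dq)
    print("task velocity achieved:", J @ dq)   # matches the commanded [0.1, 0.0]
```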
Map Building through Self-Organisation for Robot Navigation. …robot navigation at Manchester University, using mechanisms of self-organisation (artificial neural networks) to identify perceptual landmarks in the robot’s environment and to use such landmarks for route learning and self-localisation, as well as for the quantitative assessment of the performance of such systems.
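A minimal self-organising-map sketch, with invented four-beam range snapshots standing in for real sensor data (and not the Manchester system itself), shows how unsupervised clustering can turn recurring sensor patterns into discrete landmark identities usable for self-localisation:

```python
import random

def train_som(samples, n_units=6, epochs=30, lr=0.3):
    """1-D Kohonen map: each unit's weight vector drifts towards the sensor
    snapshots it wins, so units come to stand for recurring 'perceptual
    landmarks' in the input stream."""
    dim = len(samples[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        radius = max(1, n_units // 2 - epoch // 10)   # shrinking neighbourhood
        for x in samples:
            win = min(range(n_units),
                      key=lambda u: sum((units[u][i] - x[i]) ** 2 for i in range(dim)))
            for u in range(max(0, win - radius), min(n_units, win + radius + 1)):
                for i in range(dim):
                    units[u][i] += lr * (x[i] - units[u][i])
        lr *= 0.9
    return units

def landmark_id(x, units):
    """Self-localisation cue: the index of the best-matching unit."""
    return min(range(len(units)),
               key=lambda u: sum((units[u][i] - x[i]) ** 2 for i in range(len(x))))

if __name__ == "__main__":
    place_a = [[0.9, 0.8, 0.1, 0.1] for _ in range(20)]   # snapshots near place A
    place_b = [[0.1, 0.2, 0.9, 0.8] for _ in range(20)]   # snapshots near place B
    som = train_som(place_a + place_b)
    print(landmark_id([0.85, 0.75, 0.15, 0.1], som),
          landmark_id([0.15, 0.2, 0.85, 0.8], som))
```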
…and non-Markovian domains, and present a direct approach to managing estimator variance, the ccBeta algorithm. Empirical results in an autonomous robotics domain are also presented, showing improved performance using the new ccBeta method.
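The published ccBeta update rules are not reproduced here; as a loose illustration of the general idea of correlation-driven step-size control for a single estimator, the following sketch grows the learning rate while successive errors agree in sign (the estimate is lagging a real change) and shrinks it when they merely alternate (the estimate is only tracking noise):

```python
def adaptive_estimate(samples, beta=0.5, beta_min=0.01, beta_max=1.0, k=0.2):
    """Running estimate with a per-estimator step size adapted from the
    agreement between successive errors. Illustrative only, not the
    published ccBeta rules."""
    estimate, prev_err = 0.0, 0.0
    for x in samples:
        err = x - estimate
        if err * prev_err > 0:       # consistent errors: speed up
            beta = min(beta_max, beta + k * beta)
        else:                        # sign flips or zero: damp the step size
            beta = max(beta_min, beta - k * beta)
        estimate += beta * err
        prev_err = err
    return estimate, beta

if __name__ == "__main__":
    noisy_constant = [1.0 + 0.3 * ((-1) ** i) for i in range(50)]   # pure noise
    step_change = [0.0] * 25 + [5.0] * 25                           # real change
    print(adaptive_estimate(noisy_constant))   # step size shrinks
    print(adaptive_estimate(step_change))      # step size grows to track the jump
```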
https://doi.org/10.1007/11751595 …between all the different types when comparing the resulting maps. In the case of comparing occupancy labels, no differences were found between the following pairs of methods: RATE and SUM (… = 0.157), ELFES and RATE (… = 0.600), and ELFES and SUM (… = 0.593).
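The exact RATE, SUM and ELFES update rules are not given in this fragment, so the sketch below only contrasts the two general families being compared: a simple hit-count ratio versus an Elfes-style Bayesian log-odds update, with an invented sensor model and threshold:

```python
import math

def count_label(hits, misses, threshold=0.5):
    """Counting-style update: label a cell occupied if the fraction of scans
    that returned a hit exceeds a threshold."""
    total = hits + misses
    return "occupied" if total and hits / total > threshold else "free"

def logodds_label(hits, misses, p_hit=0.7, p_miss=0.3):
    """Elfes-style Bayesian update in log-odds form: each observation adds a
    fixed increment; the sign of the accumulated log-odds gives the label."""
    l = (hits * math.log(p_hit / (1 - p_hit))
         + misses * math.log(p_miss / (1 - p_miss)))
    return "occupied" if l > 0 else "free"

if __name__ == "__main__":
    for hits, misses in [(8, 2), (3, 7), (5, 5)]:
        print(hits, misses, count_label(hits, misses), logodds_label(hits, misses))
```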