Autoassociative memory with high storage capacity: The storage capacity of the GNU grows with the number of inputs per neuron at a rate far greater than the linear growth of the famous Hopfield network [2]. This paper shows that the GNU attains an even higher capacity with the use of pyramids of neurons instead of single neurons as its nodes. The paper also shows that the storage capacity/cost ratio increases, giving further support to this node upgrade. This analysis combines the modular approach for storage capacity assessment of pyramids [3] and of GNUs [4].
Bayesian inference of noise levels in regression: The conventional treatment of regression assumes that the target data can be modelled as a deterministic function of the inputs, together with additive Gaussian noise having constant variance. The use of maximum likelihood to train such models then corresponds to the minimization of a sum-of-squares error function. In many applications a more realistic model would allow the noise variance itself to depend on the input variables. However, the use of maximum likelihood for training such models would give highly biased results. In this paper we show how a Bayesian treatment can allow for an input-dependent variance while overcoming the bias of maximum likelihood.
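A minimal numpy sketch of the bias issue mentioned above (a standard fact about Gaussian maximum likelihood, not code from the paper; all names and values are illustrative): when the mean is fitted by least squares, the maximum-likelihood variance estimate divides the residual sum of squares by N and so underestimates the true variance by a factor (N - p)/N.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 30, 4                      # sample size, number of mean parameters

# Toy data: cubic mean function plus constant-variance Gaussian noise
x = rng.uniform(-1, 1, N)
true_sigma2 = 0.25
X = np.vander(x, p)               # design matrix for a cubic polynomial
y = X @ np.array([1.0, -2.0, 0.5, 0.3]) + rng.normal(0, np.sqrt(true_sigma2), N)

# Fit the mean by least squares (the ML solution under Gaussian noise)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ w

# The ML variance estimate divides by N and is biased low;
# the unbiased estimate corrects for the p fitted mean parameters.
sigma2_ml = np.mean(residuals**2)            # expectation: (N - p)/N * true_sigma2
sigma2_unbiased = np.sum(residuals**2) / (N - p)
print(sigma2_ml, sigma2_unbiased)
```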
Complexity reduction in probabilistic neural networks: The full kernel estimate is computationally prohibitive, as all training data need to be stored and each individual training vector gives rise to a new term of the estimate. Given an original training sample of size N in a d-dimensional space, a simple binned kernel estimate with O(N^{d/(d+4)}) terms can be shown to attain an estimation accuracy only marginally inferior to the standard kernel method. This can be taken to indicate the order of complexity reduction generally achievable when a radial basis function style expansion is used in place of the probabilistic neural network.
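To make the complexity-reduction idea concrete, here is a hedged one-dimensional sketch contrasting the full Parzen/PNN estimate (one kernel term per stored training vector) with a binned kernel estimate (one term per grid bin); the function names and the bin count are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def kde_full(x_train, x_query, h):
    """Standard Parzen / probabilistic-neural-network estimate:
    one Gaussian kernel term per stored training vector (N terms)."""
    d = x_query[:, None] - x_train[None, :]
    return np.exp(-0.5 * (d / h) ** 2).sum(axis=1) / (len(x_train) * h * np.sqrt(2 * np.pi))

def kde_binned(x_train, x_query, h, n_bins=50):
    """Binned approximation: pre-aggregate the data into n_bins grid
    counts, so the estimate has n_bins terms instead of N."""
    counts, edges = np.histogram(x_train, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    d = x_query[:, None] - centers[None, :]
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return (k * counts).sum(axis=1) / len(x_train)

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 5000)
q = np.linspace(-3, 3, 7)
print(kde_full(x, q, h=0.2))     # 5000 terms per query point
print(kde_binned(x, q, h=0.2))   # 50 terms, nearly identical values
```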
Regularization by early stopping in single layer perceptron training: …discriminant function. On the way between these two classifiers one has a regularized discriminant analysis. That is equivalent to the “weight decay” regularization term added to the cost function. Thus early stopping plays the role of regularization of the network.
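A minimal sketch of early stopping on a single-layer linear unit trained by gradient descent, assuming a held-out validation set and a patience rule; the variable names and the stopping criterion are illustrative, not taken from the paper. Stopping while the weights are still small is what makes the procedure act like a weight-decay regularizer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-class data, split into training and validation sets
X = rng.normal(0, 1, (200, 10))
w_true = rng.normal(0, 1, 10)
y = (X @ w_true + rng.normal(0, 2.0, 200) > 0).astype(float)
Xtr, ytr, Xva, yva = X[:100], y[:100], X[100:], y[100:]

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

# Gradient descent from zero weights; stop when the validation error
# has not improved for `patience` consecutive epochs.
w = np.zeros(10)
lr, patience = 0.01, 20
best_w, best_err, since_best = w.copy(), np.inf, 0
for epoch in range(5000):
    w -= lr * 2 * Xtr.T @ (Xtr @ w - ytr) / len(ytr)
    err = mse(w, Xva, yva)
    if err < best_err:
        best_w, best_err, since_best = w.copy(), err, 0
    else:
        since_best += 1
        if since_best >= patience:
            break   # early stop: weights are still small, i.e. regularized
print(best_err, np.linalg.norm(best_w))
```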
Evolutionary computation — History, status, and perspectives
SEE-1 — A vision system for use in real world environments
Towards integration of nerve cells and silicon devices
Unifying perspectives on neuronal codes and processing
Visual recognition based on coding in temporal cortex: Analysis of pattern configuration and genera…
Christoph von der Malsburg, Werner von Seelen, Bernhard Sendhoff
Incorporating invariances in support vector learning machines: Developed only recently, support vector learning machines achieve high generalization ability by minimizing a bound on the expected test error; however, so far there existed no way of adding knowledge about invariances of a classification problem at hand. We present a method of incorporating prior knowledge about transformation invariances by applying transformations to support vectors, the training examples most critical for determining the classification boundary.
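A hedged sketch of the "virtual support vector" idea described above, using scikit-learn's SVC on a toy rotation-invariant problem; the choice of rotations as the invariance, the RBF kernel, and all parameter values are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Toy 2-D problem whose classes are rotation invariant about the origin
X = rng.normal(0, 1, (200, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)

clf = SVC(kernel="rbf").fit(X, y)

def rotate(P, theta):
    c, s = np.cos(theta), np.sin(theta)
    return P @ np.array([[c, -s], [s, c]]).T

# Apply the known invariance (small rotations) to the support vectors
# only, the points that determine the decision boundary, and retrain
# on the enlarged set ("virtual support vectors").
sv, sv_y = X[clf.support_], y[clf.support_]
X_aug = np.vstack([X] + [rotate(sv, t) for t in (-0.1, 0.1)])
y_aug = np.concatenate([y, sv_y, sv_y])
clf_vsv = SVC(kernel="rbf").fit(X_aug, y_aug)
```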
Artificial Neural Networks - ICANN 96. ISBN 978-3-540-68684-2. Series ISSN 0302-9743, Series E-ISSN 1611-3349.
…weight space. We specify an algorithm to generate representative weight vectors in a specific fundamental domain. The analysis of the metric structure of the fundamental domain enables us to use the … information of weight vector estimates, e.g. for cluster analysis. This can be implemented efficiently even for large networks.
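A minimal numpy sketch of one standard way to pick a representative weight vector per symmetry class for a one-hidden-layer tanh network (sign flips and hidden-unit permutations leave the network function unchanged); this illustrates the general technique, not the paper's specific algorithm or fundamental domain.

```python
import numpy as np

def canonicalize(W_in, w_out):
    """Map the weights of a 1-hidden-layer tanh network to a canonical
    representative of its symmetry class.  W_in: (h, d) input weights,
    w_out: (h,) output weights."""
    W_in, w_out = W_in.copy(), w_out.copy()
    # Sign symmetry (tanh is odd): make each outgoing weight non-negative
    flip = w_out < 0
    W_in[flip] *= -1
    w_out[flip] *= -1
    # Permutation symmetry: sort hidden units lexicographically
    order = np.lexsort(np.column_stack([W_in, w_out]).T[::-1])
    return W_in[order], w_out[order]

# Two weight settings that compute the same function map to one point:
W = np.array([[0.5, -1.0], [2.0, 0.3]]); v = np.array([-1.0, 2.0])
Wp = W[::-1] * np.array([1, -1])[:, None]   # permuted, sign-flipped copy
vp = v[::-1] * np.array([1, -1])
print(canonicalize(W, v))
print(canonicalize(Wp, vp))   # identical output
```

Comparing weight vectors in this canonical form is what makes distance-based cluster analysis of trained networks meaningful.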
…task shows a tendency to strong overfitting; on the other hand, its optimal training scheme is known. In the regime where overfitting occurs, on-line training outperforms batch training quite easily. Asymptotically, off-line training is better, but if the learning rate is chosen carefully on-line training remains competitive.
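For concreteness, a small numpy comparison of the two training schemes on a linear regression task; the task, learning rate, and epoch count are illustrative, not the setting analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Same data, two training schemes
X = rng.normal(0, 1, (50, 5))
w_true = rng.normal(0, 1, 5)
y = X @ w_true + rng.normal(0, 0.5, 50)

def batch_gd(X, y, lr=0.05, epochs=100):
    # Batch ("off-line"): one update per epoch from the full gradient
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def online_sgd(X, y, lr=0.05, epochs=100):
    # On-line: one update per presented example, in random order
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

print(batch_gd(X, y))
print(online_sgd(X, y))
```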
Application of Artificial Neural Networks in Particle Physics: …classification and function approximation. This network type is best suited for a hardware implementation, and special VLSI chips are available which are used in fast trigger processors. Also discussed are self-organizing networks for the recognition of features in large data samples. Neural net algorithms like the Hopfield model have been applied for pattern recognition in tracking detectors.
A novel encoding strategy for associative memory: In contrast with conventional Hopfield-type associative memories, the proposed encoding method computes the connection weight between two neurons by summing up not only the products of the corresponding two bits of all fundamental memories but also the products of their neighboring bits. Theoretical results concerning stability and attractivity are given. It is found both theoretically and experimentally that the proposed encoding scheme is an ideal approach for making the fundamental memories fixed points and maximizing the storage capacity, which can be many times the current limits.
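A hedged numpy sketch of one plausible reading of the encoding rule described above: each weight sums bit products over all fundamental memories at the aligned position and at neighboring offsets up to r. The paper's exact neighborhood definition may differ; treat this as an illustration only.

```python
import numpy as np

def encode(patterns, r=1):
    """Neighbor-augmented Hebbian rule (one plausible reading): the
    weight between neurons i and j sums the products of the
    corresponding bits of every fundamental memory AND the products of
    bits shifted by offsets k = -r..r.  patterns: (P, n) array of +/-1."""
    P, n = patterns.shape
    W = np.zeros((n, n))
    for x in patterns:
        for k in range(-r, r + 1):
            xs = np.roll(x, k)          # bits shifted by offset k (circular)
            W += np.outer(xs, xs)
    np.fill_diagonal(W, 0)
    return W

def recall(W, x, steps=20):
    """Standard synchronous Hopfield-style update."""
    for _ in range(steps):
        x = np.sign(W @ x + 1e-12)
    return x

patterns = np.sign(np.random.default_rng(5).normal(size=(3, 32)))
W = encode(patterns)
print(np.all(recall(W, patterns[0]) == patterns[0]))   # stored pattern is a fixed point
```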
…network with high information efficiency, but only if the patterns to be stored are extremely sparse. In this paper we report how the efficiency of the net can be improved for more dense coding rates by using a partially-connected net. The information efficiency can be maintained at a high level over a 2…
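A minimal numpy sketch of a partially-connected binary associative net in the spirit of this abstract: clipped Hebbian storage restricted to a random connection mask, with the recall threshold scaled by the connectivity. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, P, k, connect = 256, 20, 8, 0.5   # units, stored patterns, active bits, connectivity

# Sparse binary patterns: exactly k of n bits set in each
patterns = np.zeros((P, n), dtype=int)
for p in patterns:
    p[rng.choice(n, k, replace=False)] = 1

# Random partial-connectivity mask: only a fraction `connect` of the
# possible connections exists
C = rng.random((n, n)) < connect

# Clipped (binary) Hebbian storage, restricted to existing connections
W = ((patterns.T @ patterns) > 0) & C

def recall(cue):
    # A unit fires if enough of its *existing* inputs from active cue
    # units are on; the threshold is scaled down by the connectivity
    drive = W.astype(int) @ cue
    return (drive >= 0.8 * connect * cue.sum()).astype(int)

out = recall(patterns[0])
print((out & patterns[0]).sum(), out.sum())   # correctly recalled bits vs total active
```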
…estimation of a confidence value for a certain object. This reveals how trustworthy the classification of the particular object by the neural pattern classifier is. Even for badly trained networks it is possible to give reliable confidence estimations. Several estimators are considered. A k-NN technique…
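A hedged sketch of a k-NN confidence estimator of the kind this abstract mentions: the confidence of a classification is the fraction of the k training points nearest to the object that carry the assigned label. The exact estimator in the paper may differ; names and values here are illustrative.

```python
import numpy as np

def knn_confidence(X_train, y_train, x, label, k=10):
    """Fraction of the k nearest training points that share the label
    the classifier assigned to x.  Values near 1 mean the surrounding
    training data agree with the classification; values near
    1/n_classes mean the object lies in a contested region."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return np.mean(y_train[nearest] == label)

rng = np.random.default_rng(7)
X = rng.normal(0, 1, (500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(knn_confidence(X, y, np.array([2.0, 2.0]), label=1))  # deep inside class 1: high
print(knn_confidence(X, y, np.array([0.0, 0.0]), label=1))  # near the boundary: ~0.5
```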
https://doi.org/10.1007/3-540-61510-5
Keywords: Controller Area Network (CAN); artificial intelligence; artificial neural network; biologically inspire…
Asymptotic complexity of an RBF NN for correlated data representation: …of such building blocks has to be paid for by the relatively large numbers of units needed to approximate the density of correlated data. We define two scalar parameters to describe the complexity of the data to be modelled and study the relationship between the complexity of the data and the complexity of the … approximating network.
The topics and areas covered are a broad spectrum of theoretical aspects, applications in various fields, sensory processing, cognitive science and AI, implementations, and neurobiology. ISBN 978-3-540-61510-1, 978-3-540-68684-2. Series ISSN 0302-9743, Series E-ISSN 1611-3349.