Sciweavers

641 search results - page 11 / 129
» Training Methods for Adaptive Boosting of Neural Networks
IJIT
2004
14 years 11 months ago
A Comparison of First and Second Order Training Algorithms for Artificial Neural Networks
Minimization methods for training feed-forward networks with backpropagation are compared. Feed-forward network training is a special case of functional minimization, where no expli...
Syed Muhammad Aqil Burney, Tahseen Ahmed Jilani, C...
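As a rough illustration of the distinction this abstract draws, the sketch below (not from the paper; the quadratic loss, step size, and variable names are placeholders) contrasts a first-order gradient-descent update with a second-order Newton update on a toy minimization problem.

```python
import numpy as np

# Toy stand-in for a network's training loss: a positive-definite quadratic.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def loss(w):
    return 0.5 * w @ A @ w - b @ w

def grad(w):
    return A @ w - b

def hessian(w):
    return A

w = np.zeros(2)

# First-order update (gradient descent): step along the negative gradient.
lr = 0.1
w_first = w - lr * grad(w)

# Second-order update (Newton): rescale the gradient by the inverse Hessian;
# on a quadratic this reaches the minimum in a single step.
w_second = w - np.linalg.solve(hessian(w), grad(w))

print(loss(w_first), loss(w_second))
```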
ISBI
2009
IEEE
15 years 4 months ago
Automatic Markup of Neural Cell Membranes Using Boosted Decision Stumps
To better understand the central nervous system, neurobiologists need to reconstruct the underlying neural circuitry from electron microscopy images. One of the necessary tasks is...
Kannan Umadevi Venkataraju, António R. C. P...
IJCAI
2007
14 years 11 months ago
Simple Training of Dependency Parsers via Structured Boosting
Recently, significant progress has been made on learning structured predictors via coordinated training algorithms such as conditional random fields and maximum margin Markov ne...
Qin Iris Wang, Dekang Lin, Dale Schuurmans
NECO
2006
14 years 9 months ago
Experiments with AdaBoost.RT, an Improved Boosting Scheme for Regression
The application of the boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This paper ...
Durga L. Shrestha, Dimitri P. Solomatine
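For context, the snippet below is a minimal Python sketch of the general AdaBoost.RT scheme as it is usually described in the literature: base regressors are fit on reweighted data, examples whose absolute relative error exceeds a threshold phi are treated as errors, correctly predicted examples are down-weighted, and the ensemble prediction is a log(1/beta)-weighted average. The threshold, tree depth, error power, and choice of base learner here are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def adaboost_rt(X, y, n_rounds=10, phi=0.1, power=2):
    # Sketch of AdaBoost.RT with shallow regression trees as base learners.
    n = len(y)
    D = np.full(n, 1.0 / n)            # example weights
    models, log_inv_beta = [], []
    for _ in range(n_rounds):
        model = DecisionTreeRegressor(max_depth=3)
        model.fit(X, y, sample_weight=D)
        pred = model.predict(X)
        # Absolute relative error; examples above phi count as "errors".
        are = np.abs((pred - y) / np.where(y != 0, y, 1e-12))
        err = D[are > phi].sum()
        err = min(max(err, 1e-12), 1 - 1e-12)
        beta = err ** power
        # Down-weight correctly predicted examples, then renormalize.
        D = np.where(are <= phi, D * beta, D)
        D /= D.sum()
        models.append(model)
        log_inv_beta.append(np.log(1.0 / beta))

    def predict(Xq):
        # Ensemble output: log(1/beta)-weighted average of base predictions.
        preds = np.array([m.predict(Xq) for m in models])
        w = np.array(log_inv_beta)[:, None]
        return (w * preds).sum(axis=0) / w.sum()

    return predict
```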

Tutorial
15 years 5 months ago
Nguyen-Widrow and other Neural Network Weight/Threshold Initialization Methods
Neural networks learn by adjusting numeric values called weights and thresholds. A weight specifies how strong a connection exists between two neurons. A threshold is a value,...
Jeff Heaton
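A minimal sketch of the Nguyen-Widrow initialization as commonly described for a single hidden layer: draw small random weights, compute beta = 0.7 * H^(1/I) (H hidden neurons, I inputs), and rescale each hidden neuron's incoming weight vector to norm beta, with thresholds drawn in [-beta, beta]. The function name and the [-0.5, 0.5] initial range are assumptions, not taken from the tutorial.

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=None):
    rng = np.random.default_rng(rng)
    # Start from small uniform random weights for the hidden layer.
    W = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    # Scale factor that spreads the neurons' active regions over the inputs.
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Rescale each hidden neuron's incoming weight vector to norm beta.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W = beta * W / norms
    # Thresholds (biases) are drawn uniformly in [-beta, beta].
    b = rng.uniform(-beta, beta, size=n_hidden)
    return W, b
```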