BACKGROUND: Defect predictors learned from static code measures can isolate code modules with a higher than usual probability of defects. AIMS: To improve those learners by focusi...
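A minimal sketch of the kind of learner such studies evaluate: a probabilistic classifier trained on module-level static code measures to flag modules with an unusually high defect probability. The metric table, file name ("modules.csv"), and the choice of Gaussian naive Bayes are illustrative assumptions, not the paper's actual data or learner.

```python
# Hedged sketch: a defect predictor learned from static code measures.
# The CSV layout, metric names, and model choice are assumptions for
# illustration; the paper's setup may differ.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per module: static measures (LOC, cyclomatic complexity, ...)
# plus a binary "defective" label.
data = pd.read_csv("modules.csv")            # hypothetical file
X = data.drop(columns=["defective"])
y = data["defective"]

model = make_pipeline(StandardScaler(), GaussianNB())
scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```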
Motivation: As next-generation sequencing is rapidly adding new genomes, their correct placement in the taxonomy needs verification. However, the current methods for confirming cl...
Rao M. Kotamarti, Michael Hahsler, Douglas Raiford...
This paper discusses building complex classifiers from a single labeled example and a vast number of unlabeled observation sets, each derived from observation of a single process or...
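One way to read this setting, sketched below under invented assumptions: cluster the unlabeled observations, treat the cluster containing the single labeled example as its class, and fit a classifier on the resulting pseudo-labels. This is a generic stand-in for the semi-supervised idea, not the paper's construction; the data are synthetic.

```python
# Hedged sketch of using a single labeled example plus many unlabeled
# observations: cluster the unlabeled data, give the seed cluster the
# known label, and train on the pseudo-labels. Illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X_unlabeled, _ = make_blobs(n_samples=300, centers=2, random_state=1)
x_labeled = X_unlabeled[0]          # the single labeled observation
label_of_x = 1                      # its known (binary) class

clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X_unlabeled)
seed_cluster = clusters[0]          # cluster holding the labeled example

# Pseudo-label: the seed cluster gets the known label, the rest the other class.
pseudo_y = np.where(clusters == seed_cluster, label_of_x, 1 - label_of_x)
clf = LogisticRegression().fit(X_unlabeled, pseudo_y)
print("predicted class of the labeled example:", clf.predict([x_labeled])[0])
```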
In machine learning problems with tens of thousands of features and only dozens or hundreds of independent training examples, dimensionality reduction is essential for good learni...
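A small illustration of that regime, under assumed dimensions: with far more features than examples, the reducer is fitted inside a cross-validated pipeline so it never sees held-out data. PCA and the feature counts here are illustrative choices, not the paper's method.

```python
# Hedged sketch: many features, few examples. Dimensionality reduction
# is fitted inside the cross-validation pipeline to avoid leakage.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# 100 examples, 20,000 features, only a handful informative.
X, y = make_classification(n_samples=100, n_features=20000,
                           n_informative=20, random_state=0)

pipe = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y, cv=5)
print(f"5-fold accuracy with PCA to 20 dims: {scores.mean():.2f}")
```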
Semi-naive Bayesian classifiers seek to retain the numerous strengths of naive Bayes while reducing error by weakening the attribute independence assumption. Backwards Sequential ...
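Assuming the truncated sentence goes on to name Backwards Sequential Elimination, the sketch below shows that wrapper in generic form: greedily delete the attribute whose removal most improves cross-validated naive Bayes accuracy, stopping when no deletion helps. This illustrates the general technique, not this paper's exact algorithm or evaluation; the dataset is a stand-in.

```python
# Hedged sketch of Backwards Sequential Elimination around naive Bayes:
# repeatedly drop the attribute whose removal most improves cross-validated
# accuracy, stopping when no deletion helps. Generic illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
kept = list(range(X.shape[1]))

def score(cols):
    return cross_val_score(GaussianNB(), X[:, cols], y, cv=5).mean()

best = score(kept)
improved = True
while improved and len(kept) > 1:
    improved = False
    for col in list(kept):
        trial = [c for c in kept if c != col]
        s = score(trial)
        if s > best:                 # deleting this attribute helps
            best, kept, improved = s, trial, True
            break

print(f"kept {len(kept)} of {X.shape[1]} attributes, CV accuracy {best:.3f}")
```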