Sciweavers

Search: Informative sampling for large unbalanced data sets
JMLR
2012
Conditional Likelihood Maximisation: A Unifying Framework for Information Theoretic Feature Selection
We present a unifying framework for information theoretic feature selection, bringing almost two decades of research on heuristic filter criteria under a single theoretical inter...
Gavin Brown, Adam Pocock, Ming-Jie Zhao, Mikel Luj...
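The filter criteria this survey unifies all start from mutual information between a feature and the class label. As a minimal illustration (not the paper's framework, and with toy data invented here), the simplest such filter ranks each discrete feature by its empirical mutual information with the label:

```python
# Hedged sketch: empirical mutual-information filter scoring for
# discrete features, the baseline criterion behind information-theoretic
# feature selection. Data and names below are illustrative only.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    # Sum p(x,y) * log2( p(x,y) / (p(x) p(y)) ) over observed pairs.
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy data: feature A copies the label exactly; feature B is independent of it.
labels    = [0, 0, 1, 1, 0, 1, 0, 1]
feature_a = [0, 0, 1, 1, 0, 1, 0, 1]
feature_b = [0, 1, 0, 1, 0, 0, 1, 1]

scores = {"A": mutual_information(feature_a, labels),
          "B": mutual_information(feature_b, labels)}
# Feature A scores 1 bit (fully informative); feature B scores 0 bits.
```

A practical filter would score every candidate feature this way and keep the top-k; the criteria surveyed in the paper extend this score with redundancy and conditional terms.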
NAR
2000
The Homeodomain Resource: a prototype database for a large protein family
The Homeodomain Resource is an annotated collection of non-redundant protein sequences, three-dimensional structures and genomic information for the homeodomain protein family. Re...
Sharmila Banerjee-Basu, Joseph F. Ryan, Andreas D....
KDD
2012
ACM
Multi-source learning for joint analysis of incomplete multi-modality neuroimaging data
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer's Disease Neuroimaging Initiativ...
Lei Yuan, Yalin Wang, Paul M. Thompson, Vaibhav A....
BMCBI
2004
A power law global error model for the identification of differentially expressed genes in microarray data
Background: High-density oligonucleotide microarray technology enables the discovery of genes that are transcriptionally modulated in different biological samples due to physiolog...
Norman Pavelka, Mattia Pelizzola, Caterina Vizzard...
INFORMATICALT
2008
Vague Rough Set Techniques for Uncertainty Processing in Relational Database Model
The study of databases began with the design of efficient storage and data-sharing techniques for large amounts of data. This paper concerns the processing of imprecision ...
Karan Singh, Samajh Singh Thakur, Mangi Lal