Clustering Methods for Large Databases: From the Past to the Future
|
Alexander Hinneburg and
Daniel A. Keim
Because of rapid technological progress, the amount of information stored in databases is increasing rapidly. In addition, new applications require the storage and retrieval of complex multimedia objects, which are often represented by high-dimensional feature vectors. Finding the valuable information hidden in these databases is a difficult task. Cluster analysis is one of the basic techniques often applied in analyzing large data sets. Originating from the area of statistics, most cluster analysis algorithms were originally developed for relatively small data sets. In recent years, clustering algorithms have been extended to work efficiently on large data sets, and some of them even allow the clustering of high-dimensional feature vectors. Many of these methods use some kind of index structure for efficient retrieval of the required data; other approaches rely on preprocessing to make the clustering more efficient.
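As a point of reference for the kind of cluster analysis the abstract refers to, the following is a minimal sketch (not taken from the tutorial) of k-means clustering applied to high-dimensional feature vectors, using only NumPy. All names, parameters, and the choice of k-means itself are illustrative assumptions, not the authors' method.

import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Cluster the rows of X (n feature vectors of dimension d) into k groups."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points;
        # keep the old centroid if a cluster becomes empty.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Example: 1,000 random 10-dimensional feature vectors, 5 clusters.
X = np.random.default_rng(1).normal(size=(1000, 10))
labels, centroids = kmeans(X, k=5)

The naive assignment step above scans all data points in every iteration; the index-based and preprocessing-based approaches mentioned in the abstract aim precisely at avoiding such full scans on large databases.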
@inproceedings{DBLP:conf/sigmod/HinneburgK99,
  author    = {Alexander Hinneburg and
               Daniel A. Keim},
  editor    = {Alex Delis and
               Christos Faloutsos and
               Shahram Ghandeharizadeh},
  title     = {Clustering Methods for Large Databases: From the Past to the Future},
  booktitle = {SIGMOD 1999, Proceedings ACM SIGMOD International Conference
               on Management of Data, June 1-3, 1999, Philadelphia, Pennsylvania, USA},
  publisher = {ACM Press},
  year      = {1999},
  isbn      = {1-58113-084-8},
  pages     = {509},
  crossref  = {DBLP:conf/sigmod/99},
  bibsource = {DBLP, http://dblp.uni-trier.de}
}
Copyright (C) 2000 ACM