@inproceedings{DBLP:conf/vldb/Wiederhold89,
  author    = {Gio Wiederhold},
  editor    = {Peter M. G. Apers and Gio Wiederhold},
  title     = {Knowledge to Mediate from User's Workstations to Databases},
  booktitle = {Proceedings of the Fifteenth International Conference on Very Large Data Bases, August 22-25, 1989, Amsterdam, The Netherlands},
  publisher = {Morgan Kaufmann},
  year      = {1989},
  isbn      = {1-55860-101-5},
  pages     = {2},
  ee        = {db/conf/vldb/Wiederhold89.html},
  crossref  = {DBLP:conf/vldb/89},
  bibsource = {DBLP, http://dblp.uni-trier.de}
}
Information systems that are now becoming available are reaching the capabilities envisaged by Vannevar Bush in 1945 when he specified the 'memex'. We can sit at our workstations and view data from many different sources, but any integration of the information still takes place in our minds, and information overload has become a common complaint. New hardware and software technologies address aspects of this problem. In this panel we want to develop a vision which integrates new directions to carry us beyond the 'memex'. We use the term 'mediation' for the software that fits between the user's agent, the workstation, and the diverse data sources.
Specifically, the ability of computers to communicate effectively opens up possibilities for information processing which greatly exceed those on our immediate horizon. Users at their workstations should be able to get information, not just data, from private and public databases. To achieve such services, mediation modules are needed which combine expert knowledge about accessing and processing data, so that only relevant information is fed to the users' stations. The mediators must be maintainable, and hence limited in scope, just like the experts they represent. Mediation encompasses such concepts as views, computations involving closure, ranking of results, generalization, navigational search, etc.
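To make the mediation idea concrete, the following is a minimal, illustrative sketch (not from the paper) of a mediator sitting between a user's workstation and several data sources: it gathers raw data from each source, applies a toy relevance rule standing in for expert knowledge, and returns only ranked, relevant items. The names Mediator, Record, and the example sources are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class Record:
    source: str    # which underlying data source produced the item
    subject: str   # what the item is about
    detail: str    # the raw data value
    weight: float  # relevance assigned by the mediator's knowledge


class Mediator:
    """Fits between the user's agent and diverse data sources: it knows how
    to access each source, then filters and ranks the raw data so that only
    relevant information reaches the user's station."""

    def __init__(self, sources):
        # sources: mapping of source name -> list of (subject, detail) rows
        self.sources = sources

    def query(self, topic, limit=3):
        # 1. Access: collect raw data from every source. A real mediator
        #    would translate the request into each source's query language.
        raw = [
            Record(name, subject, detail, weight=0.0)
            for name, rows in self.sources.items()
            for subject, detail in rows
        ]
        # 2. Process: apply (here, trivial) expert knowledge to rank results
        #    and discard data the user does not need.
        for r in raw:
            r.weight = 1.0 if topic.lower() in r.subject.lower() else 0.0
        relevant = [r for r in raw if r.weight > 0]
        relevant.sort(key=lambda r: r.weight, reverse=True)
        return relevant[:limit]


if __name__ == "__main__":
    sources = {
        "inventory_db": [("widget stock", "420 units"), ("gadget stock", "7 units")],
        "orders_db": [("widget orders", "35 pending"), ("staff roster", "12 names")],
    }
    mediator = Mediator(sources)
    for rec in mediator.query("widget"):
        print(f"{rec.source}: {rec.subject} -> {rec.detail}")

A real mediator would, of course, encode far richer domain knowledge (views, closure computations, generalization), but the shape is the same: access, process, and deliver information rather than raw data.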
The user, mediator, and server modules can be mapped onto the powerful distributed hardware that is becoming available, forming an architecture for information processing into the foreseeable future. We will argue that modularity is not only a goal but also the means by which that goal is reached, since these systems will need autonomous modules to permit growth and to enable them to survive in a rapidly changing world.
The discussion should not be oriented towards any specific design or implementation, but should focus on the architecture and critical sub-tasks needed to develop a new conceptual framework.
Copyright © 1989 by the VLDB Endowment. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by the permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment.