Benefits of Content-based Visual Data Access in Radiology
Abstract
The field of medicine is often cited as an area for which content-based visual retrieval holds considerable promise. To date, very few visual image retrieval systems have been used in clinical practice; the first applications of image retrieval systems in medicine are only now being developed to complement conventional text-based searches. An image retrieval system was developed and integrated into a radiology teaching file system, and its retrieval performance was evaluated against a standard of reference generated by a radiologist, with use of query topics that represent the teaching database well. The results of this evaluation indicate that content-based image retrieval has the potential to become an important technology for the field of radiology, not only in research but in teaching and diagnostics as well. However, acceptance of this technology in the clinical domain will require identification and implementation of clinical applications that use content-based access mechanisms, which in turn will necessitate close cooperation between medical practitioners and medical computer scientists.
© RSNA, 2005
Introduction
Content-based visual data access without the use of textual descriptions is a very active research topic in computer vision and image processing. Many applications exist in the research domain as well as in commercial systems. The field of medicine is often cited as one of the principal areas in which content-based visual queries can be beneficial. Still, only a very few visual image retrieval systems have been used in clinical practice; most remain prototypes.
References 1–3 provide more information on general image retrieval systems for nonmedical applications such as journalists’ image archives or trademark retrieval. Most current image retrieval systems formulate queries by using example images (QBE, or “query by example,” an approach that requires a proper starting image to formulate the query). Other systems formulate queries by selecting regions from presegmented images (4) or with the combined use of text and images (5). In general, the images are represented in databases by automatically extracted visual features that are supposed to correspond to the visual content of the image or to the way the viewer perceives the image. The visual features most commonly used for image retrieval are gray levels and color descriptors (local or global), texture descriptors, and shapes of segmented objects.
The gray levels in an image and their distribution or layout throughout the image are often represented with histograms that can be compared with a simple intersection or a Euclidean distance. Local gray-level descriptors can be represented by the most commonly occurring gray level in a certain area or by local gray-level histograms. Textures can be described with wavelet filter responses (6), which measure the changes in the gray levels in various directions and scales throughout an image, or on the basis of features derived from co-occurrence matrices, which help determine the frequency of occurrence of neighboring gray levels in various directions and distances to describe a texture. These approaches allow description of the texture in terms of scale, principal directions, and whether texture changes are rapid or gradual. Texture descriptors are especially helpful when they are extracted from a region that is homogeneous in texture. Shape features can be used to characterize identifiable or segmented objects and include mathematical moments of the shape as well as features that describe the roundness of the form or the number of changes between convex and concave segments of the contour. Often, the goal is to extract features that are invariant with respect to object size or rotation. By comparing the features of two images, one can calculate a similarity score between the two. Different distance measures for comparisons exist, such as the simple Euclidean or the “city block” distance.
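As a concrete illustration of the simplest of these descriptors, a global gray-level histogram and the two comparison measures mentioned above can be sketched in Python with NumPy; the 16-bin quantization here is purely illustrative, not the value any particular system uses:

```python
import numpy as np

def gray_histogram(image, n_bins=16):
    """Normalized gray-level histogram of an 8-bit image (values 0-255)."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, 256))
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; equals 1 for identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())

def euclidean_distance(h1, h2):
    """Dissimilarity; equals 0 for identical histograms."""
    return float(np.linalg.norm(h1 - h2))
```

Histogram intersection rewards overlapping histogram mass and is bounded by 1 for normalized histograms, whereas the Euclidean distance grows with any bin-wise difference; which behaves better depends on the image collection.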
In general, all of the features are on a fairly low semantic level compared with text that might come with the images. These features also often contrast with the high-level semantic concepts that users are mainly looking for, such as a specific object (eg, tumor) or a certain texture representing a disease process (eg, emphysema). This semantic difference between image representation and image content is called the semantic gap. Another gap or information loss is the sensory gap that already exists between the original physical structure and the digital image—as, for example, with three-dimensional structure represented by a two-dimensional image (eg, chest radiograph). The limited resolution of digital images also contributes to this sensory gap.
Most published articles on content-based medical image retrieval seem to have been written either in a medical department, where a clear need for image retrieval systems exists and is often defined (7–9), or in a computer science department, where medical data sets are used but there is no link with clinical practice (10,11). Only a few active research projects with clear clinical goals and operational prototypes currently exist. The ASSERT project (Automatic Search and Selection Engine with Retrieval Tools) at Purdue University is rather active, focusing on the analysis of textures in high-resolution computed tomographic (CT) scans of the lung (12). A clinical test in which the system was used as a diagnostic aid showed an improvement in diagnostic quality, especially for less experienced practitioners (13). Eleven persons were asked to make a diagnosis by viewing the same set of images on two occasions, once with the help of ASSERT and once without its help. The viewings were separated by several weeks so that the 11 diagnosticians could not recall the previous results. The percentage of correct diagnoses improved from 29% without the help of ASSERT to 62% with its help. The degree of improvement was greater for general radiologists than for chest specialists. In no group did diagnostic performance erode. The Image Retrieval in Medical Applications (IRMA) project is also very active (14). The project is concerned with visual similarity retrieval and automatic image classification. In IRMA, a multidimensional code was created to annotate image databases (15), with axes for modality, body orientation, body region examined, and biologic system examined. An image database is currently being annotated that contains 10,000 clinical images, mainly conventional radiographs.
Sometimes, medical images are retrieved on the basis of text only (9,16). Such a system does not truly represent content-based retrieval but rather context-based retrieval, since the text describes the context in which the image was obtained or evaluated and only partly the visual content of the image. The radiology report or the text supplied in the teaching file (if available with the image in digital form) can be used for text-based retrieval. These texts are generally edited to remove frequent “stop” words (eg, “the,” “a”). Then, “stemming” is used to remove the unimportant word endings (eg, “contained” and “containing” both become “contain”); the remaining words can then be indexed and used for retrieval. The ImageCLEF competition ( http://ir.shef.ac.uk/imageclef2004/) shows that both text and visual features have an important influence on retrieval quality; the best results are obtained when the two are combined. Whereas text has the advantage of covering semantics, it also has the disadvantage of being task and user dependent, and even the same person will annotate an image in slightly different words when performing the same task again. Moreover, when a new image is produced, no text is yet available, and the radiologist will need to formulate queries by other means. Automatically extracted visual features are “objective” for a given image and can be obtained without additional work.
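The stop-word removal and stemming steps described above can be sketched as follows; the stop-word list and suffix set are tiny illustrative stand-ins for what a real system (eg, one using the Porter stemming algorithm) would apply:

```python
# Illustrative stand-ins; a real system would use a full stop-word list
# and a proper stemmer (eg, the Porter algorithm).
STOP_WORDS = {"the", "a", "an", "of", "in", "and", "is"}
SUFFIXES = ("ing", "ed", "s")

def index_terms(text):
    """Tokenize a report, drop stop words, and crudely strip suffixes."""
    terms = []
    for word in text.lower().split():
        word = word.strip(".,;:!?")
        if not word or word in STOP_WORDS:
            continue
        for suffix in SUFFIXES:
            # Keep a minimum stem length so short words are not mangled.
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                word = word[: -len(suffix)]
                break
        terms.append(word)
    return terms
```

With this sketch, "contained" and "containing" both reduce to "contain", so a query for either form matches reports using the other.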
In this article, we discuss and illustrate various applications for visual data access in radiology, including applications in teaching, research, and diagnostics. In addition, we discuss the integration of content-based access into various clinical applications such as the picture archiving and communication system (PACS) and electronic patient records. We also describe the limitations of automatic visual retrieval and the development and features of the medGIFT retrieval system.
Study Overview
The medGIFT retrieval system described in this article is an image retrieval engine (17) that is based on the open source system GIFT (GNU Image Finding Tool) ( http://www.gnu.org/software/gift/), an outcome of the Viper (Visual Information Processing for Enhanced Retrieval) project at the University of Geneva, Switzerland ( http://viper.unige.ch/). MedGIFT differs from IRMA and ASSERT in that it uses a very large feature space and is based on well-known techniques from text retrieval. Relevance feedback and user interaction are very important components. Another difference is that the medGIFT retrieval system does not require classification and a priori knowledge for retrieval. The features themselves are supposed to model the visual similarity of the images. A detailed review of content-based medical image retrieval systems can be found in reference 18.
Our casimage teaching file system ( http://www.casimage.com/) integrates with the visual retrieval framework and gives us access to large teaching files. Casimage is an in-house development, and medGIFT is likewise being developed and specialized for use with medical images in the Medical Informatics Service of the University and the University Hospitals of Geneva, Switzerland. The medGIFT system retrieves images on the basis of local and global similarities in gray level and texture. The system was evaluated for medical image retrieval in the context of the ImageCLEF competition (19,20). Part of this evaluation concerned the number of gray levels that delivers the best retrieval results; this number turned out to be surprisingly low. The system was used with several gray-level quantizations, and its performance was evaluated against a standard of reference generated by a radiologist.
Applications for Visual Data Access
Teaching
Teaching can be the first domain to really profit from content-based access methods (21). Many teaching files exist, such as casimage (22) or the online system myPACS ( http://www.mypacs.net/). These systems are meant to allow maximum flexibility for the practitioner entering cases and to be integrated into the clinical work flow as well as possible. In this way, interesting or typical cases can be exported directly from the PACS or viewing station without the need for complex transformations. The inclusion of images and texts into presentations should be equally easy, by “dragging and dropping.” Such an easy-to-use system gives practitioners flexibility but also prevents strict control of the entered data for validity. As a consequence, the data are often of mediocre quality, containing spelling errors and nonstandardized abbreviations. The records stored in the casimage database also contain multilingual entries, which pose further problems. Sometimes, single records are multilingual—as, for example, when data were copied and pasted from a French document for a World Wide Web demonstration, along with a translation that was started but never finished. A content-based search can be an easy option for complementing text-based or hierarchic methods of accessing the data, allowing students to browse the available data themselves in an easy and straightforward fashion by clicking on “show me similar images.” Doing so can stimulate self-learning and a comparison of similar cases and their particularities. On the other hand, lecturers can find optimal cases for teaching, even in parts of the database that they did not generate themselves, that might be annotated differently from their own cases or simply described in another language. Visually similar cases with a different diagnosis, which can be important for teaching, may also be found. Good starting images can still be found with a text-based or hierarchic search.
In particular, very large databases (60,000 images) such as casimage can instantly profit from a new way of browsing with automatically extracted visual features. For navigation in teaching files, retrieved images need not match the exact same diagnosis; thus, current retrieval quality is sufficient for this kind of use.
The Radiological Society of North America (RSNA) has already created the Medical Imaging Resource Center (MIRC) ( http://mirc.rsna.org/) standard for sharing image data for teaching. Currently, several large databases can be queried by means of a Web page with textual queries. It would be very interesting and useful to index all of these images in a visual form and extend the MIRC standard to allow visual queries with example images.
Research
The situation in research is very similar to that in teaching. The quality of retrieval does not always have to be on a diagnostic level, and a little time can be spent browsing to optimize query results. Content-based methods can be used in a variety of applications to complement text-based methods. They are an option for retrieving certain kinds of images to be included in a study. Visual access can also be used for quality control, to find images that might have been misclassified. Images of newly discovered diseases can be sought in old databases, even when it is not clear exactly how the images were indexed textually. Visual features can also be incorporated directly into medical studies: What are the visual features of various stages of certain diseases? Data mining on the basis of visual features can be used to find potentially important visual characteristics of specific diseases or visual differences between diseases. One goal is real visual knowledge management, whereby images and associated textual data can be analyzed together. Multimedia data mining can lead to the discovery of previously unknown links between visual features and diagnoses or other patient demographics. Thus, the implicit information that is stored in an image, the textual case description, and treatment and outcome information can be used to improve the management of similar future cases.
Diagnostic Aid
Most currently available systems are tools destined for use in aiding diagnosis. Visual features have been used to aid lung diagnostics (12), to classify pathology slides (23) and melanoma images (24), and for many more applications. Figure 1 illustrates image retrieval as a diagnostic aid in lung CT, with a typical result.
An image retrieval system can help when the diagnosis depends strongly on direct visual properties of images, in the context of evidence-based medicine (25) or case-based reasoning (26). The main problem is the evaluation of systems in terms of diagnostic aid. Most often, only a very small database is extracted, and the system is optimized on the basis of this database and then evaluated. This approach cannot lead to good results, because the algorithms need a much larger database for optimization; otherwise, they become too specialized and will not work with other images. Another “problem” is the advancement of medical imaging techniques. New techniques deliver other, often better images. With respect to automatic retrieval and analysis of images, this means that the algorithms might not work with the new images in the same way they did with the old ones.
There is a clear need for tools that can easily be adapted for various applications and that can “learn” the relevant features on the basis of a new group of images and imaging techniques. Systems need to be frameworks of reusable components in which each component can easily be replaced. The basis for evaluation is the availability of image databases and ground truth for various tasks. Initiatives for such evaluation databases are underway in various organizations (27). The need for standard data sets cannot be overstated. In areas such as text retrieval, standard test sets and databases have led to a significant improvement in retrieval quality. The problems with and importance of reference databases were first discussed in the 1970s (28). Research in content-based medical image retrieval can profit greatly from such databases. Ground truth needs to be available to advance research by means of a comparison of techniques on the same basis.
PACS and Electronic Patient Records
Of course, the goal of image retrieval has to be integration of content-based access into various clinical applications such as the PACS and electronic patient records. Such integration has already been proposed several times (10,11,18). Still, the main problem of integration into a PACS is the sheer amount of data that are being produced in hospitals. Without a proper selection algorithm for cases and sections, the indexed data will quickly become unmanageable, especially with use of modern multisection devices that produce hundreds or even thousands of images in a single series. These problems are often neglected in the literature.
Integration of content-based access into electronic patient records and access to all cases by means of both content-based and textual retrieval would, of course, be the ultimate solution, allowing use of all implicit knowledge stored in images as well as the accompanying textual information. However, for such a scenario to become a reality, multiple problems will need to be solved, including patient privacy (since patients' treatment data are used to improve treatment in new cases), and appropriate retrieval algorithms for all sorts of images will need to be implemented.
Limitations of Automatic Visual Retrieval
Many possible means of image retrieval have already been discussed, but there are also several limitations and problems. Most problems are linked to the low-level visual features being used. The system does not “know” its limitations, and searching is not based on semantics but on broad visual appearance only.
Figure 2 illustrates a partially failed retrieval. Although the first retrieved image is relevant, several of the remaining images are not at all scintigraphic images, but are images produced by other modalities. This is due to the rather unsharp lines and light gray background in these images. Other images with the same gray background and similarly unsharp objects are found, such as classic x-ray images.
Only with relevance feedback, marking several images as either relevant or nonrelevant, can the system refine the search and find only scintigraphic images. The system does not know which part of the image is a priori the most important part for the user. Only when feeding back several images can the system adapt to user needs.
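One common way to implement such feedback is a Rocchio-style query update, sketched below for feature-vector queries. This is a generic illustration of the principle: the actual feedback weighting used by GIFT/medGIFT differs in detail, and the coefficients here are conventional textbook defaults, not values from the system.

```python
import numpy as np

def refine_query(query_vec, relevant, nonrelevant,
                 alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style update: move the query toward the mean feature vector
    of images the user marked relevant and away from the mean of the
    nonrelevant ones."""
    q = alpha * np.asarray(query_vec, dtype=float)
    if relevant:
        q = q + beta * np.mean(np.asarray(relevant, dtype=float), axis=0)
    if nonrelevant:
        q = q - gamma * np.mean(np.asarray(nonrelevant, dtype=float), axis=0)
    return np.clip(q, 0.0, None)  # keep feature weights nonnegative
```

After one such step, features shared by the relevant images gain weight and features typical of the nonrelevant ones lose it, which is why a single round of feedback can already exclude, for example, nonscintigraphic images.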
MedGIFT Retrieval System
MedGIFT was developed to work together with casimage, which has been in daily routine use for several years now (22,29). More than 60,000 images from more than 10,000 medical cases have been indexed so far. The database is available on the intranet of our hospital, with a smaller database being available to the public through the Internet and MIRC. The GIFT system on which medGIFT is based offers components for content-based indexing and image retrieval, such as feature extraction algorithms, feature indexing structures, and a communication interface called MRML (Multimedia Retrieval Mark-up Language, http://www.mrml.net/). The interface allows easy integration into various applications such as teaching files, document management systems, and tools for diagnostic aid. GIFT uses techniques that are well known from text retrieval, such as frequency-based feature weights, inverted file indexing structures, and relevance feedback mechanisms (30). With frequency-based feature weights, the importance of visual features is determined by their frequency of occurrence in an image and in the entire image collection, similar to the weighting of words in text retrieval engines: rare words (and image features) contain more information and are more important than frequently used words and commonly occurring image features. With inverted file indexing structures, the index is not based on documents that refer to certain imaging features but on the features that point to the documents in which they appear. Inverted file indexing structures are also commonly used in textual search engines such as Google, which includes an index of all words and, for each word, a list of Web pages that contain it. Visual features used to represent images fall into four categories: local and global texture features based on Gabor filter responses, and color or gray-scale characteristics on a global image scale and locally within image regions.
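A minimal sketch of these two text-retrieval ideas applied to visual features, an inverted file mapping each feature to the images that contain it and a rarity-based (idf-style) weight per feature, might look like the following; the feature identifiers are abstract integers for illustration, not GIFT's actual feature codes:

```python
import math
from collections import defaultdict

def build_inverted_index(images):
    """images: {image_id: set of feature ids} -> {feature id: set of image ids}."""
    index = defaultdict(set)
    for image_id, features in images.items():
        for feature in features:
            index[feature].add(image_id)
    return index

def rank_images(index, n_images, query_features):
    """Score images by the summed rarity (idf) of features shared with the
    query: rare features contribute more, exactly as rare words do in text."""
    scores = defaultdict(float)
    for feature in query_features:
        postings = index.get(feature, set())
        if not postings:
            continue
        idf = math.log(n_images / len(postings))
        for image_id in postings:
            scores[image_id] += idf
    return sorted(scores.items(), key=lambda item: -item[1])
```

Because the index is keyed by feature, a query touches only the postings lists of its own features, which is what keeps retrieval fast even with a very large feature space.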
Gabor filters measure the change on an image in a certain direction or on a certain scale, thereby describing a texture with respect to (a) direction and (b) size and strength. Small or slow changes can easily be distinguished from large or quick changes. Local features are obtained as follows: The image is divided into four equal regions, each of which is subdivided into four subregions, which are in turn subdivided. The mode color of each resulting subregion is extracted, thereby creating a multiscale representation of the image (Fig 3).
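The recursive partitioning just described can be sketched as follows; the mode gray level stands in for the mode color, and the number of levels is a free parameter of the sketch:

```python
import numpy as np

def mode_gray(block):
    """Most frequently occurring gray level in a block."""
    values, counts = np.unique(block, return_counts=True)
    return int(values[np.argmax(counts)])

def local_mode_features(image, levels=3):
    """Split the image into 4 equal regions, subdivide each region again at
    every level, and record the mode gray level of every resulting block,
    yielding a multiscale representation."""
    features = []
    blocks = [image]
    for _ in range(levels):
        subdivided = []
        for block in blocks:
            h, w = block.shape
            quadrants = [block[:h // 2, :w // 2], block[:h // 2, w // 2:],
                         block[h // 2:, :w // 2], block[h // 2:, w // 2:]]
            features.extend(mode_gray(q) for q in quadrants)
            subdivided.extend(quadrants)
        blocks = subdivided
    return features
```

With three levels this yields 4 + 16 + 64 = 84 local descriptors per image, so coarse layout and fine detail are both represented.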
Local Gabor filters allow determination of which shapes or textures occur in each region. The potential feature space is very large (85,000 possible visual features). Each image contains roughly 1,000–2,000 features. The frequency of occurrence of visual features is similar to that of words in text. The weighting scheme gives greater weight to rare visual features than to frequently occurring features (similar to textual search engines). More details about GIFT technology can be found in reference 30.
To improve results with medical images that are primarily gray scale, the number of gray levels was increased from the four gray levels of GIFT. For color photographs, the gray levels are relatively unimportant for retrieval because the human visual system is less sensitive to gray levels than to colors. The number of texture descriptors based on Gabor filter responses was increased as well, since textures are expected to be more important in medical images than in color photographs. In early tests, the best overall results were obtained with approximately 4–16 gray levels, which is a surprisingly small number. To fine-tune the number of gray levels, we simply indexed the database in various ways and evaluated the results for each quantization against a standard of reference defined by a radiologist (31). More tests are needed to define the optimal number of gray levels for each query task. Use of a much larger number of gray levels seems to create overly specific queries, so relevant images are missed. This small number of gray levels is far below the 256 gray levels that JPEG (Joint Photographic Experts Group) offers and even further from the resolution of computed radiography (CR) or digital radiography (DR) images in DICOM (Digital Imaging and Communications in Medicine). Still, the use of low-level features for retrieval works better when queries are less specific. A new user interface based on PHP, a scripting language for creating Web-based interactive applications ( http://www.php.org/), was developed that shows the diagnosis of retrieved images under the thumbnail image and is linked with the teaching file (Fig 4).
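The quantization step itself is simple; a sketch for 8-bit input images follows, with the 4- and 16-level settings mirroring the range reported above:

```python
import numpy as np

def quantize(image, n_levels):
    """Map 8-bit gray values (0-255) onto n_levels bins (0 .. n_levels - 1)."""
    return (image.astype(np.int64) * n_levels) // 256
```

Re-indexing the collection at several such settings and comparing the rankings against the radiologist's standard of reference is all that the tuning described above requires.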
The retrieval engine allows submission of an unlimited number of images combined as a query, as well as images as negative examples or negative feedback to focus and refine the search further. On-screen, the retrieved images are sorted by visual similarity to the query image, and a similarity score is displayed under each image along with the diagnosis. Clicking on one of these images brings up the corresponding case from the case database, including a textual description and additional images in full resolution. The system is an interactive tool; consequently, response times need to be less than 1 second (32). On a current 2.8-GHz Pentium 4 computer (Intel, Santa Clara, Calif) with a database of 9,000 images, the response times for single-image queries are always less than 0.5 second.
Discussion
The foregoing descriptions and illustrations of the casimage-medGIFT combination show that content-based image retrieval can be used to manage medical image data. Still, several questions and problems remain. One important question concerns the evaluation of medical image retrieval systems that use both textual and visual retrieval. A benchmarking event for image retrieval system comparison has been established at the CLEF (Cross Language Evaluation Forum) conference. A medical image retrieval task was added in 2004, and 11 research groups from Europe, North America, and Asia participated. Initial evaluation of our system showed that, with one step of relevance feedback, an average of 14 of the first 20 retrieved images were relevant; thus, the technology can be used in noncritical domains such as a search for interesting teaching cases.
Current applications are often extremely specialized for a very small application domain and hard to adapt to new requirements and new types of images. Conversely, they may be extremely general, without the possibility of being used for diagnosis-based (specialized) retrieval. New image retrieval projects will need to be based on common platforms to allow both the important specialization for clinical domains, using as much a priori information as possible, and very general retrieval in PACS-like databases or teaching files with a large variety of images. Such a platform should be shared among research groups, as should source code, so that new technologies and features can be easily implemented and compared and reimplementation of basic functionalities is avoided. Specialization is very important so that applications can be created and integrated into clinical practice for tasks with which radiologists can use help. Evaluation of algorithms on real-life data is extremely important. Evaluation databases will need to be generated for specialized retrieval, including ground truth for the task being evaluated. The importance of evaluation cannot be overstated. Projects for the identification of interesting medical imaging problems and the generation of reference image data sets are underway in the United States and Europe (27). One important factor in image retrieval research is evaluation of the behavior of retrieval system users: it is important to adapt systems to user needs, with interfaces that users can accept (33).
With respect to content-based data access, it is important to explain the technology, along with its potential and problems, to users so that they have realistic expectations. System improvements are possible only through several loops of feedback so as to incorporate as much medical knowledge as possible into the retrieval engine for a given task. Close cooperation between radiologists and computer scientists will be necessary for projects to be successful.
Conclusions
The first applications of image retrieval systems in the medical domain are now being developed to complement the conventional text-based search. These image retrieval systems allow access to and navigation of extremely large visual archives and extraction of hidden information without the high cost of manual annotation and codification of databases. Although visual access to databases will remain complementary to text-based search and will not, at least in the foreseeable future, replace it, the two should be developed closely together. If visual access is to gain acceptance in the clinical domain, real clinical applications that use content-based access mechanisms must be forthcoming. Only practical, operational clinical applications will help gain acceptance in the medical community for more than “playing” with a retrieval system. To this end, systems will need to include as much medical knowledge as possible. Close cooperation between medical practitioners and medical computer scientists will be necessary to achieve this goal. Promising applications will need to be identified and implemented on the basis of a framework of components for image retrieval, so that redevelopment of software is obviated and easy adaptation of the software is made possible.
TAKE-HOME POINTS
Content-based image retrieval is a technique for retrieving images on the basis of automatically derived features such as texture and shape.
Content-based image retrieval has the potential to be an important factor in radiology.
Content-based image retrieval complements the conventional text-based search.
Figure 1. Content-based image retrieval as a diagnostic aid with use of medGIFT and the casimage database. The query image (left) shows emphysematous lesions with multiple confluent centrilobular and paraseptal areas of low attenuation without visible walls. The search results show six images, including five cases of emphysema (right), with each image accompanied by a link to the complete case description. One image demonstrates unilateral emphysema (MacLeod [Swyer-James] syndrome), and two images show a small area of consolidation in the pulmonary parenchyma (cryptogenic organized pneumonia [COP] and pulmonary embolism). The typical pattern of pulmonary parenchyma destruction seen on these five images strongly suggests the diagnosis of emphysema for the query image.
Figure 2. Partly failed retrieval due to insufficient information in the query image (top left) with respect to varying gray-level changes or strong textures. There is no sharply lined object in the query image, which would have made retrieval easier.
Figure 3. Extraction of local image features. The image is first partitioned into four equal regions (red lines), and this process is repeated for each successive subregion (blue, green, and yellow lines) to extract local image characteristics.
Figure 4. Screen shows the interface of medGIFT and casimage. Clicking on an image in the medGIFT interface brings up the corresponding textual case description.



The authors thank Julien Vignali for finishing our user interface in time for the RSNA presentation.
References
1. Smeulders AW, Worring M, Santini S, Gupta A, Jain R. Content-based image retrieval at the end of the early years. IEEE Trans Pattern Anal Machine Intell 2000;22:1349–1380.
2. Enser PG. Pictorial information retrieval (progress in documentation). J Doc 1995;51:126–170.
3. Flickner M, Sawhney H, Niblack W, et al. Query by image and video content: the QBIC system. IEEE Comput 1995;28:23–32.
4. Carson C, Thomas M, Belongie S, Hellerstein JM, Malik J. Blobworld: a system for region-based image indexing and retrieval. In: Proceedings of the Conference on Visual Information Systems, Amsterdam, the Netherlands, 1999;509–516.
5. Westerveld T. Image retrieval: content versus context. In: Content-Based Multimedia Information Access, RIAO 2000. Paris, France, 2000;276–284.
6. Vetterli M, Kovacevic J. Wavelets and subband coding. Englewood Cliffs, NJ: Prentice Hall, 2000.
7. Lowe HJ, Antipov I, Hersh W, Smith CA. Towards knowledge-based retrieval of medical images: the role of semantic indexing, image content representation and knowledge-based retrieval. Proc AMIA Symp 1998;882–886.
8. Tagare HD, Jaffe C, Duncan J. Medical image databases: a content-based retrieval approach. J Am Med Inform Assoc 1997;4:184–198.
9. El-Kwae E, Xu H, Kabuka MR. Content-based retrieval in picture archiving and communication systems. J Digit Imaging 2000;13:70–81.
10. Orphanoudakis S, Chronaki C, Vamvaka D. I2C net: content-based similarity search in geographically distributed repositories of medical images. Comput Med Imaging Graph 1996;20:193–207.
11. Ogiela MR, Tadeusiewicz R. Semantic-oriented syntactic algorithms for content recognition and understanding of images in medical databases. Presented at the IEEE International Conference on Multimedia and Expo, Tokyo, Japan, 2001.
12. Shyu CR, Brodley CE, Kak AC, Kosaka A, Aisen AM, Broderick LS. ASSERT: a physician-in-the-loop content-based retrieval system for HRCT image databases. Comput Vis Image Understanding 1999;75:111–132.
13. Aisen AM, Broderick LS, Winer-Muram H, et al. Automated storage and retrieval of thin-section CT images to assist diagnosis: system description and preliminary assessment. Radiology 2003;228:265–270.
14. Lehmann TM, Güld MO, Thies C, et al. IRMA: content-based image retrieval in medical applications. Medinfo 2004;2004:842–846.
15. Lehmann TM, Schubert H, Keysers D, Kohnen M, Wein BB. The IRMA code for unique classification of medical images. In: Proc SPIE. Vol 5033. Bellingham, Wash: SPIE, 2003;440–451.
16. Le Bozec C, Jaulent MC, Zapletal E, Degoulet P. Unified modeling language and design of a case-based retrieval system in medical imaging. Proc AMIA Symp 1998.
17. Rosset A, Müller H, Martins M, Dfouni N, Vallée JP, Ratib O. Casimage project: a digital teaching files authoring environment. J Thorac Imaging 2004;19:103–108.
18. Müller H, Michoux N, Bandon D, Geissbuhler A. A review of content-based image retrieval applications: clinical benefits and future directions. Int J Med Inform 2004;73:1–23.
19. Müller H, Geissbuhler A, Ruch P. Report on the CLEF experiment: combining image and multilingual search for medical image retrieval. In: Proceedings of the Cross Language Evaluation Forum. Springer Lecture Notes in Computer Science, 2005 (in press).
20. Clough P, Sanderson M, Müller H. The CLEF cross language image retrieval track (ImageCLEF) 2004. In: Proceedings of the Cross Language Evaluation Forum. Springer Lecture Notes in Computer Science, 2005 (in press).
21. Bucci G, Cagnoni S, de Dominicis R. Integrating content-based image retrieval in a medical reference database. Comput Med Imaging Graph 1996;20:231–241.
22. Müller H, Rosset A, Vallée JP, Geissbuhler A. Integrating content-based visual access methods into a medical case database. In: Proceedings of the Medical Informatics Europe Conference (MIE 2003), St Malo, France, May 2003;480–485.
23. Tang LH, Hanka R, Ip HHS, Lam R. Extraction of semantic features of histological images for content-based retrieval of images. Presented at the IEEE Symposium on Computer-based Medical Systems (CBMS), Houston, Tex, 2000.
24. Schmid-Saugeon P, Guillod J, Thiran JP. Towards a computer-aided diagnosis system for pigmented skin lesions. Comput Med Imaging Graph 2003;27:65–78.
25. Bui AA, Taira RK, Dioniso JD, Aberle DR, El-Saden S, Kangarloo H. Evidence-based radiology: requirements for electronic access. Acad Radiol 2002;9:662–669.
26. Abidi SS, Manickam S. Leveraging XML-based medical records to extract experimental clinical knowledge: an automated approach to generate cases for medical case-based reasoning systems. Int J Med Inform 2002;68:187–203.
27. Horsch A, Prinz M, Schneider S, et al. Establishing an international reference image database for research and development in medical image processing. Methods Inf Med 2004;43:409–415.
28. Sparck Jones K, van Rijsbergen CJ. Report on the need for and the provision of an ideal information retrieval test collection: British Library Research and Development Report 5266. Computer Laboratory, University of Cambridge, 1975.
29. Rosset A, Ratib O, Geissbuhler A, Vallée JP. Integration of a multimedia teaching and reference database in a PACS environment. RadioGraphics 2002;22:1567–1577.
30. Squire DMG, Müller H, Müller W, Marchand-Maillet S, Pun T. Design and evaluation of a content-based image retrieval system. In: Rahman SM, ed. Design and management of multimedia information systems: opportunities and challenges. Hershey, Pa: Idea Group Publishing, 2001;125–151.
31. Müller H, Rosset A, Vallée JP, Terrier F, Geissbuhler A. A reference data set for the evaluation of medical image retrieval systems. Comput Med Imaging Graph 2004;28:295–305.
32. Nielsen J. Usability engineering. San Francisco, Calif: Morgan Kaufmann, 1993.
33. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004;37:56–76.








