Visualizing Search Results:
Some Alternatives To Query-Document Similarity
Lucy Terry Nowell*, Robert K. France**, Deborah Hix, Lenwood S. Heath, and Edward A. Fox**
Virginia Tech Department of Computer Science and **Computing Center
Blacksburg, VA 24061 USA
*Also with Lynchburg College in Virginia
A digital library of computer science literature, Envision provides powerful information visualization by displaying search results as a matrix of icons, with layout semantics under user control. Envision's Graphic View interacts with an Item Summary Window giving users access to bibliographic information, and XMosaic provides access to complete bibliographic information, abstracts, and full content. While many visualization interfaces for information retrieval systems depict ranked query-document similarity, Envision graphically presents a variety of document characteristics and supports an extensive range of user tasks. Formative usability evaluation results show great user satisfaction with Envision's style of presentation and the document characteristics visualized.
A variety of systems, a few of which are discussed below, have begun to bring together the advantages of information visualization and current information retrieval technology. Many visualization interfaces for information retrieval systems visualize ranked query-document similarity and clustering. The VIBE [Olsen et al., 1993] information retrieval system moves toward the goal of visualizing clustering patterns in a document space. VIBE allows users to specify several terms, each to be associated with a portion of the visualization window, with document icons distributed so as to reveal the relevance of documents to the selected terms. Both Vineta [Krohn, 1995] and Bead [Chalmers & Chitson, 1992] reveal clustering patterns using a three-dimensional scatterplot visualization. LyberWorld [Hemmje et al., 1994] also depicts clustering in three dimensions but constrains the visualization to a sphere. Kohonen's feature maps also have been used to visualize semantic distributions of a collection [Lin, 1992].
A recent development at Xerox PARC is the TileBars interface, which depicts distribution of query terms within each document of a retrieval set so that the most relevant parts of a work can be located quickly [Hearst, 1995]. TileBars is part of a larger digital library effort [Rao et al., 1995] that includes such developments as Snippet Searching [Pederson et al., 1991] and Scatter/Gather [Cutting et al., 1993] [Cutting et al., 1992] to assist in query reformulation, and the Butterfly workspace [Mackinlay et al., 1995] for searching and browsing. InfoCrystal [Spoerri, 1993], which works with both Boolean and vector-space queries, uses a graphical query language consisting of lines joining geometric shapes to reveal the number of query terms present in each retrieved document.
Only a few systems visualize other document attributes, usually for databases other than bibliographic or library collections. Building on users' recollections of objects that interest them, RightPages supports browsing with iconic representations of document title pages [Hoffman et al., 1993]. Both Dynamic HomeFinder and FilmFinder [Ahlberg & Shneiderman, 1994] use a starfield display, or matrix of rectangular cells, to present the content of a database and support dynamic querying. For HomeFinder, each icon represents a home for sale. The starfield overlays a city map so that icon position shows geographic location. Built around a collection of videos available for rental, FilmFinder's icons are color-coded to depict genre, while lateral position shows year of release. The Table Lens [Rao & Card, 1994] visualizes multi-dimensional sorts to reveal patterns in a relational database of baseball statistics.
A primary goal in the interviews was to elicit imaginative ideas about how the interviewees would like to work with literature, if limitations of known systems could be overcome. Interviewees responded to questions about current use of information sources, their future information needs, and their wish lists for the electronic library of the future. Beyond ready access from their offices, chief among interviewees' wishes was the ability to identify and explore patterns in the literature. Some asked for visual representations, while others wanted ways to see connections not visible with current tools. Users wanted to see documents their way - to explore the literature along dimensions of their choosing, to home in on particular areas of interest and explore those in detail, then move on to broader or sometimes very different views. Our interviewees wanted to:
|Figure 1: Envision search results display.|
The Envision Graphic View Window design is modeled after scatterplot graphs. Each document in an Envision search results set is shown in the Graphic View Window as an icon. Space in the upper part of the Graphic View provides both a legend to explain current layout semantics and pop-up menu controls that allow the user to change the semantics. (See Figure 1.) When the user selects an icon, bibliographic information about the represented item appears in the Item Summary Window. Documents deemed useful may be marked by the user via boxes in the Item Summary Window lines or by selecting the icons and making menu choices in the Graphic View. Items thus marked might be used as the basis for a feedback search, or saved or printed in summary form. Double-clicking an icon or Item Summary line automatically launches XMosaic and provides access to any additional information about that document in the Envision database, including the full bibliographic entry, related descriptive information, abstract, and full content.
In Figure 1, the circular "bubbles" in the Graphic View represent single documents, with relevance ranks shown as labels below the circles. Elliptical cluster icons, discussed further in section 3.2, represent sets of documents that would display at the same point in the graph. The number of represented documents is shown inside the ellipse. Labels below these cluster icons show the ranks of the two most relevant documents in the cluster. The Item Summary Window at the bottom shows a text presentation of bibliographic data for documents whose Graphic View icons are selected by the user, indicated by bold outlines (e.g., documents with icon numbers 1 and 2 in Figure 1). Item Summary text lines are related to their icons by icon label.
These graphical devices may be used to present a variety of document characteristics, serving various user purposes. Like several other systems, Envision can present estimated query-document similarity, or estimated relevance, as well as relevance rank. Author names and publication years are central to citations and clearly of interest. Index terms are useful in query reformulation. Representation of document type allows users to easily locate such relatively rare objects as videos, but also allows users to distinguish publications of the type they believe to be most useful for their current task. Finally, a document's Envision database ID, while not inherently interesting to users, may be useful in comparing retrieval sets and can be used to retrieve a document directly.
The list of document characteristics associated with each icon attribute is extensible, pending further usability evaluation and user input. Design decisions to date include:
For several reasons, estimated relevance and relevance rank may be represented by more graphical devices than any other document attribute. Our interviewees and users have told us they fear being overwhelmed by large retrieval sets. Ranked results make them more comfortable with large results sets and more willing to use a computerized search tool. Though some users initially voice discomfort with our vector-space retrieval system, even brief experience with the system leads to praise for it. From using other systems, as well as from reading, many users are aware that estimated relevance values exist and want to see those values, even though they may not understand the numbers' significance. More sophisticated users believe the estimated relevance values give them insight into the collection and the nature of the retrieval system. Finally, it is simpler to visualize estimated relevance and relevance rank than most other document attributes, so it seems appropriate to allow users to select the visualization most comfortable for them.
3.2 Graphic View Layout Algorithm
The Graphic View Window provides a two-dimensional display of the set of documents retrieved by a query. The x- and y-axes of the display have two different scales representing document attributes such as author, year, or estimated relevance. Some scales (e.g., estimated relevance) vary continuously and do not partition the corresponding axis. Most scales (e.g., author and year) take on discrete values and partition the corresponding axis into strips. The result of this partitioning is a matrix of cells such that each document in the results set belongs to a single cell. (For later versions of the Graphic View, when multiple authors and multiple index terms are visualized, each document may belong to multiple cells. See discussion in section 5.) Each document in a cell is represented by an icon. As a cell may have numerous icons to display, we devised a layout algorithm to place the icons within a particular cell.
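The cell-partitioning idea above can be sketched briefly. This is an illustrative reconstruction, not Envision's actual code; the field names ("author", "year", "rank") are assumptions chosen for the example.

```python
# Discrete axis scales such as author or year partition the display into a
# matrix of cells; each document falls into the single cell keyed by its
# values on the two chosen attributes.
from collections import defaultdict

def partition_into_cells(documents, x_attr, y_attr):
    """Group documents into (x_value, y_value) cells."""
    cells = defaultdict(list)
    for doc in documents:
        cells[(doc[x_attr], doc[y_attr])].append(doc)
    return cells

docs = [
    {"author": "Salton", "year": 1989, "rank": 1},
    {"author": "Salton", "year": 1989, "rank": 4},
    {"author": "Fox",    "year": 1993, "rank": 2},
]
cells = partition_into_cells(docs, "year", "author")
# Two documents share the (1989, "Salton") cell and must compete for space
# there; the layout algorithm then decides how to place icons within it.
```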
Because of window size and user layout preferences, a cell has limited dimensions and may be unable to display all its icons. This limitation and other factors led to these requirements for the layout algorithm:
To meet these requirements, we designed a special elliptical cluster icon to represent all the documents that cannot be displayed individually due to the non-overlap requirement. The number of documents it represents is shown within the cluster icon, while the label shows the ranks or identifiers of the two most relevant represented documents. As shown in Figure 2, use of the cluster icon does have drawbacks - its use masks information about represented documents that is encoded using icon shape and icon size, while the cluster icon's color is that of the highest ranked document in the represented set.
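A minimal sketch of this overflow behavior may help: when a cell cannot show every icon without overlap, the least relevant documents collapse into one cluster record carrying the count, the ranks of its two most relevant members, and the color of its highest-ranked member. The function name and the capacity parameter are illustrative assumptions, not Envision's implementation.

```python
def layout_cell(docs_in_cell, capacity):
    """Return (individually drawn docs, cluster-icon summary or None)."""
    ordered = sorted(docs_in_cell, key=lambda d: d["rank"])  # best rank first
    if len(ordered) <= capacity:
        return ordered, None
    shown = ordered[:capacity - 1]       # leave one slot for the cluster icon
    clustered = ordered[capacity - 1:]
    cluster = {
        "count": len(clustered),                      # shown inside the ellipse
        "label": [d["rank"] for d in clustered[:2]],  # two most relevant ranks
        "color": clustered[0]["color"],               # highest-ranked doc's color
    }
    return shown, cluster

cell = [{"rank": r, "color": c} for r, c in
        [(3, "orange"), (9, "green"), (17, "green"), (25, "blue"), (40, "blue")]]
shown, cluster = layout_cell(cell, capacity=3)
# shown -> docs ranked 3 and 9; cluster -> 3 documents, labeled "17, 25", green
```

Note the trade-off the paragraph describes: the cluster record keeps only one color and no shape or size, so per-document encodings for the clustered works are lost.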
3.3 Graphic View Visualizations And User Tasks
The Graphic View supports users in making decisions about which works to examine in large sets of documents. Users of Envision also benefit from having control over the semantics of each icon attribute, so they can change the layout to reflect document characteristics of greatest interest for a particular task. In Figures 1 through 6, for example, two typical results sets are presented in displays appropriate to a variety of tasks. In Figure 1, author names are shown on the y-axis, estimated relevance to the query is shown on the x-axis, and icon labels show relevance rank. Icon color also shows relevance, with documents in the top 35% of relevance values coded in orange, the next 35% green, and bottom 30% blue. These values have been chosen after study of system performance and display characteristics. (Legends for color and shape appear under the icon controls at the top of the Graphic View.) Using Figure 1, the user can rapidly identify authors who have produced many works relevant to the topic or identify the most relevant works by a single author. The user may also determine whether a highly relevant work is by an author already known and respected, or otherwise of special interest.
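The 35%/35%/30% color partition can be expressed as a simple threshold function. This is one plausible reading of the rule (taking "top 35% of relevance values" as a fraction of the observed value range); the paper does not spell out Envision's exact computation.

```python
def relevance_color(value, lo, hi):
    """Map a relevance estimate in [lo, hi] to its color band."""
    frac = (value - lo) / (hi - lo) if hi > lo else 1.0
    if frac >= 0.65:      # top 35% of the range -> orange
        return "orange"
    if frac >= 0.30:      # next 35% -> green
        return "green"
    return "blue"         # bottom 30% -> blue

assert relevance_color(0.9, 0.0, 1.0) == "orange"
assert relevance_color(0.5, 0.0, 1.0) == "green"
assert relevance_color(0.1, 0.0, 1.0) == "blue"
```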
|Figure 2: Placing author on the y-axis and publication year on the x-axis reveals individual publication patterns. Icon shape reveals document type, and icon color and size both show estimated relevance.|
Researchers interested in comparing publication patterns among authors might choose the layout in Figure 2, showing authors on the y-axis and publication year on the x-axis. This display is fairly sparse, reflecting the reality of publication patterns in the topic and collection, but it presents much information about each document. Using this layout, in which icon shape shows document type, the user who believes that journal articles contain more significant work than proceedings, or that proceedings articles are more likely to contain cutting-edge research, can distinguish these items from books, which might contain more in-depth coverage. The layout uses icon color, size, and label to show relevance. Redundant encodings of this kind aid in quick, reliable perception of important features [Carswell & Wickens, 1987] [Wickens & Andre, 1990]. There are thus a total of four characteristics revealed for each document, yet the display remains aesthetically pleasing and uncrowded.
|Figure 3: A year-by-relevance layout reveals peaks and valleys in related research.|
Putting publication year on the y-axis and estimated relevance on the x-axis, as in Figure 3, creates a graphic picture of increasing research within the area. Icon color again shows relevance, so that icons for the most relevant documents are orange and further right than other icons. In this display, the icons labeled 2 and 8 have been marked useful by the user and are thus colored red, while icon 38 has been marked not useful and is colored white.
|Figure 4: Showing index terms on an axis may facilitate query revision. Relevance here is redundantly encoded with color, shape, and x-axis location.|
A user seeking more terms to use in query revision might choose the layout in Figure 4, with assigned index terms on the y-axis and estimated relevance on the x-axis. Clustering of relevant documents in different index categories may reveal relationships among the categories. Pairing index terms with either author or publication year in the Graphic View (not shown) can reveal other commonalities among indexed topics. Utility of visualizing index terms obviously depends on the quality of indexing. Envision currently visualizes only index terms or keywords that have been assigned by authors or editors - clearly a major limitation, especially since our vector-space search system does full-text searching. Furthermore, both prevalence and quality of assigned index terms vary widely among segments of the collection, from copious to completely absent, and from controlled descriptors through ordinary language to cryptic abbreviations. Additionally, since Envision currently visualizes only one index term per document (the first listed) neither the full range of assigned terms nor the true amount of overlap among them is available to users. Visualizing multiple index terms per document presents a number of usability problems, discussed in section 5.
In addition to color and x-axis position, shape also encodes relevance in Figure 4 - an encoding included to support Envision-based perceptual research. The partition is the same as that used for color: documents in the top 35% of relevance values are shown as stars, the next 35% as diamonds, and the bottom 30% as triangles. Icon size is uncoded.
|Figure 5: Presenting relevance on both x- and y-axes shows the entire results set, revealing drop-offs in estimated relevance.|
Finally we present two configurations that allow the user to view the entire results set without scrolling. In the first (Figure 5), both x- and y-axes have been set to show estimated relevance. This display reveals drop-offs in the estimates, giving a researcher insight into performance of the underlying search engine on the query, and allowing an end-user to pick a highly ranked subset to examine. The second, shown in Figure 6, shows document type on the x-axis and estimated relevance on the y-axis. Putting relevance on the y-axis rather than the x-axis invokes a different metaphor: that the most relevant items, like cream, are rising to the top. Giving users control over layout allows them to choose comfortable metaphors. This may be one reason that users report high satisfaction with the Envision interface.
|Figure 6: Placing relevance on the y-axis changes the metaphor, allowing the most-relevant documents to rise to the top.|
4. Formative Usability Evaluation Of Envision
The first response of many people on seeing Envision is that it is "busy" - three windows, with the Graphic View Window alone presenting many symbols. Nevertheless, our usability evaluations showed that the design is an effective, easy-to-use product that conveys much information in a dense display. In our early cycles of usability evaluation, we were specifically concerned with proof of concept: Could users understand relationships among the windows and the graphical objects within them? Did users make sense of the complex display, based on graphical devices used in the design, and find this a desirable way to view search results?
We used a SuperCard (SuperCard is a registered trademark of Allegiant Technologies, Inc.) prototype for our earliest usability evaluations. Participants performed assigned tasks (e.g., finding three works published by a given author in a specified year; locating the title and author of the most relevant work) and then responded to a subjective questionnaire. Details of that evaluation are in [Nowell & Hix, 1993]. None of the participants had any difficulty in recognizing relationships among the windows and objects in them, while all participants commented positively on the power the Envision interface provides to the user.
For our latest usability evaluations we used the X-Windows implementation of Envision. In addition to assessing design changes resulting from earlier prototype evaluations, we focused on features that were not fully implemented in the SuperCard prototype: controlling the number of items in a results set via buttons at the bottom of the query window, changing which document characteristics were represented by each icon attribute using pop-up menus, and more extensive exploration of relationships among windows. Participants also were asked numerous questions about their understanding of Envision search results and user interface features reflecting system behavior, such as displayed relevance values. We focus here on tasks and issues pertaining specifically to the Graphic View.
Figure 7: Sample task using the Graphic View.
Because the Envision user interface was designed for a computer science library, all participants were computer scientists: one faculty member, two graduate students, and two undergraduates. Each was given a one-page "Getting Started" handout and was allowed ten minutes to explore Envision's features before performing 11 tasks, each consisting of several steps. A typical task required the participant to create a query meeting specified criteria, have Envision complete the search, and then use Envision's search results display to locate documents fulfilling various requirements. To ensure that participants used various aspects of the Graphic View, some tasks required participants to change the semantics of icon attributes (e.g., changing the x-axis setting to show publication years instead of estimated relevance), while for other tasks various icon semantics were left to participant discretion. A sample task is shown in Figure 7 above. In this example, items a, b, and d require use of the Graphic View; performance of item c may depend on either the Graphic View or the Item Summary Window. In all, 16 items required use of the Graphic View. Use of the Item Summary Window was required for some tasks, but many others could be completed using only the Graphic View. This is typical of the sorts of real research tasks Envision was designed to support. Upon completion of all tasks, users were given time for additional free exploration of the system. Throughout each usability evaluation session we recorded verbal protocol and critical incidents.
Six items were designated as benchmark tasks for objective measurement of user performance. Five benchmark tasks focused on initial use of a design feature, while the sixth studied learning curve. Performance measures included task completion time, number of errors, and number of questions asked (since the on-line Help system was not yet implemented). For task completion time, our goal was that mean participant time should not exceed the time required for one of the interface designers to complete the same task. For the initial task using a particular feature, we aimed for a mean error count and a mean number of participant questions equal to 0.2 - allowing for only one of the five participants to experience difficulty. Overall, each of the five participants performed 16 subtasks using the Graphic View, making a total of only five errors. (Time to task completion and use of help were not measured for non-benchmark tasks.) For the two benchmark subtasks using the Graphic View, participants made no errors, asked no questions, and all required less time than expected to complete the benchmark tasks - to our delight, surpassing the performance of an Envision designer!
Figure 8: Sample items from usability evaluation questionnaire.
We plan further usability evaluation to determine the desirability of allowing users to change the document characteristic represented by an icon attribute during use of the display. That is, what happens to user performance when users are allowed to change layout semantics? Given results of our latest usability evaluation and other studies [Carswell & Wickens, 1987] [Wickens & Andre, 1990], we expect some temporary loss of speed and accuracy in use of the display immediately after users change the layout.
Issues of scalability pertain to the size of results sets the Graphic View can display. We have tested the current version with results sets as large as 500 documents. We found that for some icon attribute settings (e.g., authors on the y-axis and index terms on the x-axis, not shown), the display is quite sparse, reflecting the need for a "zoom" feature that is planned but not yet implemented. Zoom will allow users to see a larger area of the scatterplot in less detail or a smaller area in greater detail. Ideally, the Graphic View could then be used as a browser for the entire collection, allowing users to zoom in on selected areas of interest.
Full use of Envision's Graphic View requires access to a number of document characteristics that are infrequently available in a bibliographic database or library system, such as document size, the number of citations contained in a work, and the number of times a document has been cited in other works. Even when these characteristics are represented in the database in some form, a visualization may be difficult or misleading. Visualizing document size appears to be a straight-forward matter, dependent on page count, word count, or storage required. However, in a multimedia database, none of these is a consistent indicator of time required to use different types of works. For example, a video that can be viewed in five minutes may occupy more storage space than a book, and may have no word or page count. Some means of converting raw size values to a meaningful common scale is needed.
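One possible shape for the "common scale" called for above is to convert each medium's native size measure into an estimated time-to-use, so that a five-minute video and a 300-page book become comparable. This sketch is purely illustrative and not part of Envision; the media types and conversion rates are invented placeholders.

```python
# Assumed per-medium conversion rates from native size units to minutes of use.
RATES = {
    "text":  {"unit": "pages",   "minutes_per_unit": 2.0},
    "video": {"unit": "seconds", "minutes_per_unit": 1.0 / 60.0},
    "audio": {"unit": "seconds", "minutes_per_unit": 1.0 / 60.0},
}

def estimated_use_minutes(doc_type, raw_size):
    """Map a raw size in the medium's native unit to estimated minutes of use."""
    return raw_size * RATES[doc_type]["minutes_per_unit"]

assert estimated_use_minutes("video", 300) == 5.0    # a 5-minute video
assert estimated_use_minutes("text", 300) == 600.0   # a 300-page book
```

On such a scale the five-minute video maps to a far smaller value than the book, regardless of how much storage it occupies.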
Visualizing "times cited by others" also presents challenges. We are developing a database of citation links for Envision that will ultimately provide not only the number of citations but hypertext links among related documents. Even so, our database will only provide information about citation links among documents in the database - a small percentage of the total number of documents about computer science. Since a visualization of "times cited by others" will show only citations from works in our collection, works heavily cited by publications not in the collection may appear to be less significant than they are. Accessing a citation index might be a solution to this problem.
One of the more interesting issues we are exploring is presentation of multi-sets -- those instances when a single document belongs to multiple categories on either axis. For example, a document frequently has more than one author and is usually assigned more than one index term. Yet presenting multiple icons for one document has the potential to greatly increase display clutter, and we have questions about such a display: How will users respond when selecting or marking one icon causes several others to highlight or change color because they represent the same document? What about a document that occurs both as a single icon and as part of a cluster?
During usability evaluation and demonstrations of Envision, users have told us they especially like the flexibility and power of the Graphic View, and that they want many more visualizations. For example, we have been asked to reveal who cites whom by placing citing author on one axis and cited authors on the other - thus depicting communities of discourse, as users requested during our initial interviews. Musicians want to visualize by genre, style, and instruments required. For a medical collection, visualizations might present key symptoms, effectiveness of medications suggested per symptom, risk of drug interactions, etc. This user feedback, even more than success in formal usability evaluation, convinces us that library systems have much more to visualize than query-document similarity or semantic content and that the Envision Graphic View is a powerful, flexible design for increasing the range of characteristics visualized by a retrieval system.
Research related to this paper may be found in Lucy Terry Nowell's PhD dissertation [Nowell, 1997].
[Nowell, 1997] Nowell, Lucy Terry (1997). Graphical encoding for information visualization: using icon color, shape and size to convey nominal and quantitative data. Virginia Tech, PhD Dissertation, 1997.