Performance Indicators for the Electronic Library

June 1998

[Links updated 23rd October 1999]

Jenny Kena

jkena@ozemail.com.au


 Contents

 Summary

 1. What is the electronic library?

 2. Measuring performance of libraries

 3. What is different/difficult about measuring the performance of the electronic library?

 4. Performance indicators for the electronic library - MIEL2

 5. Applying performance indicators in one library sector - measuring the performance of electronic library services in NSW public libraries

 6. Conclusion

 7. References
 
 

Summary
 
 

The electronic library is a combination of electronic resources, infrastructure and associated services. It is now generally provided alongside traditional, physical library services, i.e. within a "hybrid library" service. Performance measurement is of growing importance to libraries operating in today's competitive environment. Performance indicators need to be both quantitative and qualitative, and to measure inputs, outputs and impacts. Although the performance indicators of the traditional library can in many cases be adapted for the electronic library (e.g. the number of enquiries can include the number of email enquiries), some new measures need to be put in place. Although there is the potential for performance indicators to be collected automatically by the technology of the electronic library, statistics such as 'hits' on web sites need to be used with care, as on closer examination they can be almost meaningless. There have been some major attempts at devising performance indicators for electronic libraries, most recently by Peter Brophy in the UK. He has devised a framework that can potentially be used in all types of libraries. His work is described in detail here and then used to illustrate the progress in collecting and reporting performance data for electronic library services in NSW public libraries.

1. What is the electronic library?
 
 

The electronic library, which can also be referred to as the "digital library", the "networked library" or the "virtual library" is often defined in narrow terms as just a collection or repository of electronic resources and the technology needed to provide access to it (Sloan, 1997). However, as Sloan points out, this leaves out the service aspect of electronic libraries, services that can include searching, categorisation, filtering, translation, publishing, help in finding information, user education, email enquiry services and managing copyright, licenses and electronic redistribution.
 
 

Electronic libraries being more than just an "agglomeration of datasets" is also discussed by Brophy (1998, pp222-223). Brophy's research concentrates mainly on academic libraries. He states that, just as traditional libraries are more than just a collection of books, electronic libraries also have other dimensions. He lists the functions of the electronic library as resource discovery, resource delivery, resource utilisation, infrastructure provision and resource management.

Although these functions are also part of the traditional library, in the electronic library they take on new levels of complexity e.g. when information resources are no longer owned by the library, cataloguing becomes far more complex.
 
 

Bertot describes some of the new roles for public libraries that are part of the electronic library service (1998, p28). These include introducing technology to the community, demonstrating applications, being a local access point for government information, creating, maintaining and organising electronic community information, training in use of electronic resources and providing access to electronic resources.
 
 

The electronic library is now part of library services in all sectors of the library industry - academic, special, public and school services.
 
 

When evaluating electronic libraries, it is important to take these service aspects into account. It is also important to consider electronic libraries in the context of "hybrid library" services. This means that the electronic resources and services are usually being provided by a library that is also providing traditional, physical resources and services. In other words, except in a minority of cases, the electronic library is not a separate entity and its performance will need to be evaluated as part of a total library service.
 
 
 
 
 
 

The other aspect of the electronic library important to clarify in the context of performance evaluation, is that the electronic library collection is made up of a number of individual electronic resources that can also be evaluated. The evaluation of individual electronic titles is important for the purposes of collection development. One system for doing this has been devised by library staff at the University of NSW (Cargnelutti et al, 1998). In this paper, I will not be considering performance indicators for electronic libraries at the level of individual resources. Instead, I will be looking at indicators that provide information about the library service as a whole that specifically relate to its performance goals for electronic services.
 
 
 
 

2. Measuring performance of libraries
 
 

Libraries have traditionally reported their performance using statistics such as number of items in the collection, number of items issued per annum, number of registered users, number of visits made to the library and numbers of enquiries - quantitative rather than qualitative statistics. Although most library managers now recognise the need to go beyond these types of measures to get a true picture of performance, other stakeholders may still cling to them as a measure they can understand. These statistics have never been able to tell the whole story about library performance and, with the advent of electronic library services they have become even less useful. Rowena Cullen, in her discussion of performance measurement in reference services (1992, p12) uses the example of reference enquiry numbers to illustrate the problem of using these types of statistics -
 
 

"One thing is certain, no-one's goal is to increase the number of enquiries answered each year…..Increased business in the reader services area may indicate …..increasing frustration."
 
 
However, administrators may interpret falling reference enquiry numbers as an indication of falling demand and therefore falling performance. In fact, it could mean that the user education service has been so successful that users no longer need to ask for as much help. A simple example of how electronic library services could affect this same statistic is the library catalogue being made available through the Internet. If this resulted in fewer people ringing up for catalogue checks, phone enquiry statistics would fall. What should show up as better performance may actually appear as reduced performance because the indicator (number of phone enquiries) has decreased.
 
 

There have been other pressures on libraries that have influenced the development of improved performance indicators. Libraries now operate in a competitive environment where services are being contracted out or privatised. The current environment in which economic rationalism and quality management are dominant ideologies has demanded of libraries that they provide stronger evidence that their services are effective, efficient, customer-driven and meet world's best practice. The need for libraries to be able to benchmark their services has resulted in international standards for library performance becoming a priority. Recently, the final version of ISO 11620 Information and documentation - library performance indicators was released. This provides a standard that can be used to compare all different types of libraries.
 
 

Performance indicators can be categorised in various ways. Building on the work of McClure & Lopata (1996), Brophy (1997, p14) places performance indicators within a framework based on managerial tasks: operational management, forward planning, and evaluation and review.

Although he acknowledged that there can be many perspectives on performance indicators including those of stakeholders such as customers, institutional managers, funding councils, government, other library managers, student advisors, heads of academic departments and posterity, he decided to focus on the library manager's requirements in formulating his performance indicators, on the basis that these provide the most comprehensive approach (Brophy, 1997, p13).
 
 

Another way of categorising performance indicators is to divide them into input measures, output measures and impact measures (Lakos, 1997, "Assessment..").
 
 
Despite the recognition of the need for improved indicators, it is still common for libraries to report input measures quite well, output measures in a limited way, and impact measures hardly at all. Output measures and impact measures for physical library services have been difficult to devise and standardise for national and international comparisons, although this has been achieved in some cases. Brophy refers to some of the national and international guidelines for performance indicators in different types of libraries (1997, pp 18-34). He found that none of the existing guidelines, including the draft of ISO 11620, dealt satisfactorily, if at all, with measuring the performance of electronic services.
 
 

Another purpose of performance indicators that is relevant in discussing the evaluation of electronic libraries is their use in funding arrangements or in the accreditation of institutions or courses. When the level of funding for a library depends on statistics for the number of loans, enquiries or visits, it is important that these present a true picture, i.e. that there is an opportunity to include the electronic equivalents. Similarly, when accreditation of a course depends on the number of items in the library collection relating to the course discipline, more than just owned items need to be taken into account. Daly discusses this problem in regard to legal education and legal libraries (Daly, 1995).
 
 

3. What is different/difficult about measuring the performance of the electronic library?
 
 

Because of the differences in the way services are provided in the electronic library, there are difficulties in applying to them the performance indicators used for the physical library. New or modified indicators are needed. However, these have not been easy to devise, and some that may appear useful, e.g. web usage statistics, have been found to be very unreliable (Goldberg).
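Goldberg's point about the unreliability of raw web usage statistics can be illustrated with a short script. The sketch below uses invented log entries in the common log format (the addresses, paths and figures are hypothetical, not drawn from any cited study) to show how a single page view registers as several 'hits', because every embedded image is logged as a separate request.

```python
# Illustration: why raw 'hits' overstate usage.
# Each line of a web server access log (common log format) is one request,
# so a page with embedded images generates several 'hits' per page view.
# All log entries below are invented for illustration.

sample_log = """\
192.0.2.1 - - [07/Jun/1998:10:00:01] "GET /catalogue.html HTTP/1.0" 200 4321
192.0.2.1 - - [07/Jun/1998:10:00:02] "GET /images/logo.gif HTTP/1.0" 200 1020
192.0.2.1 - - [07/Jun/1998:10:00:02] "GET /images/bar.gif HTTP/1.0" 200 340
192.0.2.2 - - [07/Jun/1998:10:05:10] "GET /catalogue.html HTTP/1.0" 200 4321
192.0.2.2 - - [07/Jun/1998:10:05:11] "GET /images/logo.gif HTTP/1.0" 200 1020
"""

def count_hits(log):
    """Every logged request counts: the headline 'hits' figure."""
    return len(log.strip().splitlines())

def count_page_views(log):
    """Only requests for pages, ignoring images and other inline files."""
    views = 0
    for line in log.strip().splitlines():
        # Extract the requested path from inside "GET /path HTTP/1.0".
        path = line.split('"')[1].split()[1]
        if path.endswith(".html"):
            views += 1
    return views

print(count_hits(sample_log))       # 5 'hits' ...
print(count_page_views(sample_log)) # ... but only 2 page views
```

Two visitors each viewing one page produce five 'hits': the headline figure is inflated by a factor that depends only on page design, not on usage.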
 
 

The differences/difficulties in devising performance indicators for the electronic library include -
 
 

4. Performance indicators for the electronic library - MIEL2
 
 

Although there is not yet an international standard set of performance indicators for electronic libraries, or even national standards that cover electronic services, the final report of the Management Information Systems and Performance Measurement for the Electronic Library: eLib Supporting Study (MIEL2), part of the Management Information for the Electronic Library (MIEL) Programme (Brophy & Wynne, 1997), provides the basis for an international standard.

In devising their proposed standards for electronic libraries, Brophy and Wynne considered the major national and international standards for library performance, including those used in academic and public libraries. Their report includes an excellent summary of these standards and related studies. As mentioned previously, their study drew heavily on the work of Charles McClure, whose manual Assessing the academic networked environment: strategies and options (McClure & Lopata, 1996) is an attempt to formulate performance measurements for networked services that are the equivalent of those used by library managers for paper-based services. The manual addresses the whole academic environment, not just library services. Both qualitative and quantitative measures are included. The six areas suggested for qualitative assessment are -

The other important basis for their work is the Joint Funding Councils' report The Effective Academic Library (EAL). The performance indicators in MIEL2 are an extension of those contained in the EAL report, with the new list of indicators known as EAL+. The full list is included as an appendix to MIEL2.
 
 

The performance measurement model used in MIEL2 has two dimensions. One dimension is the managerial tasks of the library manager - operational management, forward planning and evaluation and review. The other dimension is the functions of the electronic library - resource discovery, resource delivery, resource utilisation, infrastructure provision and resource management. This model is used to identify decision areas. The need for more qualitative measures and measures that demonstrate impact rather than extent is emphasised. Also, the need for simplicity - the number of indicators should be kept to a minimum (Brophy & Wynne, 1997, p13-17).
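The two-dimensional model can be pictured as a simple grid. The sketch below (a hypothetical rendering, not code from MIEL2 itself) crosses the three managerial tasks with the five electronic library functions to enumerate the decision areas the model identifies.

```python
from itertools import product

# The two dimensions of the MIEL2 performance measurement model,
# as named in the report: managerial tasks and electronic library functions.
tasks = ["operational management", "forward planning", "evaluation and review"]
functions = ["resource discovery", "resource delivery", "resource utilisation",
             "infrastructure provision", "resource management"]

# Crossing the two dimensions yields the grid of decision areas within
# which individual performance indicators are placed.
decision_areas = list(product(tasks, functions))

print(len(decision_areas))  # 3 tasks x 5 functions = 15 decision areas
for task, function in decision_areas[:3]:
    print(f"{task} / {function}")
```

The value of the grid is that it forces each proposed indicator to be justified by the management decision it supports, which also helps keep the number of indicators to a minimum.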
 
 
 
 
 
 
 
 

The following is a brief summary of the performance indicators recommended in MIEL2 -

(i) Indicators for operational management -

These indicators cover usage, quality and efficiency. Sessions per service per month and user satisfaction with each service provide information to assist in managing resources and in assessing their quality. Items downloaded per service per month could not be used for web pages, and there are problems in defining the concept of an 'item' or 'document' in the context of electronic resources; for this reason, the number of 'hits' per service is provided as an alternative, although as already discussed there are many problems in using hits as a measure. User satisfaction with resource utilisation tools, and the percentage of users using each tool, provide information on the availability of the tools which users need to exploit resources, such as personal bibliographic software, and the extent of their use. Availability refers to the situation where immediate access is not possible, e.g. because of a limit on the number of concurrent users. Finally, measures such as the number of sessions on each service per subscription cost, and the number of help desk enquiries per staff day, provide an indicator of efficiency.

(ii) Forward planning

The information from these measures is needed to predict future trends based on current usage. Examples given are -

(iii) Evaluation and review

The indicators for this function are an adaptation of EAL and assume that libraries will operate a hybrid service for the foreseeable future. It is proposed that all of the EAL indicators need to be modified to include aspects of electronic library provision and suggestions are made for doing this. Examples of some of these and how they would be modified are -

Need to differentiate between traditional and electronic sources.

The help desk function for electronic services needs to be included here, and should cover volume of requests, type of requests, response time, accuracy of response and courtesy of staff.

Will need to include induction into IT facilities and skills as well as the information content of electronic services.

As discussed above, there are difficulties in defining what electronic documents should be counted here. It is better to concentrate on the range and depth of resources available.

Should include email enquiries.

For the electronic library, it is suggested that the library's access to electronic resources be mapped against a national standard set (for the UK, the JISC datasets). Otherwise, this measure is meaningless for electronic resources.

Add email enquiries.

Include expenditure on electronic resources.

Becomes less relevant for electronic library services; need to consider the cost of remote access.
 
 

However, there are some additional indicators needed to complete the picture for evaluation of electronic services and these are listed below -

User satisfaction with the IT infrastructure, for example, needs to include satisfaction with equipment, software and network performance, including availability and response time.

It is acknowledged that the list may need to be refined further, and that there is also a need to develop impact measurements. The next stage in the development of these indicators is devising the actual measurement tools and testing them in a number of libraries.
 
 

5. Applying performance indicators in one library sector - measuring the performance of electronic library services in NSW public libraries
 
 

As a practical exercise in applying performance indicators to electronic library services, the final part of this report considers how proposed indicators might be used in public libraries in NSW. It is important for public libraries to be able to measure their performance in supplying electronic library services in order to demonstrate their productivity and justify their role in the digital environment. As with other types of libraries, traditional public library statistics are not giving the full picture of the services being provided.
 
 

The recently published UK report New library: the people's network (1997) discusses the issue of performance evaluation of networked electronic library services -

"The public library networking programme will need to draw on the research currently being undertaken in other sectors, especially the higher education sector" (p93) Both McClure's work and MIEL2 are referred to.
 
 

In the US, Smith and Rowland have grappled with the issue of adding output measures for electronic services to the standard Output measures for public libraries (1982). This is also the standard used for evaluation of NSW public libraries. Some of the process has been described above. Although no new output measures have yet resulted, Smith and Rowland suggest the following interim measures for public libraries -

Bertot also suggests that public libraries use web server statistics to get a sense of who is using services but to combine these with other statistics and regular surveys (Bertot & Mackenzie, 1998, pp22-23). The problems in relying too heavily on server statistics have already been discussed.
 
 

Some input measures have now been added to the statistics collected under the US Library Statistics Cooperative Program namely -

Also, McClure and Bertot have surveyed public libraries and the Internet to assess the quality and extent of access available in the US. The purpose of their study is to provide policy makers, various stakeholder groups, and the library community with the ability to determine the relationships between public library Internet-related costs, services, information technology infrastructure, and types of population served for public library electronic networked services (1997 National Survey). Although it will produce data that could be used for performance indicators, this is not its main purpose.
 
 

At present, there are three state-wide collections of performance indicators for NSW public libraries: the annual Statistical Return collected by the State Library of NSW, the Metropolitan Public Libraries Association Benchmarking Data, and the surveys of the Public Libraries Evaluation Group (PLEG).

How does the information being collected in these surveys compare to what is required in MIEL2 or the US LSCP input measures?
 
 

MIEL2 compared to NSW public library statistics
 
MIEL2 performance indicator: NSW public library statistics collected

Sessions per service per month: The total number of sessions using booked PCs could be calculated, as the following figures are collected: Internet access (number of bookings); CD-ROM workstations (number of bookings; does not cover individual CD-ROM titles); personal computers (number of bookings; software such as Word); Internet workstations (number of bookings). OPACs are not generally booked.

User satisfaction with service: PLEG customer satisfaction survey asked about satisfaction with Internet access and CD-ROMs

Items downloaded per service per month: Not collected

Number of 'hits' per service per month: Not collected

User satisfaction with resource utilisation tools: Not collected

Percentage of users using each tool: Not collected

Queuing time for access to workstations: Not collected

Downtime per month: Not collected

Availability per month: Not collected

Pages of print per month: Not collected

Number of help desk enquiries received per day: Not distinguished from general enquiries

User satisfaction with IT infrastructure: Customer satisfaction survey asked about satisfaction with equipment

Number of sessions on each service/subscription cost: Not collected

Number of help desk enquiries per staff day: Not collected

Number of students, number of staff users of electronic resources: Translated as number of users; not collected

Proportion of students and staff as active users: Not collected; not relevant

Available budget for services: Not collected

Total number of sessions, number of sessions per service type over a number of years: Internet and CD-ROM bookings

User satisfaction - document delivery services: PLEG item delivery rate survey measures the time taken to deliver, but makes no differentiation between print and electronic items

User satisfaction - information services: Customer satisfaction survey asked about the enquiry service but did not distinguish help desk type enquiries

User satisfaction - information skills programme: Not collected

Documents delivered per FTE student during a year: The Statistical Return includes a figure for inter-library loan requests and the number of registered users, and the Benchmarking Data includes residential population, so a similar figure could be calculated; however, there is no measure for electronic documents

Enquiries answered per FTE student: Information requests per annum are collected, but there is no requirement to include email enquiries at this stage

Volumes in collection per FTE student: Benchmarking Data includes library resources per head of population; library resources include CD-ROM titles

Enquiries answered per library staff FTE: Could be calculated, but there is no requirement for email enquiries to be included at present

Total library expenditure per library staff FTE: Could be calculated from existing figures; expenditure on electronic resources is included

Library staff FTE per number of libraries: Could be calculated, but not really relevant to electronic services

PC hours used per annum divided by FTE students: Hours of CD-ROM, Internet and public access PC workstation use are collected; this figure could be calculated

Proportion of JISC datasets available: No equivalent; however, it may be possible to work out a core collection of electronic resources for a NSW public library

Total major electronic subscriptions: Only CD-ROM titles are included

Total library expenditure/PC hours used per annum: Could be calculated

Total major electronic subscriptions/FTE staff: Could be calculated for CD-ROM titles, but would be fairly meaningless

PC hours available per annum per FTE student: Could be calculated by multiplying the number of opening hours by the number of PCs and dividing by the population served

FTE students per network PC: Could be calculated

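Several of the "could be calculated" entries in the table above are simple ratios of figures that are already collected. The sketch below works a few of them through; all the figures (population, PC numbers, hours and expenditure) are invented placeholders for illustration, not actual NSW statistics.

```python
# Worked examples of indicators derivable from existing NSW statistics.
# All figures below are invented placeholders for illustration only.

population_served = 50_000        # residential population (Benchmarking Data)
public_pcs = 12                   # Internet, CD-ROM and public access PCs
opening_hours_per_annum = 2_800   # annual opening hours of the library
pc_hours_used_per_annum = 18_000  # hours of workstation use (collected)
total_expenditure = 900_000       # total library expenditure

# "PC hours available per annum per head": opening hours multiplied by
# the number of PCs, divided by the population served.
pc_hours_available = opening_hours_per_annum * public_pcs
pc_hours_available_per_head = pc_hours_available / population_served

# "Total library expenditure / PC hours used per annum".
expenditure_per_pc_hour = total_expenditure / pc_hours_used_per_annum

# "FTE students per network PC", read here as population served per public PC.
population_per_pc = population_served / public_pcs

print(round(pc_hours_available_per_head, 3))
print(round(expenditure_per_pc_hour, 2))
print(round(population_per_pc, 1))
```

None of these ratios requires any new data collection, which makes them attractive first benchmarks for electronic services even before new indicators are agreed.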
 

US Library Statistics Cooperative Program compared with NSW Public Library Statistics for electronic services

Internet access - the statistical return asks whether the library has Internet access

Internet use - whether access is for staff or the public is also asked

Number of items in electronic format owned by the library - CD-ROM titles are specified

Operating expenditure for electronic access - this figure is not collected in any of the surveys.
 
 

There is also no collection of 'hits' information on a statewide basis. It is possible that some libraries are collecting this information for their own web sites.
 
 

Other information on electronic library services that is collected in the annual statistical return is -

This exercise demonstrates that although there is some information being collected that can be used to report performance of electronic library services in NSW public libraries, there are many gaps. There is a need to review the Statistical Return and Benchmarking Data to take these services into account. Also, there is the opportunity to use existing data to produce some useful benchmarks for electronic services e.g. number of PC hours available, number of sessions per month.
 
 
 
 
 
 

6. Conclusion
 
 

There is still a long way to go in arriving at a standard set of performance indicators for electronic library services. Although MIEL2 has provided a basis for a standard, its proposals still need to be further specified and tested. If it is to be used as the basis for all types of libraries, further work needs to be done to ensure that different library requirements are taken into account. It will be interesting to see how MIEL2 is adapted for use in UK public libraries.
 
 

Although some progress has been made on input and output measures, very little has been done in the formulation of impact measures. However, it should be noted that this is not just a problem with the evaluation of electronic library services. Performance indicators for traditional library services are still heavily skewed towards input measures rather than output or impact measures, and towards quantitative rather than qualitative measures. Also, if electronic library services are to have performance indicators, there must also be performance standards for these services. In the rapidly changing environment of the electronic library, this is a challenge.
 
 

As libraries have to make decisions about deployment of resources into electronic services and justify these to their stakeholders, the need for performance indicators that show that money has been well spent, that customer satisfaction has been improved and that libraries are continuing to play a key role in the provision of information, is an urgent one.
 
 

7. References
 
 

Bertot, John Carlo (1997) "ALA/NCLIS Survey on Public Library Use of the Internet" [form]

http://research.umbc.edu/~bertot/1997survey.final.pdf

(downloaded 9th June 1998)
 
 

Bertot, John Carlo & Christine Mackenzie (1998) "Victoria and U.S. public libraries and the Internet: issues and strategies for the networked environment", in VALA Biennial Conference and Exhibition, 9th, Melbourne. Robots to knowbots: the wider automation agenda; Conference proceedings. Melbourne, pp 11-31
 
 

Brophy, Peter (1998) "It may be electronic but is it any good? Measuring the performance of electronic services", in VALA Biennial Conference and Exhibition, 9th, Melbourne. Robots to knowbots: the wider automation agenda; Conference proceedings. Melbourne, pp 217-230
 
 

Brophy, Peter & Peter M Wynne (1997) Management Information for the Electronic Library (MIEL) Programme: Final Report, University of Central Lancashire, Centre for Research in Library & Information Management, 1997.
 
 

Cargnelutti, Tony, et al (1998) "Finding one's web feet. Revisiting KIN: key indicators of electronic resource usage in the web environment", in VALA Biennial Conference and Exhibition, 9th, Melbourne. Robots to knowbots: the wider automation agenda; Conference proceedings. Melbourne, pp 279-296
 
 

Cullen, Rowena (1992) "Evaluation and performance measurement in reference services" New Zealand Libraries, v47(1), March 1992, pp11-15
 
 

Daly, Gail M (1995) "Law library evaluation standards: how will we evaluate the virtual library?", Journal of Legal Education, v45(1), March 1995, pp61-78
 
 

Goldberg, Jeff (199-?) "Why web usage statistics are (worse than) meaningless"

http://www.cranfield.ac.uk/docs/stats/

(downloaded 7th June 1998)
 
 

Joint Funding Councils' Ad-hoc Working Group on Performance Indicators for Libraries (1995) The Effective Academic Library: a framework for evaluating the performance of UK academic libraries. London, HEFCE, 1995
 
 

Lakos, Amos (1997) "Assessment of library networked services - issues and options", paper based on one presented at Ontario Library Association Super Conference, Toronto.

http://www.lib.uwaterloo.ca/~aalakos/Present/Olita97/olita97a.html

Updated 2nd December 1997 (downloaded 7th June 1998), Link updated 31st October 1998.
 
 

Lakos, Amos (1997) "Identifying and assessing library clients in a networked environment - issues and possibilities" paper presented at Northumbria International Conference on Performance Measurement in Libraries and Information Services, 2nd, Longhirst Hall.

http://www.lib.uwaterloo.ca/~aalakos/Present/North97/noruse1.html

(downloaded 7th June 1998), Link updated 31st October 1998.
 
 

Lakos, Amos (1997) "The 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services: Report"

http://www.lib.uwaterloo.ca/~aalakos/Present/North97/norsum.html

Updated 5th February 1998 (downloaded 7th June 1998), Link updated 31st October 1998.
 
 

McClure, Charles & Cynthia L Lopata (1996), Assessing the academic networked environment: strategies and options, Coalition for Networked Information, 1996
 
 

Metropolitan Public Libraries Association (NSW), (1996) "Benchmarking data: introduction"

http://www.slnsw.gov.au/plb/stats/benchmrk/1996/intro/intro.htm

(downloaded 8th June 1998)
 
 

New Library: the people's network (1997) London, Library and Information Commission, 1997

http://www.ukoln.ac.uk/services/lic/newlibrary/full.html

(downloaded 16th April 1998)
 
 

"1997 National Survey of Public Libraries and the Internet announced" [press release] 31st March 1997.

http://www.nclis.gov/news/pr97/97study.html

(downloaded 9th June 1998)
 
 

"Public libraries evaluation group (PLEG)" (1998)

http://www.slnsw.gov.au/plb/libs/pleg/

(downloaded 8th June 1998)
 
 

Sloan, Bernie (1997) "Service perspectives for the digital library: remote reference services", PhD paper, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign.

http://www.lis.uiuc.edu/~b-sloan/e-ref.html

(downloaded 13th May 1998, link updated 23rd October 1999)
 
 

Smith, Mark & Gerry Rowland (1997) "To boldly go: searching for output measures for electronic services", Public Libraries, v36(3), May/June 1997, pp168-172
 
 

State Library of New South Wales. Public Libraries Branch (1997) "Statistical return for period 1 July 1996 to 30 June 1997" [form]
 
 

Van House, Nancy A et al (1982) Output measures for public libraries: a manual of standardised procedures, 2nd ed, Chicago, ALA, 1982