7 Bibliometrics in Assessing Productivity and Impact of Research
S L Sangam
I. Objectives
The objectives of this module are
• To understand the role of bibliometrics in the evaluation of scientific research.
• To identify the impact of citation analysis on scientific research.
• To study the implications of the bibliometric laws in different contexts of research.
• To understand the meaning of the Impact Factor, Activity Index and other measures.
• To have a consolidated view of the role of the impact factor in a country's research productivity.
II. Module Structure
1. Introduction
2. Bibliometric criteria for evaluating research productivity
3. Impact of Citation Analysis
4. Individual Productivity and Impact
5. Impact related authorship phenomena
6. The impact of research and Ranking of journals
7. Institutional Productivity and Impact
8. Bibliometrics and Country’s analysis of impact of research
9. Impact of Bibliometric Laws
10. Impact of H-Index
11. Impact of the obsolescence rate of documents in different subjects
12. Summary
13. References
1. Introduction
The two books authored by de Solla Price, "Science since Babylon" (1961) and "Little Science, Big Science" (1963), marked the beginning of the quantitative measurement of scientific growth, productivity and their impact. Quantitative measures of research productivity can be applied to the products and techniques developed and to the extent of an organization's informing activities. Impact may be judged by the rate of adoption of products and techniques and by various measures of the quality of the informing activities. The outcomes of research are, in effect, its impact. For many types of research the impact is obviously a long-term affair: research in education, for example, may be directed to improving the general level of education among a particular group of people. For other types of research the impact is less tangible; it may simply be a better understanding. An obvious example is historical research, which seeks to achieve a better understanding of some event or individual from the past.
The outputs of the research process are the results achieved. For many types of research these results are manifested as a new product or technique. Research results have little value in and of themselves; they become valuable only when they are made known to individuals or organizations that can use them. The research group makes its results known by reporting them in various ways: in internal reports, and in reports published and distributed in several forms, viz. books, journals, monographs, etc. Research that cannot be justified by its long-term impact or benefit to society is futile.
2. Bibliometric criteria for evaluating research productivity
The extent and type of publication of the research results is the most obvious and immediate impact of a particular research activity. Presumably, the more widely disseminated the results of some project, the greater the impact that project or product is likely to have. But publications themselves have different levels of impact: some formats are more widely distributed than others; some enjoy a greater reputation; some reach wider communities and thus may have more profound effects.
Bibliometric criteria for evaluating research productivity include (Lancaster, 1991):
• How many publications are produced;
• How many publications of what types are produced;
• The quality of the sources (e.g., journals) in which the publications appear;
• How much the work of an individual, group or organization is cited;
• What is the quality of the citation (e.g., as judged by the quality of the citing journal);
• How many publications are produced per individual, per man hour expended; and
• How many citations are received per individual, per man hour expended, per $ expended.
Bibliometric methods therefore have many possible applications in the management of research, including:
• Evaluation of the productivity of a particular researcher (perhaps for appointment or promotion);
• Evaluation of the impact of the work of an institution or research group;
• Identification of possible new research areas on the basis of interdisciplinary citation linkages;
• Identification of institutional linkages (i.e., which institutions draw most heavily on each other’s work); and
• Assisting in the establishment of research policies or priorities in resource allocation.
3. Impact of Citation Analysis
The primary function of a citation is to provide a connection between documents: the one which cites and the other which is cited. Citation is the best available indicator of the use of a document. The first use of citation indexing was Shepard's Citations, published in 1873; the technique has been perfected by Eugene Garfield and others since the early 1960s (Garfield, 1963). Compiling bibliographies in new fields is genuinely difficult, and in such circumstances analysis of the citations of articles may be one way to gather information on a particular subject field. The very fact that the citations have been verified, evaluated and recommended by authors who are experts in their own fields makes them all the more acceptable for inclusion in a bibliography. Citations may be to books, journal articles, reports, standards, theses/dissertations, etc., and the relative use of each of these types can be ascertained from the frequency of citations. For example, various citation studies have shown that journal articles are the most preferred source consulted by scientists, since they constitute about 70-80% of the total citations. Similarly, citation practice among social scientists indicates that they give equal importance to books and journals.
4. Individual Productivity and Impact
Purely quantitative measures, however, are inadequate indicators of research output. The type of publication and the reputation of the publisher should also be taken into account. Various attempts have been made to assign some type of numerical score to the output of a researcher based on what is published and where, and this is how individual productivity and impact are measured. Narin deals with this topic and presents such scoring methods (Narin, 1976). In general, these methods take into account some or all of the following factors:
• The type of publication;
• The size of the publication;
• The reputation of the publisher; and
• The amount of work going into creating publications of various types.
Different fields of scholarship will adopt different standards for scoring research output. A research monograph in the humanities, which may represent many years of work, will probably earn more credit (relative to other forms of publication) than would a research monograph in the sciences. In some fields, monographic publications may be considered less important than journal articles, or even other forms, and may receive less credit accordingly. This factor is well demonstrated by Sabarathnam, who tried to achieve some consensus among a panel of experts on the scoring of publications in the field of agricultural research. The panel members, the author remarks, ranked seven possible publication forms: research papers received the highest score (6.32 on the 7-point scale), books scored 5.38, popular articles 4.16 and review articles 4.06 (Sabarathnam, 1987).
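To make such scoring concrete, here is a minimal Python sketch that uses Sabarathnam's panel scores as weights; the function name and the simple summation rule are illustrative assumptions, not a method prescribed by the source:

```python
# Illustrative use of Sabarathnam-style publication weights (scores from
# the panel study above; the aggregation rule itself is an assumption).
WEIGHTS = {"research_paper": 6.32, "book": 5.38,
           "popular_article": 4.16, "review_article": 4.06}

def weighted_output(publications):
    """Sum of type weights over a researcher's publications."""
    return sum(WEIGHTS[p] for p in publications)

# A researcher with 3 research papers, 1 book and 2 review articles:
pubs = ["research_paper"] * 3 + ["book"] + ["review_article"] * 2
print(round(weighted_output(pubs), 2))  # 3*6.32 + 5.38 + 2*4.06 = 32.46
```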
5. Impact related authorship phenomena
It is now widely recognized that scholarly productivity, as measured by the number of publications produced, is an elitist phenomenon: most authors contributing to a particular body of literature contribute very little, and the number of authors who are highly productive is very small indeed. Following the original work of Lotka (Lotka, 1926), this phenomenon is now popularly referred to as Lotka's Law. It is an inverse square law: if X authors contribute one paper each to a field, the number contributing two papers should be approximately X/4 (i.e., X/2²), the number producing three papers should be X/9, and so on. Lotka found that about 60% of the authors contributing papers to a field contribute only one paper, so the percentage contributing two papers would be about 60/4, or 15%, the percentage contributing three papers would be 60/9, or below 7%, and so on; thus the highly productive authors form a very small proportion of the total. More recent studies suggest that Lotka's law does not always hold in its original form, owing to changes in research and collaboration patterns.
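A minimal Python sketch of the inverse square relationship (the function name and figures are illustrative):

```python
def lotka_distribution(single_paper_authors, max_papers=10):
    """Expected number of authors producing n papers under Lotka's
    inverse square law: authors(n) = authors(1) / n**2."""
    return {n: single_paper_authors / n**2 for n in range(1, max_papers + 1)}

# Example: if 600 authors contribute exactly one paper each, roughly
# 150 (600/4) should contribute two papers and about 67 (600/9) three.
expected = lotka_distribution(600)
total = sum(expected.values())
for n, count in expected.items():
    print(f"{n} paper(s): {count:6.1f} authors ({100 * count / total:4.1f}%)")
```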
Gupta and Sangam (2009) studied the top 20 most productive authors of Karnatak University, whose combined papers during 1999-2008 accounted for 81.87% of the total university output (Gupta & Sangam, 2009); the average output per author was 49.9 papers. Only four of the 20 authors published more than this average: Tejraj M Aminabhavi (260 papers), followed by Sharanappa Totappa Nandibewoor (133 papers), Jaldappa Seetharamappa (56 papers) and Hosakatte Niranjana Murthy (50 papers). Similarly, the average citation per paper recorded by all 20 authors was 3.78, and only six of the 20 authors exceeded this value: Kumaresh S Soppimath with an average citation per paper of 12.47, followed by Anandrao R Kulkarni (7.70), Mrityunjaya Aralaguppi (6.73), Mahadevappa Y Kariduraganavar (5.63), Tejraj M Aminabhavi (5.59) and Jaldappa Seetharamappa (3.82). The average h-index of these 20 authors during 1999-2008 was 5.85, and only four authors scored above this average: Hosakatte Niranjana Murthy with an h-index of 41, followed by Tejraj M Aminabhavi (28), Kumaresh S Soppimath (7), and Anandrao R Kulkarni (6) (Table 1). The study also found more multi-authored than single-authored papers, reflecting the growing trend towards collaboration.
Authors Name | TP (99-08) | TC (99-08) | h-index (99-08) | ICP (99-08)
Tejraj M Aminabhavi | 260 | 1454 | 26 | 28
Sharanappa Totappa Nandibewoor | 133 | 266 | 11 | 1
Jaldappa Seetharamappa | 56 | 214 | 11 | 3
Hosakatte Niranjana Murthy | 50 | 125 | 10 | 41
Kalagouda B Gudasi | 45 | 48 | 7 | 2
Guru S Gadaginamath | 40 | 63 | 6 | 4
Srinivas K Saidapur | 39 | 76 | 8 | 4
Anandrao R Kulkarni | 40 | 308 | 2 | 6
Shivamurti A Chimatadar | 35 | 58 | 6 | 0
Manohar V Kulkarni | 38 | 40 | 5 | 3
Mahadevappa Y Kariduraganavar | 35 | 197 | 10 | 2
Bhagyashri A Shanbhag | 34 | 58 | 7 | 3
Kumaresh S Soppimath | 32 | 399 | 17 | 7
B Mulimani | 31 | 34 | 5 | 4
Basappa Basavanneppa Kaliwal | 28 | 47 | 6 | 0
Ramesh S Vadavi | 27 | 64 | 1 | 2
Sangamesh Amarappa Patil | 26 | 91 | 7 | 2
Ravindra B Malabadi | 25 | 45 | 6 | 5
Kallappa Mahadevappa Hosamani | 24 | 36 | 5 | 0
Mrityunjaya Aralaguppi | 22 | 148 | 1 | 0
Total | 1020 | 3771 | 3.79 | 5.85
Table-1. Productivity and Impact of Top 20 Authors, 1999-2008
TP – Total Papers TC- Total Citations
ICP – International Collaborative Papers
6. The impact of research and Ranking of journals
Journals are an important vehicle for scholarly communication. Garg & Rao observed that impact factors are widely used to rank and evaluate journals (Garg & Rao, 1988); they are also often used inappropriately as surrogates in evaluation exercises. The impact factor was first used to rank journals for inclusion in the Science Citation Index (SCI), and the originator of the SCI warns against the indiscriminate use of these data.
Lancaster has given some possible criteria for the evaluation of scholarly journals, as below (Lancaster, 1991):
• Size – number of papers, number of pages and number of words
• Circulation (sales)
• Uses
• Impact – citations, citations per paper, immediacy and influence
• Age of sources cited
• Exclusiveness
• Coverage in databases (e.g., abstracting and indexing services)
The Impact Factor is generally calculated over a two-year period. For example, the 2007 impact factor for a journal would be calculated as follows:
A = the number of times articles published in 2005-06 were cited in tracked journals during 2007
B = the number of articles published in 2005-06
Impact Factor (2007) = A/B
There are some nuances to this: the Institute for Scientific Information (ISI) excludes certain article types (such as news items, correspondence, and errata) from the denominator. New journals, which are indexed from their first published issue, will receive an impact factor after the completion of two years' indexing; in this case, the citations to, and the number of articles published in, the year prior to Volume 1 are taken as zero. Journals that are indexed starting with a volume other than the first will not have an impact factor published until three complete data-years are known.
Annuals and other irregular publications will sometimes publish no items in a particular year, affecting the count. The impact factor relates to a specific time period; it is possible to calculate it for any desired period, and the Journal Citation Reports (JCR) also include a 5-year impact factor.
The impact factor, often abbreviated as IF, is a measure reflecting the average number of citations to articles published in science and social science journals. It is regarded by many editors and editorial management committees as a measure of the "importance" of particular journals, and to some extent this is true: journals such as Nature, Science and the New England Journal of Medicine have high impact factors.
For example, the same journal, the New England Journal of Medicine, published 366 "citable" articles in 2003 and 378 "citable" articles in 2002. Citations in 2004 to articles published in 2003 and 2002 number 14147 and 14549, respectively. The following formula gives the IF for this journal in 2004:
IF (2004) = (14147 + 14549) / (366 + 378) = 38.6
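The two-year calculation is easy to mechanize. The following Python sketch implements the simplified definition given above (not ISI's exact editorial procedure); the function name is illustrative:

```python
def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    """Two-year impact factor: citations received this year to items
    published in the two preceding years, divided by the number of
    citable articles published in those two years."""
    if articles_in_prior_two_years == 0:
        raise ValueError("no citable articles in the two-year window")
    return citations_to_prior_two_years / articles_in_prior_two_years

# New England Journal of Medicine, 2004 (figures from the text above):
# citations in 2004 to 2002-03 articles = 14549 + 14147;
# citable articles published in 2002-03 = 378 + 366.
print(round(impact_factor(14549 + 14147, 378 + 366), 1))  # -> 38.6
```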
7. Institutional Productivity and Impact
The research output of an institution, or of a single group or a team within an institution, can be considered as the sum of the output of the individual members, so the various factors discussed in relation to individual productivity and impact are also relevant to institutional productivity and impact. Nevertheless, the evaluation of an institution does present additional problems that merit special consideration.
Garg and Rao (Garg & Rao, 1988), in evaluating an Indian physics laboratory, recognize four categories of journal:
• Indian journals covered by the Science Citation Index,
• Non-Indian journals covered by the SCI,
• Indian journals not covered by the SCI, and
• Non-Indian journals not covered by the SCI.
The implication is that, on the average, an SCI-covered journal is likely to be better than one that is not covered. At the very least, one can assume that the international journals will give the Indian researcher greater exposure than the national journals, especially those international journals covered by the SCI.
It is commonly understood that the science journals covered by the Science Citation Index are the mainstream journals of science research (and likewise, presumably, for the Social Science Citation Index), while journals not covered are regarded as less important. Coverage in the SCI, then, can be used to divide the journals in which the researchers of a particular institution have published into two categories; presumably one would give more weight to publication in an SCI-covered journal.
But one would probably prefer a weighting scheme with more than two values. It is obviously possible to use one that assigns some form of numerical value to the journals in which papers appear. Thus, the impact of the work of a group might be represented by a numerical value that takes into account both how many papers they have published and the weights of the journals in which those papers appeared.
Again drawing on the study of the faculty of Karnatak University, who published their total research output in 238 Indian and foreign journals during 1999-2008, the contribution of the top 20 most productive journals is listed in Table 2. The cumulative output of these top 20 journals consists of 179 papers during 1999-2003, 301 papers during 2004-2008 and 480 papers during 1999-2008, accounting for 38.66%, 39.81% and 39.38% of the total output of Karnatak University, respectively.
S.No. | Journal | Papers 1999-03 | Papers 2004-08 | Papers 1999-08
1 | Journal of Applied Polymer Science | 28 | 60 | 88
2 | Transition Metal Chemistry | 16 | 26 | 42
3 | Polymer News | 23 | 13 | 36
4 | Indian Journal of Chemistry Section B: Organic & Medicinal Chemistry | 15 | 18 | 33
5 | Indian Journal of Chemistry Section A: Inorganic, Physical, Theoretical & Analytical Chemistry | 17 | 12 | 29
6 | Journal of Basic and Clinical Physiology and Pharmacology | 10 | 13 | 23
7 | Acta Crystallographica Section E: Structure Reports Online | 7 | 15 | 22
8 | Oxidation Communications | 12 | 10 | 22
9 | Journal of the Indian Chemical Society | 9 | 13 | 22
10 | Journal of Membrane Science | 0 | 21 | 21
11 | Spectrochimica Acta Part A: Molecular & Biomolecular Spectroscopy | 5 | 16 | 21
12 | Indian Journal of Heterocyclic Chemistry | 2 | 18 | 20
13 | Journal of Chemical and Engineering Data | 5 | 14 | 19
14 | Current Science | 10 | 8 | 18
15 | Carbohydrate Polymers | 0 | 11 | 11
16 | Separation and Purification Technology | 0 | 11 | 11
17 | Ecology Environment and Conservation | 5 | 6 | 11
18 | Journal of Advanced Zoology | 5 | 6 | 11
19 | Analytical Sciences | 9 | 1 | 10
20 | Indian Journal of Experimental Biology | 1 | 9 | 10
| Total | 179 | 301 | 480
| Karnatak University Output | 463 | 756 | 1219
| Share of Top 20 Journals (%) | 38.66 | 39.81 | 39.38
Table-2. Contribution of Karnatak University Faculty in Top 20 Journals During 1999-2008
Based on the data on faculty publishing in the top 20 journals listed in Table 2, the productivity of the university may be assessed; however, the impact factor of these journals should first be considered, as per the criteria listed by Lancaster and the SCI method of calculating the IF. Note that this output is dominated by the chemistry subject field, and the pattern may vary with subject, since researchers in chemistry are more prolific than those in other fields, even within the sciences.
8. Bibliometrics and Country’s analysis of impact of research
In the previous sections it has been seen how bibliometric data can be used in assessing the productivity and impact of research by individuals and then by institutions. This section discusses how bibliometric data can be used for country-level analysis and for comparing the productivity and impact of research across countries and regions. Lancaster has stated that "when countries are ranked by productivity of scientific papers, most advanced countries will be at the top of the list"; however, India ranks as the most productive among the developing countries (Lancaster, 1991).
Garfield prefers to use a form of impact factor in comparing the relative influence of papers produced by scientists from various countries. For example, in his analysis of Latin American research, he shows that Chilean papers have the highest impact: the 312 papers published by Chilean scientists in 1978 earned 1017 citations in 1978-1982, for an average impact of 3.3, whereas papers from Peru achieved an impact of only 1.5. In contrast, he points out, Scandinavian papers achieve an impact of 6.4 and US papers 5.7. This exemplifies how the impact of research can be assessed at the country level.
Narin has described a cross-country citation measure that compares the actual citations received by papers from a particular country with the expected number of citations, where the expected number is based on the proportion of the world's papers contributed by that country. In probabilistic terms, if country X publishes 2% of the world's science papers, it should receive 2% of the world's science citations; if it receives more, it is cited more than expected. This can be illustrated by looking at the relationship between two countries.
For simplicity, let us assume two countries, A and B. Country A publishes 90 papers in a year and country B publishes 10 papers. Since A publishes 90% of the world's papers (in this simple model), it should receive 90% of its own citations and 90% of the citations from B; that is, in a probabilistic sense, the following ratio should equal 1:

(Number of references from B to A / Number of references from B) ÷ (Number of A publications (90) / Total number of publications (100))

This would be the case if the papers published by B contained 150 bibliographic references, 135 of which were to A's papers:

(135/150) / (90/100) = 0.9/0.9 = 1.0

Suppose, on the other hand, that 140 of B's references are to A's papers:

(140/150) / (90/100) = 0.93/0.9 = 1.03
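A minimal sketch of this expected-citation ratio in Python (the function and variable names are illustrative, not Narin's notation):

```python
def relative_citation_ratio(refs_from_b_to_a, total_refs_from_b,
                            a_publications, world_publications):
    """Narin-style ratio: the share of B's references that go to A,
    divided by A's share of world publications. A value above 1 means
    A is cited more than its publication share would predict."""
    observed_share = refs_from_b_to_a / total_refs_from_b
    expected_share = a_publications / world_publications
    return observed_share / expected_share

print(round(relative_citation_ratio(135, 150, 90, 100), 2))  # -> 1.0
# -> 1.04 (about 1.03 if 140/150 is first rounded to 0.93, as in the text)
print(round(relative_citation_ratio(140, 150, 90, 100), 2))
```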
Country | Publications 1987-89 | Publications 1997-99 | World Share (%) 1987-89 | World Share (%) 1997-99
Australia | 140 | 214 | 1.44 | 2.07
Bangladesh | 138 | 167 | 1.42 | 1.62
Belgium | 90 | 76 | 0.92 | 0.74
Brazil | 134 | 111 | 1.37 | 1.07
Canada | 342 | 294 | 3.51 | 2.85
China | 297 | 351 | 3.05 | 3.40
Egypt | 114 | 80 | 1.17 | 0.77
England | 169 | 150 | 1.73 | 1.45
France | 312 | 327 | 3.20 | 3.16
India | 501 | 483 | 5.14 | 4.67
Indonesia | 147 | 74 | 1.51 | 0.72
Italy | 149 | 201 | 1.53 | 1.95
Japan | 191 | 167 | 1.96 | 1.62
Mexico | 142 | 231 | 1.46 | 2.24
Netherlands | 196 | 216 | 2.01 | 2.09
Pakistan | 69 | 63 | 0.71 | 0.61
Poland | 131 | 84 | 1.34 | 0.81
South Africa | 65 | 64 | 0.67 | 0.62
Spain | 95 | 150 | 0.97 | 1.45
United States | 1560 | 1809 | 15.99 | 17.51
Total | 4,982 | 5,312 | 51.09 | 51.41
Table-3. Publication Output and World Share of Major Countries
Country data are clearly of interest when used to compare the publication output of different countries over time, and the picture may also vary from subject to subject; the latter can serve as an indicator for studying the impact of research in different subjects by different countries. Since Nobel Prizes are now shared in most cases and subjects, except in literature, country studies can also be used to examine the share of Nobel Prize winners from different countries and subject fields. It may also be possible to relate the data to a country's population, GNP and research budget. As Lancaster has remarked, Israel produces a lot of research relative to its size, prosperity and stage of development (Lancaster, 1991).
9. Impact of Bibliometric Laws
As is well known, the three fundamental laws of bibliometrics are its foundation: Bradford's Law of Scattering, Zipf's Law of word-frequency occurrence, and Lotka's Law of Scientific Productivity. The Bradford distribution can often be fruitfully used to estimate the total size of a bibliography, to determine the periodicals that should be included in the coverage of a library and information centre, and, more precisely, to rank periodicals and suggest the core periodicals a library should subscribe to. At the stage when bibliometric studies were most popular, nearly every paper used Bradford's Law for ranking periodicals; it was one of the common denominators of study in most of the papers.
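To make the core-and-zones idea concrete, here is a minimal Python sketch that partitions journals, ranked by article yield, into three Bradford zones of roughly equal yield; the function name and the article counts are hypothetical:

```python
def bradford_zones(article_counts, zones=3):
    """Partition journals (given as article counts) into zones that each
    contribute roughly the same number of articles. Under Bradford's law
    the zones contain roughly 1 : n : n**2 journals."""
    counts = sorted(article_counts, reverse=True)
    target = sum(counts) / zones
    result, current, running = [], [], 0
    for c in counts:
        current.append(c)
        running += c
        if running >= target and len(result) < zones - 1:
            result.append(current)
            current, running = [], 0
    result.append(current)
    return result

# Hypothetical article counts for 13 journals on one subject:
zones = bradford_zones([120, 50, 40, 30, 20, 18, 16, 14, 12, 11, 10, 10, 9])
for i, z in enumerate(zones, 1):
    print(f"Zone {i}: {len(z)} journals, {sum(z)} articles")
# -> zones of 1, 3 and 9 journals, each yielding 120 articles (ratio 1:3:9)
```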
So, naturally, this law has been applied not only to the scattering of publications but also to other spheres of activity. Analysis of R&D expenditure shows a heavy concentration of manpower deployed, papers published, patents filed, and processes/products developed in the core in-house R&D units. This demonstrates the dominance of the core, not only in R&D expenditure but by other yardsticks too.
Zipf's law can be effectively used in the generation of semi-automatic indexes for information retrieval systems, and its use has increased tremendously with the emergence of natural-language indexing of text, especially in electronic form. Zipf's law also provides a measure of the richness of an author's vocabulary, a technique that can be used for deciding the authorship of disputed works. For example, if there is a difference of opinion as to the correct author of a work, the word predilections of the attributed authors can be analyzed, either manually or by computer; once the frequencies of occurrence of favourite words are established, the disputed text can be analyzed for similarity, and the author thereby decided. The law is also used for identifying which words are used most frequently in different foreign languages; these words are taught first in foreign-language instructional programmes.
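A minimal sketch of the rank-frequency analysis underlying such applications (illustrative only; real authorship studies use far more careful statistics):

```python
from collections import Counter

def rank_frequency(text):
    """Rank words by frequency; under Zipf's law, frequency is roughly
    proportional to 1/rank, so rank * frequency is roughly constant."""
    words = text.lower().split()
    return Counter(words).most_common()

sample = "the quick brown fox jumps over the lazy dog the fox"
for rank, (word, freq) in enumerate(rank_frequency(sample), 1):
    print(f"{rank:2d}  {word:8s} freq={freq}  rank*freq={rank * freq}")
```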
Lotka's proposition led to a whole gamut of studies on scientific productivity, which gained momentum in the post-Second World War period. Such studies have been made from different angles: the impact of social change on scientific productivity, the relationship of publication output to scientific recognition, the identification of elites in different disciplines, the occurrence of discoveries in different cultures, and so on.
The last two laws are finding increasing application in the current web and electronic environment, while bibliometrics is gradually giving way to scientometrics and other metric studies for the evaluation and assessment of individual, institutional and national productivity and the impact of scientific research.
10. Impact of H-Index
The h-index (also known as the Hirsch index) is "an index that quantifies both the actual scientific productivity and the apparent scientific impact of a scientist"; e.g., an h-index of 20 means the researcher has 20 papers, each of which has been cited 20 or more times (Whitton, 2013). The index is based on the set of the scientist's most cited papers and the number of citations they have received in other people's publications. The h-index eliminates the disadvantage of single-number criteria such as the total number of papers or the number of significant papers, and it tends to be proportional to the academic age of the researcher.
For example, if a scientist has written 50 papers, 30 of which have achieved 30 or more citations (and the remainder fewer), his or her h-index is 30. The h-index of an individual scientist is thus defined as the largest number h such that h of his/her publications have each been cited at least h times in the scientific literature.
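A minimal Python sketch of the computation (the function name and citation counts are illustrative):

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Six papers cited 10, 8, 5, 4, 3 and 0 times: four papers have at
# least four citations each, but not five with five, so h = 4.
print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4
```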
11. Impact of the obsolescence rate of documents in different subjects
Citations in subsequent literature and usage patterns in libraries are considered two indicators of the obsolescence of literature. Analysis of citations by the age of the cited documents can show the useful life of a document. To measure the decay or obsolescence rate of documents, the concept of 'half-life' has been borrowed from nuclear physics: the half-life of a literature is the period during which half of its currently cited documents were published. Using this measure, Burton and Kebler suggested a range of half-lives for different subjects; a fast-growing subject has a shorter half-life than an established discipline. Their study showed the half-life of metallurgical engineering to be 3.9 years, while that of botany is 10 years. These time scales are highly useful in the planning of library holdings.
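One common way to estimate half-life is as the median age of cited references; the following Python sketch assumes that reading (the function name and data are hypothetical):

```python
import statistics

def citation_half_life(citing_year, cited_years):
    """Median age of the cited references: half of all citations go to
    documents younger than this, half to older ones."""
    ages = [citing_year - y for y in cited_years]
    return statistics.median(ages)

# Hypothetical: ten references cited in a 2007 paper
cited = [2006, 2005, 2005, 2004, 2003, 2001, 1999, 1996, 1992, 1985]
print(citation_half_life(2007, cited))  # -> 5.0 years
```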
12. Summary
Bibliometric methods, and especially the bibliometric laws, have been used since the advent of the field and have attracted the interest not only of LIS professionals but also, in good measure, of pure scientists; the best example is the prolific writing of Prof. P. Balaram in his editorials in Current Science. It is well established that bibliometric techniques can be used to assess and evaluate scientific research, products and productivity. However, a caution must be observed: these are only empirical studies based on quantitative data, and they cannot intrinsically evaluate the quality and application of research. Bibliometrics is now gradually giving way to its descendant fields, informetrics and scientometrics, which concentrate more on indicators, collaboration and public policy: precisely the study of the science of science. The long record of research output in these areas nevertheless cannot be ignored, and it offers indications of fruitful and more rational applications in the assessment and evaluation of the productivity and impact of research. In this context it is important to consider not only the quantitative data of scientific output but also the social, economic, educational and even political conditions of countries when assessing them, rather than taking quantitative data on research productivity, and its reflection in various awards and qualifications, for granted.
The time has now come for bibliometricians, informetricians and scientometricians to come together and engage in a fruitful exchange of ideas, with the objective of promoting a research culture in the quantitative studies of library and information science, and in the quantitative study of science in general: particularly science policy, science programmes, science administration and the socio-economic and educational culture of nations. It is all the more important to give serious thought to practising these methods in day-to-day activities relating to research impact. Econometrics, psychometrics and sociometrics are well defined disciplines and key subjects because of their practical applications and their usefulness to society; evaluative bibliometrics, too, has a vital role to play in assessing the productivity and impact of research.
13. References
- Price, D.J. de Solla. Science since Babylon. New Haven: Yale University Press, 1961.
- Price, D.J. de Solla. Little Science, Big Science. New York: Columbia University Press, 1963.
- Lancaster, F.W. Bibliometric Methods in Assessing Productivity and Impact of Research. Bangalore: Sarada Ranganathan Endowment for Library Science, 1991.
- Garfield, Eugene and Sher, I.H. New factors in the evaluation of scientific literature through citation indexing. American Documentation 14(4), 1963, pp. 195-201.
- Narin, F. Evaluative Bibliometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity. Cherry Hill, NJ: Computer Horizons, 1976.
- Sabarathnam, V.E. A study on the process of development and dissemination of dry land agricultural technology. Ph.D. Thesis, Andhra Pradesh Agricultural University, Rajendranagar, Hyderabad, 1987.
- Lotka, A.J. The frequency distribution of scientific productivity. Journal of the Washington Academy of Sciences 16, 1926, pp. 317-323.
- Gupta, B.M. and Sangam, S.L. Contribution and Impact of Karnataka University Publications during 1999-2008: A Comparative Study with Three Other Universities of Karnataka. Dharwad: Karnatak University, 2009.
- Garg, K.C. and Rao, M.K.D. Bibliometric analysis of scientific productivity: a case study of an Indian physics laboratory. Scientometrics 13, 1988, pp. 261-269.
- Sangam, S.L. and others. Indicators for demographic research: a cross-national assessment. Journal of Library and Information Science 30(1&2), 2005.
- Whitton, Michael. University of Southampton fact sheet on Google Scholar and calculation of the h-index, January 2013.