21 Explanation of Quantitative Data

Sudarshana Sen

  1. Objective

 

In this module you will learn the philosophies that help explain quantitative data. It ponders on the epistemological issues the researcher should keep in mind while trying to understand quantitative data.

 

  2. Introduction

 

In quantitative research, particularly in the context of explaining data, causation, reliability, validity, prediction and, sometimes, experimental control are crucial to fulfil the demands of a science. However, there is little consensus on the nature of the theory to be used to explain quantitative data. Some researchers take up a deductive model; others prefer an inductive theoretical perspective. These are the two widely used models of analyzing data (Baker 1996). One approach is called ‘understanding’, as espoused by Weber, Dilthey and Windelband. The other is the positivist tradition, where prediction is the key to explanation. The proponents of this perspective argue that all theories must have predictive power. In this approach, the key component in achieving prediction is ‘control’: if the researcher is able to control the effects of strategic variables, s/he should equally be able to determine the events and record the details of the conditions under which these events take place. Knowledge of necessary and sufficient conditions is important for manipulating the conditions leading to causality.

 

But most social scientists have had limited success in predicting and controlling events. This is despite the fact that effective prediction may lead to more adequate theoretical formulations and research procedures. The nature of data in social research, however, very often imposes limits upon prediction and control (Bailey 1994).

 

The function of prediction is to resolve a puzzle on which our knowledge is poor or doubtful, or to test the explanatory power of a scientific theory. When a historian sets out to explain a historical event, s/he seeks information about what the people related to that event believed, and about their purposes, motivations and goals in acting in a particular way. A historian can explain adequately when s/he can see the resemblance between particular people’s actions and a general notion of what a person can do in the given situation. Explanation, then, tries to connect beliefs, motives and actions. Sociologists likewise seek to objectify and analyse the taken-for-granted rules of everyday living. The analysis is judged by whether or not it coheres with the ideology and normative structure of the society. In a broad sense, they explain the utility of the scientific method in terms of its logical and meaningful relationships to the society at large (May 2009).

 

This module will seek to uncover the nuances of the perspective that tries to understand quantitative data. The possibilities of finding causation and predicting outcomes help the researcher to locate the possible cause(s) of an event and predict the future. This helps in policy decisions and initiating changes in society.

 

  3. Learning Outcome

 

This module will try to enlighten students on the available philosophies for explaining quantitative data. It ponders on the epistemological issues that the researcher should keep in mind while trying to understand data gathered from a quantitative methodological perspective.

 

  4. What is Quantitative Data?

 

In order to understand what constitutes quantitative data, we first need to know its sources. The most common methods employed in quantitative research are surveys and experiments. Three other techniques of data collection used in such research are secondary analysis of official documents, structured observation and content analysis. All of these methods are used with a desire to produce data which can be analysed in a logical, structured way similar to that of the natural sciences. Quantitative data are gathered on the assumption that the social world can be observed, recorded and effectively measured (Baker 1996). This is carried out with operationalised concepts, so as to produce measures or indicators. The questions in a survey, the measurement scales in an experiment and the counts in structured observation are all tools concerned with closing the data down into a format that yields numerical data amenable to statistical analysis. However, not all quantitative data start out as numerals; data become numerical when the researcher chooses to express them in numerical form.

 

Let us take an example here. The preference of a group of students for the colour ‘pink’ is not in itself quantitative data unless the response ‘pink’ has been taken from a list of different colours, each of which has been numbered, so that the colour ‘pink’ happens to have a numerical code, say ‘6’ (it can be any numeral), attributed to it. When attributing such codes, the researcher seeks to transform the format of the data from textual to numerical so that it can be analysed statistically. Thus, in this case, the researcher may find out how many students have opted for the colour pink as against the other colours. This treatment of data prior to analysis, in order to make it numerical, suggests that data are not inherently quantitative or qualitative; rather, the analysis of the data is carried out in a quantitative or qualitative way. While some data may present themselves in unprocessed form (like the colour pink stated above), it is the decision to quantify the data which makes their analysis quantitative. We may thus say that quantitative data are data that lend themselves to numerical analysis.
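The coding step described above can be sketched in a few lines of Python. The colour list, the codes and the responses below are purely illustrative, not drawn from any actual study:

```python
# Illustrative coding scheme: each colour option is assigned a numerical code.
colour_codes = {"red": 1, "blue": 2, "green": 3, "yellow": 4, "white": 5, "pink": 6}

# Hypothetical textual responses from a group of students.
responses = ["pink", "blue", "pink", "green", "pink", "blue"]

# Transform the textual data into numerical form for statistical analysis.
coded = [colour_codes[r] for r in responses]
print(coded)            # [6, 2, 6, 3, 6, 2]

# Count how many students opted for pink (code 6) as against other colours.
pink_count = coded.count(colour_codes["pink"])
print(pink_count)       # 3
```

The data only become quantitative at the moment of coding: the same responses could equally be analysed qualitatively as text.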

 

  5. The Logic of Science

 

Research methodology involves presenting the rules of procedure for the collection of data and their analysis. The rules are impersonal, which means that two researchers working with the same approach should arrive at the same results. It is thereby hoped that individual biases can be excluded from the research process. In this way, methodology tries to standardize the practice of the social sciences (Henn et al. 2006). It is commonly assumed that theories survive because they explain facts; but a good theory is potentially falsifiable, not just by contradictory facts but by the combination of contradictory observations with rival theories that explain them. Theorizing, therefore, is an inevitable part of the research process; gathering facts alone is never enough. Scientific method systematically seeks to eliminate the individual scientist’s standpoint, not by an impossible effort to substitute objectivity, but by substituting rules for it: inter-subjective criticism, debate, measurement, estimating parameters, logical induction and deduction, and so on. The scales of measurement and the bases for criticizing or rejecting claims act as the primary foundation for the objectivity propounded. So criticism is directed not at what an item of information says but at the method by which the item was produced. The scientific method rests on a dual appeal: one to the method, the other to the events that are observable. From these two levels, science strikes at individual bias to espouse a value-free, neutral, objective standpoint (Cargan 2008, Sjoberg 2009).

 

Self-Check Exercise – 1

  1. What is a methodology?

A methodology is the philosophy of using methods. It is the rules that govern the methods of data collection and analysis.

  2. What is Quantitative data?

Quantitative data are data that can be expressed and analyzed in terms of numbers. Quantitative data can be structured in terms of the logic used in the natural sciences.

  3. What is the logic of science?

Science seeks to arrive at the same results using impersonal rules and standardized practices. It aims at objective knowledge, using deductive and inductive logic.

 

  6. Types of Explanation of Data

 

Radcliffe-Brown (1963) has listed seven types of explanation which are used in social research. First, genetic explanation consists of explaining why a phenomenon exists in its present form by tracing its historical development from an earlier form. Second, intentional explanation is the simple everyday practice of a person explaining his/her intentions in acting in a particular way. Third, dispositional explanation accounts for an individual’s actions in terms of dispositions, for example why a person is selfish, and so on. Fourth, explanation through reasons is also used to explain the puzzling actions of individuals; it tries to answer the question of ‘why’ a person acts in a particular way. Fifth, functional explanation explains the functioning of society as a system; a classic example is Davis and Moore’s work on social stratification (1970 [1945]). Sixth, one may explain a relationship between two or more variables through empirical generalization: an empirical generalization about a particular relationship is often extended to say that the observed relationship holds in all cases. Seventh, formal theory is a theoretical form used in the physical sciences. It is a deductive system of theory built from two types of statement: axioms and theorems. Axioms are treated as ‘givens’ that are assumed to be true and are never tested directly. Theorems are not stated at the outset but can be logically derived from a set of axioms. So when researchers try to test theories, they deduce hypotheses from the theories and test the hypotheses.

 

In social science too, it is not the theory but its application which is evaluated. The ultimate criterion of a good theory is that it should be testable. When evaluating two competing theories explaining the same phenomenon, the clearer, less ambiguous one is preferred. If both competing theories explain the phenomenon equally well, but one is simpler and less complexly presented than the other, the simpler theory is considered better. A researcher has to choose between paradigms (frames of reference for research) and show her/his commitment to the one s/he is comfortable with and which appears more logical and clear. In this module we will discuss various strands used in the quantitative approach other than Positivism, to ensure that the student understands that there are competing paradigms to which s/he can adhere.

 

  7. The Philosophies of Explanation used in Quantitative Data

 

The philosophy of science is concerned with the underlying logic of the scientific model. It contains two parts: ontology and epistemology. Ontology deals with the question ‘What exists?’, while epistemology deals with the questions ‘What can be known?’ and ‘How can we know it?’ The answers to these questions are highly contested and are spread over a number of theoretical paradigms. The varieties of the positivist approach have two distinguishing features. One of these states that explanation can only be of observable and measurable events. This ontological position defends facts, where observation acts as a mirror, i.e. what is being measured cannot be changed by being observed, and it discards any explanation based on authority. This strain of thought calls for observations that are independent of any theoretical statements that might be constructed around them, and contends that beliefs, emotions and values are outside the realm of science and therefore not fit to be part of scientific research. Objectivity, along with generalization and explanation, is a fundamental characteristic of science. Research in social science is also more than a mere reflection of opinions and prejudices. Rather, it tries to substantiate, refute, organize or generate theories and to produce evidence which may challenge not only our own beliefs but also those of society in general. Objectivity is, therefore, defined as the basic conviction that there is some permanent, ahistorical framework to which we can ultimately appeal in determining the nature of rationality, knowledge, truth, reality, goodness or rightness (May 2009).

 

The other approach states that explanation can be based on regularity, i.e. one event causes another if the second regularly follows the first. It contends that generalized knowledge is obtained by identifying such constant regularities. Quantitative research therefore aims at finding patterns in the form of associations between variables. This epistemological outlook prefers universal regularity to causation. In these two approaches, the truth or falsity of theoretical statements is not the issue; rather, knowledge is judged true because it is useful. The underlying logic is that being able to predict means being able to explain, and being able to predict allows control. The first characteristic of the positivist approach is that social phenomena can be explained through systematic causal analysis, which presupposes that every effect has an antecedent cause and vice versa. This is something borrowed directly from the natural sciences (Bailey 1994). A social scientist of this inclination studies a social phenomenon in the same way as a physicist or a physiologist does when s/he probes an unexplored area. Objectivity is then characterized in terms of the prediction and explanation of the behaviour of phenomena, and the pursuit of objectivity requires that the researcher be detached from the topic under study. This produces a set of true, precise and wide-ranging laws of human behaviour. This approach tests existing theories by establishing a hypothesis and then collecting data to assess how appropriate the initial theory actually is. Karl Popper (1959/1990) called this approach the hypothetico-deductive method.

 

This method of inquiry adds two more important characteristics to the positivist approach: first, it is concerned with applying the general (theory) to particular cases, and second, the criterion for valid enquiry is governed by what can be tested, not merely by what can be observed. In its search for precision, this approach favours quantitative measuring instruments such as the survey questionnaire and content analysis. The research is highly structured, typically large in scale and based on statistical analysis.

 

In The Logic of Scientific Discovery (1959/1990), Karl Popper outlined the logical basis for empiricism. He pointed out that the researcher cannot rely on the number of instances gathered in confirmation of a theory, because a single instance opposed to what has been gathered so far can overturn the next test of prediction. He undermined the theory of induction, as it does not allow us to predict the as-yet unknown on the basis of what is known, or to predict the future on the basis of what has happened in the past. Popper reversed the traditional style of establishing empiricism with his argument that one can never verify a hypothesis, but can only falsify it.
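Popper’s asymmetry between verification and falsification can be sketched in a few lines of Python. The ‘all swans are white’ hypothesis used here is a classic textbook illustration of the point, not an example taken from this module:

```python
# A universal hypothesis: "all swans are white".
def hypothesis_holds(swan_colour):
    return swan_colour == "white"

# No number of confirming observations verifies the hypothesis...
observations = ["white"] * 1000
all_white_so_far = all(hypothesis_holds(s) for s in observations)
print(all_white_so_far)   # True, yet the hypothesis is still not verified

# ...but a single counter-instance is enough to falsify it.
observations.append("black")
falsified = not all(hypothesis_holds(s) for s in observations)
print(falsified)          # True: the hypothesis has been falsified
```

A thousand confirmations leave the hypothesis merely unrefuted; one black swan settles the matter, which is why Popper gives falsification logical priority over verification.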

 

The aim of positivism is to collect data on the social world from which we can generalize and explain human behaviour patterns through the use of theories. It shares with empiricism the belief that there are facts which we can gather from the social world independently of how people interpret them. The difference between empiricism and positivism lies in the realm of theory: data within positivism are theory-driven, while empiricism is a method of research which does not refer explicitly to the theory guiding its data collection procedures.

 

The positivist approach does not confine explanation to observables only. Usually researchers in this paradigm want to use measurement to infer beyond the immediate. Another important area is causal inference. The approach does not limit itself to correlational associations and the prediction of future events on the basis of past events. It applies a logical framework for thinking about causality: under what conditions valid inference can be made, and how to find evidence that the ‘switching on’ of one process and the ‘holding off’ of others lead to a difference in the outcome (Pearl 2009). This quantitative approach appears useful in intensive work on determining how things are caused. The positivist approach to the study of the social world continued until the 1960s. It was not the only approach, but it was the one most in use. As we have seen, positivists tend to take a ‘macro’ approach to the study of society; they try to examine the relationships between the economy and different parts of the social structure. There is no scope for methods that see the world through the eyes of individuals, such as participant observation and unstructured interviews.

 

7.1. Positivism

 

Positivism believes that society is part of nature and hence is governed by the same natural laws which operate in nature. So it wants to study social phenomena with the help of the models and paradigms of natural science. You know that objectivity is one of the prerequisites of any experiment in natural science. This objectivity means complete detachment from the subject of study. For example, when a researcher analyzes the composition of water, it does not matter whether the researcher likes to drink water or not. Similarly, from the positivist perspective it is crucial to ensure that the social science researcher does not allow his or her preferences, values, beliefs and interests to interfere in the process of research. Only if this is achieved can the researcher be sure of theory-free observation, and hence be confident that knowledge is immunized and protected from the unwanted intrusion of subjectivity. Popper and Kuhn (1970) pointed out that in practice this is simply impossible to achieve: there can be no ‘theory-free’ observation, no standing above the influence of the knowing subject. As a result of this ongoing debate, the successors of positivism are commonly known as ‘post-positivism’, ‘post-empiricism’ and ‘neo-realism’.

 

7.2. Critical Realism

 

Realism shares with positivism the aim of explanation rather than understanding (May 2009). Realism argues that the knowledge people have of the social world affects their behaviour, and that causes do not simply determine actions but must be seen as tendencies that produce particular effects. The task of social science research is not simply to collect data by observing the social world, but also to explain these data from theoretical frames which examine the underlying mechanisms that inform people’s actions. The radical alternative to positivism, and a branch of realism, is critical realism, which believes that there is an independent reality that can be studied objectively. Critical realism provides deeper levels of explanation than a causal necessity based on mere regularity. Critical realism has a multi-layered and stratified ontology. It sees the world, and knowledge about the world, as three overlapping domains: the empirical, the actual and the real. The ‘empirical’ contains those aspects of reality that can be experienced and observed, directly or indirectly. The ‘actual’ contains the aspects of reality which may not be experienced. The ‘real’ consists of the deeper structures or tendencies that generate a phenomenon. The actual is the outcome of the real, and the events of the actual are the constituent parts of the empirical. The three domains are therefore intransitive, operating independently of our knowledge of them, but our knowledge of them is transitive and capable of being changed. The mechanisms that operate in the three domains behave in a particular way because of the structure of the underlying object. The aim of critical realism is to uncover these causal powers and structures (May 2009).

 

7.3. Post-Positivists

 

A list of rules for these post-positivists is generally in use for expanding quantitative data, i.e. moving beyond observing and naming towards knowing (Baker 1996).

First, abstraction and classification are key processes, and the researcher must begin by differentiating the necessary from the contingent. This requires more sophisticated theory, which helps the researcher to realize that both structures and agents have causal powers.

Second, work with open systems, which involves guessing what the underlying operative causal mechanisms are. This is the form of reasoning used by detectives. In doing so, the researcher moves from an observed phenomenon to positing some underlying mechanism via a trial mechanism, which is called a model. At this point, the researcher is close to the evidence or logic needed to explain the phenomenon. It is necessary to open the Pandora’s box of possibilities and unravel the observable regularities, while remaining sensitive to the context in which the patterns have been found, because regularities cannot explain themselves.

Third, go back and forth between the theory and the empirical data to find the empirical patterns suggested by the data. The researcher must not pretend that the estimates arrived at are anything more than fitted values, or that they represent universal regularities across time and space. At this point, the researcher is expected to develop a theory.

Fourth, look for appropriate techniques for doing so. Arriving at an expected explanation does not mean confirmation. Exploring the patterns helps the researcher to bring the data into sharper focus, to look at the patterns more closely and to pay attention to anomalies. Confirmation means using significance testing to confirm well-developed hypotheses and avoid unreliable results.

Fifth, find the areas where the assumptions are not met, i.e. where the information could not be explained by the model.

Sixth, learn from the mistakes committed and unravel the potential areas where the model is not working (Cox and Jones 1981).

Seventh, guard against chance findings that can be interpreted as genuine patterns.

Eighth, find ‘rules’ that link variables by maximizing the goodness-of-fit between the model and the observed data.

Ninth, undertake experiments that allow precise causal questions to be formulated, deciphering what needs to be controlled and strategizing accordingly. If experimental designs are not possible, the best option is to choose the most efficient design that requires the least resources to obtain evidential data (Jones and Subramanian 2000).

Tenth, explanation is in itself a social process, in which organized distrust produces trustworthy results (Campbell 1984).

The rational grounds for preferring one theory over another are its explanatory power, comprehensiveness, degree of supporting evidence and coherence with other bodies of knowledge. Such criteria of judgment have a long history and cannot be reduced to the goodness-of-fit between observed and predicted data. Quantitative analysis can play a part by aiding the collection of reliable evidence, dealing with uncertainty and anomalies, and setting out a logical framework for making causal inferences.

 

Hammersley (2008) has noted a shift away from the old school towards something new and less naïve. Some contemporary approaches have adopted more subtle approximations to the ‘truth’. For the more modern positivists, the conventional criteria of validity, reliability and generalizability provide important working models for social science research. The new approach has not abandoned the legacy of the old empiricist tradition, but has transcended positivism in ways that are logically defensible, practically feasible and epistemologically neo-foundational.

 

Self-Check Exercise 2

  1. What methodological positions are employed to explain quantitative data?

The philosophies generally used to explain quantitative data are Positivism, Critical Realism and Post-Positivism.

  2. What is the basic stance of positivist perspective in explaining data?

The two pillars of the positivist understanding of social phenomena are empiricism and objectivity. Positivism is based on the logic employed in natural science and is opposed to the use of subjectivity in explaining social phenomena. It propagates a value-free position for the researcher with respect to the object of study.

  3. What is Critical Realism?

Critical realism believes that there is an independent reality that can be studied objectively. It holds that ontological reality is an overlap of three domains: the empirical, the actual and the real.

 

  8. Issues in Understanding and Describing Quantitative Data

 

Researchers who adopt a positivist approach engage actively with the categories of analysis within the epistemological debate stated above. This means the researcher has to look closely at the description of the tools whereby s/he can observe systematically. It means critical reflection on how such systematization can be achieved for the phenomenon under study. It means reflecting on the research process and its location, in order to judge how far, and in what way, the observations that arise may predict events located in other places and occurring at other times. It also means finding ways to represent human action and experience that draw upon the representational possibilities arising from the use of numbers. It means that sampling is important. It does not imply an epistemological commitment to the belief that some absolute success or truth can be achieved in relation to reliability, validity or generalization; i.e. a proof of reliability, validity and generalizability cannot in itself prove the authenticity of the research (Baker 1996). Nor is there an assumption that there can only be numerically grounded representations of social phenomena. But the positivist tradition presses researchers to be reflective about their methods, their relationships with participants, the conceptualization of their research contexts, and so on. It means striving for transparency about the use of research tools and the management of the data arising from their use. It also entails triangulation between different methodologies.

 

Quantitative researchers require knowledge of a range of methods. Such research can employ a number of different designs, one of which is selected at the outset depending on the kind of research question being answered. Experimental designs involve the manipulation of at least one independent variable to see whether or not it has any impact on the dependent variable. A cross-sectional design is often used in survey research, involving the collection of quantitative data on at least two variables, at one point in time, from a number of cases. These data are used to look for patterns of association between variables. A longitudinal study, as an extension of a cross-sectional study, is one where a survey is administered repeatedly at regular intervals over a number of years in an attempt to establish causality (Bailey 1994).

 

Another important criterion of quantitative research is the sampling design. There are a number of sampling strategies, from which the researcher has to decide which is appropriate for the research question. Researchers should explain the sampling strategy used so that readers can judge the potential bias and limitations of the research project.
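A simple random sample, the baseline sampling strategy against which others are judged, can be sketched with Python’s standard library. The sampling frame of 1,000 numbered respondents is hypothetical:

```python
import random

# Hypothetical sampling frame: 1,000 respondents identified by number.
population = list(range(1, 1001))

# Simple random sampling: every member has an equal chance of selection.
# The seed is fixed only so that the illustration is reproducible.
random.seed(42)
sample = random.sample(population, k=50)

print(len(sample))        # 50
print(len(set(sample)))   # 50 -- sampling without replacement, no repeats
```

Reporting how such a sample was drawn (frame, method, size) is exactly what allows readers to judge the bias and limitations the paragraph above mentions.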

 

The other important decision is the design of the questionnaire. Questionnaires provide a way of gathering structured and unstructured data from respondents. The questionnaire is a standardized way of achieving reliability and validity in research. Data from questionnaires can be pre-coded, with clear instructions for the respondents to follow.

 

There are a number of types of data that can be analysed in quantitative research. Nominal (or categorical) data do not have any numerical meaning. Ordinal data have a rank order and are represented numerically. Both of these are discrete data. Continuous data, which have genuine numerical values, are of two types: interval data and ratio data. Interval data have no true zero point, whereas ratio data have a true zero point.
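The four levels of measurement, and the operations each one supports, can be illustrated with a short Python sketch. The variables and values are hypothetical:

```python
# Illustrative variables at the four levels of measurement.
nominal  = ["pink", "blue", "pink"]   # categories only: counting is meaningful
ordinal  = [1, 2, 3]                  # ranks, e.g. low < medium < high: order is meaningful
interval = [20.0, 25.0, 30.0]         # e.g. temperature in Celsius: no true zero
ratio    = [0.0, 50.0, 100.0]         # e.g. income: true zero, so ratios are meaningful

# Counting is valid for nominal data.
print(nominal.count("pink"))          # 2

# Ordering is valid for ordinal data.
print(max(ordinal))                   # 3

# Differences are valid for interval data (25 °C is 5 °C warmer than 20 °C),
# but ratios are not (30 °C is not "1.5 times as hot" as 20 °C).
print(interval[1] - interval[0])      # 5.0

# Ratios are valid only for ratio data, because of the true zero point.
print(ratio[2] / ratio[1])            # 2.0
```

The level of measurement determines which statistical tests are legitimate, which is why this classification matters before any analysis begins.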

 

The research design in quantitative research has to start with a framed hypothesis. This is the reformulation of the research question into a declaration of the work to be done, including a prediction of the possible outcomes, so that the result can be operationalised and tested statistically. Since research in quantitative terms is planned top-down (a deductive approach), it is wise to consider the intended outcomes at the outset. The two hypotheses are the null hypothesis and the alternative hypothesis. It is the null hypothesis that is tested: statistical tests enable researchers to reject the null hypothesis at specific probabilities, under the assumption, stated in the null hypothesis, that there is no difference between the variables. After collecting data, the researcher has to decide on the statistical procedures to test whether the data are reliable and valid and have met all the criteria required for a statistical test. To do so, the first task is to carry out a range of descriptive tests; the second is to select and compute appropriate statistical tests; and the final one is to conduct appropriate tests of significance. Various tests of significance exist. The most common of them is chi-square, which tests for statistical significance in cross-tabulation tables. At this point, the researcher seeks to make inferences from the sample about the population.
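As a sketch of the chi-square step described above, the test can be run on a hypothetical cross-tabulation with SciPy (assuming the `scipy` package is available; the counts are invented purely for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross-tabulation: colour preference (rows) by group (columns).
table = np.array([[30, 10],
                  [20, 40]])

# Chi-square test of independence on the cross-tabulation.
# The null hypothesis: the row and column variables are independent.
chi2, p, dof, expected = chi2_contingency(table)

# A p-value below the chosen significance level (commonly 0.05) lets the
# researcher reject the null hypothesis of independence.
print(round(chi2, 2), round(p, 4), dof)
```

With these invented counts the p-value falls well below 0.05, so the null hypothesis of independence would be rejected; with more balanced counts it would not be.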

 

Here it is important to remember that establishing causality and observing that a relationship exists between variables are not the same thing. The notion of causality concerns a relationship between two variables based on cause and effect (Bailey 1994): the values of one variable determine the values of the other. Simply identifying a relationship between two variables is not sufficient reason to suggest that one variable effects change in the other. The three criteria for establishing causality are:

 

i) It is necessary to establish that there is an apparent relationship between two variables, i.e. it is necessary to show that the distribution of values of one variable corresponds to the distribution of values of another variable;

ii) It is necessary to show that the relationship is non-spurious, i.e. the variation exhibited by each variable is not affected by a common variable;

iii) It is necessary to show that the cause precedes the effect. In survey research these criteria are more difficult to satisfy, since the researcher collects data on many variables at one particular time. Here, although an association between variables can be demonstrated easily, it is difficult to rule out spurious relationships and to establish the time-order sequence. In survey research, a spurious relationship can be ruled out through multivariate analysis, and the time-order problem addressed only if data are collected in different time frames. It therefore remains difficult to establish causality in survey research.
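The difference between an apparent association and a spurious one can be illustrated with simulated data, in the spirit of the multivariate analysis mentioned above. The data below are synthetic, not from any survey: a common variable `z` drives both `x` and `y`, so their bivariate association is spurious by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a common variable z drives both x and y.
n = 5000
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)
y = z + rng.normal(scale=0.5, size=n)

# The raw bivariate association between x and y looks strong...
raw_r = np.corrcoef(x, y)[0, 1]

# ...but controlling for z (regressing x and y on z and correlating the
# residuals, i.e. a partial correlation) makes it vanish.
x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
partial_r = np.corrcoef(x_resid, y_resid)[0, 1]

print(round(raw_r, 2))      # strong, around 0.8
print(round(partial_r, 2))  # near zero once z is held constant
```

This is criterion (ii) in miniature: the x–y relationship fails the non-spuriousness test because the variation in both variables is produced by the common variable z.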

 

  9. Critiques of the Quantitative Approach to the Analysis of Data

 

Positivism does not pay much attention to the details of people’s inner mental states. Realism may refer to people’s consciousness insofar as it reflects the conditions under which they live, how structures are reproduced, and how their desires and needs are frustrated. Researchers who refer to subjective states, or to the inner world of experience rather than the world ‘out there’, focus on subjectivity, where it is the meanings that people give to their environment, and not the environment itself, that become important. Max Weber introduced a methodological tool to combat the exigencies of objectivity in research (Shils and Finch 1997: 90). Weber argued that an empathic understanding of the subject, and a vision that sees from the subject’s point of view, is essential for research in social science, since the subject-matter of the physical and the social sciences is different. Feminists have challenged the notion of a unified body of thought on the social world. They have focused on the gendered nature of relations, giving central importance to emotion rather than reason in every explanation. The ideas of rigorous research, disengagement and detachment from the subject of study are also of concern to them. They argue against simply seeing people as sources of data. Feminists argue that research is a two-way process in which there is no distinction between the researcher and the researched: the two see each other from the same plane.

 

The postmodernists, on the other hand, see knowledge as both local and contingent, since there are no standards beyond particular contexts by which research findings can be measured or weighed according to their potential. Postmodern researchers are anti-foundationalists and claim that there is no universal standard to which science may appeal in order to validate its claims.

 

Self-Check Exercise 3

 

  1. What are the different criteria of a quantitative research design?

A quantitative research design is based on the assumptions of the positivist approach. It entails knowledge of the range of methods, sampling designs, decisions regarding measurement, and the employment of statistical tests. It ensures that the data are collected, measured and analysed according to the logic of natural science.

  2. Who argues that the quantitative approach has its limitations?

The critical approaches of the Feminists, Interpretivists and Post-Modernists have analysed the limitations of the quantitative methodological position.

  3. How do Feminists look at the quantitative study of social phenomena?

 

Feminists have challenged the notion of a unified body of thought on the social world. They have focused on the gendered nature of relations, giving central importance to emotion rather than reason in every explanation. The ideals of rigorous research, disengagement and detachment from the subject of study are also of concern to them. They argue against simply seeing people as sources of data. To them, research is a two-way process in which there is no distinction between the researcher and the researched.

 

  1. Summary

 

Quantitative data is gathered and analysed from a perspective grounded in the logic of natural science. The methodological position of this approach is known as positivism, and it is based on the logic of empiricism and objectivity. Critical Realism and post-positivism are two other approaches often employed with, or alongside, the positivist philosophy. This approach has been criticized as a way of studying social phenomena because it disregards subjectivity and empathy. Feminists, Interpretivists and Post-Modernists have been critical of the logic which holds that there is an objective reality ‘out there’ to be interpreted from a distance. The role of the researcher as a detached observer is criticized on the ground that the researcher and the researched share the same reality to understand, interpret and see.

 

  1. References
  • Bailey, K. D. Methods in Social Research. New York: Free Press, 1994.
  • Baker, T. Doing Social Research. New York: McGraw Hill Inc., 1994.
  • Brown, R. Explanation in Social Science. Chicago: Aldine, 1963.
  • Cargan, L. Doing Social Research. Jaipur: Rawat Publishers, 2008.
  • Campbell, D. T. “Can we be scientific in applied social science?”, in Evaluation Studies Review Annual, edited by R. F. Corner, D. G. Altman and C. Jackson. Thousand Oaks, CA: Sage, 1984. p. 26-48.
  • Cox, N. J. and Jones, K. “Exploratory data analysis”, in Quantitative Geography, edited by N. Wrigley and R. J. Bennett. London: Routledge, 1981. p. 135-43.
  • Davis, Kingsley and Wilbert E. Moore. “Some principles of stratification.” American Sociological Review, 10, 2 (1970 [1945]): 242-9.
  • Hammersley, M. Questioning Qualitative Inquiry – Critical Essays. London: Sage, 2008.
  • Henn, M., Weinstein, M. and Foard, N. A Short Introduction to Social Research. London: Sage, 2006.
  • Jones, K. and Subramanian, S. V. “Observational studies and design choices”, in Epistemology, edited by G. M. Moon and M. Gould and colleagues. Buckingham: Open University Press, 2000. p. 70-85.
  • Kuhn, T. The Structure of Scientific Revolutions (2nd Edition). Chicago: University of Chicago Press, 1970.
  • May, T. Social Research: Issues, Methods and Process (Indian Reprint). New Delhi: Rawat Publishers, 2009.
  • Olsen, W. Realist Methodology. 4 Volumes. London: Sage, 2010.
  • Pearl, J. Causality (2nd Edition). Cambridge: Cambridge University Press, 2009.
  • Popper, K. R. The Logic of Scientific Discovery (14th impression). London: Unwin Hyman, 1990.
  • Shils, Edward A. and Finch, Henry A. (trans. and ed.). The Methodology of the Social Sciences (1903-17). New York: Free Press, 1997. p. 90.
  • Somekh, B. and Lewin, C. Theory and Methods in Social Research. New Delhi: Sage, 2011.
  • Yu, C. H. Philosophical Foundations of Quantitative Research Methodology. Lanham: University Press of America, 2006.