Science Becoming or Becoming Science: The Relationship Between Science and Political Goals
Mike Williams


Introduction

The purpose of this essay is to consider the philosophical underpinnings of social science and to relate this to what it means to validate the methodology that I will use in my research. I shall start by challenging the notion of scientific objectivity, and hence the need to validate one's work as scientific. Utilising an instrumentalist perspective, I shall argue that, in the final analysis, what science becomes or what becomes science is not so much an issue of which methods come closest to the truth as of which methods are perceived to be most useful in meeting paradigmatic information requirements linked to the attainment of some wider set of political goals. To illustrate the link between methodology and goals, I will utilise House's (1993) attempt to map the methodological evolution of evaluation studies in the USA onto the changing information requirements and goals of the American government. I will then conclude by outlining what I feel to be the main issues in validating methodology in the light of the instrumentalist perspective.

Challenging the notion of scientific objectivity

According to Barnes (1982), a student's cognitive commitment to science is the outcome of processes which are acquired through socialisation and inculcated by authority. This statement probably reflects your own experience of school, where science was not so much something that you found out as something that you were told and consequently believed in. Kuhn (1996) developed this idea of science as something we come to believe in through his historical analysis of scientific revolutions. He argued from his findings that the development of a scientific methodology is inextricably networked to, and predicated on, an array of assumptions, contingent hypotheses and facts which together constitute what he called a scientific paradigm (Kuhn, 1996).

Scientific validation of methodology cannot ignore this issue, as it is itself premised on an a priori belief that the object of investigation exists within an 'out there' reality defined by the parameters of truth and nature. Ironically, this basis of scientific thought cannot itself be scientifically (empirically) verified, for in order to prove that constancy pervades our existence, one must show that it holds in every place - an unattainable goal in an infinite universe. This bodes ill for anyone wishing to validate their work as scientific, as the activity is underpinned by an unverifiable and hence unscientific belief.

Inductivists, for example, posit that universal laws can be attained if the same observation is made consistently across a large enough number of settings. However, to be sure that your sample is large enough to allow you to generalise to all cases requires an a priori postulation of the relationship of the object under observation to the universe in which it exists - a postulation whose validity requires a leap of faith in the absence of any suitable method of verification.

Similarly, by positing that science should proceed by the falsification of hypotheses, positivism relies upon a methodology of hypothesis evaluation which depends on a universal metaphysic for the power of its absolute negation. However, as already argued, proving that something is universally applicable is not something humans are capable of. So, in positivist terms, to hold that any hypothesis is best is concurrently to make a statement of belief in the contingent hypotheses that underpin the falsification methods which establish the original hypothesis as the best. That this is the case is manifest in the way scientific communities tend to react to the falsification of a hypothesis by blaming the anomalous result on the tools, techniques, supporting hypotheses or any other contingency or belief upon which the falsifying methodology rests - rather than on the inadequacy of the hypothesis itself (Kuhn, 1996).

Scientific Status

Having questioned the idea that it is possible to prove objectively that one's methods afford a clear view of reality, we should consider the proposition that no essential property or realised principle separates science from non-scientific modes of information production. Science (as distinct from non-science) nevertheless continues to maintain an unparalleled cultural authority in having its definitions of reality and its value judgements prevail (House, 1993). Consequently, funding, influence and eminence all continue to accrue to the community which manages to validate its work as scientific. Science can thus be more accurately understood as a description of the social processes which constitute a will to power within the domain of information production (Barnes, 1982).

In trying to understand the social process of science, general observation suggests that the adherence of a community of information producers to an ostensibly scientific methodology is a prerequisite for achieving scientific status. However, as revealed by studies which have focussed upon the maintenance of borders within scientific communities, the ultimate factor in whether or not a community is attributed scientific status is whether significant others perceive it to be producing useful information (Collins & Pinch, 1979; Dolby, 1979; Wynne, 1979). Paraphrasing Kuhn, science is not what we know but what we think would be useful to know.

Paradigms may thus be viewed as resources which are used according to their perceived utility in providing information that will further certain aims. Barnes (1982) supports this view by arguing that it is society's wider goals and interests which help us understand the historical development of bodies of knowledge. An analysis of the evolution of evaluation studies serves as a good example. Evaluation studies grew in stature over the course of the twentieth century as governments used them either to provide scientific support for their policies or to substitute scientific reasoning for the political nature of their decisions (House, 1993).

Originally, evaluation studies developed within a zeitgeist of positivism, so in the formative years evaluators would abstract key concepts from the social process in question, designate quantifiable inputs and outputs, and then judge the success of a social programme by the extent to which its inputs correlated with its outputs. However, despite the ostensibly scientific nature of such studies, they rarely produced useful information (House, 1993). The context from which the variables were abstracted was always assumed and never investigated - what you might call an assumed (as opposed to black) box approach to social science investigation. This created a serious problem: whether their results were positive or negative, evaluators resorted to explaining them in the light of anecdotes and assumptions rather than in the light of any involved understanding and experience (House, 1993). Such an approach failed to grasp the contextual complexity, uniqueness and contingent nature of social process, so that many of the positive correlations attributed to social programmes by quantitative evaluation studies in the USA in the 1960s were rarely explainable and so tended not to be generalisable when applied to similar sites (House, 1993). Such studies had plainly failed to recognise that, depending on the social context, the operation of an identified social mechanism can produce quite different results, and that different mechanisms may produce the same empirical results (Sayer, 1992).

More often than not, the simplistic nature of the evaluator's abstraction of key concepts failed to take account of the idiosyncratic, and therefore a priori unknowable, elements of the context from which those concepts were abstracted. Another manifestation of quantitative evaluation's over-simplified abstraction of social mechanisms was thus the overabundance of negative results produced by such studies. Social contexts proved far too complex and idiosyncratic to be guessed at through the monolith of hypothesis-based research. Evaluators failed to realise that a negative correlation could be as much the fault of an unintuitive hypothesis as of the ineffectiveness of the social programme they were evaluating. Many social programmes were thus written off purely on the basis of the lack of a positive correlation between inputs and outputs, without any useful information being provided as to why this was so or how it might be rectified in future (House, 1993). In the words of House (1993, p.4), '[Evaluation studies] struggled with the seemingly conflicting demands of being scientific on the one hand and being useful on the other.'

According to House (1993), the positivist's standard conception of causation as applied to social phenomena came to be challenged, and the significance of qualitative methodologies was taken on board. Applied to evaluation studies, qualitative methodologies were less concerned with whether, in social process Z, inputs A introduced by social programme X would lead to outputs B, and more concerned with describing the processes and general relationships among the constituents of the contextual domain Z to which A, B and social programme X were all hypothesised to belong. The key feature of qualitative approaches was their use of the concept of understanding, so that social process came to be understood via the world of the actors who constituted it rather than via a framework of postulated motives (Young & Mills, 1979). Such an approach allows the researcher to develop a more detailed understanding of the process under evaluation by developing a more interactive relationship with it. On the matter of formulating research questions, qualitative methodologies allowed evaluators to switch from the more monolithic conceptions of social process posited by quantitative methodology to more flexible conceptions, which House (1993) argues helped evaluation studies meet the conflicting information needs of an emerging culturally pluralistic society.

So what does it mean to validate methodology?

The evolution of evaluation studies illustrates how the methodologies of a paradigm evolve in response to changing societal information requirements. Relating this point to the purpose of the essay, it suggests that proving the validity of one's methodology is not only an exercise in marrying oneself to the values of one's paradigm, but also, implicitly, an exercise in linking a particular brand of paradigmatic information production to a specific information requirement. Thus, while ostensibly the validity of a methodology may be measured against some supposed universal scale, in practice a methodology is seen as valid to the extent that the paradigm it constitutes is seen by significant others to be producing useful information.

Thus, when one makes an appeal to validity, one is ultimately appealing to the assumptions, values and politics of one's research community. This does not necessarily mean that validating methodology is solely a circuitous and self-referential paradigmatic exercise. As previously discussed, paradigms are linked to the wider goals of society. More than that, paradigms, as areas of thought, are coterminous with other paradigms, so that any individual positioned in a particular area of human thought stands in relation to a number of different strands of thought. There is clearly room for meaningful development, syncretism, originality of thought and even ambivalence in the field of methodology. The lack of a universal measure does not consign the concept of valid methodology to the dustbin of relativism, but exposes it to a plurality of overlapping and interlocking value positions.

Conclusion

Although I am inexperienced in the world of academia, from mere reflection I would dare to suggest that nowadays adherence to models of science is no longer necessary to achieve scientific status. Arguably there is no better evidence for this than the primacy of evaluation studies, the field with which I will be engaging over the next three years. Generally speaking, while evaluation studies may draw on different elements of scientific thought, the paradigm does not tend to subscribe to any one model of science for validation. That is because evaluation methods tend to be given authority through other means - by governmental and bureaucratic institutions which perceive utility in the information produced (House, 1993). This is perhaps evidence of a growing awareness that what becomes science, and what science becomes, is more a matter of what is perceived as yielding useful information to society at large than a matter of what is true.

Thus in this essay I have argued that there is nothing inherently scientific about science, in practice or in principle, and that to attempt to validate my methods as scientifically valid would seem a nonsense. I have taken an instrumentalist viewpoint to explain this, and I have used the evolution of evaluation studies to illustrate the inextricable relationships between paradigm, methodology, pre-defined information requirements and the need to further some objective. I have argued that the validation of methodology is a dual activity, comprising a show of allegiance to a particular paradigm of thought and a justification of that paradigm's utility in aiding some wider objective.

Therefore, I feel that one of the key tasks in validating methodology is to relate the methodology used to the pre-defined information requirements it will be used to meet and to the wider social goals it will be used to further. I have argued that there is a strong ceremonial element in the act of validating one's methodology, which can be seen as a marriage of the researcher to the body of knowledge to which she or he wishes to subscribe. However, this does not preclude latitude for debate over which methods are best for producing the kind of information required (a second task of validating methodology), but it does mean that ultimately my third and final task will be to justify the methodology I use in terms that will satisfy those whose opinions constitute the paradigmatic constraints within which I work (i.e. the likes of you).

References

Barnes, B. (1982) T.S. Kuhn and Social Science, London and Basingstoke: Macmillan

Collins, H.M. & Pinch, T.J. (1979) The Construction of the Paranormal: Nothing Unscientific is Happening, in Wallis, R. [Ed.] On the Margins of Science: The Social Construction of Rejected Knowledge, Keele: Sociological Review Monograph 27, pp. 237-270

Dolby, R.G.A. (1979) Reflections on Deviant Science, in Wallis, R. [Ed.] On the Margins of Science: The Social Construction of Rejected Knowledge, Keele: Sociological Review Monograph 27, pp. 9-48

House, E.R. (1993) Professional Evaluation: Social Impact and Political Consequences, London: Sage Publications

Kuhn, T.S. (1996) The Structure of Scientific Revolutions [3rd Ed.], Chicago: The University of Chicago Press

Sayer, A. (1992) Method in Social Science: A Realist Approach [2nd Ed.], London: Routledge

Wynne, B. (1979) A Study in the Legitimisation of Knowledge: The 'Success' of Medicine and the 'Failure' of Astrology, in Wallis, R. [Ed.] On the Margins of Science: The Social Construction of Rejected Knowledge, Keele: Sociological Review Monograph 27, pp. 67-84

Young, K. & Mills, L. (1979) Public Policy Research: A Review of Qualitative Methods, Bristol: University of Bristol



