Topic: Classic Texts: Imre Lakatos - Falsification/Scientific Research Programmes (1970)
rhinoceros
Archon
Classic Texts: Imre Lakatos - Falsification/Scientific Research Programmes(1970)
« on: 2003-06-13 18:29:32 »


Imre Lakatos

Falsification and the Methodology of Scientific Research Programmes (1970)








A Methodology of Scientific Research Programmes

I have discussed the problem of objective appraisal of scientific growth in terms of progressive and degenerating problemshifts in series of scientific theories. The most important such series in the growth of science are characterized by a certain continuity which connects their members. This continuity evolves from a genuine research programme adumbrated at the start. The programme consists of methodological rules: some tell us what paths of research to avoid (negative heuristic), and others what paths to pursue (positive heuristic).

Even science as a whole can be regarded as a huge research programme with Popper's supreme heuristic rule: "devise conjectures which have more empirical content than their predecessors." Such methodological rules may be formulated, as Popper pointed out, as metaphysical principles. For instance, the universal anti-conventionalist rule against exception-barring may be stated as the metaphysical principle: "Nature does not allow exceptions". This is why Watkins called such rules "influential metaphysics".

But what I have primarily in mind is not science as a whole, but rather particular research programmes, such as the one known as "Cartesian metaphysics". Cartesian metaphysics, that is, the mechanistic theory of the universe - according to which the universe is a huge clockwork (and system of vortices) with push as the only cause of motion - functioned as a powerful heuristic principle. It discouraged work on scientific theories - like (the "essentialist" version of) Newton's theory of action at a distance - which were inconsistent with it (negative heuristic). On the other hand, it encouraged work on auxiliary hypotheses which might have saved it from apparent counterevidence - like Keplerian ellipses (positive heuristic).

(a) Negative heuristic: the "hard core" of the programme.

All scientific research programmes may be characterized by their "hard core". The negative heuristic of the programme forbids us to direct the modus tollens at this "hard core". Instead, we must use our ingenuity to articulate or even invent "auxiliary hypotheses", which form a protective belt around this core, and we must redirect the modus tollens to these. It is this protective belt of auxiliary hypotheses which has to bear the brunt of tests and get adjusted and re-adjusted, or even completely replaced, to defend the thus-hardened core. A research programme is successful if all this leads to a progressive problemshift; unsuccessful if it leads to a degenerating problemshift.

The classical example of a successful research programme is Newton's gravitational theory: possibly the most successful research programme ever. When it was first produced, it was submerged in an ocean of "anomalies" (or, if you wish, "counterexamples"), and opposed by the observational theories supporting these anomalies. But Newtonians turned, with brilliant tenacity and ingenuity, one counter-instance after another into corroborating instances, primarily by overthrowing the original observational theories in the light of which this "contrary evidence" was established. In the process they themselves produced new counter-examples which they again resolved. They "turned each new difficulty into a new victory of their programme."

In Newton's programme the negative heuristic bids us to divert the modus tollens from Newton's three laws of dynamics and his law of gravitation. This "core" is "irrefutable" by the methodological decision of its protagonists: anomalies must lead to changes only in the "protective" belt of auxiliary, "observational" hypotheses and initial conditions.

I have given a contrived micro-example of a progressive Newtonian problemshift. If we analyse it, it turns out that each successive link in this exercise predicts some new fact; each step represents an increase in empirical content: the example constitutes a consistently progressive theoretical shift. Also, each prediction is in the end verified; although on three subsequent occasions they may have seemed momentarily to be "refuted".

While "theoretical progress" (in the sense here described) may be verified immediately, "empirical progress" cannot, and in a research programme we may be frustrated by a long series of "refutations" before ingenious and lucky content-increasing auxiliary hypotheses turn a chain of defeats -with hindsight- into a resounding success story, either by revising some false "facts" or by adding novel auxiliary hypotheses. We may then say that we must require that each step of a research programme be consistently content-increasing: that each step constitute a consistently progressive theoretical problemshift. All we need in addition to this is that at least every now and then the increase in content should be seen to be retrospectively corroborated: the programme as a whole should also display an intermittently progressive empirical shift. We do not demand that each step produce immediately an observed new fact. Our term "intermittently" gives sufficient rational scope for dogmatic adherence to a programme in face of prima facie "refutations".

The idea of "negative heuristic" of a scientific research programme rationalizes classical conventionalism to a considerable extent. We may rationally decide not to allow "refutations" to transmit falsity to the hard core as long as the corroborated empirical content of the protecting belt of auxiliary hypotheses increases. But our approach differs from Poincare's justificationist conventionalism in the sense that, unlike Poincare's, we maintain that if and when the programme ceases to anticipate novel facts, its hard core might have to be abandoned: that is, our hard core, unlike Poincare's, may crumble under certain conditions. In this sense we side with Duhem who thought that such a possibility must be allowed for but for Duhem the reason for such crumbling is purely aesthetic, while for us it is mainly logical and empirical.

(b) Positive heuristic: the construction of the "protective belt" and the relative autonomy of theoretical science.

Research programmes, besides their negative heuristic, are also characterized by their positive heuristic.

Even the most rapidly and consistently progressive research programmes can digest their "counter-evidence" only piecemeal: anomalies are never completely exhausted. But it should not be thought that yet unexplained anomalies - "puzzles" as Kuhn might call them - are taken in random order, and the protective belt built up in an eclectic fashion, without any preconceived order. The order is usually decided in the theoretician's cabinet, independently of the known anomalies. Few theoretical scientists engaged in a research programme pay undue attention to "refutations". They have a long-term research policy which anticipates these refutations. This research policy, or order of research, is set out - in more or less detail - in the positive heuristic of the research programme. The negative heuristic specifies the "hard core" of the programme which is "irrefutable" by the methodological decision of its protagonists; the positive heuristic consists of a partially articulated set of suggestions or hints on how to change, develop the "refutable variants" of the research programme, how to modify, sophisticate, the "refutable" protective belt.

The positive heuristic of the programme saves the scientist from becoming confused by the ocean of anomalies. The positive heuristic sets out a programme which lists a chain of ever more complicated models simulating reality: the scientist's attention is riveted on building his models following instructions which are laid down in the positive part of his programme. He ignores the actual counterexamples, the available "data". Newton first worked out his programme for a planetary system with a fixed point-like sun and one single point-like planet. It was in this model that he derived his inverse square law for Kepler's ellipse. But this model was forbidden by Newton's own third law of dynamics, therefore the model had to be replaced by one in which both sun and planet revolved round their common centre of gravity. This change was not motivated by any observation (the data did not suggest an "anomaly" here) but by a theoretical difficulty in developing the programme. Then he worked out the programme for more planets as if there were only heliocentric but no interplanetary forces. Then he worked out the case where the sun and planets were not mass-points but mass-balls. Again, for this change he did not need the observation of an anomaly; infinite density was forbidden by an (unarticulated) touchstone theory, therefore planets had to be extended. This change involved considerable mathematical difficulties, held up Newton's work-and delayed the publication of the Principia by more than a decade. Having solved this "puzzle", he started work on spinning balls and their wobbles. Then he admitted interplanetary forces and started work on perturbations. At this point he started to look more anxiously at the facts. Many of them were beautifully explained (qualitatively) by this model, many were not. It was then that he started to work on bulging planets, rather than round planets, etc.
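
For concreteness, the first two models in that chain can be written out in standard notation. The symbols below (G, M, m, r, mu, a, T) are the usual gravitational ones and are not drawn from Lakatos's text; this is only an illustrative sketch.

    % Model 1: a fixed point-like sun of mass M and a single point-like planet of mass m.
    % A Keplerian ellipse with the sun at a focus requires an inverse-square central force:
    F = \frac{G M m}{r^{2}}
    % Model 2: sun and planet revolve about their common centre of gravity.
    % The relative motion obeys the same force law with the reduced mass
    \mu = \frac{m M}{M + m},
    % so Kepler's third law acquires a small mass-dependent correction:
    T^{2} = \frac{4 \pi^{2} a^{3}}{G (M + m)}

The step from the first model to the second is forced by the theory itself (Newton's own third law of dynamics), not by any observed anomaly, which is exactly the point of the passage above.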

Newton despised people who, like Hooke, stumbled on a first naive model but did not have the tenacity and ability to develop it into a research programme, and who thought that a first version, a mere aside, constituted a "discovery". He held up publication until his programme had achieved a remarkable progressive shift.

Most, if not all, Newtonian puzzles leading to a series of new variants superseding each other, were foreseeable at the time of Newton's first naive model and no doubt Newton and his colleagues did foresee them: Newton must have been fully aware of the blatant falsity of his first variants. Nothing shows the existence of a positive heuristic of a research programme clearer than this fact: this is why one speaks of "models" in research programmes. A "model" is a set of initial conditions (possibly together with some of the observational theories) which one knows is bound to be replaced during the further development of the programme, and one even knows, more or less, how. This shows once more how irrelevant "refutations" of any specific variant are in a research programme: their existence is fully expected, the positive heuristic is there as the strategy both for predicting (producing) and digesting them. Indeed, if the positive heuristic is clearly spelt out, the difficulties of the programme are mathematical rather than empirical.

One may formulate the "positive heuristic" of a research programme as a "metaphysical" principle. For instance one may formulate Newton's programme like this: "the planets are essentially gravitating spinning-tops of roughly spherical shape". This idea was never rigidly maintained: the planets are not just gravitational, they have also, for example, electromagnetic characteristics which may influence their motion. Positive heuristic is thus in general more flexible than negative heuristic. Moreover, it occasionally happens that when a research programme gets into a degenerating phase, a little revolution or a creative shift in its positive heuristic may push it forward again. It is better therefore to separate the "hard core" from the more flexible metaphysical principles expressing the positive heuristic.

Our considerations show that the positive heuristic forges ahead with almost complete disregard of "refutations": it may seem that it is the "verifications" rather than the refutations which provide the contact points with reality. Although one must point out that any "verification" of the n+1-th version of the programme is a refutation of the n-th version, we cannot deny that some defeats of the subsequent versions are always foreseen: it is the "verifications" which keep the programme going, recalcitrant instances notwithstanding.

We may appraise research programmes, even after their "elimination", for their heuristic power: how many new facts did they produce, how great was their capacity to explain their refutations in the course of their growth? (We may also appraise them for the stimulus they gave to mathematics. The real difficulties for the theoretical scientist arise rather from the mathematical difficulties of the programme than from anomalies. The greatness of the Newtonian programme comes partly from the development - by Newtonians - of classical infinitesimal analysis which was a crucial precondition of its success.)

Thus the methodology of scientific research programmes accounts for the relative autonomy of theoretical science: a historical fact whose rationality cannot be explained by the earlier falsificationists. Which problems scientists working in powerful research programmes rationally choose, is determined by the positive heuristic of the programme rather than by psychologically worrying (or technologically urgent) anomalies. The anomalies are listed but shoved aside in the hope that they will turn, in due course, into corroborations of the programme. Only those scientists have to rivet their attention on anomalies who are either engaged in trial-and-error exercises or who work in a degenerating phase of a research programme when the positive heuristic ran out of steam. (All this, of course, must sound repugnant to naive falsificationists who hold that once a theory is "refuted" by experiment (by their rule book), it is irrational (and dishonest) to develop it further: one has to replace the old "refuted" theory by a new, unrefuted one.)

(d) A new look at crucial experiments: the end of instant rationality.

It would be wrong to assume that one must stay with a research programme until it has exhausted all its heuristic power, that one must not introduce a rival programme before everybody agrees that the point of degeneration has probably been reached. (Although one can understand the irritation of a physicist when, in the middle of the progressive phase of a research programme, he is confronted by a proliferation of vague metaphysical theories stimulating no empirical progress.) One must never allow a research programme to become a Weltanschauung, or a sort of scientific rigour, setting itself up as an arbiter between explanation and non-explanation, as mathematical rigour sets itself up as an arbiter between proof and non-proof. Unfortunately this is the position which Kuhn tends to advocate: indeed, what he calls "normal science" is nothing but a research programme that has achieved monopoly. But, as a matter of fact, research programmes have achieved complete monopoly only rarely and then only for relatively short periods, in spite of the efforts of some Cartesians, Newtonians and Bohrians. The history of science has been and should be a history of competing research programmes (or, if you wish, "paradigms"), but it has not been and must not become a succession of periods of normal science: the sooner competition starts, the better for progress. "Theoretical pluralism" is better than "theoretical monism": on this point Popper and Feyerabend are right and Kuhn is wrong.

The idea of competing scientific research programmes leads us to the problem: how are research programmes eliminated? It has transpired from our previous considerations that a degenerating problemshift is no more a sufficient reason to eliminate a research programme than some old fashioned "refutation" or a Kuhnian "crisis". Can there be any objective (as opposed to socio-psychological) reason to reject a programme, that is, to eliminate its hard core and its programme for constructing protective belts? Our answer, in outline, is that such an objective reason is provided by a rival research programme which explains the previous success of its rival and supersedes it by a further display of heuristic power.

However, the criterion of "heuristic power" strongly depends on how we construe "factual novelty". Until now we have assumed that it is immediately ascertainable whether a new theory predicts a novel fact or not. But the novelty of a factual proposition can frequently be seen only after a long period has elapsed. In order to show this, I shall start with an example.

Bohr's theory logically implied Balmer's formula for hydrogen lines as a consequence. Was this a novel fact? One might have been tempted to deny this, since after all, Balmer's formula was well-known. But this is a half-truth. Balmer merely "observed" B1: that hydrogen lines obey the Balmer formula. Bohr predicted B2: that the differences in the energy levels in different orbits of the hydrogen electron obey the Balmer formula. Now one may say that B1 already contains all the purely "observational" content of B2. But to say this presupposes that there can be a pure "observational level", untainted by theory, and impervious to theoretical change. In fact, B1 was accepted only because the optical, chemical and other theories applied by Balmer were well corroborated and accepted as interpretative theories; and these theories could always be questioned. It might be argued that we can "purge" even B1 of its theoretical presuppositions, and arrive at what Balmer really "observed", which might be expressed in the more modest assertion, B0: that the lines emitted in certain tubes in certain well specified circumstances (or in the course of a "controlled experiment") obey the Balmer formula. Now some of Popper's arguments show that we can never arrive at any hard "observational" rock-bottom in this way; "observational" theories can easily be shown to be involved in B0. On the other hand, given that Bohr's programme, after a long progressive development, had shown its heuristic power, its hard core would itself have become well corroborated and therefore qualified as an "observational" or interpretative theory. But then B2 will be seen not as a mere theoretical reinterpretation of B1, but as a new fact in its own right.
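
To make the contrast between B1 and B2 concrete, the two statements can be written out. The symbols below (lambda, nu, n, R_H, h, c) are the standard spectroscopic ones and are not taken from Lakatos's text; this is only an illustrative sketch.

    % B1 (Balmer's empirical formula): the visible hydrogen lines satisfy
    \frac{1}{\lambda} = R_{H} \left( \frac{1}{2^{2}} - \frac{1}{n^{2}} \right), \qquad n = 3, 4, 5, \dots
    % B2 (Bohr's theoretical claim): the hydrogen electron has energy levels
    E_{n} = -\frac{R_{H} h c}{n^{2}},
    % and the light emitted in a transition carries exactly the difference between two levels:
    h \nu = E_{n_{2}} - E_{n_{1}} = R_{H} h c \left( \frac{1}{n_{1}^{2}} - \frac{1}{n_{2}^{2}} \right)
    % For transitions down to n_1 = 2 this reproduces Balmer's formula.

B1 speaks only of lines observed in discharge tubes; B2 speaks of energy levels, and it counts as a new fact in its own right only once Bohr's hard core has itself earned the status of an interpretative theory.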

These considerations lend new emphasis to the hindsight element in our appraisals and lead to a further liberalization of our standards. A new research programme which has just entered the competition may start by explaining "old facts" in a novel way but may take a very long time before it is seen to produce "genuinely novel" facts. For instance, the kinetic theory of heat seemed to lag behind the results of the phenomenological theory for decades before it finally overtook it with the Einstein-Smoluchowski theory of Brownian motion in 1905. After this, what had previously seemed a speculative reinterpretation of old facts (about heat, etc.) turned out to be a discovery of novel facts (about atoms).

All this suggests that we must not discard a budding research programme simply because it has so far failed to overtake a powerful rival. We should not abandon it if, supposing its rival were not there, it would constitute a progressive problemshift. And we should certainly regard a newly interpreted fact as a new fact, ignoring the insolent priority claims of amateur fact collectors. As long as a budding research programme can be rationally reconstructed as a progressive problemshift, it should be sheltered for a while from a powerful established rival.

These considerations, on the whole, stress the importance of methodological tolerance, and leave the question of how research programmes are eliminated still unanswered. The reader may even suspect that laying this much stress on fallibility liberalizes or, rather, softens up, our standards to the extent that we will be landed with radical scepticism. Even the celebrated "crucial experiments" will then have no force to overthrow a research programme; anything goes.

But this suspicion is unfounded. Within a research programme "minor crucial experiments" between subsequent versions are quite common. Experiments easily "decide" between the n-th and n+1-th scientific version, since the n+1-th is not only inconsistent with the n-th, but also supersedes it. If the n+1-th version has more corroborated content in the light of the same programme and in the light of the same well corroborated observational theories, elimination is a relatively routine affair (only relatively, for even here this decision may be subject to appeal). Appeal procedures too are occasionally easy: in many cases the challenged observational theory, far from being well corroborated, is in fact an inarticulate, naive, hidden assumption; it is only the challenge which reveals the existence of this hidden assumption, and brings about its articulation, testing and downfall. Time and again, however, the observational theories are themselves embedded in some research programme and then the appeal procedure leads to a clash between two research programmes: in such cases we may need a "major crucial experiment".

When two research programmes compete, their first "ideal" models usually deal with different aspects of the domain (for example, the first model of Newton's semi-corpuscular optics described light-refraction, the first model of Huyghens's wave optics light-interference). As the rival research programmes expand, they gradually encroach on each other's territory and the n-th version of the first will be blatantly, dramatically inconsistent with the m-th version of the second. An experiment is repeatedly performed, and as a result, the first is defeated in this battle, while the second wins. But the war is not over: any research programme is allowed a few such defeats. All it needs for a comeback is to produce an n+1-th (or n+k-th) content-increasing version and a verification of some of its novel content.

If such a comeback, after sustained effort, is not forthcoming, the war is lost and the original experiment is seen, with hindsight, to have been "crucial". But especially if the defeated programme is a young, fast-developing programme, and if we decide to give sufficient credit to its "pre-scientific" successes, allegedly crucial experiments dissolve one after the other in the wake of its forward surge. Even if the defeated programme is an old, established and "tired" programme, near its "natural saturation point", it may continue to resist for a long time and hold out with ingenious content-increasing innovations even if these are unrewarded with empirical success. It is very difficult to defeat a research programme supported by talented, imaginative scientists. Alternatively, stubborn defenders of the defeated programme may offer ad hoc explanations of the experiments or a shrewd ad hoc "reduction" of the victorious programme to the defeated one. But such efforts we should reject as unscientific.

Our considerations explain why crucial experiments are seen to be crucial only decades later. Kepler's ellipses were generally admitted as crucial evidence for Newton and against Descartes only about one hundred years after Newton's claim. The anomalous behaviour of Mercury's perihelion was known for decades as one of the many yet unsolved difficulties in Newton's programme; but only the fact that Einstein's theory explained it better transformed a dull anomaly into a brilliant "refutation" of Newton's research programme. Young claimed that his double-slit experiment of 1802 was a crucial experiment between the corpuscular and the wave programmes of optics; but his claim was only acknowledged much later, after Fresnel developed the wave programme much further "progressively" and it became clear that the Newtonians could not match its heuristic power. The anomaly, which had been known for decades, received the honorific title of refutation, the experiment the honorific title of "crucial experiment" only after a long period of uneven development of the two rival programmes. Brownian motion was for nearly a century in the middle of the battlefield before it was seen to defeat the phenomenological research programme and turn the war in favour of the atomists. Michelson's "refutation" of the Balmer series was ignored for a generation until Bohr's triumphant research programme backed it up.

It may be worthwhile to discuss in detail some examples of experiments whose "crucial" character became evident only retrospectively. First I shall take the celebrated Michelson-Morley experiment of 1887 which allegedly falsified the ether theory and "led to the theory of relativity", then the Lummer-Pringsheim experiments which allegedly falsified the classical theory of radiation and "led to the quantum theory". Finally I shall discuss an experiment which many physicists thought would turn out to decide against the conservation laws but which, in fact, ended up as their most triumphant collaboration.

4) Conclusion. The requirement of continuous growth.

There are no such things as crucial experiments, at least not if these are meant to be experiments which can instantly overthrow a research programme. In fact, when one research programme suffers defeat and is superseded by another one, we may -with long hindsight- call an experiment crucial if it turns out to have provided a spectacular corroborating instance for the victorious programme and a failure for the defeated one (in the sense that it was never "explained progressively" -or, briefly, "explained"- within the defeated programme). But scientists, of course, do not always judge heuristic situations correctly. A rash scientist may claim that his experiment defeated a programme, and parts of the scientific community may even, rashly, accept his claim. But if a scientist in the "defeated" camp puts forward a few years later a scientific explanation of the allegedly "crucial experiment" within (or consistent with) the allegedly defeated programme, the honorific title may be withdrawn and the "crucial experiment" may turn from a defeat into a new victory for the programme.

Examples abound. There were many experiments in the eighteenth century which were, as a matter of historico-sociological fact, widely accepted as "crucial" evidence against Galileo's law of free fall, and Newton's theory of gravitation. In the nineteenth century there were several "crucial experiments" based on measurements of light velocity which "disproved" the corpuscular theory and which turned out later to be erroneous in the light of relativity theory. These "crucial experiments" were later deleted from the justificationist textbooks as manifestations of shameful shortsightedness or even of envy. (Recently they reappeared in some new textbooks, this time to illustrate the inescapable irrationality of scientific fashions.) However, in those cases in which ostensibly "crucial experiments" were indeed later borne out by the defeat of the programme, historians charged those who resisted them with stupidity, jealousy, or unjustified adulation of the father of the research programme in question. (Fashionable "sociologists of knowledge"- or "psychologists of knowledge"- tend to explain positions in purely social or psychological terms when, as a matter of fact, they are determined by rationality principles. A typical example is the explanation of Einstein's opposition to Bohr's complementarity principle on the ground that "in 1926 Einstein was forty-seven years old. Forty-seven may be the prime of life, but not for physicists".)

In the light of my considerations, the idea of instant rationality can be seen to be utopian. But this utopian idea is a hallmark of most brands of epistemology. Justificationists wanted scientific theories to be proved even before they were published; probabilists hoped a machine could flash up instantly the value (degree of confirmation) of a theory, given the evidence; naive falsificationists hoped that elimination at least was the instant result of the verdict of experiment. I hope I have shown that all these theories of instant rationality - and instant learning - fail. The case studies of this section show that rationality works much slower than most people tend to think, and, even then, fallibly. Minerva's owl flies at dusk. I also hope I have shown that the continuity in science, the tenacity of some theories, the rationality of a certain amount of dogmatism, can only be explained if we construe science as a battleground of research programmes rather than of isolated theories. One can understand very little of the growth of science when our paradigm of a chunk of scientific knowledge is an isolated theory like "All swans are white", standing aloof, without being embedded in a major research programme. My account implies a new criterion of demarcation between "mature science", consisting of research programmes, and "immature science" consisting of a mere patched up pattern of trial and error. For instance, we may have a conjecture, have it refuted and then rescued by an auxiliary hypothesis which is not ad hoc in the senses which we had earlier discussed. It may predict novel facts some of which may even be corroborated. Yet one may achieve such "progress" with a patched up, arbitrary series of disconnected theories. Good scientists will not find such makeshift progress satisfactory; they may even reject it as not genuinely scientific. They will call such auxiliary hypotheses merely "formal", "arbitrary", "empirical", "semi-empirical", or even "ad hoc".

Mature science consists of research programmes in which not only novel facts but, in an important sense, also novel auxiliary theories, are anticipated; mature science - unlike pedestrian trial-and-error- has "heuristic power". Let us remember that in the positive heuristic of a powerful programme there is, right at the start, a general outline of how to build the protective belts: this heuristic power generates the autonomy of theoretical science.

This requirement of continuous growth is my rational reconstruction of the widely acknowledged requirement of "unity" or "beauty" of science. It highlights the weakness of two - apparently very different - types of theorizing. First, it shows up the weakness of programmes which, like Marxism or Freudism, are, no doubt, unified, which give a major sketch of the sort of auxiliary theories they are going to use in absorbing anomalies, but which unfailingly devise their actual auxiliary theories in the wake of facts without, at the same time, anticipating others. (What novel fact has Marxism predicted since, say, 1917?) Secondly, it hits patched-up, unimaginative series of pedestrian "empirical" adjustments which are so frequent, for instance, in modern social psychology. Such adjustments may, with the help of so-called "statistical techniques", make some "novel" predictions and may even conjure up some irrelevant grains of truth in them. But this theorizing has no unifying idea, no heuristic power, no continuity. They do not add up to a genuine research programme and are, on the whole, worthless.

My account of scientific rationality, although based on Popper's, leads away from some of his general ideas. I endorse to some extent both Le Roy's conventionalism with regard to theories and Popper's conventionalism with regard to basic propositions. In this view scientists (and as I have shown, mathematicians too) are not irrational when they tend to ignore counterexamples or, as they prefer to call them, "recalcitrant" or "residual" instances, and follow the sequence of problems as prescribed by the positive heuristic of their programme, and elaborate - and apply - their theories regardless. Contrary to Popper's falsificationist morality, scientists frequently and rationally claim "that the experimental results are not reliable, or that the discrepancies which are asserted to exist between the experimental results and the theory are only apparent and that they will disappear with the advance of our understanding". When doing so, they may not be "adopting the very reverse of that critical attitude which...is the proper one for the scientist". Indeed, Popper is right in stressing that the dogmatic attitude of sticking to a theory as long as possible is of considerable significance. Without it we could never find out what is in a theory - we should give the theory up before we had a real opportunity of finding out its strength; and in consequence no theory would ever be able to play its role of bringing order into the world, of preparing us for future events, of drawing our attention to events we should otherwise never observe. Thus the "dogmatism" of "normal science" does not prevent growth as long as we combine it with the Popperian recognition that there is good, progressive normal science and that there is bad, degenerating normal science, and as long as we retain the determination to eliminate, under certain objectively defined conditions, some research programmes.

The dogmatic attitude in science - which would explain its stable periods - was described by Kuhn as a prime feature of "normal science". But Kuhn's conceptual framework for dealing with continuity in science is socio-psychological; mine is normative. I look at continuity in science through "Popperian spectacles". Where Kuhn sees "paradigms", I also see rational "research programmes".

rhinoceros
Archon
Re: Classic Texts: Imre Lakatos - Falsification/Scientific Research Programmes (1970)
« Reply #1 on: 2003-06-17 12:18:41 »


A critical review of Lakatos' "The methodology of scientific research programmes"

Source: http://home.tiac.net/~cri/1999/lakatos.html
Author: Richard Harter
Dated: 2002-01-23



The methodology of scientific research programmes, Imre Lakatos, Philosophical papers, volume I, edited by John Worrall and Gregory Currie, Cambridge University Press, 1995 printing, ISBN 0-521-28031-1, pbk.



Imre Lakatos was one of the major modern philosophers of science. His name and work are often placed in contrast with those of Popper, Kuhn, and Feyerabend. Volume I of the collected papers deals with philosophy of science; volume II contains collected papers on the philosophy of mathematics.

Volume I consists of an introduction and five papers. The introduction, Science and Pseudoscience, was written in 1973 and was delivered as a radio address. The papers are:

1. Falsification and the methodology of scientific research programmes
2. History of science and its rational reconstructions
3. Popper on demarcation and induction
4. Why did Copernicus's research programme supersede Ptolemy's?
5. Newton's effect on scientific standards

The entirety of this collection is worth reading and rereading very carefully. The first paper is THE paper, his major position paper. In this review I will go through it and the introduction in some detail.




Introduction

The introduction raises the question: How do we tell science from pseudoscience? It begins with a survey of proposed answers and the problems with those proposals. Thus, some philosophers have drawn the line by saying "a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly." It should be clear that this will not do for delimiting scientific knowledge. He then goes on to quote Hume:

    If we take in our hand any volume; of divinity, or school metaphysics, for instance; let us ask, does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames. For it can contain nothing but sophistry and illusion.[p2]

This sounds very well until one asks, what is the nature of this "experimental reasoning". Lakatos continues:
    But what is 'experimental reasoning'? If we look at the vast seventeenth-century literature on witchcraft, it is full of reports of careful observations and sworn evidence - even of experiments. Glanvill, the house philosopher of the early Royal Society, regarded witchcraft as the paradigm of experimental reasoning. We have to define experimental reasoning before we start Humean book burning.
    In scientific reasoning, theories are confronted with facts; and one of the central conditions of scientific reasoning is that theories must be supported by facts. Now how exactly can facts support theory?[p2]

An early answer, favored by Newton who believed himself to have done such, is that one proves theories by deducing them from facts. This is readily seen to be impossible; one cannot, in general, deduce general laws from a finite number of facts. Lakatos attributes this false belief in provability to the inheritance of attitudes taken over from theology where provability is in the cards because theology starts with a presupposition of certain knowledge.
The presumption that provability was attainable was buttressed by the enormous success of Newtonian mechanics - scientists believed that Newton had deciphered God's ultimate laws. Then came Einstein and provability was recognized as a mirage, raising (reraising) the question:

    If all scientific theories are equally unprovable, what distinguishes scientific knowledge from ignorance, science from pseudoscience?[p3]

One twentieth century answer was "inductive logic" wherein theories were to be rated by their mathematical probability of satisfying the available total evidence. This sounds plausible and has the attractive feature of providing a measurement of quality. Popper, however, argued that the mathematical probability of any theory whatsoever, regardless of the amount of evidence, is zero. Popper in turn proposed the falsification criterion:

    A theory is 'scientific' if one is prepared to specify in advance a crucial experiment (or observation) which can falsify it, and it is pseudoscientific if one refuses to specify such a 'potential falsifier'. [p3]

In other words, we cannot prove scientific theories but we can disprove them. A pseudoscientific theory is one which admits of no means of disproof.

A major difficulty with Popper's criterion is that science doesn't work that way. Scientific theories have tenacity; theories are not jettisoned immediately because of facts which contradict them. Sometimes rescue hypotheses are constructed; sometimes anomalies are simply ignored or are set aside to be considered later. Quite often crucial falsifying experiments are only seen to be such well after the fact.

If Popper has been falsified, what then? Lakatos says that Kuhn in turn suggests that "a scientific revolution is just an irrational change in commitment, that it is a religious conversion". This, in my opinion, is not at all a proper interpretation of Kuhn's theses although Kuhn is often read that way by post-modernists.

Lakatos proposes a theory of research programmes. He remarks that research programmes are the unit of scientific achievement rather than isolated hypotheses. A research programme typically has a hard core, a protective belt of auxiliary hypotheses, and a heuristic, i.e., problem solving machinery. Thus, in the Newtonian programme, the laws of motion and the universal law of gravitation are the hard core. Anomalies in the motion of planets are dealt with by considering factors that may affect the apparent motion, e.g., refraction of light or the existence of a hitherto unknown planet. The problem solving machinery is the vast body of classical mathematical physics.

Research programmes, scientific or pseudoscientific, have, at any stage, both undigested anomalies and unsolved problems. "All theories, in this sense, are born refuted and die refuted." Lakatos then makes a dubious move, to wit:

    But how can one distinguish a scientific or progressive programme from a pseudoscientific or degenerating one? [p5]

The move, here, is to identify "scientific and progressive" and "pseudoscientific and degenerating". He goes on to nominate "predicting new facts" as a major criterion for distinguishing between progressive and degenerating research programmes:

    Thus, in a progressive research programme, theory leads to the discovery of hitherto unknown novel facts. In degenerating programmes, however, theories are fabricated only in order to accommodate known facts. Has, for instance, Marxism ever predicted a stunning novel fact successfully? Never.[p5]

He was doing so well, up to his rhetorical query about Marxism. He followed this with a list of failed predictions of Marxist theory. In point of fact, though, Marx made a number of novel predictions which panned out. Thus, Marxist theory predicted (and it was far from obvious at the time) the consolidation and merging of large firms and the increasingly convulsive cycle of booms and depressions - capitalism followed the predictions of Marxist theory up to the great depression. This is not to say that Marxist theory was correct (he was, after all, an economist) or that the Marxist research programme did not degenerate; in fact, it did because it fundamentally could not take into account the reaction of the industrial nations to economic crises.
Nor is it accurate to say that pseudosciences do not predict novel facts, even stunning natural facts. Velikovsky, for example, predicted that Venus would be hot and that Jupiter would be a radio source. For that matter, psychic hotline psychics have been known to make stunningly accurate predictions from time to time.

Lakatos, like many philosophers of science, tends to focus on physics, a focus that tends to be misleading. One of the striking features of physics as a science is the reduction of the domain of phenomena to be considered. This reduction is an enabler of the possibility of precise prediction.


Falsification and the methodology of scientific research programmes

This paper is the one that establishes the main force of Lakatos's argument. It begins with the observation that prior to the twentieth century and Einstein "knowledge meant proven knowledge - proven either by the power of the intellect or by the evidence of the senses." In response to Einstein's results, the notion that scientific knowledge is proven knowledge has pretty much been abandoned.

He then spends a few paragraphs contrasting Popper and Kuhn before launching into a taxonomy of philosophical positions.

Justificationism:

"According to the 'justificationists' scientific knowledge consisted of proven propositions." Lakatos distinguishes between Classical Intellectualists who admitted powerful sorts of extralogical proofs, e.g., by revelation and intuition, and Classical Empiricists who admitted as axioms only a hard core of empirical "proven facts". The latter necessarily augmented classical deductive logic with "inductive logic". In the long run justificationism failed.

Probabilism (neojustificationism):

This approach treats scientific knowledge as "highly probable but not provable". As noted above Popper established (according to Lakatos) that this does not work. It would have been nice if Lakatos had fleshed this assertion out - in ordinary parlance one speaks of various hypotheses being more or less probable.

Dogmatic Falsification:

In turn Popper introduced dogmatic falsification:

    Scientific honesty then consists of specifying, in advance, an experiment such that if the result contradicts the theory, the theory has to be given up.[p13]

Lakatos argues at some length that dogmatic falsification is untenable. The essence of the matter seems to be that the line between "experimental fact" and "theory" is not absolute. Lakatos also argues that knowing what is "fact" and what is "theory" presupposes that there is a natural psychological (perceptual) border between them. In turn, this means that one has to be in one's right mind to make the distinction. In his argument I particularly liked:
    ... Indeed, all brands of justificationist theories of knowledge which acknowledge the senses as a source (whether as one source or as the source) of knowledge are bound to contain a psychology of observation.... All schools of modern justificationism can be characterized by the particular psychotherapy by which they propose to prepare the mind to receive the grace of proven truth in the course of a mystical experience. [p15]

Irrespective of the difficulties of grounding knowledge in the senses there is a methodological problem which is fatal (and is central in Lakatos's treatment): Predictions and theory are always subject to an "all other things being equal" clause. Scientific theories do not embrace all knowledge and the entirety of the universe; they are restricted in applicability.
Having lanced the boils of justificationism, neojustificationism, and dogmatic falsificationism, Lakatos introduces the shining knight of methodological falsificationism. Before the knight is brought into the arena Lakatos first detours through the thickets of conventionalism. The path is traced through pairs of alternatives, each pair being presented and one selected for further exploration.

Choice the first: Passivist versus activist theories of knowledge

    'Passivists' hold that true knowledge is Nature's imprint on a perfectly inert mind: mental activity can only result in bias and distortion. The most influential passivist school is classical empiricism. 'Activists' hold that we cannot read the book of nature without interpreting it in the light of our expectations or theories. [p20]

As an observation, the most influential passivist schools are the various forms of mysticism, for which see Star Wars.

Choice the second: Conservative versus revolutionary activist theories

    'Conservative activists' hold that we are born with our basic expectations; with them we turn the world into 'our world' but must then live for ever in the prison of our world.... But revolutionary activists believe that conceptual frameworks can be developed and also replaced by new, better ones; it is we who create our 'prisons' and we can also, critically, demolish them. [p20]

Lakatos instances Kant and Kantians as conservative activists and, in a footnote, also Hegel. Lakatos says that Whewell, Poincare, Milhaud and Le Roy opened the revolutionary activist door which now, apparently, acquires the label "conventionalism".

Choice the third: Conservative versus revolutionary conventionalism

Conservative conventionalists hold that there is a standard sequence of stages. The first stage is a period in which theories are developed by trial and error. The second stage consists of inductive epochs in which the best theories are 'proved' by a priori considerations. The third stage is the cumulative development of auxiliary theories. The upshot is that well established theories are ruled to have been proved by a methodological decision and are not refutable.

Revolutionary conventionalists, on the other hand, hold that theories are not permanent prisons and are always potentially demolishable.

Choice the fourth: Simplicism versus methodological falsificationism

Lakatos identifies two rival schools of revolutionary conventionalism, Duhem's simplicism and Popper's methodological falsificationism. The essence of simplicism is that the simpler (more elegant) theory is to be preferred; it is subject to the objection that the criterion is highly subjective and is a matter of transitory fashion.

We have now arrived at Popper's methodological falsificationism which Lakatos proceeds to explore in more detail.

We may begin with the notion of "unproblematic background knowledge". Scientific knowledge forms a network of theory and observation which is refined and elaborated over time. All theories and observations are open to question and challenge; none are taken as being absolutely certain. This testing is done in the context of other scientific knowledge which is provisionally assumed to be unproblematic.

The difference between dogmatic and methodological falsificationism is that the former treats experiment and observation as being absolutely reliable as falsifiers whereas the latter treats them as being provisional. Under methodological falsificationism theories are marked as rejected, i.e., classified as unscientific; under dogmatic falsificationism they are marked as falsified. In other words the shift from dogmatic to methodological falsification is a shift from truth to admissibility. Falsification abandons the notion of truth as such and uses the weaker notion of "not known to be false". Methodological falsification in turn replaces "false" by "rejected".

Choice the fifth: Naive versus sophisticated methodological falsificationism.

The fundamental difference here is that naive methodological falsificationism rejects theories whereas sophisticated methodological falsificationism replaces. (MF hereafter will stand for methodological falsificationism.) In naive MF theories are rejected when they are "falsified"; in sophisticated MF theories are replaced by better theories.

The difficulty with falsification is that a "falsified" theory can always be rescued by an auxiliary hypothesis. In naive MF the falsification decision becomes one of deciding whether to accept the rescuing hypothesis. In other words the decision must be made as to whether the falsifying "datum" is an anomaly to be provisionally ignored or whether it is a damning crucial experiment.

In sophisticated MF a replacement theory must meet two acceptability criteria. The first is that it must have additional empirical content, i.e., it must lead to the discovery of novel facts. The second is that some of this excess content must be verified. In addition it must explain the previous success of the theory it replaces.

    Contrary to naive falsificationism, no experiment, experimental report, observation statement, or well-corroborated low-level falsifying hypothesis alone can lead to falsification. There is no falsification before the emergence of a better theory. [p35]

Lakatos characterizes the 'falsification' situation as needing a pluralistic model. The situation is not one of a conflict between theory and facts but rather one between interpretive theories and explanatory theories.
Sophisticated MF, then, leads to the concept of a series of theories, each an improvement on its predecessors. Counter evidence is always relative. That is, counter evidence is evidence that serves as a refutation of the theory being replaced and as confirming evidence for the replacement theory.

The generation of these theories, then, takes place within the context of research programmes.

A research programme will have a negative and a positive heuristic associated with it.

The negative heuristic bars tampering with the 'hard core' - anomalies and counter instances are not accepted as refuting the hard core, are not immediately taken as being fatal. Instead the core is rescued by auxiliary hypotheses. A programme remains progressive as long as it continues to increase in empirical content.

The positive heuristic is the research policy of the programme - the puzzles to be solved, the models to be constructed, the questions to be investigated - in short, the substance of Kuhn's "normal science". Lakatos emphasizes that methodology of research programmes results in the relative autonomy of theoretical science.

Lakatos raises the question: How do research programmes die? Are there objective criteria for their death or do they simply die as a consequence of changing scientific fashion?

His answer is that programmes are eliminated when superior programmes supersede them. This is, in his view, the rational reason for their death. If a programme ceases to be progressive, i.e., it is no longer generating theories with excess empirical content, then it nevertheless remains a part of the body of science.

Note: programmes do indeed fail for "social" reasons.

In the competition between programmes fledglings are given leeway. New programmes (which are continually being started up and usually are abandoned) are given a chance to establish what they are good for. The real competition is between programmes which start with different aspects of a domain and encroach on each other. Where they conflict experiment may decide between them; then again it may not.

Lakatos says that there are two kinds of crucial experiments - those that decide between theories within a research programme and those that decide between research programmes. The former are part of the normal process of scientific investigation.

Lakatos holds that real crucial experiments - those which establish one programme over another - are seldom recognized at the time as being crucial. What is more, experiments intended to be crucial often are not. The essential difficulty is that it is only after the conflict has been resolved that one can recognize what the experiment signified. Within each research programme the putative crucial experiment is interpreted differently.

It is notable that Lakatos disparages Kuhn and in nowise represents him correctly (or favorably). Thus in the section on crucial experiments we have:

    One must never allow a research programme to become a weltanschauung, or a sort of scientific rigor, setting itself up as arbiter between explanation and non-explanation, as mathematical rigor sets itself up as arbiter between proof and non-proof. Unfortunately this is the position which Kuhn tends to advocate: indeed, what he calls 'normal science' is nothing but a research program that has achieved monopoly. But, as a matter of fact, research programmes have achieved complete monopoly only rarely and then only for relatively short periods, in spite of the efforts of some Cartesians, Newtonians and Bohrians. The history of science has been and ought to be a history of competing research programmes (or, if you wish, `paradigms') but it has not been and must not become a succession of periods of normal science: the sooner competition starts, the better for progress. `Theoretical pluralism' is better than `theoretical monism': on this point Popper and Feyerabend are right and Kuhn is wrong. [pp 68-69]

Now this is quite wrong. Kuhn's `normal science' is very much Lakatos's `positive heuristic'. It is the work that goes on within the context of a research programme as Lakatos grudgingly admits in a footnote [p91]. Kuhn does not advocate theoretical monism. One cannot equate paradigms and research programs although they are intimately related. Both Kuhn and Lakatos make the same mistake on behalf of their intellectual children, Kuhn for his paradigms, Lakatos for his research programs, which is to fail to recognize that they come in varied sizes, scope, and temporal duration. That is, effects due to these variations are not reflected in their discussions.
In the conclusion Lakatos distinguishes between mature science, consisting of research programmes, and immature science which consists of "a mere patched up pattern of trial and error". Mature science has "heuristic power".

The paper has quite a few examples - case studies - which illustrate his various arguments; they constitute much of the text. These are well worth reading carefully. The section, Kuhn vs Popper, might profitably have been omitted.


Remarks in commentary

In reviewing a major work such as this one looks not only at what is said but also what is not said - what themes, thoughts, and topics are absent.

One of the striking omissions is a theory of scientific truth. There are fragments of a theory; that is what the progression through the theories of knowledge is about. It is a curious progression; we move through:

    True
    Not yet falsified
    Not currently admitted to be falsified

In what sense, then, does scientific knowledge have any truth content at all? In what sense, then, can it even be said to be knowledge? The answer, perhaps, lies in the notion of verisimilitude. Verisimilitude is, however, a notion with serious problems. Lakatos wrestles with the problem here and there, mostly in the footnotes, but comes to no resolution or even serious treatment. In practice discarded theories of scientific truth appear as "unproblematic background knowledge".
A second omission is the omission of everything except astronomy and physics. One might well ask whether his structure applies to anything except physics (notoriously considered the hard, mature science.) Perhaps it does; perhaps it does not; he does not begin to consider the question.

A third omission is that the question raised in the introduction, "What is the difference between science and pseudoscience?", is never actually answered. The main paper [written earlier] distinguishes between mature and immature science and between progressive and degenerating research programmes. The introduction [written later] names various pseudosciences, e.g. Marxism, and 'demolishes' them as having degenerate research programmes. In 'real' science, however, a degenerate research programme lives on as long as nothing better has come along.

The depiction of research programmes and the march of theories is rather schematic and doesn't accord well with actual practice. Thus:

  • In actual research at the forefront of a discipline there is seldom a 'current best' theory; instead there is a plethora of competing theories.
  • Research programs usually die as a result of social changes.
  • Lakatos ignores the interrelationship between theory programmes and experimental programmes, and the essential tension that exists there. My impression is that philosophers tend to privilege theory over experiment.

My impression, overall, is that he is engaged in a sociological analysis of how the process of science actually works (and doing a much better job of it than the 'science studies' folks do, I might add). It is not the entirety of science and it is not the entirety of the process.
He doesn't really come to terms, in my opinion, with why science works. This need not be counted as fault - it is not the question he was engaged in answering.
