Quantitative Tools for Evaluating Scientific Systematizations

Chapter in Information and Inference

Part of the book series: Synthese Library (SYLI, volume 28)

Abstract

One of the basic functions of a system is to provide information on certain facts or states of affairs about which we are uncertain or agnostic. And to provide information is to reduce uncertainty or agnosticism. For instance, knowing the catalogue system of the University Library in Helsinki helps one to find answers to such questions as whether or not Popper’s Logik der Forschung is there and, if it is, where among the multitude of books it can be found.
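
One standard way to make this quantitative is the semantic information measure inf(h) = -log2 p(h) of Carnap and Bar-Hillel (cited in note 5 below): the uncertainty removed by a message h concerning a statement d is the drop from U(d) to U(d|h). The sketch below is only an illustration with hypothetical probabilities, not a computation taken from the chapter:

    import math

    def inf(p: float) -> float:
        """Semantic information inf = -log2(p): the less probable, the more informative."""
        return -math.log2(p)

    # Hypothetical numbers: d = "Logik der Forschung is in the library",
    # h = "the catalogue lists it under this call number".
    p_d = 0.5            # probability of d before consulting the catalogue
    p_d_given_h = 0.95   # probability of d once h is known

    U_before = inf(p_d)           # U(d)   = 1.000 bit
    U_after = inf(p_d_given_h)    # U(d|h) is about 0.074 bit
    print(f"uncertainty reduced by {U_before - U_after:.3f} bits")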

Many suggestions and comments by Prof. Jaakko Hintikka, Mr. David Miller, Dr. Risto Hilpinen, Dr. Raimo Tuomela, and Mr. Kimmo Linnila have been of great value in preparing this paper.


References

  1. Cf. the distinction between local and global theorizing in Jaakko Hintikka’s paper ‘The Varieties of Information and Scientific Explanation’, in Logic, Methodology and Philosophy of Science III, Proceedings of the 1967 International Congress (ed. by B. van Rootselaar and J. F. Staal), Amsterdam 1968, pp. 151–71. This paper contains many suggestive ideas concerning the use of measures of information in scientific systematizations.

  2. This line of thought appears in Peirce’s retroductive inference, as presented in N. R. Hanson, Patterns of Discovery, Cambridge 1958, p. 86. Similarly, Karl Popper writes: “What is the general problem situation in which the scientist finds himself? He has before him a scientific problem: he wants to find a new theory capable of explaining certain experimental facts; facts which the earlier theories successfully explained; others which they could not explain; and some by which they were actually falsified” (Popper, Conjectures and Refutations, New York 1962, p. 241).

  3. To be accurate, some phrase like ‘with respect to everything else known’ should be added here. That is, if the ‘background knowledge’ is b, the measures should read U(d | h & b) and U(d | b). The background knowledge may contain other hypotheses accepted at a given time as well as descriptions of observational results different from d; for instance, the antecedent conditions for inferring d from h should be included in b. Following the tradition, this background knowledge is usually left implicit in the expressions under consideration.

  4. If the background information b is written explicitly, expression (1) takes a correspondingly relativized form.
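
     A plausible explicit form (a reconstruction on my part; the original formula is not reproduced in this preview) is U(d | b) - U(d | h & b): the reduction of the uncertainty of d effected by h, relative to the background b.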

  5. For instance, Karl Popper (see The Logic of Scientific Discovery, London 1959, Appendix *IX), Carl G. Hempel and Paul Oppenheim (‘Studies in the Logic of Explanation’, Philosophy of Science 15 (1948) 135–75), Rudolf Carnap and Yehoshua Bar-Hillel (‘An Outline of a Theory of Semantic Information’, Technical Report No. 247 of the Research Laboratory of Electronics, MIT, 1952; reprinted in Y. Bar-Hillel, Language and Information, Reading, Mass., 1964, pp. 221–74), and J. G. Kemeny (‘A Logical Measure Function’, Journal of Symbolic Logic 18 (1953) 289–308) proceed in this way. One notable exception to this tradition is Isaac Levi (see Levi, Gambling with Truth, New York 1967, and especially his paper ‘Information and Inference’, Synthese 17 (1967) 369–91).

  6. By the ‘usual conditions of adequacy’ we mean in the first place such restrictions on the measure p as are implied by defining p as a fair-betting ratio; see e.g. Kemeny’s essay ‘Carnap’s Theory of Probability and Induction’, in The Philosophy of Rudolf Carnap (ed. by P. A. Schilpp), La Salle, Ill., 1963, pp. 711–38. What else should be required of p in order for it to offer appropriate tools for defining measures of uncertainty is left open to a large extent. One group of measure functions (those which give general sentences a zero probability in an infinite domain) is argued to be inadequate for this purpose by Hintikka and Pietarinen (‘Semantic Information and Inductive Logic’, in Aspects of Inductive Logic (ed. by K. J. Hintikka and P. Suppes), Amsterdam 1966, pp. 96–112). Certain general difficulties and open questions concerning the inductive probabilities should perhaps be mentioned here. The main difficulties are the following (see Kemeny, loc. cit., and also Carnap’s ‘Replies and Expositions’, in the same volume): (i) how to extend the methods of determining inductive probabilities for sentences from such simple languages as the monadic predicate calculus to languages with more than one family of predicates of first and higher order; and (ii) how to find satisfactory inductive probabilities for general propositions. Kemeny (as well as Carnap) points out that the extension mentioned under (i) causes no new problems in principle, though it does mean vast and difficult mathematical work. The problems under (ii), on the other hand, raise new questions. One answer has been offered by Hintikka (see his ‘A Two-Dimensional Continuum of Inductive Methods’, in Aspects of Inductive Logic, pp. 113–32) for a monadic first-order language. Carnap in his ‘Replies’ (p. 977) mentions that he also has a (so far unpublished) solution to the problem.

  7. See note 5 for the references.

  8. In ‘The Varieties of Information and Scientific Explanation’. In his Conjectures and Refutations, p. 390, Popper seems to have the same measure in mind; similarly, and more explicitly, in ‘Theories, Experience and Probabilistic Intuitions’, in The Problem of Inductive Logic (ed. by Imre Lakatos), Amsterdam 1968, p. 287.

  9. E. g., in Carnap and Bar-Hillel, op. cit.

  10. This sense of explanation is illustrated by what Hempel regards as a general condition of adequacy for any rationally acceptable explanation of a particular event. To quote Hempel, “any rationally acceptable answer to the question ‘Why did event X occur?’ must offer information which shows that X was to be expected - if not definitely, as in the case of D-N explanation, then at least with reasonable probability. Thus, the explanatory information must provide good grounds for believing that X did in fact occur; otherwise, that information would give us no adequate reason for saying: ‘That explains it - that does show why X occurred.’” (C. G. Hempel, Aspects of Scientific Explanation, New York 1965, pp. 367–8).

  11. The idea of using the logarithmic measure of transmitted information as the basis for defining expressions for explanatory power is not new. It is discussed by Popper in Logic of Scientific Discovery, p. 403; similarly, I. J. Good argues that this measure is “an explication for ‘explanatory power’ but not for corroboration” (see I. J. Good, ‘Weight of Evidence, Corroboration, Explanatory Power, Information and the Utility of Experiments’, Journal of the Royal Statistical Society, B, 22 (1960) 319–31).
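
     In the notation of the present paper, this logarithmic measure has the standard form log p(d | h) - log p(d) (a reconstruction on my part, not a quotation): the amount by which learning h lowers the surprisal of d, positive exactly when h raises the probability of d.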

  12. This remark also concerns the measure of explanatory power E(h, d) = (p(d | h) - p(d)) / (p(d | h) + p(d)) proposed by Popper (e.g., p. 400 in Logic of Scientific Discovery).

  13. For the reference, see note 5.

  14. Perhaps this requirement is made most explicitly by Joseph Hanna on p. 13 of his paper ‘A New Approach to the Formulation and Testing of Learning Models’, Synthese (1966) 344–80. Hanna’s ideas come very close to the approach presented here; he relies entirely, however, on statistical concepts of probability and uncertainty. (For a further comparison of Hanna’s approach and the one sketched here, see J. Pietarinen and R. Tuomela, ‘An Information Theoretic Approach to the Evaluation of Behavioral Theories’, Reports from the Institute of Social Psychology, Univ. of Helsinki, No. 2, 1968.) In other standard references, the characterization of what we shall call inductive systematization, variously called statistical explanation or prediction (e.g. by W. C. Salmon in ‘The Status of Prior Probabilities in Statistical Explanation’, Philosophy of Science 32 (1965) 137–46), probabilistic (e.g. by Nagel in Structure of Science), or inductive (by Hempel in Aspects of Scientific Explanation), is based on the idea that the explanans makes the explanandum highly probable.

  15. A particularly interesting field of application for the measures syst1 and syst2 is offered by historical research. It is proper for historians to ask how much common content the evidence at hand has with such and such a narrative.

  16. That there is a one-to-one correspondence between the structure of statistical informational analysis and that of the usual analysis of variance has been shown by Garner and McGill in their paper ‘Relation between Uncertainty, Variance, and Correlational Analysis’, Psychometrika 21 (1956) 219–28. Since the unc-measure is quite analogous to the statistical (Shannonian) measure of information, it is not surprising to find terms similar to those in the variance analysis in our context too.
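
     The analogy can be checked numerically. In the sketch below (a hypothetical joint distribution, my own illustration rather than anything from the paper), the total uncertainty H(X) splits into a residual term H(X|Y) and a transmitted term I(X;Y), just as a total variance splits into within-group and between-group components:

        import math

        # Hypothetical joint distribution p(x, y) over two binary variables.
        joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

        def H(dist):
            """Shannon uncertainty (in bits) of a distribution {outcome: probability}."""
            return -sum(p * math.log2(p) for p in dist.values() if p > 0)

        # Marginal distributions of the two variables.
        px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
        py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

        # Transmitted information, computed directly from the joint distribution.
        I = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)
        H_x_given_y = H(joint) - H(py)   # chain rule: H(X|Y) = H(X,Y) - H(Y)

        # The partition H(X) = H(X|Y) + I(X;Y) mirrors the analysis of variance.
        assert abs(H(px) - (H_x_given_y + I)) < 1e-9
        print(f"H(X) = {H(px):.3f} = {H_x_given_y:.3f} + {I:.3f}")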

  17. In Conjectures and Refutations, pp. 215–50.

  18. Loc. cit., p. 217. The same ideas can be found in many of the earlier publications of Popper, esp. in Logic of Scientific Discovery, as is indicated by Popper himself on the page cited.

  19. The literature on scientific explanation contains an argument (relying on Popper’s ideas) which is relevant here. H. E. Kyburg’s theorem, put forward in his discussion ‘On Salmon’s Paper’, Philosophy of Science 32 (1965) 147–51, p. 148, says that the explanatory powers of two theories are equal if and only if the prior probabilities of these theories are equal. This argument is built on the premise that explanatory power is a monotone increasing function of the logical strength of theories, that is, on the idea that the explanatory power of a theory is a monotone increasing function of a measure of the possibilities which the theory excludes. But the theorem fails if the premise is qualified so that it corresponds to the intuition behind our concept of explanatory power: the explanatory power of a theory with respect to an explanandum is a monotone increasing function of a measure of the possibilities excluded by the theory from the possibilities allowed by the explanandum.
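
     To state the contrast in symbols (my formulation, using the content measure cont(h) = 1 - p(h)): on the unqualified premise, explanatory power is a monotone increasing function of cont(h) alone, so equal priors p(h) = p(k) force equal powers; on the qualified premise it is a monotone increasing function of p(d & ~h), the measure of the possibilities h excludes from those the explanandum d allows, and since this quantity depends on d as well as on h, Kyburg’s equivalence no longer follows.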

  20. See e. g. Conjectures and Refutations, pp. 390–1, and Logic of Scientific Discovery, pp. 400–3.

  21. Consider, for example, his measure (Mathtype). If now from h we can deduce a fact d, and from k a fact f such that (Mathtype), h gives a higher value to (T) than k. It is then not difficult to show that (Mathtype) and that (Mathtype); hence (I) and (II) are valid. But if h and k make the test statements only more or less probable, the corresponding proof cannot be stated.
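
     To see why deducibility matters, suppose (using the measure E as reconstructed in note 12, itself an assumption) that h entails d, so that p(d | h) = 1 and E(h, d) = (1 - p(d))/(1 + p(d)), which increases as p(d) decreases; then p(d) < p(f) immediately gives h a higher value than k. When h confers on d only a probability short of 1, no such uniform comparison follows.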

  22. By Popper and e.g. by J. G. Kemeny in ‘Two Measures of Complexity’, Journal of Philosophy 52 (1955) 131–75.

  23. For the notion of random variable, see e.g. W. Feller, An Introduction to Probability Theory and its Applications, Vol. I, 2nd ed., New York 1957, Chapter IX.

  24. See C. E. Shannon and W. Weaver, The Mathematical Theory of Communication, Urbana, Ill., 1949, p. 56.

  25. See Pietarinen and Tuomela, op. cit.

  26. Structure of Science, p. 139.

  27. Often the empirical (physical) probabilities which occur in probabilistic hypotheses are given a relative frequency interpretation. Occasionally an empirical interpretation other than the statistical one is preferable, however. For instance, in theories designed for explaining individual choice behavior, the response probabilities of experimental subjects are more naturally given a personal or sometimes perhaps a psychological rather than a statistical interpretation.

  28. The terminology of certain authors differs from ours. Nagel, for instance, understands by probabilistic explanation what is here called inductive explanation; our probabilistic explanation corresponds to his concept of statistical explanation (see Structure of Science, pp. 22–3). Hempel also speaks of statistical explanation, which can be of either deductive or inductive type; he distinguishes statistical explanation from the nomological kind of explanation (which corresponds to our probabilistic-deterministic distinction).

  29. This condition has been stated e.g. by Hempel (in Aspects of Scientific Explanation, p. 389) and Levi (in Gambling with Truth, p. 209). Obvious as it may seem to be, it is by no means philosophically unproblematic, as is shown by David Miller’s ‘A Paradox of Information’, The British Journal for the Philosophy of Science 17 (1966) 59–61, and by the discussion of this paper, especially by W. Rozeboom’s ‘New Mysteries for Old: the Transfiguration of Miller’s Paradox’, The British Journal for the Philosophy of Science 19 (1969) 345–58.

  30. Cf. Carnap, Logical Foundations of Probability, pp. 495–6.

  31. De Finetti’s own interpretation of his famous result concerning betting ratios on probability statements is that the assumption of (unknown) empirical probabilities is unnecessary. However, this is not the only possible interpretation, as has been argued e.g. by Hintikka (‘The Philosophical Significance of de Finetti’s Representation Theorem’, unpublished). He sees one significance of de Finetti’s result in the very fact that it shows a person who believes in the existence of objective probabilities (as surely most scientists do) how to bet on them.

  32. This need not always be the case, however. There are good examples of what Hempel calls self-evidencing explanations, where the occurrence of the explanandum event provides the only evidential support, and where this support is nevertheless very strong (see Aspects of Scientific Explanation, pp. 372-3).

  33. See D. V. Lindley, ‘Statistical Inference’, Journal of the Royal Statistical Society, B, 15 (1953) 30–65. The acceptance of h means here the rejection of the alternative k.

  34. Isaac Levi’s essay Gambling with Truth, as well as many of his earlier publications on the aims of science and on the importance of decision theoretical considerations in scientific inference and acceptance procedures, are of utmost importance in this context. Unfortunately, this single reference must suffice here.

  35. For a more detailed discussion of this kind of measure of acceptability, see R. Hilpinen, Rules of Acceptance and Inductive Logic, Acta Philosophica Fennica 22 (1968), Ch. 9.

  36. Cf. Hempel, Aspects of Scientific Explanation, pp. 344–403.

Copyright information

© 1970 D. Reidel Publishing Company, Dordrecht-Holland

About this chapter

Cite this chapter

Pietarinen, J. (1970). Quantitative Tools for Evaluating Scientific Systematizations. In: Hintikka, J., Suppes, P. (eds) Information and Inference. Synthese Library, vol 28. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-3296-4_5

  • DOI: https://doi.org/10.1007/978-94-010-3296-4_5

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-94-010-3298-8

  • Online ISBN: 978-94-010-3296-4
