Abstract
One of the basic functions of a system is to provide information on certain facts or states of affairs about which we are uncertain or agnostic. And to provide information is to reduce uncertainty or agnosticism. For instance, knowing the catalogue system of the University Library in Helsinki helps one to find answers to such questions as whether or not Popper’s Logik der Forschung is there and, if it is, where among the multitude of books it can be found.
Many suggestions and comments by Prof. Jaakko Hintikka, Mr. David Miller, Dr. Risto Hilpinen, Dr. Raimo Tuomela, and Mr. Kimmo Linnila have been of great value in preparing this paper.
References
Cf. the distinction between local and global theorizing in Jaakko Hintikka’s paper ‘The Varieties of Information and Scientific Explanation’, in Logic, Methodology and Philosophy of Science III, Proceedings of the 1967 International Congress (ed. by B. van Rootselaar and J. F. Staal), Amsterdam 1968, pp. 151–71. This paper contains many suggestive ideas concerning the use of measures of information in scientific systematizations.
This line of thought appears in Peirce’s retroductive inference, as presented in N. R. Hanson, Patterns of Discovery, Cambridge 1958, p. 86. Similarly, Karl Popper writes: “What is the general problem situation in which the scientist finds himself? He has before him a scientific problem: he wants to find a new theory capable of explaining certain experimental facts; facts which the earlier theories successfully explained; others which they could not explain; and some by which they were actually falsified” (Popper, Conjectures and Refutations, New York 1962, p. 241).
To be accurate, some phrase like ‘with respect to everything else known’ should be added here. That is, if the ‘background knowledge’ is b, the measures should read U(d/h & b) and U(d/b). The background knowledge may contain other hypotheses accepted at a given time as well as descriptions of observational results different from d; for instance, the antecedent conditions for inferring d from h should be included in b. Following the tradition, this background knowledge is usually left implicit in the expressions under consideration.
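The reduction-of-uncertainty idea behind these measures can be sketched numerically. The choice of U(d) = −log₂ p(d) is only one of several candidate uncertainty measures, and all probability values below are invented for illustration; nothing here is taken from the paper itself.

```python
import math

def U(p):
    """Uncertainty of a statement with probability p, here taken as -log2 p
    (an illustrative choice; other uncertainty measures are possible)."""
    return -math.log2(p)

# Invented numbers: probability of the datum d on the background b alone,
# and on the hypothesis h conjoined with b.
p_d_given_b = 0.2
p_d_given_hb = 0.8

# How much h reduces the uncertainty of d, relative to b:
# U(d/b) - U(d/h & b).
reduction = U(p_d_given_b) - U(p_d_given_hb)
print(round(reduction, 4))   # 2.0
```

On these numbers the hypothesis removes two bits of the uncertainty that the background knowledge left about d.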
If the background information b is written explicitly, expression (1) obtains the following form: U(d/b) − U(d/h & b).
For instance, Karl Popper (see The Logic of Scientific Discovery, London 1959, Appendix *IX), Carl G. Hempel and Paul Oppenheim (‘Studies in the Logic of Explanation’, Philosophy of Science 15 (1948) 135–75), Rudolf Carnap and Yehoshua Bar-Hillel (‘An Outline of a Theory of Semantic Information’, Technical Report No. 247 of the Research Laboratory of Electronics, MIT, 1952; reprinted in Y. Bar-Hillel, Language and Information, Reading, Mass., 1964, pp. 221–74), and J. G. Kemeny (‘A Logical Measure Function’, Journal of Symbolic Logic 18 (1953) 289–308) proceed in this way. One notable exception to this tradition is Isaac Levi (see Levi, Gambling with Truth, New York 1967, and especially his paper ‘Information and Inference’, Synthese 17 (1967) 369–91).
By the ‘usual conditions of adequacy’ we mean in the first place such restrictions on the measure p as are implied by defining p as a fair-betting ratio; see e. g. Kemeny’s essay ‘Carnap’s Theory of Probability and Induction’, in The Philosophy of Rudolf Carnap (ed. by P. A. Schilpp), La Salle, Ill., 1963, pp. 711–38. What else should be required of p in order for it to offer appropriate tools for defining measures of uncertainty is left open to a large extent. One group of measure functions (such as give general sentences a zero probability in an infinite domain) is argued to be inadequate for this purpose by Hintikka and Pietarinen (‘Semantic Information and Inductive Logic’, in Aspects of Inductive Logic (ed. by K. J. Hintikka and P. Suppes), Amsterdam 1966, pp. 96–112). Certain general difficulties and open questions concerning the inductive probabilities should perhaps be mentioned here. The main difficulties are the following (see Kemeny, loc. cit., and also Carnap’s ‘Replies and Expositions’, in the same volume): (i) how to extend the methods of determining inductive probabilities for sentences from such simple languages as the monadic predicate calculus to languages with more than one family of predicates of first and higher order; and (ii) how to find satisfactory inductive probabilities for general propositions. Kemeny (as well as Carnap) points out that the extension mentioned under (i) does not cause new problems in principle, though it does mean vast and difficult mathematical work. The problems under (ii), on the other hand, raise new questions. One answer has been offered by Hintikka (see his ‘A Two-Dimensional Continuum of Inductive Methods’, in Aspects of Inductive Logic, pp. 113–32) for a monadic first-order language. Carnap in his ‘Replies’ (p. 977) mentions that he also has a (so far unpublished) solution to the problem.
See note 5 for the references.
In ‘The Varieties of Information and Scientific Explanation’. In his Conjectures and Refutations, p. 390, Popper seems to have the same measure in mind; similarly, and more explicitly, in ‘Theories, Experience and Probabilistic Intuitions’, in The Problem of Inductive Logic (ed. by Imre Lakatos), Amsterdam 1968, p. 287.
E. g., in Carnap and Bar-Hillel, op. cit.
This sense of explanation is illustrated by what Hempel regards as a general condition of adequacy for any rationally acceptable explanation of a particular event. To quote Hempel, “any rationally acceptable answer to the question ‘Why did event X occur?’ must offer information which shows that X was to be expected - if not definitely, as in the case of D-N explanation, then at least with reasonable probability. Thus, the explanatory information must provide good grounds for believing that X did in fact occur; otherwise, that information would give us no adequate reason for saying: ‘That explains it - that does show why X occurred.’” (C. G. Hempel, Aspects of Scientific Explanation, New York 1965, pp. 367–8).
The idea of using the logarithmic measure of transmitted information as the basis for defining expressions for the explanatory power is not new. It is discussed by Popper in Logic of Scientific Discovery, p. 403; similarly, I. J. Good argues that this measure is “an explication for ‘explanatory power’ but not for corroboration” (see I. J. Good, ‘Weight of Evidence, Corroboration, Explanatory Power, Information and the Utility of Experiments’, Journal of the Royal Statistical Society, B, 22 (1960) 319–31).
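The logarithmic measure of transmitted information mentioned here, log p(d/h) − log p(d), can be sketched as follows; the function name and the sample probabilities are illustrative assumptions, not notation from the paper.

```python
import math

def transmitted_info(p_d_given_h, p_d):
    """Logarithmic transmitted information: log2 p(d/h) - log2 p(d),
    i.e. how much the hypothesis h reduces the surprise of the datum d."""
    return math.log2(p_d_given_h) - math.log2(p_d)

# A hypothesis that makes d much more expected than it is a priori:
print(transmitted_info(0.9, 0.1))   # positive: high explanatory power w.r.t. d
# A hypothesis that leaves the probability of d unchanged:
print(transmitted_info(0.1, 0.1))   # zero: no explanatory power w.r.t. d
```

The measure is positive exactly when h raises the probability of d, zero when it is irrelevant, and negative when h makes d less expected.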
This remark also concerns the measure of explanatory power E(h, d) = (p(d/h) − p(d))/(p(d/h) + p(d)) proposed by Popper (e. g., p. 400 in Logic of Scientific Discovery).
For the reference, see note 5.
Perhaps this requirement is made most explicitly by Joseph Hanna on p. 13 of his paper ‘A New Approach to the Formulation and Testing of Learning Models’, Synthese (1966) 344–80. Hanna’s ideas come very close to the approach presented here; he relies entirely, however, on statistical concepts of probability and uncertainty. (For a further comparison of Hanna’s approach and the one sketched here, see J. Pietarinen and R. Tuomela, ‘An Information Theoretic Approach to the Evaluation of Behavioral Theories’, Reports from the Institute of Social Psychology, Univ. of Helsinki, No. 2, 1968.) In other standard references, the characterization of what we shall call inductive systematization, and what is variously called statistical (e. g. by W. C. Salmon in ‘The Status of Prior Probabilities in Statistical Explanation’, Philosophy of Science 32 (1965) 137–46), probabilistic (e. g. by Nagel in Structure of Science), or inductive (by Hempel in Aspects of Scientific Explanation) explanation or prediction, is based on the idea that the explanans makes the explanandum highly probable.
A particularly interesting field of application for the syst-measures is offered by historical research. It is proper for historians to ask how much common content the evidence material at hand has with such and such a narrative.
That there is a one-to-one correspondence between the structure of statistical informational analysis and that of the usual analysis of variance has been shown by Garner and McGill in their paper ‘Relation between Uncertainty, Variance, and Correlational Analysis’, Psychometrika 21 (1956) 219–28. Since the unc-measure is quite analogous to the statistical (Shannonian) measure of information, it is not surprising to find terms similar to those of the analysis of variance in our context too.
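The Garner–McGill correspondence can be illustrated with a small sketch: the Shannonian transmitted information H(X) + H(Y) − H(X, Y) plays the role of “variance explained” in an analysis of variance. The joint distribution below is invented for illustration only.

```python
import math

# Hypothetical joint distribution over an explanans variable X
# and an explanandum variable Y.
joint = {("x1", "y1"): 0.4, ("x1", "y2"): 0.1,
         ("x2", "y1"): 0.1, ("x2", "y2"): 0.4}

def H(dist):
    """Shannon uncertainty (entropy, in bits) of a distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distributions of X and Y.
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

# Transmitted (mutual) information: H(X) + H(Y) - H(X, Y),
# the informational analogue of the between-groups component
# in an analysis of variance.
T = H(px) + H(py) - H(joint)
print(round(T, 4))   # 0.2781
```

Here X and Y are statistically dependent, so a positive amount of uncertainty about Y is “accounted for” by X, just as a factor accounts for part of the variance in the statistical analysis.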
In Conjectures and Refutations, pp. 215–50.
Loc. cit., p. 217. The same ideas can be found in many of the earlier publications of Popper, esp. in Logic of Scientific Discovery, as is indicated by Popper himself on the page cited.
The literature on scientific explanation contains an argument (relying on Popper’s ideas) which is relevant here. H. E. Kyburg’s theorem, put forward in his discussion ‘On Salmon’s Paper’, Philosophy of Science 32 (1965) 147–51, p. 148, says that the explanatory powers of two theories are equal if and only if the prior probabilities of these theories are equal. This argument is built on the premise that explanatory power is a monotone increasing function of the logical strength of theories, that is, on the idea that the explanatory power of a theory is a monotone increasing function of a measure of the possibilities which are excluded by the theory. But it is not valid if the premise is qualified so that it corresponds to the intuition behind our concept of explanatory power: the explanatory power of a theory with respect to an explanandum is a monotone increasing function of a measure of the possibilities excluded by the theory from among those allowed by the explanandum.
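The point of the qualification can be made with a toy model of possible worlds (the sets below are invented for illustration): two theories may exclude equally many possibilities overall, and so have equal priors on a uniform measure, while excluding different amounts from the possibilities the explanandum leaves open.

```python
# Eight equiprobable "possible worlds"; all sets are purely illustrative.
W = set(range(8))
d = {0, 1, 2, 3}      # worlds allowed by the explanandum d

h = {0, 4, 5, 6}      # worlds allowed by theory h
k = {0, 1, 4, 5}      # worlds allowed by theory k

# Both theories exclude the same number of worlds overall, so on a
# uniform measure their prior probabilities are equal:
print(len(W - h), len(W - k))   # 4 4

# Yet they exclude different amounts from the possibilities allowed
# by the explanandum -- the qualified notion of explanatory power:
print(len(d - h))   # 3
print(len(d - k))   # 2
```

Equal priors thus do not force equal explanatory power once exclusion is measured relative to the explanandum, which is why Kyburg’s argument fails against the qualified premise.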
See e. g. Conjectures and Refutations, pp. 390–1, and Logic of Scientific Discovery, pp. 400–3.
Consider, for example, his measure (Mathtype). If now from h we can deduce a fact d, and from k a fact f such that (Mathtype), h gives a higher value to (T) than k. It is then not difficult to show that (Mathtype) and that (Mathtype); hence (I) and (II) are valid. But if h and k make the test statements only more or less probable, the corresponding proof cannot be stated.
By Popper and e. g. by J. G. Kemeny in ‘Two Measures of Complexity’, Journal of Philosophy 52 (1955) 722–33.
For the notion of random variable, see e. g. W. Feller, An Introduction to Probability Theory and its Applications, Vol. I, 2nd ed., New York 1957, Chapter IX.
See C. E. Shannon and W. Weaver, The Mathematical Theory of Communication, Urbana, Ill., 1949, p. 56.
See Pietarinen and Tuomela, op. cit.
Structure of Science, p. 139.
Often the empirical (physical) probabilities which occur in probabilistic hypotheses are given a relative frequency interpretation. Occasionally, however, an empirical interpretation other than the statistical one is preferable. For instance, in theories designed for explaining individual choice behavior, the response probabilities of experimental subjects are more naturally given a personal, or sometimes perhaps a psychological, rather than a statistical interpretation.
The terminology of certain authors differs from ours. Nagel, for instance, understands by probabilistic explanation what is here called inductive explanation; our probabilistic explanation corresponds to his concept of statistical explanation (see Structure of Science, pp. 22–3). Hempel also speaks of statistical explanation, which can be of either deductive or inductive type; he distinguishes statistical explanation from the nomological kind of explanation (which corresponds to our probabilistic–deterministic distinction).
This condition has been stated e. g. by Hempel (in Aspects of Scientific Explanation, p. 389) and Levi (in Gambling with Truth, p. 209). Obvious as it may seem to be, it is by no means philosophically unproblematic, as is shown by David Miller’s ‘A Paradox of Information’, The British Journal for the Philosophy of Science 17 (1966) 59–61, and by the discussion of this paper, especially W. Rozeboom’s ‘New Mysteries for Old: the Transfiguration of Miller’s Paradox’, The British Journal for the Philosophy of Science 19 (1969) 345–58.
Cf. Carnap, Logical Foundations of Probability, pp. 495–6.
De Finetti’s own interpretation of his famous result concerning betting ratios on probability statements is that the assumption of (unknown) empirical probabilities is unnecessary. However, this is not the only possible interpretation, as has been argued e. g. by Hintikka (‘The Philosophical Significance of de Finetti’s Representation Theorem’, unpublished). He sees one significance of de Finetti’s result in the very fact that it shows a person who believes in the existence of objective probabilities (and surely most scientists do this) how to bet on them.
This need not always be the case, however. There are good examples of what Hempel calls self-evidencing explanations, where the occurrence of the explanandum event provides the only evidential support, and where this support is nevertheless very strong (see Aspects of Scientific Explanation, pp. 372-3).
See D. V. Lindley, ‘Statistical Inference’, Journal of the Royal Statistical Society, B, 15 (1953) 30–65. The acceptance of h means here the rejection of the alternative k.
Isaac Levi’s essay Gambling with Truth, as well as many of his earlier publications on the aims of science and on the importance of decision-theoretical considerations in scientific inference and acceptance procedures, are of utmost importance in this context. Unfortunately, this single reference must suffice here.
For a more detailed discussion of this kind of measure of acceptability, see R. Hilpinen, Rules of Acceptance and Inductive Logic, Acta Philosophica Fennica 22 (1968), Ch.9.
Cf. Hempel, Aspects of Scientific Explanation, pp. 344–403.
Copyright information
© 1970 D. Reidel Publishing Company, Dordrecht-Holland
Cite this chapter
Pietarinen, J. (1970). Quantitative Tools for Evaluating Scientific Systematizations. In: Hintikka, J., Suppes, P. (eds) Information and Inference. Synthese Library, vol 28. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-3296-4_5
DOI: https://doi.org/10.1007/978-94-010-3296-4_5
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-010-3298-8
Online ISBN: 978-94-010-3296-4
eBook Packages: Springer Book Archive