
Problems with Using the Normal Distribution – and Ways to Improve Quality and Efficiency of Data Analysis

Abstract

Background

The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation.

Methodology/Principal Findings

Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are in general far more important than additive ones, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similarly to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, x/ (times-divide), and a corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* x/ s*, which is advantageous and recommended.

Conclusions/Significance

The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.

Introduction

Quantitative variation in scientific data is usually described by the arithmetic mean and the standard deviation in the form x̄ ± SD. In graphical displays, error bars around mean values display the degree of precision of the means – which is usually essential for an adequate interpretation. This characterization is adequate for and evokes the image of a symmetric distribution or, more specifically, the normal or Gaussian distribution [1]–[3]. As is well known, the latter model implies that the range from x̄ − SD to x̄ + SD contains roughly the middle two thirds (68%) of the variation, and the interval x̄ ± 2 SD covers 95%. So widely is this description used that it is almost mandatory in most scientific journals to present data with their means and either standard deviations or standard errors of the mean (SEM), in the form x̄ ± SD or x̄ ± SEM.

Results and Discussion

The Problem

However, there are numerous examples for which the description by a mean and a symmetric range of variation around it is clearly misleading. This becomes obvious whenever the standard deviation is of the same order as the mean, so that the lower end of the 95% data interval extends below zero for data that cannot be negative, as is the case for most original data in science. In such cases, we say that the data fail the “95% range check.” Table 1a presents some recent examples. For instance, in investigations of health risk, a sample of insulin concentrations in rat blood is described by x̄ ± SD = 296 ± 172 [4]. If a normal distribution were appropriate, the 95% range would extend from −48 to 640, and 4% of the animals would have negative insulin values, which is, of course, impossible. Moreover, and worse, in this and many further examples there is even a positive threshold below which values cannot occur. Clearly, data of this kind will be skewed.
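The “95% range check” is simple enough to be automated; the following minimal sketch (function name ours, for illustration) flags a reported mean and SD whenever the approximate 95% interval, mean ± 2 SD, extends below zero for a quantity that cannot be negative:

```python
def fails_95_range_check(mean, sd):
    """Return True if mean - 2*SD < 0, i.e. a normal model would place
    part of the 95% range on impossible negative values."""
    return mean - 2 * sd < 0

# The insulin example from the text: mean 296, SD 172 pM [4];
# the lower limit is 296 - 2*172 = -48, so the check fails.
print(fails_95_range_check(296, 172))   # prints True
```

As the text notes, the check can reject normality even for very small samples, since it uses only the two published summary statistics.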

The problem is less apparent, but often even more severe if, instead of standard deviations, standard errors of the mean (SEM) are given (Table 1b). In such cases the intervals obtained, compared to the mean value, are shorter, thus hiding the skewed nature of the data.

One example is on data evaluation and error bars and gives helpful explanations of several points of confusion on this topic [5]. It is highly esteemed and one of the top ten all-time most-viewed papers in biology according to the Faculty of 1000 [6]. In this paper, symmetric error bars showing SEM of n = 3 observations are displayed for data sets concerning the evolution of clonal cell counts, but for 3 out of 8 samples (E2-E4), the estimated distribution, if assumed normal, would suggest between 12% and 19% of the data being negative.

A peculiar type of plot is found in [10] (Fig. 4, p. 471). Based on the established symmetric view at the level of the original data as described above, the means and standard errors (of n = 3) are presented in this case on a logarithmically scaled vertical axis. This results in asymmetric intervals with upward bars that are shorter than downward ones. Again, as an x̄ ± 2 SD interval would enclose negative numbers in at least one case, the corresponding lower bar would extend to minus infinity on that plot.

Initially, we noticed such examples from the fields of our own research [9], [14], [15]. Extending the scope, we recognized them to exist across the sciences, with the notable exception of some fields of research such as atmospheric, hydrological, soil, or financial sciences. As a general rule, we found one or more papers with such examples per issue of a journal, including the most prestigious ones with their spectrum of contributions from across the sciences and their qualified refereeing systems. A conservative estimate based on the Journal Citation Report [16] thus leads to more than one thousand such papers published per week in the Science Edition only.

The description of data by x̄ ± SD or x̄ ± SEM does not, of course, formally imply the assumption of a symmetrical distribution, and many authors will be aware of the asymmetric nature of their data. Then, for any formal analyses of the data, appropriate methods, notably nonparametric tests, are used. In the same paper, however, graphical displays usually still use the symmetric description, thus pointing to a dilemma. In any case, our emphasis here is not to criticize inadequate analyses of data, but to highlight the potential for improved quality and new insights to be obtained by using an alternative description.

Towards Solving the Problem

In all cases cited in Table 1, the distributions of the datasets will be skewed, with the longer tail to the right. The simplest model that describes such variability is the log-normal distribution [12], [17]–[19]. Fig. 1a shows a typical case of data (last line in Table 1) with fitted normal and log-normal distributions. The normal distribution is clearly inappropriate as it suggests a probability of 20% for negative values. The log-normal model corresponds to a normal distribution for logarithmically transformed data, which yields a nice fit (Fig. 1b).

Figure 1. Adequate characterization of data improves the results.

- a,b, The frequency distribution of a chemical (hydroxymethylfurfurol, HMF) in honey is used to illustrate the problem and its solution. a, Obviously, the normal density curve does not fit this skewed dataset, but the log-normal does. b, The distribution is normal after logarithmic transformation and, thus, log-normal. Back-transforming x̄ and SD from the level of the logarithms gives the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s*, which allow characterization of variation at the original scale of the data (a). c, Comparison of the two types of (1 standard deviation) intervals for the datasets A-J shown in Table 1. Clearly, the multiplicative intervals are shorter, thus increasing the potential for differentiation. Moreover, they never lead to negative values and usually describe the variation encountered well. d,e, Multiplicative intervals improve differentiation in an example from [20]. d, Original, additive description of variation, with two significant differences, *, and a third one close to significance. Error bars indicate SEM. e, The multiplicative type of intervals (based on the original, unpublished data received from the authors), shown here with a log scale on the vertical axis, leads to a more plausible picture and makes all three differences more significant, one of them now highly significant. Error bars indicate SEM*.

https://doi.org/10.1371/journal.pone.0021403.g001

Log-normal variation is most adequately characterized by the geometric, or multiplicative, mean x̄* and the multiplicative standard deviation s* [18]. These parameters determine an interval containing 2/3 of the data, as does the description x̄ ± SD for (additive) normal data: the interval ranges from x̄* divided by s* to x̄* times s* and may be denoted by x̄* x/ s* (read “x̄* times-divide s*”). The two types of intervals are indicated in Fig. 1a. They are compared for all datasets of Table 1 in Fig. 1c. (Since we do not have access to the original data, x̄* and s* were calculated from x̄ and SD using the formulas for the expectation and standard deviation of a log-normal distribution, as described in the footnote of Table 2.) The 95% variation interval for insulin in rats [4] now covers the range x̄* x/ (s*)² = 256 x/ (1.71)² = 87 to 753 pM, which appears physiologically plausible. For the respective values and intervals for the other cases, see Table 2, which contains examples from a variety of fields of science.
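The conversion used here can be sketched numerically; the following illustration (in Python, function name ours) applies the standard moment-matching formulas for the log-normal distribution, i.e., σ² = ln(1 + (SD/x̄)²) and μ = ln(x̄) − σ²/2, to the insulin example [4]:

```python
import math

def lognormal_params_from_mean_sd(mean, sd):
    """Convert an (additive) mean and SD into the multiplicative mean x*
    and multiplicative SD s* by matching the first two moments of a
    log-normal distribution."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)   # variance of log(X)
    mu = math.log(mean) - sigma2 / 2.0          # mean of log(X)
    x_star = math.exp(mu)                       # multiplicative (geometric) mean
    s_star = math.exp(math.sqrt(sigma2))        # multiplicative standard deviation
    return x_star, s_star

# Insulin in rat blood [4]: mean 296, SD 172 (pM)
x_star, s_star = lognormal_params_from_mean_sd(296, 172)
print(round(x_star), round(s_star, 2))                         # prints 256 1.71
# 95% range, x* times-divide (s*)^2:
print(round(x_star / s_star ** 2), round(x_star * s_star ** 2))  # prints 87 753
```

The computed values reproduce the interval 87 to 753 pM quoted above.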


In order to show the advantages of an appropriate description, we discuss a graph of Raizen et al. [20] reproduced in Fig. 1d. The symmetrical error bars follow the typical pattern of skewed distributions discussed above. Using a log-scaled vertical axis (Fig. 1e), the variation in the lower curves appears similar to the scatter in the upper part, thus reflecting a common relative variation for all the conditions and groups. This insight leads to more efficient statistical testing. The three two-group t-tests indicated by the authors (Fig. 1d) become more significant, as the p values decrease from 1.8% to 1.5%, from 4.2% to 0.5%, and from 5.3% to 2.0%, if the t-test on the original data is replaced by the same test on log-transformed data (Fig. 1e). Thus, in this example, which stands for many analyses found in science, recognition of the log-normal nature of the data leads to more informative graphs and more precise statistics.

The Fundamental Role of Multiplication - and of the Log-Normal Distribution

Heath [21] pointed out that for “certain types of data the assumption that the data are drawn from a normal population is usually wrong, and that the alternative assumption of a log-normal distribution is better”. As further explained below, this statement appears to be of much broader importance: it is in line with the fact that, in general, laws and processes in science and life are of multiplicative rather than additive nature. From the ample evidence [e.g., 22], let us mention some basic features:

Chemistry is fundamental for life. The velocity of the reaction of A with B is proportional to the product of the individual concentrations, v ∝ [A]·[B]. With the complex networks of biochemical reactions and pathways for, e.g., anabolism, catabolism, and signalling within the many kinds of biological tissues, this type of law thus affects innumerable aspects of life such as, e.g., concentrations of insulin [4], [23]. Secondly, life depends on processes and laws of mobility and permeability. Baur [24] demonstrated with thorough documentation that these processes fit not the normal but the log-normal distribution. – Similarly, the Hagen-Poiseuille law V/t = (ΔP r⁴ π)/(8 η L) is important for mobility and, without going into detail here, consists of several multiplicative (and divisive) steps.

Thirdly, considering growth, it appears that rates are often constant in first approximation, meaning that the current size is multiplied by the rate to obtain the new size. Finally, cell numbers after division follow the exponential series 1, 2, 4, 8, 16. With a median concentration of, e.g., 10⁶ bacteria, one cell division more or less yields 2×10⁶ or 0.5×10⁶ bacteria. The variation is asymmetric and could be described by 10⁶ x/ 2. This appears to be the reason why for blood cell counts Sorrentino arrived at a log-normal fit [25]–[27], which is supposed to hold for other cell counts, too [e.g. 5,11,52,54,57,59,60]. In the present context, the name of one outcome of cell division is interesting to consider, as that process is simply called multiplication. – Summarizing, more than 50% of the examples from Table 2 can be based on one or the other of these effects, and for other examples, further multiplicative effects are quite plausible.

The link between multiplicative processes and the log-normal distribution is straightforward: Whereas additive effects lead to the normal distribution according to the Central Limit Theorem (CLT) in its well-known additive form, almost exclusively considered so far, the superposition of many small random multiplicative effects results in a log-normally distributed random variable according to the multiplicative CLT [17], which needs to become better known and understood. To this aim, statistical models resembling gambling machines can help. Whereas the mechanical equivalent of the additive CLT is the established Galton board [28], the multiplicative CLT can be visualized by an analogous novel board [18], [29], [30]. To conclude, there is a sound theoretical justification for thinking in multiplicative terms and using the log-normal distribution as first choice, at least as an approximation.
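The multiplicative CLT can also be illustrated numerically. The sketch below (factor distribution chosen arbitrarily for illustration) multiplies many small independent random effects: the products come out right-skewed, while their logarithms, being sums of many small effects, are approximately symmetric:

```python
import math
import random
import statistics

random.seed(1)

def product_of_factors(k):
    """Multiply k independent random factors drawn uniformly from [0.9, 1.1]."""
    x = 1.0
    for _ in range(k):
        x *= random.uniform(0.9, 1.1)
    return x

products = [product_of_factors(100) for _ in range(5000)]
logs = [math.log(p) for p in products]

def skewness(xs):
    """Standardized third sample moment."""
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return statistics.fmean([((v - m) / s) ** 3 for v in xs])

# Products are clearly right-skewed; their logs are roughly symmetric.
print(round(skewness(products), 2), round(skewness(logs), 2))
```

This is the numerical counterpart of the two boards mentioned above: summing small effects gives the Galton board and the normal, multiplying them gives the log-normal.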

In addition to Heath, Baur and Sorrentino [21], [24]–[27], several authors have stressed the need for the log-normal view in their fields of research. Kelly [31] described it for food webs, and Hattis et al. [32] related health risks caused by toxicants to a chain of multiplicative steps including contact rate, uptake as a fraction of contacts, general systemic availability, etc. Morrison [33], re-analysing published data based on the normal view, even came, with the log-normal view, to conclusions contradicting the original ones.

There is also a more general area of concern. It relates to technical norms and limits of intervention. One example comes from testing construction material, where procedures to date are based on a normal approach, but Schäper shows the log-normal to fit better [34]. Similar considerations relate to limits of medical and chemical intervention [32], areas that appear to be of considerable concern.

In some sense, the skewed distributions failing the “95% range check” form the visible tip of the iceberg, which itself consists of the predominant multiplicative effects. A question even arises about the relevance of additive effects – and therefore of the normal distribution – in nature and science at large.

Normally Distributed Data

Of course, there are sets of original data that can be adequately described by a normal distribution. Such samples generally have a low coefficient of variation, and the fitted log-normal and normal distributions are similar. However, since the log-normal fits many skewed samples in addition, it is to be preferred because it more often describes data adequately than the common normal distribution does. Re-examining published original data, we did not find any samples fitting the (additive) normal distribution that did not fit the log- or multiplicative normal distribution equally well, or better. This even applies to examples such as body heights used in textbooks to illustrate the normal distribution. R. A. Fisher's data on 1164 men [1] yield a chi-square goodness-of-fit p value of 0.13 for the normal, and of 0.48 for the log-normal distribution. Exceptions to these findings are measurements that can adopt negative values, like angles and geographical coordinates. In addition, of course, transformed data and other quantities derived from original data often show a normal distribution.
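Such a comparison of fits can be sketched by comparing the maximized log-likelihoods of the two models on a given sample (here synthetic data, not Fisher's; helper names ours). The log-normal fit is simply a normal fit on log(x) plus the Jacobian term −log(x) from the change of variable:

```python
import math
import random
import statistics

random.seed(7)

def normal_loglik(data):
    """Log-likelihood of a normal fit at its MLE (sample mean, sample sd)."""
    m = statistics.fmean(data)
    s = statistics.pstdev(data)
    return sum(-0.5 * math.log(2 * math.pi * s * s)
               - (x - m) ** 2 / (2 * s * s) for x in data)

def lognormal_loglik(data):
    """Log-likelihood of a log-normal fit: normal fit on log(x) plus the
    Jacobian term -log(x) for each observation."""
    logs = [math.log(x) for x in data]
    return normal_loglik(logs) - sum(logs)

# A clearly skewed synthetic sample (log-normal with s* = 2):
sample = [math.exp(random.gauss(3.0, math.log(2.0))) for _ in range(200)]
print(lognormal_loglik(sample) > normal_loglik(sample))   # prints True
```

For a sample with low coefficient of variation the two log-likelihoods come out close, mirroring the observation above that the two fitted distributions are then similar.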

It is common practice to first perform a goodness-of-fit test for normality of the data and to transform the data or use an alternative to the t-test if the normal distribution is rejected. Note that this recipe is not supported by statistical theory, one reason being that for small samples, the goodness-of-fit tests have low power to detect any deviations and will therefore rarely lead to the appropriate test. Nevertheless, we have shown above that the “95% range check” can reject normality even for very few observations.

Increased Efficiency

Empirical studies are not only conducted to describe the data, but also to draw formal inference. The simplest and most common statistical problem is the comparison of two groups of data. To this aim, graphical descriptions are often augmented by asterisks indicating statistically significant differences. The description by x̄ ± SD or x̄ ± SEM suggests the application of the t-test as the natural choice. More careful authors apply the nonparametric Wilcoxon rank sum test instead if there are enough observations (>4) in each group. The appropriate alternative for small samples consists of applying the t-test to logarithmically transformed data. The widespread multiple comparisons procedures should also be used on transformed data.

Fortunately, the use of the t-test for skewed data usually keeps the level of the test at or below the assumed level (of usually 5%). Its use entails, however, the need for more experimental data to achieve the same precision in conclusions, i.e., the power of the test is unnecessarily low. Figure 2 makes this point clear. Assuming two samples of, e.g., n0 = 10 log-normal observations with a given s*, the difference of parameters x̄* between the two populations was chosen such that the statistical power of the adequate t-test for logarithmically transformed data is 90%. When the t-test is applied to the untransformed data, significance is obtained less often, i.e., the power is less than 90%. We therefore increased the sample size and simulated again, until the (inappropriate) test achieved the power of 90%. This increased sample size depends on the multiplicative standard deviation s*, which characterizes the skewness of the data, and on the original sample size n0, as shown in Fig. 2.

Figure 2. Savings in sample size.

If the t-test, which is based on the normal distribution, is applied to (skewed) raw data, the statistical power is lower than for the optimal procedure, which consists of applying it to the log transformed values. Starting from 2 groups of log-normal data with a given s*, we calculate the sample size needed in each group to achieve the same (simulated) statistical power with the (inappropriate) t-test applied to the raw data as with the optimal test, applied to n0  =  5, 10, and 50 observations in each group. This sample size is a function of s*. For the median skewness, s*  =  2.4, 16 observations are needed instead of 10, corresponding to 60% additional effort.

https://doi.org/10.1371/journal.pone.0021403.g002

For our examples chosen arbitrarily (Table 2), n varied from 3 to 47 and was most often around 10. s* varied from 1.7 to 8.6, with 20% of the cases being above 3.1, and with a median s* of 2.4. For the latter and n0 = 10, a sample size of 16 is needed with the inappropriate way of testing to achieve the same power. This means an increase of 60% in sample size. The range of this curve, n0 = 10, starts from an increase of 20% at s* = 1.7, and as much as 120% additional effort would be needed with s* = 3.1. For n0 = 50 the curve differs little at the beginning and rises to 80% additional effort at s* = 3.1. The difference in effort is most pronounced with low sample size. Whereas for n0 = 5 and s* = 1.7 there is an increase of 35%, it rises up to 200% for s* = 3.1. Thus, for clearly skewed data, adequate evaluation leads to large savings in experimental effort, i.e., in cost, patients, or animals involved, and therefore has ethical and political relevance.
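The power comparison underlying Fig. 2 can be sketched along the following lines; this is an illustrative reimplementation in Python, not the authors' original R code, with the shift parameter chosen to give roughly 90% power on the log scale:

```python
import math
import random
import statistics

random.seed(42)

T_CRIT = 2.101   # two-sided 5% critical value of the t distribution, df = 18

def two_sample_t(a, b):
    """Equal-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (statistics.fmean(a) - statistics.fmean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def simulated_power(n, sigma, delta, on_logs, runs=2000):
    """Fraction of simulated experiments in which the t-test rejects, for two
    log-normal groups of size n whose log-means differ by delta."""
    hits = 0
    for _ in range(runs):
        a = [math.exp(random.gauss(0.0, sigma)) for _ in range(n)]
        b = [math.exp(random.gauss(delta, sigma)) for _ in range(n)]
        if on_logs:
            a, b = [math.log(x) for x in a], [math.log(x) for x in b]
        if abs(two_sample_t(a, b)) > T_CRIT:
            hits += 1
    return hits / runs

sigma = math.log(2.4)   # the median multiplicative SD s* = 2.4 from Table 2
delta = 1.27            # shift giving roughly 90% power on the log scale
p_log = simulated_power(10, sigma, delta, on_logs=True)
p_raw = simulated_power(10, sigma, delta, on_logs=False)
print(p_log > p_raw)    # prints True: the test on logs is more powerful
```

Increasing n in the raw-data branch until `p_raw` reaches `p_log` reproduces the kind of sample-size curve shown in Fig. 2.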

More Precise Models

Of course, the log-normal distribution is not always the best model for skewed data. It is clearly appropriate to select a model that describes the variation of data as precisely as possible in any given application, and to use the corresponding optimal inference procedures. For some fields of science, there is solid theoretical and empirical justification to use a particular type of distribution, e.g., the Weibull, Gamma, Pareto, or Exponential distribution in insurance and reliability.

Note that large samples are needed to select between different types of distributions empirically. If such data are not available, nonparametric tests and respective confidence intervals should be used. Nevertheless, in most cases the description by x̄* x/ s* is still more adequate than x̄ ± SD, and the log-normal model may serve as an approximation in the same sense that many scientists now perceive the normal as a valid approximation.

Conclusions and Outlook

In the light of the examples considered, it is evident that data often follow asymmetric variation, even though they are characterized in symmetric terms, and the question arises: Has the normal distribution become too normal?

We advocate the use of the log-normal distribution and the description by x̄* x/ s* as a simple standard way of treating data, unless more adequate specific distributions are available, in the same spirit as the normal distribution and the x̄ ± SD notation have been and are used up to date. In the same way, x̄* x/ SEM* should replace x̄ ± SEM when calculating “inferential error bars” [5], and similarly for confidence intervals.
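The multiplicative standard error can be computed analogously to its additive counterpart; the sketch below assumes the definition SEM* = (s*)^(1/√n), i.e., back-transforming SD(log x)/√n from the log scale (our reading of the notation, by analogy with SEM = SD/√n):

```python
import math

def multiplicative_sem(s_star, n):
    """Multiplicative standard error SEM* under the assumed definition
    SEM* = (s*)**(1/sqrt(n)), the back-transform of SD(log x)/sqrt(n).
    The inferential interval is then x* times-divide SEM*."""
    return s_star ** (1.0 / math.sqrt(n))

# For the median multiplicative SD s* = 2.4 and n = 10 observations:
print(round(multiplicative_sem(2.4, 10), 2))   # prints 1.32
```

As with s* itself, SEM* is a dimensionless factor: the error bars run from x̄*/SEM* to x̄*·SEM* and are symmetric on a logarithmic axis.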

In fact, when assessing the variability of data from the x̄ ± SD characterization, we usually compare the SD to the mean. The multiplicative standard deviation does not need such a standardisation, and there is evidence that typical values occur within most kinds of empirical data. Incubation times of human diseases, e.g., show a typical range of s* values around 1.4 [18; Limpert & Stahel, unpublished], and it would be of interest to see how this compares to diseases of animals and plants. Thus, the use of s* has the potential of providing deeper insight into the variability of data than the usual standard deviation.

The use of the log-normal model is equivalent to first subjecting the data to the log transformation and then proceeding with methods based on the normal distribution. In graphical displays, the use of logarithmically scaled axes combines the advantages of appropriate symmetrical error bars with the ease of interpretation of the shown values (cf. Fig. 1e).

When multiplicative effects are quantified by experiments, a version of analysis of variance with multiplicative instead of additive effects would be adequate, as already recognized by Fisher and Mackenzie in 1923 [35]. Such models are again akin to treating log-transformed data by usual, additive analysis of variance or regression methods. This is in agreement with the established advice of John Tukey to use logarithms as the “first aid transformation” in the evaluation of the usual type of quantitative data, a type of data that he calls “amounts” [36]. When fitting such models, it is well known that assessing the distribution of residuals is important, and we get the impression that this point is often neglected by those who use the models for untransformed original data.

In economics and even more so in finance, the log-normal distribution has been generally used for half a century now [17], [37][39]. This often occurs implicitly through studying, e.g., logarithmically transformed returns rather than absolute ones. This view forms the basis of the more advanced models used, e.g., for option pricing [40], [41]. Similar traditions are also established in some other fields of science.

Fortunately, characterizing log-normal variation by x̄* x/ s* is no more difficult than using the common description by x̄ ± SD. Thus, there is no reason why the log-normal should, as has been well expressed by Aitchison & Brown, remain the Cinderella of distributions, dominated by its famous “normal” sister [17], and the questions arise, in general: “How normal are additive effects?” and “How normal is the normal distribution?” We believe that the shift in emphasis, away from additive to multiplicative effects and from the normal towards the log- or multiplicative normal distribution, is beneficial and necessary. It will lead to advances in the interpretation of data, and improve our understanding of the concepts behind the empirical phenomena in science and life.

Analysis

The data used in this study were obtained from the literature. Most references were found by browsing through certain issues of renowned journals and scrutinizing the figures displaying data. All calculations were done with the statistical programming environment R. For obtaining Fig. 2, a function was written that simulated the power of the t-test on untransformed data for any given sample size n and multiplicative standard deviation s*. For given n > n0, the s* leading to 90% power was then calculated with an ad hoc method for solving the respective implicit equation.

Acknowledgments

We are grateful to Dr. David Raizen, University of Pennsylvania, Philadelphia, and coauthors for allowing us to re-analyze their data as well as for their general interest; to Dr. Robert Merton, Sloan School of Management, MIT, Cambridge, for helpful comments about the significance of the log-normal distribution in finance; to Markus Abbt, Zurich, and Dr. Roy Snaydon for continuous interest and support; and to the referees for stimulating comments and discussions.

Author Contributions

Conceived and designed the investigation: EL. Collected the data: EL. Analyzed the data: WAS. Contributed analysis tools: WAS. Wrote the manuscript: EL WAS.

References

  1. 1. Fisher RA (1958) Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd. 356 p.
  2. 2. Snedecor GW, Cochran WG (1989) Statistical Methods. Ames Iowa: Iowa University Press. 503 p.
  3. 3. Rice JA (2007) Mathematical Statistics and Data Analysis. 3rd ed. Belmont, Cal: Thomson. 603 p.
  4. 4. Wisløff U, Najjar SM, Ellingsen O, Haram PM, Swoap S, et al. (2005) Cardiovascular risk factors emerge after artificial selection for low aerobic capacity. Science 307: 418–420.
  5. 5. Cumming G, Fidler F, Vaux DL (2007) Error bars in experimental biology. The Journal of Cell Biology 177: 7–11.
  6. 6. Faculty of 1000 website Available: http://f1000.com/rankings/mostviewed/alltime/biology. Accessed Jun 9 2011.
  7. 7. Rowe HM, Jakobsson J, Mesnard D, Rougemont J, Reynard S, et al. (2010) KAP1 controls endogenous retroviruses in embryonic stem cells. Nature 463: 237–240.
  8. 8. Tondeur S, Pagnault C, Le Carrour T, Lannay Y, Benmadi R et al (2010) Expression map of the human exome in CD34+ cells and blood cells: Increased alternative splicing in cell motility and immune response genes. PloS ONE 5(2): e8990.
  9. 9. Godet F, Limpert E (1998) Recent evolution of multiple resistance of Blumeria (Erysiphe) graminis f.sp. tritici to selected DMI and morpholine fungicides in France. Pestic. Sci. 54: 244–252.
  10. 10. Rakoff-Nahoum S, Medzhitov R (2007) Regulation of spontaneous intestinal tumorigenesis through the adaptor protein MyD88. Science 317: 124–127.
  11. 11. Smith KL, Robinson BH, Helly JJ, Kaufmann RS, Ruhl HA, et al. (2007) Free-drifting icebergs: hot spots of chemical and biological enrichment in the weddell sea. Science 317: 478–482.
  12. 12. Lawrence D, D'Odorico P, Diekmann L, DeLonge M, Das R, et al. (2007) Ecological feedbacks following deforestation create the potential for a catastrophic ecosystem shift in tropical dry forest. Proc. Natl. Acad. Sci. 104: 20696–20701.
  13. 13. Renner E (1970) Mathematisch-statistische Methoden in der praktischen Anwendung. Berlin: Parey. 116 p.
  14. 14. Limpert E (1999) Fungicide sensitivity – towards improved understanding of genetic variability. In: Lyr H, Russell PE, Sisler HD, editors. Modern Fungicides and Antifungal Compounds II. Andover, UK: Intercept. pp. 187–193.
  15. 15. Limpert E, Stahel WA (2003) Life is multiplicative – novel aspects of the distribution of data (in German). Report 53. Int. Breed. Conf. Austria: Gumpenstein. pp. 15–21.
  16. 16. Journal citation report 2008.
  17. 17. Aitchison J, Brown JAC (1957) The Lognormal Distribution. Cambridge, UK: Cambridge University Press. 176 p.
  18. 18. Limpert E, Stahel WA, Abbt M (2001) Log-normal distributions across the sciences – keys and clues. BioScience 51: 341–352.
  19. 19. Johnson N, Kotz S, Balakrishnan N (1994) Continuous Univariate Distributions. Vol. 1. NY: Wiley. 761 p.
  20. 20. Raizen D, Zimmermann JE, Maycock MH, Ta UD, You YJ, et al. (2008) Lethargus is a Caenorhabditis elegans sleep like state. Nature 451: 569–573.
  21. 21. Heath DF (1967) Normal or log-normal: Appropriate distributions. Nature 213: 1159–1160.
  22. 22. Handbook of Chemistry and Physics. 91 (2011) Cleveland Ohio. 2. CRC Press. 610 p.
  23. 23. Baur JA, Pearson KJ, Price NL, Jamieson HA, Lerin C, et al. (2006) Resveratrol improves health and survival of mice on a high calorie diet. Nature 444: 337–342.
  24. 24. Baur P (1997) Lognormal distribution of water permeability and organic solute mobility in plant cuticles. Plant, Cell and Environment 20: 167–177.
  25. 25. Sorrentino RP, Melk JP, Govind S (2004) Genetic analysis of contributions of dorsal group and JAK-Stat92E pathway genes to larval hemocyte concentration and the egg encapsulation response in Drosophila. Genetics 166: 1343–1356.
  26. 26. Sorrentino RP, Tokosumi T, Schulz RA (2007) The Friend of GATA protein U-shaped functions as a hematopoietic tumor suppressor in Drosophila. Dev.Biol 311: 311–23.
  27. 27. Sorrentino RP (2010) Large standard deviations and logarithmic-normality – the truth about hemocyte counts in Drosophila. Fly. 4. : 327–332. www.landesbioscience.com/journals/fly/article/13260.
  28. 28. Galton F (1889) Natural Inheritance. London: Macmillan. 259 p.
  29. 29. ETH (2011) Department of Computer Science website. Available http://www.inf.ethz.ch/personal/gut/lognormal/index.html. Accessed Jun 9.
  30. 30. Faculty of 1000 website Available: http://f1000.com/1020726#evaluations. Accessed Jun 9 2011.
  31. Kelly BC, Ikonomou MG, Blair JD, Morin AE, Gobas FAPC (2007) Food web-specific biomagnification of persistent organic pollutants. Science 317: 236–239.
  32. Hattis D, Banati P, Goble R, Burmaster DE (1999) Human interindividual variability in parameters related to health risks. Risk Analysis 19: 711–726.
  33. Morrison DA (2004) Technical variability and required sample size of helminth egg isolation procedures: revisited. Parasitol Res 94: 361–366.
  34. Schäper M (2010) Application of the logarithmic normal distribution in material testing – misleading norm statements resulting in faulty analyses. Bautechnik 87: 541–549.
  35. Fisher RA, Mackenzie WA (1923) Studies in crop variation II. J Agric Sci 13: 311–320.
  36. Mosteller F, Tukey JW (1977) Data Analysis and Regression – a Second Course in Statistics. Reading, MA: Addison-Wesley. 588 p.
  37. Samuelson PA (1965) Rational theory of warrant pricing. Industrial Management Review 6: 13–31.
  38. Merton RC (1971) Optimum consumption and portfolio rules in a continuous-time model. Journal of Economic Theory 3: 373–413.
  39. Merton RC (1969) Lifetime portfolio selection under uncertainty: the continuous-time case. Review of Economics and Statistics 51: 247–257.
  40. Black F, Scholes M (1973) Pricing of options and corporate liabilities. Journal of Political Economy 81: 637–654.
  41. Merton RC (1973) Theory of rational option pricing. The Bell Journal of Economics and Management Science 4: 141–183. Reprinted in: Merton RC (1990) Continuous-Time Finance. Oxford, U.K.: Basil Blackwell. (Rev. ed., 1992).
  42. Boilard E, Nigrovic PA, Larabee K, Watts GFM, Coblyn JS, et al. (2010) Platelets amplify inflammation in arthritis via collagen-dependent microparticle production. Science 327: 580–583.
  43. Sales KU, Masudunskas A, Bey AL, Rasmussen AL, Weigert R, et al. (2010) Matriptase initiates activation of epidermal pro-kallikrein and disease onset in a mouse model of Netherton syndrome. Nature Genetics 42: 676–683.
  44. Romani L, Fallarino F, De Luca A, Montagnoli C, D'Angelo C, et al. (2008) Defective tryptophan catabolism underlies inflammation in mouse chronic granulomatous disease. Nature 451: 211–216.
  45. Auffray C, Fogg D, Garfa M, Elain G, Join-Lambert O, et al. (2007) Monitoring of blood vessels and tissues by a population of monocytes with patrolling behavior. Science 317: 666–670.
  46. Factor VM, Laskowska D, Jensen MR, Woitach JT, Popescu NC, et al. (2000) Vitamin E reduces chromosomal damage and inhibits hepatic tumor formation in a transgenic mouse model. Proc Natl Acad Sci 97: 2196–2201.
  47. Maeder CI, Hink MA, Kinkhabwala A, Mayr R, Bastiaens PIH, et al. (2007) Spatial regulation of Fus3 MAP kinase activity through a reaction-diffusion mechanism in yeast pheromone signalling. Nature Cell Biology 9: 1319–1326.
  48. Wilson CG, Sherman PW (2010) Anciently asexual bdelloid rotifers escape lethal fungal parasites by drying up and blowing away. Science 327: 574–576.
  49. Carlton JG, Martin-Serrano J (2007) Parallels between cytokinesis and retroviral budding: a role for the ESCRT machinery. Science 316: 1908–1912.
  50. Merkle FT, Mirzadeh Z, Alvarez-Buylla A (2007) Mosaic organization of neural stem cells in the adult brain. Science 317: 381–384.
  51. McHugh TJ, Jones MW, Quinn JJ, Balthasar N, Coppari R, et al. (2007) Dentate gyrus NMDA receptors mediate rapid pattern separation in the hippocampal network. Science 317: 94–99.
  52. Wang L, Anderson DJ (2010) Identification of an aggression-promoting pheromone and its receptor neurons in Drosophila. Nature 463: 227–232.
  53. Nagamune K, Hicks LM, Fux B, Brossier F, Chini EN, et al. (2008) Abscisic acid controls calcium-dependent egress and development in Toxoplasma gondii. Nature 451: 207–211.
  54. Durand C (2007) Embryonic stromal clones reveal developmental regulators of definitive hematopoietic stem cells. Proc Natl Acad Sci 104: 20838–20843.
  55. Griffin MB, Schott J, Schink B (2007) Nitrite, an electron donor for anoxygenic photosynthesis. Science 316: 1870.
  56. Rohatgi R, Milenkovic L, Scott MP (2007) Patched1 regulates Hedgehog signaling at the primary cilium. Science 317: 372–376.
  57. Le Naour F, Rubinstein E, Jasmin C, Prenant M, Boucheix C (2000) Severely reduced female fertility in CD9-deficient mice. Science 287: 319–321.
  58. Escobar-Restrepo JM, Huck N, Kessler S, Gagliardini V, Gheyselinck J, et al. (2007) The FERONIA receptor-like kinase mediates male-female interactions during pollen tube reception. Science 317: 656–660.
  59. Tjamos EC, Tsitsigiannis DI, Tjamos SE, Antoniou PP (2004) Selection and screening of endorhizosphere bacteria from solarized soils as biocontrol agents against Verticillium dahliae of solanaceous hosts. European Journal of Plant Pathology 110: 35–44.
  60. Marchi G (2006) Interaction between Pseudomonas savastanoi pv. savastanoi and Pantoea agglomerans in olive knots. Plant Pathology 55: 614–624.
  61. Stehmann C, De Waard MA (1996) Sensitivity of populations of Botrytis cinerea to triazoles, benomyl and vinclozolin. European Journal of Plant Pathology 102: 171–180.
  62. Rinsoz T, Oppliger A, Duquenne P (2006) Assessment of bacterial load in the indoor air of a poultry house: evaluation of the performance of real-time quantitative PCR. 8th Int Cong on Aerobiology, Abstracts: 253.
  63. Bendel M, Kienast F, Bugmann H, Rigling D (2006) Incidence and distribution of Heterobasidion and Armillaria and their influence on canopy gap formation in unmanaged mountain pine forests in the Swiss Alps. European Journal of Plant Pathology 116: 85–93.
  64. Piña-Ochoa E, Hogslund S, Geslin E, Cedhagen T, Revsbech NP, et al. (2010) Widespread occurrence of nitrate storage and denitrification among Foraminifera and Gromiida. Proc Natl Acad Sci 107: 1148–1153.
  65. Ji H, Burin M, Schartman E, Goodman J (2006) Hydrodynamic turbulence cannot transport angular momentum effectively in astrophysical disks. Nature 444: 343–346.