
Cosmological measure with volume averaging and the vacuum energy problem

Artyom V Astashenok and Antonino del Popolo

Published 2 April 2012 © 2012 IOP Publishing Ltd
Class. Quantum Grav. 29 085014, DOI 10.1088/0264-9381/29/8/085014

Abstract

In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative measure, based on volume averaging instead of volume weighting, can explain why the cosmological constant is non-zero.


1. Introduction

The problem of vacuum energy is probably one of the most interesting puzzles of modern physics. The observable energy density of the vacuum is at least 60 orders of magnitude smaller than the value expected from the Standard Model. There is no known natural way to derive the tiny cosmological constant used in cosmology from particle physics.

This problem can be explained through the anthropic principle. In a well-known article [1], Weinberg estimated the upper limit for the effective cosmological constant to be of the order of a few hundred times the observed value Λ0. Higher values suppress hierarchical structure formation in the universe and therefore lead to cosmologies completely devoid of life as we know it.

A further development in the application of the anthropic argument was the inclusion of selection rules such as the self-sampling assumption or 'mediocrity principle', namely the notion that there is nothing very unusual about our civilization. Acceptance of the mediocrity principle, grounded on a statistical approach, explicitly implies the existence of a multiverse that serves as a statistical ensemble. In the multiverse, we can estimate the probability of observing any given event j. Such a probability factorizes as

$P_{j}=\bar{P}_{j}\,f_{j},\qquad (1.1)$

where ${\bar{P}}_j$ is the concentration of j-type bubbles and $f_j$ is the anthropic factor, proportional to the total number of observers residing inside a j-type bubble.

At first glance, it may seem that the approach based on (1.1) could solve the problem of the cosmological constant. Indeed, the anthropic factor decreases with increasing Λ. However, there is another difficulty. As we shall see, when Λ → 0, the number of observers within a bubble increases and tends to infinity. This phenomenon may be called the 'infrared divergence' [2, 3]. Therefore the value Λ = 0 is preferred from the anthropic point of view.
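To make the divergence concrete, here is a minimal numerical sketch (ours, not taken from the paper): with a flat prior and a toy anthropic factor $f(\Lambda )\propto \Lambda ^{-3/2}$ (the causal-patch volume scaling derived in section 2), the probability assigned to any fixed range of non-zero Λ vanishes as the grid is extended toward Λ = 0.

import numpy as np

# Volume weighting P_j ∝ Pbar_j f_j with a flat prior Pbar(Λ) and a toy
# anthropic factor f(Λ) ∝ Λ^(-3/2); all numbers are illustrative.
for lam_min in (1e-2, 1e-4, 1e-6):               # extend the grid toward Λ = 0
    lam = np.geomspace(lam_min, 10.0, 100_000)   # Λ in units of Λ0
    w = np.gradient(lam)                         # dΛ weights for the flat prior
    p = lam ** -1.5 * w
    p /= p.sum()
    print(f"Λ_min = {lam_min:.0e}:  P(Λ > 0.1 Λ0) = {p[lam > 0.1].sum():.2e}")
# The printed probability tends to 0: in the limit, a randomly chosen
# observer measures Λ = 0 with probability 1.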

This paper is organized as follows. The next section is devoted to the calculation of probabilities in the multiverse according to (1.1), and the existence of the 'infrared divergence' is demonstrated. In the third section, we use another method of calculating the probabilities, for which the divergence does not appear. In the conclusion, possible objections are considered.

2. Infrared divergence at Λ → 0

The probability PΛ to find oneself in a universe with a given value of vacuum energy can be determined via

$P_{\Lambda }\propto \bar{P}(\Lambda )\,N(\Lambda ),\qquad (2.1)$

where $\bar{P}(\Lambda )$ is an a priori probability distribution (the relative abundance of different values of Λ associated with the different types of bubbles in the multiverse), and N(Λ) is an anthropic factor proportional to the total number of observers in a given region of a multiverse. The aforementioned number is evidently related to the star formation rate (SFR) [4], which can be estimated from the astrophysical data:

$N(\Lambda )\propto \int _{0}^{t_{c}}\dot{n}(t,\Lambda )\,V_{c}(t)\,dt,\qquad (2.2)$

where $\dot{n}(t, \Lambda )$ is the SFR in a comoving volume Vc and tc defines the time of collapse. Obviously, for those universes whose expansion has a de Sitter-type asymptote, tc = ∞.

If the universe has zero curvature and the radiation is small enough, then the Friedmann equations become integrable and the scale factor may be written as

$a(t)=a_{0}\sinh ^{2/3}(t/T),\qquad (2.3)$

where $T = 2/\sqrt{3 \Lambda }$ and τ is a dimensionless time τ = t/T.

In order to calculate the comoving volume, Vc, we will use the causal patch cut-off. The causal patch is the region within the cosmological horizon [5, 6]. Such a choice will result in

$V_{c}(t)=\frac{4\pi }{3}\left(\int _{t}^{\infty }\frac{dt'}{a(t')}\right)^{3}.\qquad (2.4)$

For $t\ll t_{\Lambda }=\sqrt{3/\Lambda }$, the scale factor changes according to the power law $a(t)\propto t^{2/3}$.

When Λ → 0, Vc(t) diverges; the anthropic factor is thus inversely related to the cosmological constant. The resulting P(Λ) is determined by the previously chosen prior $\bar{P}(\Lambda )$. The natural choice is a flat prior distribution, i.e. $d\bar{P}(\Lambda )/d\Lambda = 0$, because the interval of anthropically acceptable values of Λ is small in comparison with the Planck scale. As we shall see, for such a distribution the probability of finding oneself in a universe with a smaller value of the cosmological constant is larger. Therefore, the case Λ = 0 is preferred.

Remark. One notes that for Λ = 0, equation (2.4) is not applicable; the divergence does not follow from formally taking the limit Λ → 0. But for a Friedmann universe without vacuum energy there is no cosmological horizon, and therefore the comoving volume is infinite.
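Before turning to the star formation rate, the divergence of (2.4) can be checked directly. The following sketch (our own, with a0 = 1 and arbitrary time units) evaluates equations (2.3) and (2.4) numerically:

import numpy as np
from scipy.integrate import quad

# Causal-patch comoving volume V_c(t) = (4π/3) [∫_t^∞ dt'/a(t')]^3
# for a(t) = sinh^(2/3)(t/T), T = 2/sqrt(3Λ); units are arbitrary.
def V_c(t, lam):
    T = 2.0 / np.sqrt(3.0 * lam)
    # the integrand decays as exp(-2x/3), so an upper cutoff at x = 60 is ample
    I, _ = quad(lambda x: np.sinh(x) ** (-2.0 / 3.0), t / T, 60.0)
    return 4.0 * np.pi / 3.0 * (T * I) ** 3

for lam in (1.0, 0.1, 0.01):
    print(f"Λ = {lam:5.2f}:  V_c(t=1) = {V_c(1.0, lam):.3e}")
# V_c grows like Λ^(-3/2) as Λ decreases: the anthropic factor (2.2)
# diverges in the limit Λ → 0, as claimed in the text.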

Let us consider this fact in detail. The behavior of $\dot{n}(t)$ for our Universe depends on the particular model of star formation [7–10]. However, the difference in $\dot{n}(t)$ between different star formation models is not large, since all these models predict that $\dot{n}(t)$ reaches a maximum after a couple of billion years and then declines relatively quickly. The height and the width of the maximum depend on the cosmological constant.

Bousso and Leichenauer (hereafter BL) developed a semi-analytic model [11] of the SFR as a function of time, studying in particular how spatial curvature, the amplitude of primordial density perturbations and the cosmological constant influence the SFR. Unlike previous papers (e.g., Hernquist and Springel 2003, hereafter HS), their model is principally concerned with how large changes in the studied parameters affect the SFR. The HS model, for example, is no longer valid when the SFR is studied under large variations of the quoted parameters.

In order to understand the shape of the SFR, we have to recall two things.

  • (a)  
    Structure formation originates from tiny perturbations already present in the early Universe. They expanded with the Universe and then collapsed before recombination, giving rise to dark matter haloes that formed the gravitational wells in which baryons fell after recombination.
  • (b)  
    There was no star formation until structures with T > 10^4 K formed. After these structures formed, the SFR started to rise and reached its maximum.

There are significant differences between the results of the BL and HS models. One should point out that the BL star formation model predicts a present-day rate much smaller than the other models do, and the other models seem to fit the data better at these late times. This is natural, since those models are adjusted to fit the data while the BL model is not; it is also why the other two models closely agree with each other near the present epoch.

A similar question arises concerning the height and position of the peaks in these models. The peak amplitude of the SFR differs significantly: it is largest in the BL model and smallest in the fossil model [8]. In reality, the data on the peak of the SFR are much less certain [12], and the models do not agree with each other concerning the epoch at which the peak formed or its width. According to BL (private communication), the data near the peak, at large redshift, are not as reliable as the data at small redshift, so it is much less clear which model is closer to reality.

The detailed calculations in [11] show that for a sufficiently wide range of Λ, the SFR varies only slightly (all other parameters remain fixed to the observed values; see especially figures 3 and 4 in [11]).

A question arises at this point: why does the SFR depend only weakly on Λ for large values (e.g., Λ = 10Λ0), while for small Λ (0 < Λ < Λ0) we obtain the same SFR and the same total stellar mass production per unit of comoving volume as in our Universe?

A brief explanation of this issue is that, in our Universe, star formation peaked around t ≈ 3 Gyr and has dropped off since then for a number of reasons. At 3 Gyr, the cosmological constant was dynamically unimportant, and we might as well set it to zero.

Vacuum energy became important only several gigayears later, when star formation was already lower. Without a cosmological constant, it is true that there would be more mergers in the future, but these would mostly be halos that are too massive to cool efficiently. So while Λ does suppress hierarchical structure formation in the future, it has little effect on star formation, and decreasing Λ would not significantly increase future star formation.

Therefore, when one is interested in numerical estimations, one can assume (for example, in the range 0 < Λ < 10Λ0) that

$\dot{n}(t,\Lambda )\simeq \dot{n}(t,\Lambda _{0})\equiv \dot{n}(t).$

The accuracy of these estimates is, in any case, limited by our knowledge of star formation mechanisms. In the following, we consider the analytical fit for the SFR in our Universe given in [7]. The dependence of the SFR on redshift z is

$\dot{n}(z)=\dot{n}_{0}\,\frac{b\,\exp [a(z-z_{m})]}{b-a+a\,\exp [b(z-z_{m})]},\qquad (2.5)$

with a = 3/5, b = 14/15, zm = 5.4 and $\dot{n}_{0}= 0.15\ {M}_\odot \ {\rm yr}^{-1}\ {\rm Mpc}^{-3}$ the maximal SFR, reached at z = zm. We plot this expression in figure 1(a): the SFR peaks at redshift zm = 5.4 and declines roughly exponentially toward both low and high redshifts. Using the well-known relation between t and redshift z for a flat ΛCDM universe,

$t(z)=\frac{2}{3H_{0}\sqrt{\Omega _{\Lambda }}}\,{\rm arcsinh}\sqrt{\frac{\Omega _{\Lambda }}{\Omega _{m}(1+z)^{3}}},$

where H0 is the present-day Hubble constant and $\Omega _{m,\Lambda }=\rho _{m,\Lambda }/(\rho _{m}+\rho _{\Lambda })$, we can derive the time dependence of $\dot{n}(t)$ depicted in figure 1(b). We used $\Omega _\Lambda = 0.72\pm 0.04$ from the WMAP results [14] and H0 = 72 ± 8 km s−1 Mpc−1 from the Hubble Space Telescope Key Project [15]. From figure 1(b), one can see that the SFR reaches its maximum at tm ≈ 1.5 Gyr and then steadily declines.
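For reference, the following sketch (ours) evaluates the fit (2.5), with the functional form reconstructed above from the quoted parameters, together with the t(z) relation:

import numpy as np

# The SFR fit (2.5) with a = 3/5, b = 14/15, z_m = 5.4,
# n0 = 0.15 M_sun yr^-1 Mpc^-3, and the flat-ΛCDM t(z) relation.
a, b, z_m, n0 = 3.0 / 5.0, 14.0 / 15.0, 5.4, 0.15
Om_L, Om_m = 0.72, 0.28
H0 = 72.0 / 978.0            # 72 km/s/Mpc expressed in Gyr^-1

def sfr(z):
    """SFR density (M_sun yr^-1 Mpc^-3); maximal at z = z_m."""
    return n0 * b * np.exp(a * (z - z_m)) / (b - a + a * np.exp(b * (z - z_m)))

def t_of_z(z):
    """Cosmic time (Gyr) since the big bang in flat ΛCDM."""
    return 2.0 / (3.0 * H0 * np.sqrt(Om_L)) * np.arcsinh(
        np.sqrt(Om_L / (Om_m * (1.0 + z) ** 3)))

for z in (5.4, 1.0, 0.0):
    print(f"z = {z:4.1f}:  t = {t_of_z(z):5.2f} Gyr,  SFR = {sfr(z):.4f}")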

Figure 1. The SFR ($M_\odot \ {\rm yr}^{-1}\ {\rm Mpc}^{-3}$) as a function of redshift (a) and time (b).

Therefore, we can estimate the anthropic factor for 0 < Λ < 10Λ0 using the time dependence of the SFR in figure 1(b). From (2.2) and (2.4) it follows that

$N(\Lambda )\propto T^{4}\int _{0}^{\infty }\dot{n}(\tau T)\left(\int _{\tau }^{\infty }\frac{d\tau '}{\sinh ^{2/3}\tau '}\right)^{3}d\tau ,\qquad T^{4}\propto \Lambda ^{-2}.\qquad (2.6)$

In equation (2.6), the integration is over the dimensionless time τ, which is linked to our dimensionless time τ0 by the relation $\tau =\tau _{0}\sqrt{\Lambda /\Lambda _{0}}$. For simplicity, we assume that $\dot{n}(t)=0$ after tf ∼ 14 Gyr; for our Universe, $\tau _{f}=\tanh ^{-1}\sqrt{\Omega _{\Lambda }}$. This assumption is in good agreement with VIMOS VLT Deep Survey (VVDS) data [13]. According to the VVDS, the SFR declines steadily by a factor of 4 from z = 1.2 to z = 0.05, since in this phase both the giant and intermediate galaxy populations decline. The most luminous sources ceased to produce new stars efficiently 12 Gyr ago (at z ∼ 4), while intermediate-luminosity sources continued to produce stars until 2.5 Gyr ago (at z ∼ 0.2).

It is convenient to define the relative anthropic factor as

$N_{\rm rel}(\alpha )=\frac{N(\alpha \Lambda _{0})}{N(\Lambda _{0})},\qquad (2.7)$

where α = Λ/Λ0 and $f(\tau )=\sinh ^{-2/3}(\tau )$ is the function entering the inner integral of (2.6).

Following [16], one can assume that observers appear in the universe after a 'delay time' of the order of a few billion years. One can then rewrite equation (2.2) as

$N(\Lambda )\propto \int _{0}^{t_{c}}\dot{n}(t,\Lambda )\,V_{c}(t+\Delta t)\,dt,\qquad (2.8)$

where Δt is the time delay. In this case, in (2.7) one needs only to change the lower limit of the inner integral: t → (t + Δt).
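A direct numerical evaluation of the delayed anthropic factor can be sketched as follows (our implementation; overall normalizations drop out of the ratio, and Λ0 is obtained from $H_{0}^{2}\Omega _{\Lambda }=\Lambda _{0}/3$):

import numpy as np
from scipy.integrate import quad

# Relative anthropic factor N_rel(α) = N(αΛ0)/N(Λ0) with a time delay Δt:
#   N(Λ) ∝ ∫_0^{t_f} n_dot(t) V_c(t + Δt; Λ) dt          (equation (2.8)),
#   V_c(t; Λ) ∝ [∫_t^∞ dt'/a(t')]^3,  a ∝ sinh^(2/3)(t/T),  T = 2/sqrt(3Λ).
a_f, b_f, z_m, n0 = 3.0 / 5.0, 14.0 / 15.0, 5.4, 0.15
Om_L, Om_m = 0.72, 0.28
H0 = 72.0 / 978.0                                 # Gyr^-1
lam0 = 3.0 * H0 ** 2 * Om_L                       # Λ0 in Gyr^-2

def t_of_z(z):
    return 2.0 / (3.0 * H0 * np.sqrt(Om_L)) * np.arcsinh(
        np.sqrt(Om_L / (Om_m * (1.0 + z) ** 3)))

_z = np.linspace(30.0, 0.0, 2000)                 # tabulate z(t) once
_t = t_of_z(_z)

def n_dot(t):                                     # SFR at cosmic time t (Gyr)
    z = np.interp(t, _t, _z)
    return n0 * b_f * np.exp(a_f * (z - z_m)) / (
        b_f - a_f + a_f * np.exp(b_f * (z - z_m)))

def V_c(t, lam):                                  # prefactor 4π/3 dropped (cancels)
    T = 2.0 / np.sqrt(3.0 * lam)
    I, _ = quad(lambda x: np.sinh(x) ** (-2.0 / 3.0), t / T, 60.0)
    return (T * I) ** 3

def N(lam, dt, t_f=14.0):                         # n_dot set to 0 beyond t_f
    val, _ = quad(lambda t: n_dot(t) * V_c(t + dt, lam), 1e-3, t_f, limit=200)
    return val

for alpha in (0.1, 0.5, 2.0):
    row = [N(alpha * lam0, dt) / N(lam0, dt) for dt in (0.0, 5.0, 10.0)]
    print(f"α = {alpha:3.1f}:  N_rel(Δt = 0, 5, 10 Gyr) = "
          + ", ".join(f"{x:6.2f}" for x in row))
# N_rel grows as α decreases, and faster for larger Δt, reproducing the
# qualitative behavior of figures 2-4.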

The results of the calculations for Δt = 0, 5 and 10 Gyr are given in figures 2, 3 and 4, respectively. In these figures, the data of the numerical calculations are marked by crosses, while the solid lines are analytical fits. The result is that the relative anthropic factor (and therefore the probability of finding oneself in a universe with a given Λ) increases with decreasing Λ. The rapidity of this increase is smallest for Δt = 0 and largest for Δt = 10 Gyr. We found the following analytical fit for the numerical estimations plotted in figures 2–4:

Equation (2.9)

Parameters γ1 and β depend on Δt (see table 1). Parameter γ2 slowly decreases with decreasing α. When Λ → 0, the relative anthropic factor tends to $N_{\rm rel}\approx C\,\alpha ^{-\gamma _{1}}$, where C is a constant. At Λ = 0, our calculations lose all meaning because in this case the relative anthropic factor becomes infinitely large. Hence, the probability that a randomly selected observer measures the value Λ = 0 is exactly equal to 1.

Table 1. Fit parameters γ1 and β for various time delays Δt.

Δt (Gyr)    γ1      β
0           0.79    30
5           1.08    10
7.5         1.21    10
10          1.33    10
12.5        1.45    10
Figure 2. The relative anthropic factor for Δt = 0. The upper panel uses a logarithmic scale for small values of Λ.
Figure 3. The relative anthropic factor for Δt = 5 Gyr.
Figure 4. The relative anthropic factor for Δt = 10 Gyr.

3. The 4-volume averaging of probabilities as a possible solution to the infrared divergence problem

If the eternal inflation model is correct, then it implies the existence of a multiverse containing an infinite number of copies of every possible observer. What can we say about the probabilities of an event in such a multiverse? Suppose we are conducting an experiment to determine the value of the cosmological constant, with various possible outcomes Λn. The probabilities of these outcomes are connected by the formula

$\frac{P(\Lambda _{n})}{P(\Lambda _{k})}=\frac{N(\Lambda _{n})}{N(\Lambda _{k})},\qquad (3.1)$

where $N(\Lambda _{n,k})$ is the number of Λn and Λk outcomes in the whole multiverse. The direct calculation of probabilities via equation (3.1) is impossible because $N(\Lambda _{n,k})\rightarrow \infty$. The task of eliminating these infinities, known as the 'measure problem', is important for modern cosmology. Various possibilities for the definition of the measure have been proposed [17–25].

Among the various approaches to the measure problem, one should note the one described in [26]. The key idea of [26] is that one needs to replace volume weighting of probabilities by volume averaging. According to this prescription, the relative probabilities are proportional to the expectation values of the fraction of locations at which the observation occurs. The latter is proportional to the number of observers per unit 4-volume, i.e. the number of observations per unit spatial volume per unit time. Therefore, one needs to compare the densities of observers instead of their numbers.

Initially, volume averaging was introduced in order to solve the Boltzmann brain (BB) problem [27, 28]. This problem can be described as follows. Consider a toy multiverse containing only two types of universes. A universe of the first type (I) expands forever, while a universe of the second type (II) has a finite size and a finite lifetime. In type-I universes, 'ordinary observers' like ourselves should be vastly outnumbered by an infinite number of BBs arising from vacuum fluctuations; in type-II universes, only a finite number of ordinary observers exist. Therefore, ordinary observers in such a multiverse are highly atypical. As pointed out in [26], replacing the volume-weighting measure with volume averaging avoids the BB catastrophe, because the density of BBs is much smaller than the density of ordinary observers. Volume averaging also has a deep link with quantum mechanics [29–31]. A toy version of this comparison is sketched below.
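The effect of the two measures in this toy multiverse can be captured in a few lines (a deliberately crude sketch; all numbers are invented for illustration):

# Two-universe toy multiverse: type I lives arbitrarily long and produces
# Boltzmann brains at a tiny constant rate per unit 4-volume; type II is
# finite and hosts ordinary observers. All numbers are illustrative.
V3 = 1.0                    # comoving 3-volume of each region
rate_bb = 1e-12             # BBs per unit 4-volume in type I
n_ord, T2 = 1e9, 10.0       # ordinary observers and lifetime of type II

for T1 in (1e2, 1e8, 1e14):                 # lifetime cutoff of type I
    n_bb = rate_bb * V3 * T1                # total number of BBs
    ratio_numbers = n_bb / n_ord            # volume (number) weighting
    ratio_density = (n_bb / (V3 * T1)) / (n_ord / (V3 * T2))
    print(f"T1 = {T1:.0e}:  numbers: {ratio_numbers:.1e},  "
          f"densities: {ratio_density:.1e}")
# Number weighting lets BBs dominate as T1 grows without bound, while the
# density ratio stays fixed and minute: volume averaging keeps ordinary
# observers typical.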

One can show that volume averaging eliminates the 'infrared divergence'. The spatial volume corresponding to the comoving volume in (2.4) is

$V_{3}(t)=a^{3}(t)\,V_{c}(t)=\frac{4\pi }{3}\,a^{3}(t)\left(\int _{t}^{\infty }\frac{dt'}{a(t')}\right)^{3}.\qquad (3.2)$

Obviously, for τ → ∞, V3 converges to $4\pi (\sqrt{3/\Lambda })^3/3$, i.e. to the Hubble volume of the bubble. The 4-volume is therefore

$V_{4}(t)=\int _{0}^{t}V_{3}(t')\,dt'\approx \frac{4\pi }{3}\left(\frac{3}{\Lambda }\right)^{3/2}t\quad (t\rightarrow \infty ),$

and diverges as τ → ∞. The density of observers per unit 4-volume tends to zero for long-lived de Sitter vacua, but the relative density of observers is non-zero:

$n_{\rm rel}(\alpha )=\lim _{t\rightarrow \infty }\frac{N(\alpha \Lambda _{0})/V_{4}(t;\alpha \Lambda _{0})}{N(\Lambda _{0})/V_{4}(t;\Lambda _{0})}=\alpha ^{3/2}\,N_{\rm rel}(\alpha ).\qquad (3.3)$

So the probability of finding oneself in a universe with the given value of Λ becomes a well-defined function of Λ. Combining (2.9) and (3.3) gives the following result:

Equation (3.4)

The dependence of Nrel on Λ is depicted in figure 5 for Δt = 5 (thin solid line), 7.5 (thin dotted line), 10 (thick solid line) and 12.5 Gyr (thick dotted line). The density of observers reaches its maximum at a value Λm of the vacuum energy, and the value of this maximum decreases with increasing time delay. For a time delay 2.5 Gyr < Δt < 12.5 Gyr, we have Λm ≈ 5–10Λ0. For larger values of the vacuum energy, Λ > 10Λ0, one has to take into account the decline of star formation due to the early dominance of the vacuum energy.

Figure 5. The relative density of observers for various Δt.
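The location of the maximum can be estimated numerically. The sketch below (ours) assumes, as in the text, a lifetime cutoff common to all vacua, so that by (3.3) the relative density reduces to $\alpha ^{3/2}N_{\rm rel}(\alpha )$, with N(Λ) computed as in the sketch following equation (2.8); the helper functions are repeated to keep the example self-contained.

import numpy as np
from scipy.integrate import quad

# Relative density of observers under volume averaging: with a common
# lifetime cutoff, V4 ∝ Λ^(-3/2) t, so n_rel(α) ∝ α^(3/2) N_rel(α).
a_f, b_f, z_m, n0 = 3.0 / 5.0, 14.0 / 15.0, 5.4, 0.15
Om_L, Om_m = 0.72, 0.28
H0 = 72.0 / 978.0                                 # Gyr^-1
lam0 = 3.0 * H0 ** 2 * Om_L                       # Λ0 in Gyr^-2

def t_of_z(z):
    return 2.0 / (3.0 * H0 * np.sqrt(Om_L)) * np.arcsinh(
        np.sqrt(Om_L / (Om_m * (1.0 + z) ** 3)))

_z = np.linspace(30.0, 0.0, 2000)
_t = t_of_z(_z)

def n_dot(t):
    z = np.interp(t, _t, _z)
    return n0 * b_f * np.exp(a_f * (z - z_m)) / (
        b_f - a_f + a_f * np.exp(b_f * (z - z_m)))

def N(lam, dt, t_f=14.0):
    def vc(t):                                    # causal-patch comoving volume
        T = 2.0 / np.sqrt(3.0 * lam)
        I, _ = quad(lambda x: np.sinh(x) ** (-2.0 / 3.0), t / T, 60.0)
        return (T * I) ** 3
    val, _ = quad(lambda t: n_dot(t) * vc(t + dt), 1e-3, t_f, limit=200)
    return val

alphas = np.geomspace(0.2, 20.0, 30)
for dt in (5.0, 7.5, 10.0, 12.5):
    dens = np.array([a ** 1.5 * N(a * lam0, dt) for a in alphas])
    dens /= N(lam0, dt)                           # normalize to α = 1
    print(f"Δt = {dt:4.1f} Gyr:  peak at α ≈ {alphas[dens.argmax()]:.1f}, "
          f"peak/(α = 1) ratio ≈ {dens.max():.2f}")
# The density peaks at a finite α = Λ_m/Λ0 and the peak height decreases
# with Δt, in qualitative agreement with figure 5.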

Finally, it remains to show that volume averaging does not lead to singularities in the case Λ = 0. Let us consider a unit comoving volume in such a universe. The total stellar mass within this unit volume can be estimated as

$M_{*}=\int _{0}^{t_{f}}\dot{n}(t)\,dt={\rm const},$

since $\dot{n}(t)=0$ for t > tf. The corresponding 4-volume increases with time according to the law

$V_{4}(t)=\int _{0}^{t}a^{3}(t')\,dt'\propto t^{3},$

because $a(t)\propto t^{2/3}$ for Λ = 0. Hence, the density of observers is equal to

$n_{\rm obs}(t)\sim \frac{M_{*}}{V_{4}(t)}\propto t^{-3},$

and it is easy to see that

$\lim _{t\rightarrow \infty }n_{\rm obs}(t)=0.$

So the point Λ = 0 is not singular.

4. Conclusion

Some questions remain to be answered. They are the following.

  • (1)  
    The observable vacuum energy density is one order of magnitude smaller than the 'optimal' value Λm. Does this mean that our Universe is atypical in the multiverse? The traditional approach is to assume that the observable universe is a typical one. It seems to us that this methodology only complicates the understanding of the real universe. In our opinion, knowledge of the fundamental laws is enough to estimate the parameters of a 'typical' (from the anthropic point of view) universe; subsequent comparison of these parameters with the values observed in our Universe helps us answer the aforementioned question. One also notes that, according to our calculations (figure 5), the probability of finding oneself in a universe with Λ = Λm is only 1.5–3 times higher than the probability of finding oneself in a universe with Λ = Λ0. Therefore, the observed value of the vacuum energy lies in the reasonable region.
  • (2)  
    Negative values of Λ. One can note that, for Λ < 0, volume averaging also eliminates the 'infrared divergence' at |Λ| → 0. Hence, the regularization scheme based on volume averaging gives correct answers for Λ < 0 as well. But the following problem appears: if the vacuum energy is negative, the universe ends its existence in a big crunch singularity. In this case, the density of observers is larger than in the case of a de Sitter universe, because the 4-volume of an anti-de Sitter universe is much smaller. So why do we not live in a universe with negative Λ?

Firstly, this problem occurs at the prediction stage, but it disappears at the explanatory stage, when the cosmological constant has already been measured by the observer. According to [32], only observers with similar informational content can be assigned to the same equivalence class. The sign of the cosmological constant is already known to us; hence, we are representatives of the reference class which includes all observers for which Λ > 0. When calculating probabilities, one should consider only observers belonging to this class.

Secondly, there may exist a physical mechanism, unknown to date, imposing bounds on the lifetime (and 4-volume) of a de Sitter universe [33, 34]. An interesting scenario was suggested in [35–37]: according to it, our vacuum should be rather unstable and should decay within 20 Gyr (which is possible if the gravitino is super-heavy).

In conclusion, it should be emphasized that the main result of this paper is the demonstration that replacing volume weighting with volume averaging in the cosmological measure avoids the infrared divergence problem. Volume averaging leads to a natural explanation of why the observed value of the vacuum energy is non-zero. Perhaps this result can be considered an argument in favor of using the volume averaging measure in cosmology.

Acknowledgments

AA is grateful to A V Yurov for useful discussion. The authors are also grateful to anonymous referees for useful comments.
