
Unsolved problems in particle physics

Sergey Troitsky
Institute for Nuclear Research of the Russian Academy of Sciences

arXiv:1112.4515v1 [hep-ph] 19 Dec 2011

Abstract

I consider selected (most important, according to my own choice) unsolved problems in particle theory, both those related to extensions of the Standard Model (neutrino oscillations, which probably do not fit the usual three-generation scheme; indications in favour of new physics from astrophysical observations; electroweak symmetry breaking and the hierarchy of parameters) and those which appear within the Standard Model itself (the description of strong interactions at low and intermediate energies).

Contents

1 Introduction: status and parameters of the Standard Model
2 The observed deviation from the Standard Model: neutrino oscillations.
  2.1 Theoretical description.
  2.2 Experimental results: standard three-flavour oscillations.
  2.3 Experimental results: non-standard oscillations.
  2.4 The neutrino mass.
3 Astrophysical and cosmological indications in favour of new physics.
  3.1 Baryon asymmetry.
  3.2 Dark matter.
  3.3 Accelerated expansion of the Universe
4 Aesthetic difficulties: the origin of parameters.
  4.1 Electroweak interaction and the Higgs boson.
  4.2 The gauge hierarchy.
  4.3 The fermion mass hierarchy.
5 Theoretical challenges in the description of hadrons.
  5.1 Problems of the perturbative QCD.
  5.2 The lattice results.
  5.3 Dual theories: supersymmetric duality and holography.
6 Conclusions.
1 Introduction: status and parameters of the Standard Model
One may compare the current state of quantum field theory and its applications to particle physics with the situation 20-30 years ago and discover, amusingly, that all principal statements of this field are practically unchanged, in contrast with the rapid progress in condensed-matter physics. Indeed, most of the experiments performed during the last two decades supported the correctness of predictions which had been made earlier, derived from models developed earlier. This success of particle theory resulted in considerable stagnation in its development. However, one may expect that in the next few years particle physics will again become an intensively developing area. Firstly, there is a certain amount of accumulated experimental results (first of all related to cosmology and astrophysics, but also obtained in laboratories) which suggest that the Standard Model (SM) is incomplete. Secondly, the theory has been developing under the guidance of the principle of naturalness, that is, the requirement to explain quantitatively any hierarchy in model parameters (in the case of the SM, this is possible only within a larger fundamental theory yet to be constructed). Finally, one of the most important arguments for the coming excitement in particle physics is the expectation of new results from the Large Hadron Collider. As will become clear below, this accelerator is able to probe the full range of energies where the physics responsible for electroweak symmetry breaking should appear, so we expect interesting discoveries in the next few years in any case: either the Higgs boson, or some other new particles, or (in the most interesting case) no new particle will be found, which would suggest a serious reconsideration of the Standard Model.
The Large Hadron Collider (LHC, see e.g. [1]) is an accelerator which collides protons at center-of-mass energies up to 14 TeV (currently operating at 7 TeV), as well as heavy nuclei. In a 27-km tunnel at the border of Switzerland and France, there are four main experimental installations (the general-purpose detectors ATLAS and CMS; LHCb, which is oriented to the study of B mesons; and ALICE, specialized in heavy-ion physics) as well as a few smaller experiments. The first results of the collider have already brought a lot of new information on particle interactions, which we will mention when necessary.
The purpose of the present review is to discuss briefly the current state of particle physics and possible prospects for its development. For such a wide subject, the selection of topics is necessarily subjective; estimates of the importance of particular problems and of the potential of particular approaches reflect the author's personal opinion, and the bibliography cannot be made exhaustive.
The contemporary situation in particle physics may be described as follows. Most of the modern experimental data are well described by the Standard Model of particle physics, which was created in the 1970s. At the same time, there is a considerable number of indications that the SM is incomplete and is no more than a good approximation to the correct description of particles and interactions. We are not speaking here about minor deviations of certain measured observables from theoretically calculated ones: these deviations may be related to insufficient precision of either the measurement or the calculation, to unaccounted systematic errors, or to insufficient sets of experimental data (statistical fluctuations); it happens that such deviations disappear after a few years of more detailed study. Instead, we will emphasise more serious qualitative problems of the SM, the latter being considered as an instrument for the quantitative description of elementary particles. These problems include the following:

Figure 1: Particle interactions.

(1) experimental indications of the incompleteness of the SM, namely the well-established experimental observations of neutrino oscillations (which are impossible, see Sec. 2.4, in the SM) and the inability of the SM to describe the results of astrophysical observations, in particular those related to the structure and evolution of the Universe;
(2) values of the SM parameters which are not fully natural and not calculable within the theory, in particular the fermion mass hierarchy, the hierarchy of symmetry-breaking scales and the absence of a light (with mass ≲ 100 GeV) Higgs boson;
(3) purely theoretical difficulties in the description of hadrons by means of the available methods of quantum field theory.
We will discuss these unsolved problems of the SM and the related prospects for the development of particle theory.
For future reference, it is useful to briefly recall the structure of the SM (see e.g. [2, 3] and the appendix to [4]). The model includes a certain set of particles and their interactions.
Out of the four known interactions (see Fig. 1), three are described by the SM: the electromagnetic, weak and strong ones. The first two have a common electroweak gauge interaction behind them. The symmetry of this interaction, SU(2)L × U(1)Y, manifests itself at energies higher than ∼ 200 GeV. At lower energies, this symmetry is broken down to U(1)EM ≠ U(1)Y (electroweak symmetry breaking); in the SM, this breaking is related to the vacuum expectation value of a scalar field, the Higgs boson. Parameters of the electroweak breaking are known to high precision; experimental data are in perfect agreement with the theory. The Higgs boson has not been observed yet; its mass, being a free parameter of the theory, is bounded by direct experimental searches (see Table 1 and more details in Sec. 4.1).
The strong interaction in the SM is described by quantum chromodynamics (QCD), a theory with the gauge group SU(3)C. The effective coupling constant of this theory grows as the energy is decreased. As a result, particles which feel this interaction cannot exist as free states and appear only in the form of bound states called hadrons. Most modern methods of quantum field theory work for small values of coupling constants, that is, for QCD, at high energies.
The fourth known interaction, gravity, is not described by the SM, but its effect on microscopic physics is negligible.
The particle content of SM is summarized in Fig. 2. Quarks and leptons, the
so-called SM matter fields, are described by fermionic fields. Quarks take part in
strong interactions and compose observable bound states, hadrons. Both quarks and
leptons participate in the electroweak interaction. The matter fields constitute three
generations; particles from different generations interact identically but have different
masses. The full electroweak symmetry forbids fermion masses, so nonzero masses of

Figure 2: Particles described by the Standard Model.

quarks and leptons are directly related to the electroweak breaking: in the SM, they appear due to the Yukawa interaction with the Higgs field and are proportional to the vacuum expectation value of the latter. For neutrinos, even these Yukawa interactions are forbidden, so neutrinos are strictly massless in the SM. The gauge bosons, which are the carriers of interactions, are massless for the unbroken gauge groups U(1)EM (electromagnetism: photons) and SU(3)C (QCD: gluons); the masses of the W± and Z bosons are determined by the mechanism of electroweak symmetry breaking. All SM particles, except for the Higgs boson, have been found experimentally.
From the quantum-field-theory point of view, quarks and leptons may be described as states with definite mass. At the same time, the gauge bosons interact with superpositions of these states; in another formulation, when the basis is chosen to consist of the states interacting with the gauge bosons, the SM symmetries allow not only the mass terms m_ii ψ̄_i ψ_i for each ith fermion ψ_i, but also a nondiagonal mass matrix m_ij ψ̄_i ψ_j. Up to unphysical parameters, in the SM this matrix is trivial in the leptonic sector, while in the quark sector it is related to the Cabibbo-Kobayashi-Maskawa (CKM) matrix. The latter may be expressed through three independent real parameters (quark mixing angles) and one complex phase (for more details, see [3, 5]).
The Standard Model therefore has 19 independent parameters, 18 of which have been determined experimentally. They include the three gauge coupling constants αs, α2 and α1 for the gauge groups SU(3)C, SU(2)W and U(1)Y, respectively (the latter two are often expressed through the electromagnetic coupling constant α and the mixing angle θW), the QCD Θ-parameter, nine charged-fermion masses mu,d,s,c,b,t,e,μ,τ, three quark mixing angles θ12,13,23, one complex phase δ of the CKM matrix, and two parameters of the Higgs sector, which are conveniently expressed through the known Higgs-boson vacuum expectation value v and its unknown mass MH. Experimental values of these parameters, recalculated from the 2010 data [6] (bounds on the mass of the Higgs boson based on LEP, Tevatron and LHC data are given as of December 2011), may be found in Table 1.
It is worth recalling that the observable world is mostly made of atoms, so, out of
αs(MZ) = 0.1184 ± 0.0007
1/α(MZ) = 127.916 ± 0.015
sin²θW(MZ) = 0.23108 ± 0.00005
Θ ≲ 10⁻¹⁰
mu(2 GeV) = 2.5 +0.8/−1.0 MeV
md(2 GeV) = 5.0 +1.0/−1.5 MeV
ms(2 GeV) = 105 +25/−35 MeV
mc(mc) = 1.266 +0.031/−0.036 GeV
mb(mb) = 4.198 ± 0.023 GeV
mt(mt) = 173.1 ± 1.35 GeV
me = 510.998910 ± 0.000013 keV
mμ = 105.658367 ± 0.000004 MeV
mτ = 1.77682 ± 0.00016 GeV
θ12 = 13.02° ± 0.05°
θ23 = 2.35° ± 0.06°
θ13 = 0.199° ± 0.011°
δ = 1.20 ± 0.08
v(mμ) = 246.221 ± 0.002 GeV
mH = 115 GeV … 127 GeV

Table 1: Parameters of the Standard Model. For parameters with significant energy dependence, the energy scales to which the numerical values correspond are given in parentheses.
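As an illustration (not part of the original text) of how the parameters in Table 1 fix the electroweak sector, the standard tree-level relations e = g sin θW, MW = g v/2 and MZ = MW/cos θW reproduce the measured gauge-boson masses from α, sin²θW and v alone; the small residual discrepancy is due to radiative corrections. A minimal numerical sketch:

```python
import math

# Tree-level electroweak relations: e = g*sin(theta_W), M_W = g*v/2,
# M_Z = M_W / cos(theta_W).  Input numbers are taken from Table 1.
alpha = 1 / 127.916        # electromagnetic coupling at M_Z
sin2_theta_w = 0.23108     # sin^2(theta_W) at M_Z
v = 246.221                # Higgs vacuum expectation value, GeV

e = math.sqrt(4 * math.pi * alpha)   # electromagnetic charge
g = e / math.sqrt(sin2_theta_w)      # SU(2)_W coupling alpha_2 = g^2/(4*pi)
m_w = g * v / 2
m_z = m_w / math.sqrt(1 - sin2_theta_w)

print(f"g = {g:.3f}, M_W = {m_w:.1f} GeV, M_Z = {m_z:.1f} GeV")
```

This yields MW ≈ 80 GeV and MZ ≈ 91.5 GeV, within a fraction of a per cent of the measured values.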

the full manifold of elementary particles, only a few are encountered "in everyday life". These are the u and d quarks in the form of protons (uud) and neutrons (udd), electrons and, among the interaction carriers, the photon. The reasons for this are different for different particles. In particular, the neutrino does not interact with the electromagnetic field and is therefore very hard to detect; heavy particles are unstable and decay to lighter ones; strongly interacting quarks and gluons are confined in hadrons. The full manifold of SM particles reveals itself either in complicated dedicated experiments, or indirectly through effects seen in astrophysical observations.
Thus, before proceeding with the description of unsolved problems, let us recall that all experimental results concerning the physics of charged leptons, photons, W and Z bosons at all available energies, and of quarks and gluons at high energies, are in excellent agreement with the SM for a given set of its parameters.

2 The observed deviation from the Standard Model: neutrino oscillations.
Let us discuss the unique evidence, well established in laboratory experiments, in favour of the incompleteness of the SM: the phenomenon of neutrino oscillations, that is, the mutual conversion of neutrinos of different generations into each other. A more detailed modern description of the problem may be found in the book [7], in the Appendix to the textbook [4], in the reviews [8, 9, 10], etc.

2.1 Theoretical description.
In analogy with the case of charged leptons, let us consider three generations of neutrinos: the electron neutrino (νe), the muon neutrino (νμ) and the tau neutrino (ντ). The corresponding fermion fields interact with the gauge bosons W and Z through the weak charged and neutral currents. These interactions are responsible for both the creation and the experimental detection of neutrinos.
Similarly to the quark case, one may suppose that neutrinos have a nonzero mass matrix (though it cannot be incorporated in the SM, the low-energy effective theory, electrodynamics, does not forbid it), which may be nondiagonal. It is convenient to describe this system in terms of the linear combinations ν1,2,3 of the original fields νe,μ,τ which diagonalize the mass matrix,

    νi = Σ_{α=e,μ,τ} U_{iα} ν_α,

where U_{iα}, i = 1, 2, 3; α = e, μ, τ, are the elements of the leptonic mixing matrix.


To demonstrate the phenomenon of neutrino oscillations, let us restrict ourselves to the case of two flavours, νe and νμ. Let their linear combinations

    ν1 = cos θ12 νe + sin θ12 νμ,                          (1)
    ν2 = − sin θ12 νe + cos θ12 νμ,

be the eigenvectors of the mass matrix with eigenvalues m1², m2², respectively. The inverse transformation expresses (νe, νμ) through (ν1, ν2):

    νe = cos θ12 ν1 − sin θ12 ν2,
    νμ = sin θ12 ν1 + cos θ12 ν2.


Suppose that at the moment t = 0, in a certain weak-interaction event, an electron neutrino νe was created, that is, the superposition of ν1 and ν2 with known coefficients:

    ν1(0) = cos θ12 νe(0),
    ν2(0) = − sin θ12 νe(0).

The evolution of the mass eigenstates for a plane monochromatic wave moving in the direction z may be described as

    νi(z, t) = exp(−iωt + i√(ω² − mi²) z) νi(0),   i = 1, 2,

where ω is the energy and √(ω² − mi²) is the momentum. While propagating, the wave packets corresponding to ν1 and ν2 disperse in different ways, so that the relation (cos θ12, − sin θ12) between their coefficients no longer holds, which means that an admixture of the orthogonal state, νμ, appears. In the (commonly considered) ultrarelativistic limit, ω ≫ mi and √(ω² − mi²) ≈ ω − mi²/(2ω). The probability to detect νμ at a point (t, z) for each emitted νe is then

    P(νμ; z, t) = |νμ(z, t)|² = sin² 2θ12 sin²( (m2² − m1²) z / (4ω) ).   (2)
One may see that this probability is an oscillating function of the distance z, hence the
term “neutrino oscillations”. As expected, no oscillations happen either in the case of
equal (even nonzero) masses (similar dispersions of ν1 and ν2 ) or for a diagonal mass
matrix (θ12 = 0, ν1 = νe etc.). A similar description of oscillations of three neutrino
flavours determines, in analogy with Eq. (1), three mixing angles θ12 , θ13 , θ23 .
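The two-flavour formula (2) is easy to evaluate numerically: in practical units the phase equals 1.267 Δm²[eV²] L[km]/E[GeV]. The sketch below (an illustration, not from the original text) plugs in the solar-sector parameters quoted later in Table 2 for a KamLAND-like reactor baseline:

```python
import math

def p_transition(sin2_2theta, dm2_ev2, baseline_km, energy_gev):
    """Two-flavour transition probability, Eq. (2) in practical units:
    phase = 1.267 * dm2[eV^2] * L[km] / E[GeV]."""
    phase = 1.267 * dm2_ev2 * baseline_km / energy_gev
    return sin2_2theta * math.sin(phase) ** 2

# Solar-sector parameters (cf. Table 2): sin^2(theta12) = 0.312
theta12 = math.asin(math.sqrt(0.312))
sin2_2theta12 = math.sin(2 * theta12) ** 2
dm2_12 = 7.58e-5  # eV^2

# Illustrative KamLAND-like setup: reactor antineutrinos, E ~ 4 MeV, L ~ 180 km
p = p_transition(sin2_2theta12, dm2_12, baseline_km=180.0, energy_gev=4e-3)
print(f"transition probability = {p:.2f}")
```

At such a baseline the phase is of order unity, which is exactly why KamLAND could observe a large ν̄e deficit; at z = 0 the probability vanishes, as Eq. (2) requires.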
When individual neutrinos propagate over large distances, the oscillation formalism described above stops working, because particles of different mass require different times to propagate from the source, hence a loss of coherence; nevertheless, the transformations of neutrinos remain possible and their probability is calculable.

2.2 Experimental results: standard three-flavour oscillations.


Let us now turn to the history (see e.g. [7]) and the modern state (see e.g. [11]) of the question of neutrino oscillations. In 1957, Pontecorvo [12, 13] suggested the possibility of oscillations in the neutrino-antineutrino system, similar to the K-meson oscillations already known at that time. This first mention of the possibility of neutrino oscillations was aimed at explaining preliminary results of Davis on the observation of the reaction ν̄ + ³⁷Cl → ³⁷Ar + e⁻ with reactor antineutrinos. On the one hand, this experimental result was not confirmed; on the other, it became clear that the Pontecorvo model would not have been able to describe it even if it were true. The first mention of mutual transformations of νe and νμ is due to Maki, Nakagawa and Sakata [14], while the first successful description of oscillations in the system of two neutrino flavours was given by Pontecorvo [15] and by Gribov and Pontecorvo [16]. The theory of neutrino oscillations in its present form was developed in 1975-76 by Bilenky and Pontecorvo [17, 18], Eliezer and Swift [19], Fritzsch and Minkowski [20], Mikheyev and Smirnov [21, 22], and Wolfenstein [23].
The first experimental evidence in favour of neutrino oscillations was obtained more than half a century ago, though for a considerable period of time its interpretation remained an open question. We are speaking about the so-called "solar neutrino problem": the observed flux of neutrinos from the Sun was considerably lower than predicted by the model of solar nuclear reactions. This solar neutrino deficit was first found in the Homestake experiment (USA; as early as 1968 [24]) and subsequently confirmed by the Kamiokande (Japan) [25], SAGE (Russia, Baksan neutrino observatory of INR, RAS) [26], GALLEX/GNO (Italy, the Gran Sasso laboratory) [27] and Super-K (Japan) [28] experiments, which made use of various experimental techniques and were sensitive to neutrinos from different nuclear reactions. Since only electron neutrinos are produced in the Sun, and only these were detected in the experiments, the deficit might be explained by the transformation of a part of the electron neutrinos into muon ones.
A natural source of muon neutrinos is provided by cosmic rays, that is, charged particles (protons and nuclei) of extraterrestrial origin which interact with atoms in the Earth's atmosphere and produce secondary particles. A significant part of the latter are charged π mesons. Neutrinos from the decays of these π mesons, as well as from the decays of secondary muons, are called atmospheric neutrinos. The first indications of oscillations of atmospheric neutrinos were obtained at the end of the 1980s in the Kamiokande [29] and IMB [30] experiments, with subsequent confirmation by Soudan-2 [31], MACRO [32] and Super-K [33]. Their result is an anisotropy in the flux of muon neutrinos: from above, that is, from the atmosphere, the flux is higher than from below (through the Earth). Without oscillations, the flux would be isotropic, since it is determined by the isotropic flux of primary cosmic rays, while the interaction of neutrinos with terrestrial matter is negligible. This anisotropy is not seen for electron neutrinos, hence
Figure 3: Limits (95% confidence level) on the νe − νμ oscillation parameters resulting from the analysis taking into account three neutrino flavours [36]. The dotted line corresponds to the combination of solar experiments, the full line represents the KamLAND constraints, and the gray ellipse gives the constraints from the combination of all data. The star, the triangle and the square correspond to the most probable oscillation parameters obtained in these three analyses, respectively.

it is natural to suppose that νμ oscillate mainly into ντ (the latter were not detected in these experiments).
In the first decade of our century, significant experimental progress in the questions we discuss has been achieved, so that we now have reliable experimental proof of neutrino transformations, with measured parameters.
νe − νμ oscillations. In addition to more or less model-dependent results on the solar neutrino deficit (νe disappearance), the SNO experiment detected in 2001 [34] the appearance of neutrinos of other flavours from the Sun, in full agreement with the flux expected in the oscillation picture. It has therefore closed the "solar neutrino problem" and supported, at the same time, the standard solar model. The KamLAND experiment [35] registered the disappearance of electron antineutrinos born in the reactors of nuclear power plants (in contrast with the case of the Sun, the initial flux of the particles may be directly determined in this case). The oscillation parameters measured in these very different experiments are in excellent agreement, see Fig. 3. The SNO results, together with even more precise results of the BOREXINO experiment (Italy) [37], confirm the expected energy dependence of the number of disappeared solar neutrinos, in agreement with the predictions of Mikheyev, Smirnov [21, 22] and Wolfenstein [23], who developed a theory of neutrino oscillations in plasma: due to the fact that electrons, unlike muons and tau leptons, are present in the plasma, the interaction with the medium goes differently for different types of neutrino. As a result, the oscillation formalism is modified and a resonance enhancement of the oscillations becomes possible.
νμ − ντ oscillations. In addition to the Super-K experiment, which has measured [38, 39] the deviations from isotropy of atmospheric νμ and ν̄μ with great accuracy, the disappearance of νμ has been measured directly in neutrino beams created by particle accelerators (the K2K [40] and MINOS [41] experiments), see Fig. 4. Finally, in 2010, the OPERA detector, located in the Gran Sasso laboratory (Italy), detected
Figure 4: Limits (90% confidence level) on the νμ − ντ oscillation parameters. The dotted line represents the results of the Super-K analysis taking into account three neutrino flavours [39]; the full line represents the constraints by MINOS [42]. The star and the triangle denote the most probable oscillation parameters for these two analyses, respectively.

Δm²12 = 7.58 +0.22/−0.26 × 10⁻⁵ eV²
Δm²23 = 2.31 +0.12/−0.09 × 10⁻³ eV²
sin²θ12 = 0.312 +0.017/−0.016
sin²θ13 = 0.025 ± 0.007
sin²θ23 = 0.42 +0.08/−0.03

Table 2: Parameters of the oscillations of three neutrino flavours, obtained taking into account all relevant experimental data as of summer 2011 [47].

[43] the first (and currently unique) event of the appearance of a ντ in the νμ beam from the SPS accelerator (CERN, Switzerland).
The mixing angle θ13. For a long time, the solar (νe − νμ) and atmospheric (νμ − ντ) oscillation data were described independently (see the discussion in [4], Appendix C), while the relatively low precision of the experiments allowed a zero value of the mixing angle θ13. The situation has recently changed: analyzed jointly, the data of various experiments point to a nonzero θ13 [44]. In summer 2011, two accelerator experiments, T2K (Japan) [45] and MINOS [46], which both search for the appearance of νe in νμ beams, published results which are incompatible with θ13 = 0. A quantitative analysis of all data on solar and atmospheric neutrinos, jointly with the accelerator and reactor experiments which study the same part of the parameter space, points [47] towards a nonzero value of θ13 at a confidence level better than 99%. The results of this analysis are quoted in Table 2.

2.3 Experimental results: non-standard oscillations.
The combination of all experiments described above is in good quantitative agreement with the picture of oscillations of three types of neutrino with certain parameters. However, there exist results which do not fit this picture and may suggest that a fourth (or, maybe, even a fifth) neutrino exists. As we have seen above, one of the principal oscillation parameters is the mass-squared difference, Δm²ij = m²j − m²i. The results on atmospheric and solar neutrinos, jointly with the accelerator and reactor experiments, are explained by two unequal Δm²ij, see Table 2,

    Δm²12 ≪ Δm²23 ∼ 2 × 10⁻³ eV².

In the case of three neutrinos, these two values compose the set of linearly independent Δm²ij, and

    |Δm²13| = |Δm²12 + Δm²23| ∼ Δm²23.

Therefore, the observation of any neutrino oscillations with Δm²ij ≫ Δm²23 implies either the existence of a new neutrino flavour (i, j > 3) or some other deviation from the standard picture. On the other hand, there is a very restrictive bound on the number of relatively light (mi < MZ/2) particles with the quantum numbers of the neutrino. This bound comes from precise measurements of the Z-boson width and implies that there are only three such neutrinos. This means that a fourth neutrino, if it exists, does not interact with the Z boson; in other words, it is "sterile". We now turn to certain experimental evidence in favour of Δm²ij ∼ 1 eV². We note that the oscillations related to this Δm²ij should reveal themselves at relatively short distances and may be detected in so-called short-baseline experiments.
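To see why Δm² ∼ 1 eV² points to short baselines, note that the oscillation length following from Eq. (2) is L_osc = 4πω/Δm² ≈ 2.48 m × (E/MeV)/(Δm²/eV²). A small sketch with illustrative numbers (not taken from the original text):

```python
def osc_length_m(energy_mev, dm2_ev2):
    """Oscillation length L_osc = 4*pi*omega / Delta m^2,
    i.e. ~2.48 m * E[MeV] / (Delta m^2 [eV^2])."""
    return 2.48 * energy_mev / dm2_ev2

# Muon-decay-at-rest antineutrinos (LSND-like energies), E ~ 30 MeV:
l_sterile = osc_length_m(30.0, 1.0)      # Delta m^2 ~ 1 eV^2
l_solar = osc_length_m(30.0, 7.58e-5)    # solar Delta m^2 from Table 2

print(f"L_osc ~ {l_sterile:.0f} m for Delta m^2 = 1 eV^2")
print(f"L_osc ~ {l_solar / 1000:.0f} km for the solar Delta m^2")
```

For Δm² ∼ 1 eV² the oscillation length is tens of metres, so a detector ∼ 30 m from the source (as in LSND) sits inside the first oscillation, while the solar and atmospheric Δm² produce no visible effect at such distances.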
ν̄μ − ν̄e oscillations. The LSND experiment [48] studied muon decay at rest, μ⁺ → e⁺ νe ν̄μ, and measured the ν̄e flux at a distance of about 30 m from the place where the muons were stopped. An excess of this flux over the background rate was detected and interpreted as the appearance of ν̄e as a result of ν̄μ oscillations, for a range of possible parameters. A similar experiment, KARMEN [49], excluded a significant part of this parameter space; however, in 2010 the MiniBooNE experiment [50] also detected an anomaly which is compatible with the LSND results and, within statistical uncertainties, does not contradict KARMEN for a certain range of parameters (Fig. 5).

Another group of short-baseline experiments which study possible ν̄e − ν̄μ oscillations search for the disappearance of ν̄e in the antineutrino flux from nuclear reactors. These experiments have continued for decades; recently, their results were reanalysed jointly [52] with a more precise theoretical calculation of the expected fluxes. It has been shown that there is a statistically significant deficit of ν̄e in the detectors which is compatible with Δm² ∼ 1 eV², the so-called reactor antineutrino anomaly. The corresponding limits on the parameters are also shown in Fig. 5 for convenience. However, one should keep in mind that while LSND, KARMEN and MiniBooNE detected ν̄e in a ν̄μ flux, thereby constraining ν̄e − ν̄μ oscillations, the reactor experiments only fix the disappearance of ν̄e. While the lack of this disappearance would exclude ν̄e − ν̄μ oscillations, its presence may be explained as a transformation of ν̄e into antineutrinos of any other type.
One can see that there are several independent indications in favour of Δm² ∼ 1 eV², which, as discussed above, require either the introduction of more than three neutrino flavours or (see below) some other new physics.

Figure 5: Limits (90% confidence level) on the parameters of ν̄μ − ν̄e oscillations. The shaded region is compatible with the LSND signal [48]; the region inside the dotted curve, with the MiniBooNE signal [51]. Thin full lines bound the region of parameters compatible with a joint re-analysis of the reactor data [52], see text. The KARMEN2 experiment excludes [49] the region above and to the right of the thick full line.

Other anomalies. The recent intense exploration of the field of neutrino oscillations has also revealed a range of other anomalies which are currently being discussed and rechecked intensively.
Possible difference between neutrino and antineutrino oscillations. The MiniBooNE experiment studied neutrino and antineutrino beams separately. The appearance of ν̄e has been detected [50, 51] while that of νe has not [53] (see Fig. 6). Under the assumption of equal oscillation parameters for ν and ν̄, the MiniBooNE result contradicts LSND; without this assumption, on the contrary, the LSND claim is supported. It is worth noting that the MINOS experiment also performed separate measurements with neutrino and antineutrino beams (studying a range of much smaller Δm²); at first, their results for the two cases were incompatible at the 98% confidence level, but a subsequent analysis of a larger amount of data did not confirm this difference [54]. The latter result agrees with Super-K: though this experiment cannot distinguish neutrinos from antineutrinos in each particular case, it may limit [55] the antineutrino oscillation parameters statistically, on the basis of the known contribution of ν̄μ to the atmospheric neutrino flux.
Calibration of gallium detectors. The GALLEX [56] and SAGE [57] experiments, constructed to detect solar neutrinos with gallium detectors, calibrated their instruments with artificial radioactive sources. They detected a deficit of electron neutrinos compatible with oscillations with Δm² ≳ 0.1 eV² (see also [58]). This mass-squared difference, which by itself does not agree with the standard three-neutrino oscillation picture, agrees with the antineutrino results of LSND, MiniBooNE and the reactor experiments; however, the corresponding mixing angle differs from the predictions of the latter [59].
Other puzzles. Speaking of unexplained results of neutrino experiments, one may also mention the unexpected excess of events with energies ≲ 400 MeV detected by MiniBooNE for neutrinos [60] and antineutrinos [51]; possible seasonal variations of the neutrino flux in the Troitsk-νmass [61] and MiniBooNE [62] experiments; and the result of the OPERA experiment [63], which measured the velocity of muon neutrinos
Figure 6: Limits (90% confidence level) on the νµ −νe and ν̄µ − ν̄e oscillation parameters. The
shaded area corresponds to the part of the parameter space which is excluded for
neutrino oscillations by MiniBooNE [53] and KARMEN [49], while thick contours
limit the region which corresponds to the signal in antineutrino oscillations (full
lines, MiniBooNE [51]; dotted line, LSND [48]).

which happened to be larger than the speed of light. All these very interesting anomalies currently await confirmation in independent experiments.
Possible theoretical explanations. The experimental results listed above are rather hard to explain. On the one hand, a series of experiments suggests neutrino transformations compatible with Δm² ≳ 0.1 eV², which cannot be described within the framework of the standard three-generation scheme. On the other hand, the addition of a fourth neutrino does not help to explain the difference between the neutrino and antineutrino oscillations [64, 65]. Alternatively, one can consider (a) two generations of sterile neutrinos (see e.g. [66] and references therein); (b) breaking [67] of CPT invariance¹; or (c) a nonstandard interaction of neutrinos with matter which may distinguish particles from antiparticles [71, 72]. A critical analysis of some of these suggestions may be found e.g. in [66, 73, 74]. These scenarios experience considerable difficulties in simultaneously explaining the full set of experimental data, though they cannot be totally excluded; it might happen that a certain combination of these possibilities is realized in Nature.
A confirmation of the result about superluminal neutrino motion would require
a serious reconsideration of basic ideas of particle physics. A successful theory which
explains quantitatively the OPERA result should also agree with very restrictive bounds
on the Lorentz-invariance violation in the sector of charged particles, with the absence
of dispersion of the neutrino signal from the supernova 1987A, and with the absence of
intense neutrino decays which are characteristic of many models with deviations from
relativistic invariance.

¹ Invariance with respect to simultaneous charge conjugation (C) and reflection of both space (P) and
time (T) coordinates is (see e.g. [68]) a fundamental symmetry which inevitably holds in any (3+1)-
dimensional Lorentz-invariant local quantum field theory. However, there exist phenomenologically
acceptable models with CPT violation (either with a higher number of space dimensions, or with
Lorentz invariance violation, or with a nonlocal interaction). In the context of neutrino oscillations,
they are discussed e.g. in [67, 69, 70].

2.4 The neutrino mass.


Conversions of neutrinos of one type into another are experimentally established, and
numerous independent and very different experiments are in good agreement with
the oscillation picture. The oscillatory behaviour of the neutrino conversions is proven
by a comparison of the results obtained at different energies (cf. the argument of the
sine squared in Eq. (2)). The last step is to measure the neutrino flux at different
distances along a single path (the distance dependence in Eq. (2)), which is planned for
the near future. Up to this last detail, the neutrino oscillations are experimentally
confirmed. Since the oscillations are possible only for different masses of neutrinos of
different types, they also prove that (at least some of) the neutrino masses are nonzero.
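The energy and distance dependence invoked here (the argument of the sine squared in Eq. (2)) can be illustrated with a minimal two-flavour sketch; the factor 1.27 collects ℏ, c and the unit conversions for Δm² in eV², L in km and E in GeV:

```python
import math

def p_transition(sin2_2theta, delta_m2_ev2, l_km, e_gev):
    """Two-flavour oscillation probability P = sin^2(2θ) sin^2(1.27 Δm² L / E)."""
    return sin2_2theta * math.sin(1.27 * delta_m2_ev2 * l_km / e_gev) ** 2

# Near the first oscillation maximum for atmospheric-like parameters
# (Δm² ≈ 2.4e-3 eV², L/E ≈ 500 km/GeV) the probability approaches sin²(2θ).
p = p_transition(1.0, 2.4e-3, 500.0, 1.0)
```

Measuring P at several values of L/E traces out the oscillatory curve; measurements at different distances along a single path would probe the L dependence directly.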
At the same time, direct experimental searches for neutrino masses have not been
successful yet; the most restrictive bounds, set by the Troitsk-νmass (INR RAS) and
Mainz experiments, which studied tritium beta decay, are mνe ≲ 2 eV [75, 76].
For other neutrino types, the experimental bounds on the mass are much weaker. An
indirect bound on the sum of the neutrino masses may be obtained from studies of the
anisotropy of the cosmic microwave background and of the hierarchy of structures in the
Universe [77]; it reads Σᵢ mνᵢ ≲ 0.35 eV.
At the same time, in SM the lepton numbers are conserved separately for each gen-
eration, that is, changes of the neutrino flavour are forbidden. Using only the
SM fields, it is impossible to construct a gauge-invariant renormalizable interaction re-
sulting in a neutrino mass, even after the electroweak symmetry breaking. Therefore,
neutrino oscillations represent an experimental proof of the incompleteness of SM.
How can one modify SM to obtain massive neutrinos? First note that at energies
below the electroweak breaking scale, the neutrino field is gauge invariant: it is un-
charged and colorless. For such fermion fields one may write two kinds of mass terms,
namely the Dirac term mD ν̄R νL (all charged SM fermions have masses of this kind) and the
Majorana one, mM νLᵀ CνL , where C is the charge conjugation matrix and νL , νR denote
the left-handed and right-handed neutrino spinors, respectively.
In SM, only left-handed neutrinos are present; therefore, to have Dirac masses, one
must introduce new fields νR,i . At first sight, the Majorana mass does not require
new fields; however, like the Dirac one, it cannot be obtained from a renormalizable
interaction. Going beyond renormalizability means that SM is a low-energy limit
of a more complete theory (in the same way as the non-renormalizable Fermi theory is a
low-energy limit of SM), so it is again inevitable to introduce new fields. In any case,
neutrinos are several orders of magnitude lighter than the charged fermions, and a
successful theory of neutrino masses should be able to explain this fact (see also Sec. 4.3).
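One widely discussed way such an explanation might work is the seesaw estimate m_ν ∼ m_D²/M; the heavy scale M ∼ 10¹⁴ GeV below is a hypothetical illustration, not a value fixed by the text:

```python
# Seesaw-type estimate (illustrative sketch): a Dirac mass of the electroweak
# scale combined with a heavy Majorana scale M suppresses the light neutrino mass.
m_dirac = 100e9                   # eV, ~ electroweak scale (100 GeV)
m_heavy = 1e23                    # eV, hypothetical heavy scale (10^14 GeV)
m_light = m_dirac**2 / m_heavy    # eV; comes out ~ 0.1 eV,
# in the ballpark suggested by the oscillation data discussed above.
```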

3 Astrophysical and cosmological indications in favour of new physics.
While laboratory experiments in particle physics give only limited indications of the
incompleteness of SM (neutrino oscillations being the main one), most scientists are
confident that a more complete theory should be constructed. The main reason for this
confidence comes from astrophysics and cosmology. In recent decades, the intense
development of observational astronomy in various energy bands has turned cosmology
(that is, the branch of science studying the Universe as a whole) into an accurate
quantitative discipline based on precise observational data (see e.g. textbooks [4, 78]).
Today, cosmology has its own “standard model”, which is in good agreement with
most observational data. The basis of the model is the concept of an expanding Uni-
verse which, long ago, was very dense and so hot that the energy of thermal motion of
elementary particles did not allow them to form bound states. As a result, it was
the particle interactions that determined all processes and, in the end, influenced the
development of the Universe and the state of the world as we observe it today. The expanding
Universe cooled down and particles combined into bound states: first atomic nuclei
from nucleons, then atoms from nuclei and electrons. Unstable particles decayed and
the Universe arrived at its present appearance. As we will see below, at present the
Universe expands with acceleration and is composed mainly of unknown particles.
Even a dedicated book would be insufficient to describe all aspects of interrelations
between cosmology and particle physics (the readers of Physics Uspekhi might be in-
terested in reviews [79, 80]). Here, we will briefly consider three principal observational
indications in favour of physics beyond SM, namely, the baryon asymmetry of the Uni-
verse, the dark matter and the accelerated expansion of the Universe (both the related
notion of the dark energy and physical reasons for inflation).

3.1 Baryon asymmetry.


Quark-antiquark pairs had to be created intensively in the hot early Universe. The
Universe then expanded and cooled down, quarks and antiquarks annihilated, and the
surviving ones composed baryons (protons and neutrons). Notably, there are very few
antibaryons in the present Universe, which means that at the early stages there were
more quarks than antiquarks. One can determine by how much: the number of
quark-antiquark pairs was of the same order as the number of photons, while the baryon-
to-photon ratio may be determined from the analysis of the cosmic microwave background
anisotropy and from studies of primordial nucleosynthesis. The relative excess of the
quark number nq over the antiquark number nq̄ is of order

(nq − nq̄)/(nq + nq̄) ∼ 10⁻¹⁰,

that is, a single “unpaired” quark was present for each ten billion quark-antiquark pairs.
It is hard to imagine that this tiny excess of matter over antimatter was present in the
Universe from the very beginning; moreover, a number of quantitative cosmological
models predict exact baryon symmetry of the very early Universe. It appears that the
asymmetry arose in the course of the evolution of the Universe. For this to happen,
the following Sakharov conditions [81] should be fulfilled:
1. baryon number nonconservation;
2. CP violation;
3. breaking of thermodynamical equilibrium.
Though the classical SM Lagrangian conserves the baryon number, nonperturbative
quantum effects may violate it, that is, condition 1 may be fulfilled in SM.
The source of CP violation (condition 2) is also present in SM: it is the phase in the
quark mixing matrix. Finally, in the course of the evolution of the Universe, the state
with zero vacuum expectation value of the Higgs field (at high temperature) was
replaced by the present state. It can be shown (see e.g. [82] and references therein)
that thermodynamic equilibrium was strongly broken at that moment if this was a
first-order phase transition. Therefore, in principle, all three conditions might be met
in SM. However, it has been shown that a first-order electroweak phase transition
in SM is possible only for a Higgs boson mass MH ≲ 50 GeV, which was excluded
by direct searches long ago. Also, the amount of CP violation in the CKM matrix
is insufficient. We conclude that the observed baryon asymmetry of the Universe is
an indication of the incompleteness of SM. A particular mechanism of generation of
the baryon asymmetry is yet unknown (it should also explain the smallness of the
asymmetry, ∼ 10⁻¹⁰).

3.2 Dark matter.


Studies of the dynamics of astrophysical objects (galaxies, galaxy clusters) and of the Uni-
verse as a whole allow one to determine the distribution of mass, which may sub-
sequently be compared to the distribution of the visible matter. Various independent
observational data point to the estimate that the contribution of the visible matter
(mostly baryons) to the energy density of the Universe is five times smaller than the
contribution of invisible matter. We will first briefly discuss modern observational evi-
dence for the existence of dark matter and then proceed to the implications
of these observations for particle physics.
1. Rotation curves of galaxies. The question of invisible matter attracted serious
attention after analyses of the rotation curves of galaxies (see e.g. [83])
(Fig. 7). For nearby galaxies it is possible to measure, by making use of the Doppler
effect, the velocities of stars and gas clouds at different distances from the galaxy center,
that is, from the rotation axis. The Newton law of gravitation allows one to estimate the
distribution of mass as a function of the distance from the center; it was found that
in the outer parts of galaxies, where luminous matter is practically absent, there is a
significant mass density, so that the visible part of a galaxy is embedded into a much
larger invisible massive halo. These measurements have been performed for many
galaxies, in particular for our Milky Way.
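The Newtonian estimate described above can be sketched numerically (the velocities and radii below are illustrative round numbers, not the actual NGC 3198 data):

```python
G = 4.30e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def enclosed_mass(v_kms, r_kpc):
    """Mass within radius r inferred from a circular velocity: M = v^2 r / G."""
    return v_kms ** 2 * r_kpc / G

# A flat rotation curve, v ≈ const, implies M(<r) growing linearly with r,
# even at radii where the luminous matter has already run out.
m_10 = enclosed_mass(150.0, 10.0)
m_30 = enclosed_mass(150.0, 30.0)
```

The linear growth of the enclosed mass at large radii is exactly what the invisible halo accounts for.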
2. Dynamics of galaxy clusters. In a similar way (though based on completely
different observations), it is possible to determine the mass distribution in galaxy clus-
ters. This provided the historically first argument in favour of dark matter [86].
Modern observations have demonstrated that the main part of the baryonic matter
resides not in star systems (galaxies) but in hot gas clouds in the intergalactic space.
This gas emits X rays, so the observations allow one to reconstruct the distribution of the
electron density and temperature. From the latter, by making use of the condition of
hydrostatic equilibrium, the mass distribution may be determined. Comparison with
the distribution of the luminous matter (that is, mostly of the gas) points again to
the existence of some hidden mass. A similar, though less precise, conclusion may be
obtained from the analysis of the velocities of galaxies inside a cluster.
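The hydrostatic-equilibrium step can be sketched with the standard textbook estimate M(<r) = −kT r/(G μ m_p) × (dln n/dln r + dln T/dln r); the temperature and logarithmic slopes below are illustrative, not taken from any particular cluster:

```python
K_B = 1.381e-16       # Boltzmann constant, erg/K
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_P = 1.673e-24       # proton mass, g
MU = 0.6              # mean molecular weight of a fully ionized plasma
CM_PER_MPC = 3.086e24
G_PER_MSUN = 1.989e33

def hydrostatic_mass(t_kev, r_mpc, dln_n, dln_t):
    """Cluster mass within r from the X-ray temperature and the logarithmic
    slopes of the gas density and temperature profiles."""
    t_kelvin = t_kev * 1.16e7      # 1 keV corresponds to ~1.16e7 K
    r_cm = r_mpc * CM_PER_MPC
    mass_g = -(K_B * t_kelvin * r_cm) / (G * MU * M_P) * (dln_n + dln_t)
    return mass_g / G_PER_MSUN

# T ~ 8 keV gas, density falling as r^-2, roughly isothermal at r = 1 Mpc:
m_total = hydrostatic_mass(8.0, 1.0, -2.0, 0.0)
```

The resulting mass, several 10¹⁴ solar masses, far exceeds the mass of the observed gas and galaxies, which is the hidden-mass argument in numbers.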
3. Gravitational lensing. It may happen that a massive object (e.g. a galaxy cluster)
is located between a distant source (e.g. a galaxy) and the observer. According to
general relativity, the light from the source is deflected by the massive object, so the
latter may serve as a gravitational lens, which produces several distorted images of
the source. A joint analysis of images of several sources allows one to model the mass
distribution in the lens and to compare it with the distribution of visible matter (see
e.g. [87]). The baryon distribution is reconstructed from X-ray observations of the

Figure 7: Some of the first indications of the existence of dark matter were obtained
         from the analysis of the rotation curves of galaxies. Observational data on the
         rotation velocity as a function of the distance to the axis, given here for the galaxy
         NGC 3198 (dots), are not described by the curve which represents the expected
         velocity calculated from the distribution of luminous matter (lower line; data and
         calculation from [84]). At distances ≳ 10 kpc, the luminous matter is practically
         absent (as one may see in the lower photograph, taken from the digitized Palomar
         sky atlas [85]), but the rotation velocities of gas clouds seen in the radio band are
         almost constant. This indicates that at the periphery of the galaxy there is a significant
         concentration of mass (the so-called halo).

Figure 8: The galaxy cluster Abell 1689. The background image of the cluster in the optical
band was obtained by the Hubble Space Telescope (image from the archive [88]).
The contours describe the model of mass distribution (full curves, Ref. [87]) based
on the gravitational lensing and the distribution of luminous gas observed in X rays
(dotted curves based on data from the Chandra X-ray telescope archive, Ref. [89]).
Currently this mass model is one of the most precise ones.

luminous gas, which contains about 90% of the baryonic mass of the cluster (Fig. 8). The full
mass of the cluster calculated in this way far exceeds the mass of the baryons obtained from
observations.
4. Colliding clusters of galaxies. One of the most beautiful observational proofs
of the existence of dark matter is [90] the observation of colliding clusters of galaxies
(Fig. 9). Contrary to the case of a usual cluster, Fig. 8, one does not need to calculate
the mass in this case: comparison of the mass distribution and the gas distribution
demonstrates that the main part of the mass of the clusters and that of luminous matter
are located in different places. The reason for this dramatic difference, not seen in
normal, noninteracting clusters, is related to the fact that the dark matter, constituting
the dominant part of the mass, behaves as a nearly collisionless gas. During the collision
of the clusters, the dark matter of one cluster, together with the rare (and therefore also
effectively collisionless) galaxies kept by its gravitational potential, passed through the
other one, while the gas clouds collided, stopped and were left behind.
These results, both by themselves and in combination with other results of quanti-
tative cosmology (first of all those obtained from the analysis of the cosmic microwave
background and the large-scale structure of the Universe, see e.g. [4]), point unequiv-
ocally to the existence of nonluminous matter. One should point out that the terms
“dark” or “nonluminous” mean that this matter does not interact with the electro-
magnetic radiation, and not merely that it happens to be in a non-emitting state. Indeed, it
should not absorb electromagnetic waves either, since otherwise induced radiation
would inevitably appear. The usual matter, that is baryons and electrons, may be
put in this state only if packed into compact, very dense objects (neutron stars, brown
dwarfs, etc.) which should be located in the halo of our Galaxy, as well as in other
galaxies and in the intergalactic space within clusters.

Figure 9: As in Fig. 8 but for the colliding clusters 1E 0657–558 (the mass distribution model
         from [91]; optical and X-ray images from the archives [88, 89], respectively). Squares
         denote the positions of the maxima of the mass distributions; diamonds denote the
         positions of the maxima of the gas emission.

One may estimate the amount
of these objects which is required to explain the observational results concerning the
nonluminous matter. This amount appears to be so large that these compact objects
should often pass between the observer and distant sources, which should result
in a temporary distortion of the source image because of gravitational lensing (the
so-called microlensing effect). Such events have indeed been observed, but at a very
low rate, which allows for a firm exclusion of this explanation of dark matter [92].
We are forced to conclude that, probably, the dark matter consists of new, yet unknown,
particles, so that its explanation requires an extension of SM. The dark-matter particles
should be (almost) stable in order not to decay during the lifetime of the Universe
(∼14 billion years). These particles should also interact with ordinary matter
only very weakly to have avoided direct experimental detection (direct searches for the dark
matter, which should be present everywhere, in particular in laboratories, have been going on
for decades already). A number of theoretical models of the dark-matter origin predict the mass
of the new particle between ∼ 1 GeV and ∼ 1 TeV and the cross section of interaction
with ordinary particles of the order of a typical weak-interaction cross section. Particles
with these properties are called WIMPs (weakly interacting massive particles); they are
absent in SM but exist in some of its extensions. One of the most popular candidates for
the WIMP is the lightest superpartner (LSP) in supersymmetric extensions of SM with
conserved R parity (see Sec. 4.2 below). The LSP cannot decay because the conservation
of R parity requires that at least one supersymmetric particle be present among the
decay products, while all other supersymmetric particles are heavier by definition
(in the same way, electric-charge conservation ensures the stability of the electron,
and baryon-number conservation that of the proton). In a
wide class of models the LSP is an electrically neutral particle (neutralino) which is
considered a good candidate for a dark-matter particle. Note that there is a plethora
of other scenarios in which the dark-matter particles have very different masses, from
∼ 10⁻⁵ eV (the axion) to ∼ 10²² eV (superheavy dark matter). Also, in principle, the dark
matter may consist of large composite particles (solitons).

3.3 Accelerated expansion of the Universe
In this section, we briefly discuss several technically interrelated problems which concern
one of the least understood, from the particle-physics point of view, parts of
modern cosmology. They include:
1. the observation of the accelerated expansion of the Universe (“dark energy”);
2. the weakness of the effect of the accelerated expansion as compared to typical scales
of particle physics (the cosmological-constant problem);
3. indications of an intense accelerated expansion of the Universe at one of the early
stages of its evolution (inflation).
Let us start with the observational evidence in favour of (recent and present) accel-
erated expansion of the Universe.
1. The Hubble diagram. The first practical instrument of quantitative cosmology,
the Hubble diagram, plots distances to remote objects as a function of the cosmological
redshift of their spectral lines. It was in this way that the expansion of the Universe
was discovered and its rate, the Hubble constant, was measured. When methods to measure
distances to objects located really far away became available to astronomers, they found (see
e.g. [93, 94]) deviations from the simple Hubble law which indicate that the expansion
rate of the Universe changes with time, namely, the expansion accelerates. The method
of distance determination we are speaking about² is based on the study of type Ia
supernovae and deserves a brief discussion (see also [95]).
A probable mechanism of the type Ia supernova explosion is the following. A white
dwarf (a star at the latest stage of its evolution, in which nuclear reactions have stopped)
orbits in a close binary system with a normal star. Matter from the normal
star flows onto the white dwarf and increases its mass. When the mass exceeds the so-
called Chandrasekhar limit (the limit of stability of a white dwarf, whose value depends,
in practice, only on the chemical composition of the star), intense thermonuclear re-
actions start and the white dwarf explodes. It is interesting and useful to note that,
therefore, in all cases the exploding stars have roughly the same mass and constitution
(up to details of the chemical composition). As a consequence, all type Ia supernova
explosions resemble each other not only qualitatively but quantitatively as well: the
energy release is roughly the same, and the time dependence of the luminosity is similar.
Even more remarkable is the fact that even for rare outliers (which differ from the majority
of supernovae either by the chemical composition or by some random circumstances),
all curves representing the luminosity as a function of time are homothetic (Fig. 10),
that is, they map onto one another under a simultaneous rescaling of both time and luminosity.
This means that, upon measuring the light curve of any type Ia supernova, one may
determine its absolute luminosity with good precision. Then, comparison with the
observed magnitude allows one to determine the distance to the object. In this way it is
possible to construct the Hubble diagram (Fig. 11), which demonstrates statistically
significant deviations from the law corresponding to a uniform (or decelerated)
expansion of the Universe.
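The comparison behind Fig. 11 can be sketched numerically: in a spatially flat universe the luminosity distance is an integral over the expansion history, and a dark-energy-dominated universe places a z = 0.5 supernova measurably farther (hence fainter) than a matter-only one. H0 = 70 km/s/Mpc is an assumed value:

```python
import math

C_KMS = 299792.458   # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc (assumed)

def luminosity_distance(z, omega_m, omega_lambda, steps=10000):
    """Luminosity distance in Mpc for a flat universe, by midpoint integration
    of dz / sqrt(Ω_m (1+z)^3 + Ω_Λ)."""
    dz = z / steps
    integral = sum(
        dz / math.sqrt(omega_m * (1.0 + (i + 0.5) * dz) ** 3 + omega_lambda)
        for i in range(steps)
    )
    return (C_KMS / H0) * (1.0 + z) * integral

d_accelerating = luminosity_distance(0.5, 0.3, 0.7)  # dark-energy-dominated
d_matter_only = luminosity_distance(0.5, 1.0, 0.0)   # decelerating
```

The difference, a few tenths of a magnitude in apparent brightness at z ≈ 0.5, is the kind of deviation seen in the supernova Hubble diagram.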
2. Gravitational lensing. The method of gravitational lensing, discussed above,
allows not only for the reconstruction of the mass distribution in the lensing cluster of
galaxies, but also for the determination of the geometrical distances between the source,
the lens and the observer. If the redshifts of the source and the lens are known, one may compare
² The Nobel Prize, 2011.

Figure 10: Temporal dependence of the absolute magnitude of type Ia supernovae. Above:
          the light curves of 68% of supernovae are contained within the shaded band; however,
          there are very rare outliers (for example, the light curves of an unusually bright super-
          nova, SN1991T (squares), and of an unusually weak one, SN1986G (triangles),
          are presented); the light curves and the band are taken from [96]. Below: the same
          curves, but scaled simultaneously along the horizontal (time) and vertical (luminos-
          ity) axes according to the rules described in [97]. This correction
          shifts all “exceptional” curves into the band. Therefore, to know the absolute value of
          the luminosity, it is sufficient to measure the shape of the light curve.

Figure 11: The Hubble diagram, presenting the dependence of the distance on the redshift
          z of the spectral lines of distant galaxies, obtained from observations of type Ia super-
          novae. Gray lines correspond to data on individual supernovae with experimental
          error bars [98]. The uniform expansion of the Universe corresponds to the lower
          (dotted) line, the accelerated expansion to the upper (full) line.

them with the derived distances and find [99] deviations from the Hubble law with
high precision.
3. Flatness of the Universe and the energy balance. A number of measurements
point to the spatial flatness of the Universe, that is to the fact that its three-dimensional
curvature is zero. The main argument here is based on the analysis of the cosmic mi-
crowave background anisotropy [100]. In the past, the Universe was denser and hotter
than now. Various particles (photons in particular) were in thermodynamical equilib-
rium, so that the distribution of photons over energies was Planckian, corresponding
to the temperature of the surrounding plasma. The Universe cooled down while ex-
panding and, at some moment, electrons and protons started to combine into hydrogen
atoms. Compared to plasma, a gas of neutral atoms is practically transparent to
radiation; since then, the photons born in the primordial plasma have propagated almost
freely. We now observe them as the cosmic microwave background (CMB). At the moment when
the Universe became transparent, the size of the causally connected region (that is the
region which a light signal had time to cross since the Big Bang), called a horizon, was
only ∼ 300 kpc. This quantity may be related to a typical scale of the CMB angular
anisotropy; the present Universe is much older and we see at the same moment many
regions which had not been causally connected in the early Universe. This angular
scale has been directly measured from the CMB anisotropy. The theoretical relation
between this scale and the size of the horizon at the moment when the Universe became
transparent is very sensitive to the value of the spatial curvature; the analysis of the
data from the WMAP satellite points to a flat Universe with very high accuracy.
Other methods exist to test the flatness of the Universe. One of the most beautiful
among them is the geometric Alcock–Paczyński criterion. If it is known that an object
has a purely spherical shape, one may try to measure its dimensions along the line of
sight and in the transverse direction. Taking into account distortions related to the ex-
pansion of the Universe, one may compare the two sizes and constrain the cosmological
parameters, first of all deviations from flatness. Clearly, it is not an easy task to find
an object whose longitudinal and transverse dimensions are certainly equal; however,
one may measure characteristic dimensions of some astrophysical structures which, av-
eraged over large samples, should be isotropic. The most precise measurement of this
kind [101] uses double galaxies whose orbits are randomly oriented in space while the
orbital motion is described by Newtonian dynamics.
From the general-relativity point of view, a flat Universe represents a rather
specific solution, characterized by a particular total energy density (the so-called
critical density, ρc ∼ 5 × 10⁻⁶ GeV/cm³). At the same time, estimates of the energy
density related to matter give ∼ 0.25ρc , that is, the remaining three fourths of
the energy density of the Universe are due to something else. This contribution, whose
primary difference from the matter contribution is the absence of clustering (that is,
of concentration in stars, galaxies, clusters, etc.), carries the not entirely felicitous name of
“dark energy”.
The question about the nature of the dark energy is presently open. The technically
simplest explanation is that the accelerated expansion of the Universe results from
a nonzero vacuum energy (in general relativity, the reference point on the energy axis is
relevant!), that is, the so-called cosmological constant. From the particle-physics point
of view, the dark-energy problem is, in this case, twofold. In the absence of special
cancellations, the vacuum energy density should be of the order of the characteristic scale
of the relevant interactions Λ, that is,

ρ ∼ Λ⁴/(c³ℏ³).

The observed value of ρ corresponds to Λ ∼ 10⁻³ eV, while the characteristic scales of the
strong (ΛQCD ∼ 10⁸ eV) and electroweak (v ∼ 10¹¹ eV) interactions are many orders of
magnitude higher. One side of the problem (known for a long time as “the cosmological-
constant problem”) is to explain how the contributions of all these interactions to
the vacuum energy cancel. In principle, some symmetry may be responsible for this
cancellation: for instance, the energy of a supersymmetric vacuum in field theory is
always zero. Unfortunately, supersymmetry, even if it has some relation to the real
world, should, as discussed in Sec. 4, be broken at a scale not smaller than ∼ v, and
the contributions to the vacuum energy should then be of the same order. On the other
hand, the observed accelerated expansion of the Universe tells us that the cancellation
is not complete and hence there is a new energy scale in the Universe, ∼ 10⁻³ eV. The
explanation of this scale is a task which cannot be completed within the framework of
SM, where all parameters with the dimension of energy are orders of magnitude higher. If
this scale is given by a mass of some particle, the properties of the latter should be very
exotic in order both to solve the problem of the accelerated expansion of the Universe
and not to have been found experimentally. For instance, one of the suggested explanations [102]
introduces a scalar particle whose effective mass depends on the density of the medium
(this particle is called the “chameleon”). By itself, the dependence of an effective mass on
the properties of the medium is well known (for instance, the dispersion relation of a photon
in plasma is modified in such a way that it acquires a nonzero effective mass). In our case,
due to the interaction with the external gravitational field, the chameleon has a short-
range potential in a relatively dense medium (e.g. on the Earth), which prevents its
laboratory detection, but at the large scales of the (almost empty) Universe the effect of this
particle becomes important. One should also note that a solution to the problem of the
accelerated expansion of the Universe might have nothing to do with particle physics
at all and be based entirely on peculiar properties of the gravitational interaction (for
instance, on deviations from general relativity at large distances).
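The numbers quoted in this subsection can be cross-checked in a few lines; H0 = 70 km/s/Mpc and a dark-energy fraction of 0.75 are assumed input values:

```python
import math

G_CGS = 6.674e-8                  # gravitational constant, cm^3 g^-1 s^-2
H0 = 70.0 * 1.0e5 / 3.086e24      # 70 km/s/Mpc converted to 1/s
GEV_PER_GRAM = 5.61e23
CM_IN_INV_GEV = 5.068e13          # 1 cm expressed in 1/GeV (from hbar*c)

# Critical density of a flat universe: rho_c = 3 H0^2 / (8 pi G).
rho_c_gram = 3.0 * H0**2 / (8.0 * math.pi * G_CGS)   # g/cm^3
rho_c_gev = rho_c_gram * GEV_PER_GRAM                # GeV/cm^3, ~5e-6 as in the text

# Dark-energy density in natural units (GeV^4) and the scale Lambda = rho^(1/4):
rho_de_gev4 = 0.75 * rho_c_gev / CM_IN_INV_GEV**3
lambda_ev = rho_de_gev4**0.25 * 1.0e9                # in eV, ~2e-3
```

This reproduces both the critical density ∼ 5 × 10⁻⁶ GeV/cm³ and the mysterious ∼ 10⁻³ eV scale.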
However, the problem of the accelerated expansion of the Universe is not exhausted
by the analysis of its present state. There are serious indications that, at an early
stage of its evolution, the Universe experienced a period of intense exponential expan-
sion, called inflation (see e.g. [78, 103]). Though the inflation theory is currently not a
part of the standard cosmological model (it awaits more precise experimental tests), it
solves a number of problems of standard cosmology and presently does not have
an elaborated alternative. Let us briefly list some problems which are solved by the
inflationary model.
1. As it has been already pointed out, various parts of the presently observed Uni-
verse were causally disconnected from each other in the past, if one extrapolates
the present expansion of the Universe backwards in time. Information could not
have been transmitted between the regions which are now observed in different
directions, for instance, at the moment when the Universe became transparent for the CMB.
At the same time, the CMB is isotropic to a high level of accuracy (relative
variations of its temperature do not exceed 10⁻⁴), a fact that indicates a
causal connection between all currently observed regions.
2. Zero curvature of the Universe, from the theoretical point of view, is not singled
out by any condition: the Universe had to be flat from the very beginning, and nobody
knows why.
3. The modern Universe is not precisely homogeneous: the matter is distributed
inhomogeneously, being concentrated in galaxies, clusters and superclusters of
galaxies; a weak anisotropy is observed also in the CMB. Most probably, these struc-
tures developed from tiny primordial inhomogeneities, whose existence has
to be assumed as an initial condition.
These and some other arguments indicate that the initial conditions for the
theory of a hot expanding Universe had to be very specific. A simultaneous solution
to all these problems is provided by the inflationary model, which is based on the
assumption of an exponential expansion of the Universe before the hot stage.
From the theoretical point of view, this situation is fully analogous to the present
accelerated expansion, but the energy density, which determines the acceleration rate,
was much higher. It may be related to the presence of a new scalar field absent in SM,
the inflaton. If it has a relatively flat (that is, weakly dependent on the field value)
potential and the value itself changes slowly with time, then the energy density of the
inflaton provides the required exponential expansion. For a particle physicist, at
least two questions arise: first, what is the nature of the inflaton, and second, why
did inflation stop instead of continuing until now?
To summarize, we note that a large number of observations related to the structure
and evolution of the Universe cannot be explained if particle physics is described
by SM alone: one needs to introduce new particles and interactions. Jointly with the
observation of neutrino oscillations, these facts constitute the experimental basis for
the confidence in the incompleteness of SM. At the same time, at present none of these
experimental results points to a specific model of new physics, so one is guided also by
purely theoretical arguments when constructing hypothetical models.

4 Aesthetic difficulties: the origin of parameters.


4.1 Electroweak interaction and the Higgs boson.
Results of high-precision measurements of electroweak-theory parameters, in particular
at the LEP accelerator, confirm the predictions of SM, based on the Higgs mechanism.
At the same time, the only SM particle which has not been discovered experimentally
is the Higgs boson. Its mass is a free parameter of the model and is not directly related
to any of measurable parameters, so the lack of signs of the Higgs boson in data may
be simply explained by its mass: the energies and luminosities of available accelerators
might be insufficient to create this particle with a significant probability.
At the same time, purely theoretical concerns suggest that the Higgs boson should
not be too heavy. This is related to the fact that, without the account of the Higgs scalar,
the scattering amplitudes of massive W bosons grow as E² with energy E. As a result,
at energies somewhat higher than the W mass, the perturbation theory fails and all
model predictions start to depend on unknown higher-order contributions; the theory
finds itself in a strong-coupling regime and loses predictivity. The contribution from
the Higgs boson, however, cancels the part of the amplitude which grows with energy,
so only the constant term remains, ∼ g²MH²/(4MW²), where MH and MW are the masses
of the Higgs and W bosons, respectively, and g is the SU (2)L gauge coupling constant.
Therefore, to keep calculability, MH should not be too large; a quite reliable limit is
MH ≲ 800 GeV. Even more restrictive limits come from the radiative corrections to
the potential of the Higgs boson itself. In the leading order of perturbation theory, the
self-interaction constant of the Higgs boson has a pole at the energy scale
Q ∼ v exp(4π²v²/(3MH²)).

Figure 12: The Higgs boson mass expected from indirect data and constrained from direct
searches (see text). The left panel shows all experimental limits; the right one is a
zoom of the most interesting region, MH < 200 GeV.

This means that at some energy Λ ≤ Q, the contributions of new particles or interactions
should change the behaviour of the coupling to avoid the divergence. The requirement
Λ ≥ 1 TeV results in the limit MH ≲ 550 GeV. Note that this means that the SM Higgs boson
should be discovered at LHC.
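The sensitivity of the pole scale to MH is easy to evaluate numerically. A minimal sketch follows; all O(1) prefactors in the relation Q ∼ v exp(4π²v²/(3MH²)) are dropped, so only the qualitative trend, not the exact position of the 550 GeV bound, should be trusted:

```python
import math

v = 246.0  # electroweak vacuum expectation value, GeV

def landau_pole_scale(m_h):
    """Leading-order scale at which the Higgs self-coupling blows up,
    Q ~ v * exp(4*pi^2*v^2 / (3*m_h^2)); O(1) factors are dropped."""
    return v * math.exp(4 * math.pi**2 * v**2 / (3 * m_h**2))

for m_h in (130.0, 200.0, 550.0, 800.0):
    print(f"MH = {m_h:5.0f} GeV  ->  Q ~ {landau_pole_scale(m_h):.2e} GeV")
```

A light Higgs pushes Q far beyond the Planck scale, while for MH near 800 GeV the pole drops below 1 TeV, reproducing the exponential sensitivity discussed above.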
Maybe an even more interesting situation is related to the experimental data on the
search of the Higgs particle. The latter may reveal itself not only directly, being pro-
duced at colliders, but also indirectly, through the influence of the virtual Higgs bosons
on numerous observables. Though this influence is not large, a number of electroweak
observables are known with a very high precision, so that their joint analysis may con-
strain the mass of the yet undiscovered Higgs particle. Let us look at Fig. 12, which is based
on the analysis of indirect experimental data³ as of September 2011. The horizontal
axis gives the possible Higgs-boson mass; the shaded regions of MH are excluded, as
of December 2011, from direct experimental search of the Higgs boson at colliders at
the 95% confidence level (the light band MH < 114 GeV – LEP [104], the light bands
114 GeV< MH < 115.5 GeV and 127 GeV< MH < 600 GeV – LHC [105, 106], the dark
bands 100 GeV< MH < 109 GeV and 156 GeV< MH <177 GeV – Tevatron [107]).
The curve [108] demonstrates how well a given value of MH agrees with a combination
of all other than the direct-search experiments (the lower ∆χ2 , the better agreement;
the curve width represents the uncertainty in theoretical predictions). One can see
that the most preferable value of MH is already experimentally excluded! Clearly, this
does not mean a catastrophe because a narrow range of slightly less preferable values
are allowed, but it motivates theoretical physicists to think about possible alternative
explanations of the electroweak symmetry breaking [109]. One should note that it is
rather difficult to discover a light, 115 GeV< MH < 127 GeV, Higgs boson at LHC:
unlike for a heavy one, several years of work might be required.
The lack of the Higgs boson with the expected mass and the prospect of further
restriction of the allowed mass region at LHC are important, but far from decisive,
arguments in favour of alternative theoretical models of the electroweak symmetry
breaking, whose history goes back for decades. The point is that the Higgs boson
is the only SM scalar particle (all others are either fermions or vectors). A scalar
particle brings to a theory a number of unfavoured properties some of which we have
just mentioned above, while others will be discussed below. That is why alternative
mechanisms of the electroweak symmetry breaking use, as a rule, only fermionic and
vector fields.

³ See also the regularly updated webpage at http://gfitter.desy.de/GSM .
A class of hypothetical models in which the vacuum expectation value of the Higgs
particle is replaced by the vacuum expectation value of a two-fermion operator with the
same quantum numbers is known as technicolor models (see e.g. [110]). The replacement
of the scalar by a fermion condensate looks quite natural if one recalls that in the
historically first, and surely realized in Nature, example of the Higgs mechanism, the
Ginzburg–Landau superconductor, the condensate of the Cooper pairs of electrons plays
the role of the Higgs boson.
The base for the construction of technicolor models is provided by the analogy to
QCD. Indeed, an unbroken nonabelian gauge symmetry, similar to SU (3)C , may result
in confinement of fermions and in the formation of bound states (in QCD these are
hadrons, bound states of quarks). In fact, in QCD a nonzero vacuum expectation value
of the quark condensate also appears, but its value, of order ΛQCD ∼200 MeV, is much
less than the required electroweak symmetry breaking scale (v ≈ 246 GeV). There-
fore one postulates that there exists another gauge interaction, in a way resembling
QCD, but with a characteristic scale of order v. The corresponding gauge group GTC
is called a technicolor group. The bound states, technihadrons, are composed from the
fundamental fermions, techniquarks T , which feel this interaction. The techniquarks
carry the same quantum numbers as quarks, except instead of SU (3)C , they transform
as a fundamental representation of GTC . Then, the vacuum expectation value hT̄ T i
breaks SU (2)L × U (1)Y → U (1)EM in such a way that the correct relation between
masses of the W and Z bosons is fulfilled automatically. A practical implementation
of this beautiful idea faces, however, a number of difficulties which result in a complication
of the model. First, the role of the Higgs boson in SM is not only to break the
electroweak symmetry: its vacuum expectation value also gives masses to all charged
fermions. Attempts to explain the origin of fermion masses in technicolor models re-
sult in significant complication of the model and, in many cases, in contradiction with
experimental constraints on the flavour-changing processes. Second, many parameters
of the electroweak theory are known with very high precision (and agree with the usual
Higgs breaking), while even a minor deviation from the standard mechanism destroys
this well-tuned picture. To construct an elegant and viable technicolor model is a task
for the future, which will become relevant if the Higgs scalar is not found at the LHC.
In another class of models (suggested in [111] and further developed in numerous
works which are reviewed, e.g., in [109]), the Higgs scalar appears as a component of a
vector field. Since the vacuum expectation value of a vector component breaks Lorentz
invariance, this mechanism works exclusively in models with extra space dimensions.
For instance, from the four-dimensional point of view, the fifth component of a five-
dimensional gauge field behaves as a scalar, and giving a vacuum expectation value
to it breaks only five-dimensional Lorentz invariance while keeping intact the observed
four-dimensional one. Symmetries of the five-dimensional model, projected onto the
four-dimensional world, protect the effective theory from unwanted features related to
the existence of a fundamental scalar particle. These models also have a number of
phenomenological problems which can be solved at a price of significant complication
of a theory.
The so-called higgsless models [112] (see also [109]) are rather close to these multi-
dimensional models, though differ from them in some principal points. The higgsless
models are based on the analogy between the mass and the fifth component of mo-
mentum in extra dimensions: both appear in four-dimensional effective equations of
motion similarly. In the higgsless models, the nonzero momentum appears due to im-
posing some particular boundary conditions in a compact fifth dimension. In the end,

these boundary conditions are responsible for breaking of the electroweak symmetry.
Unlike in five-dimensional models, where the Higgs particle is a component of a vector
field, the physical spectrum of the effective theory in higgsless models does not contain
the corresponding degree of freedom. These models have some phenomenological diffi-
culties (related e.g. to precise electroweak measurements). Another shortcoming of this
class of models is the considerable arbitrariness in the choice of the boundary conditions,
which are not derived from the model but are crucial for the electroweak breaking.
Finally, we note that a composite Higgs boson may be even more complex than
just a fermion condensate: it may be a bound state which includes strongly coupled
gauge fields. Description of these bound states requires a quantitative understanding
of nonperturbative gauge dynamics. Taking into account the analogy between strongly
coupled four-dimensional theories and weakly coupled five-dimensional ones (which will
be discussed in Sec. 5.3), these models may even happen to be equivalent to the
multidimensional models described above.

4.2 The gauge hierarchy.


Each of the main interactions of particles has its own characteristic energy scale. For
the strong interaction it is ΛQCD ∼ 200 MeV, the scale at which the QCD running
coupling becomes strong; this scale determines masses of hadrons made of light quarks.
The scale of the electroweak theory is determined by the vacuum expectation value of
the Higgs boson, v ≈ 246 GeV, which determines, through the corresponding coupling
constants, the masses of the W and Z bosons and of SM matter fields. For gravity,
the characteristic scale is the Planck scale MPl ∼ 1019 GeV, determined by the Newton
constant of the classical gravitational interactions.
These three scales are related to known forces. Extensions of SM give motivation
to some other interactions and, consequently, to other scales. First of all it is MGUT ∼
1016 GeV, the scale of the suggested Grand Unification of interactions. In several
models explaining neutrino masses there exists a scale Mν ; sometimes the scale MPQ ,
related to the CP invariance of the strong interaction, is also introduced. Values of
these two scales are model dependent but roughly MPQ ∼ Mν ∼ 1014 GeV.
The gauge hierarchy problem (see also [2, 79, 113]) consists in the disproportionality
of these scales:
(ΛQCD , v) ≪ (MPl , MGUT , MPQ , Mν )
and in a range of related questions which may be divided into three groups.
1. The origin of the hierarchy: why are the scales of the strong and electroweak
interactions smaller than the others by many orders of magnitude? That is, why, for
instance, are all SM particles practically massless at the gravity scales? It is possible, in
the frameworks of the Grand-Unification hypothesis, to get a reasonable explanation of
the relation ΛQCD ≪ MGUT . It is based on the logarithmic renormalization-group
dependence of the gauge coupling constant on the energy E. In the leading approximation,
this dependence for the strong-interaction coupling α3 reads

α3 (E) = αGUT / (1 + β3 αGUT ln(E/MGUT )),

where β3 is a positive coefficient which depends on the set of strongly interacting matter
fields (in SM, β3 = 11/(12π)), while αGUT ∼ 1/30 is the value of the coupling constant
of a unified gauge theory at the energy scale ∼ MGUT . The scale ΛQCD , where α3

Figure 13: Hierarchy of scales of gauge interactions.

becomes large, may be determined in this approximation as

ΛQCD = MGUT exp(−1/(β3 αGUT )),
and the exponent provides for the required hierarchy. However, a similar analysis is
not successful for the electroweak interaction, whose coupling constants are small at the
scale v. The latter is unrelated to any dynamical scale and is introduced in the theory
as a free parameter.
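The same exponential relation can be checked by running downwards from the measured strong coupling instead of from αGUT. The sketch below is a one-loop, single-threshold illustration; the input α3(MZ) ≈ 0.118 and the choice of nf = 5 active flavours are assumptions of this estimate, and neglected threshold matching and higher loops shift the answer by an O(1) factor:

```python
import math

M_Z = 91.19          # GeV, Z-boson mass
alpha3_MZ = 0.118    # measured strong coupling at the Z mass (assumed input)
n_f = 5              # active quark flavours below M_Z
b0 = (33 - 2 * n_f) / (12 * math.pi)  # one-loop QCD beta coefficient

# One-loop running alpha3(E) = 1/(2*b0*ln(E/Lambda))
# => Lambda = M_Z * exp(-1/(2*b0*alpha3(M_Z)))
Lambda_QCD = M_Z * math.exp(-1.0 / (2 * b0 * alpha3_MZ))
print(f"one-loop Lambda_QCD ~ {Lambda_QCD * 1000:.0f} MeV")
```

Even at this crude level, the exponent turns an O(0.1) coupling at 91 GeV into a scale of order 100 MeV, the right order of magnitude for ΛQCD ∼ 200 MeV.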
2. The stability of the hierarchy. In the standard mechanism of the electroweak
breaking, the characteristic scale v = MH /√(2λ), where λ is the self-interaction constant
of the Higgs boson. Together with MH , the scale v gets, in SM, quadratically divergent
radiative corrections,

δv² ∼ δMH² = f (g)ΛUV²,
where f (g) is a symbolic notation for some known combination of the coupling constants
(in SM, f (g) ≈ 0.1), and ΛUV is the ultraviolet cutoff which may be interpreted as an
energy scale above which SM cannot give a good approximation to reality. This scale
may be related to one of the scales MPl , MGUT etc. discussed above; in the assumption
of the absence of the “new physics”, that is of particle interactions other than those
already discovered (SM and gravity), one should take ΛUV ∼ MPl . Therefore, since
v² = v₀² − δv², where v₀ is the parameter of the tree-level lagrangian, the hierarchy
v² ≪ MPl² appears as a result of a cancellation between two huge contributions, v₀²
and δv². Each of them is of order f (g)MPl² ∼ 10³³ v², that is, the cancellation has to be
precise up to 10⁻³³ in every order of perturbation theory. This fine tuning of the
parameters of the model, though technically possible, does not look natural. One may
invert this logic and say that, to avoid fine tuning in SM, one should have

f (g)ΛUV² ∼ v² ⇒ ΛUV ∼ TeV. (3)

The relation (3) gives grounds for the optimism of researchers who expect the discovery
at the LHC of not only the Higgs boson but also some new physics beyond SM.
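The size of the required cancellation, and the naturalness scale of Eq. (3), follow from a two-line estimate; MPl ∼ 10¹⁹ GeV and f(g) ≈ 0.1 are the values quoted in the text:

```python
import math

v = 246.0        # GeV, electroweak scale
M_Pl = 1.2e19    # GeV, Planck mass
f_g = 0.1        # typical combination of SM couplings quoted in the text

# Quadratically divergent correction relative to v^2 for Lambda_UV ~ M_Pl:
ratio = f_g * M_Pl**2 / v**2
print(f"delta v^2 / v^2 ~ {ratio:.1e}")  # fine tuning to one part in ~10^32-33

# Cutoff for which no fine tuning is needed: f(g)*Lambda_UV^2 ~ v^2
Lambda_natural = v / math.sqrt(f_g)
print(f"natural cutoff Lambda_UV ~ {Lambda_natural:.0f} GeV, i.e. ~ 1 TeV")
```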
3. The gauge desert. The third aspect of the same problem is related to the
presumed absence of particles with masses (and of interactions with scales) between
the “small” (ΛQCD , v) and “large” (Mν , MGUT , MPl ) energy scales, see Fig. 13. All known
particles are settled in a relatively narrow region of masses ≲ v, beyond which, for many
orders of magnitude, lies the so-called gauge desert. Clearly, one may suppose that the

heavier particles simply cannot be discovered due to the insufficient energy of accelerators,
but this suggestion is not that easy to accommodate within the standard approach.
Indeed, new relatively light (∼ v) particles which carry the SM quantum numbers are
constrained by the electroweak precision measurements. Also, the latest Tevatron and
first LHC results on the direct search of new quarks strongly constrain the range of
their allowed masses (see [114] and references therein). In particular, for the fourth
generation of matter fields similar to the known three, the mass of its up-type quark
should exceed 338 GeV, while that of a down-type quark should exceed 311 GeV. The
mass of the corresponding charged lepton cannot be lower than 101 GeV [6]. The mass
of the fourth-generation standard neutrino should exceed one half of the Z-boson mass,
as it has been already discussed above. At the same time, these values of masses of the
fourth-generation charged fermions cannot have the same origin as those for the first
three generations, because, to generate masses much larger than v, Yukawa constants
much larger than one are required. Since the methods to calculate nonperturbative
corrections to masses are yet unknown, for this case one cannot be sure that these
masses can be obtained at all in a usual way. Moreover, SM fermion masses exceeding
the electroweak breaking scale are forbidden by the SU (2)L × U (1)Y gauge symmetry:
a mechanism generating these masses would also break the electroweak symmetry at
a scale > v. Addition of matter fields which do not constitute full generations may
be considered as an essential extension of SM. Finally, addition of new matter affects
the energy dependence of the gauge coupling constants and spoils their perturbative
unification (unless one adds either full generations or other very special sets of particles
of roughly the same mass which constitute full multiplets of a unified gauge group).
We see that attempts “to inhabit the gauge desert” inevitably result in significant steps
beyond SM while the desert itself does not look natural.
Attempts to solve the gauge hierarchy problem may be also divided into several
large groups.
1. The most radical approach, rather popular in recent years, is to assume that
the high-energy scales are simply absent in Nature. For a theoretical physicist, the
most easy scales to refuse are Mν and MPQ , because they do not appear in all models
explaining neutrino masses and CP conservation in strong interactions, respectively.
MGUT is a bit more difficult: the Grand Unification of interactions gets support not
only from aesthetic expectations (electricity and magnetism unified to electrodynamics,
electrodynamics and weak interactions unified to the electroweak theory, etc.) and the
arguments related to the electric charge quantization (see e.g. [3]), but also from the
analysis of the renormalization-group running of the three SM gauge coupling constants
which get approximately the same value at the scale MGUT (see e.g. [2, 3]). It is worth
noting that in the plot (see Fig. 14) of α1,2,3 as functions of energy in SM, the three
lines do not intersect at exactly one point; however, given the evolution over many
orders of magnitude in the energy scale, even an approximate unification is a surprise.
To make three lines intersect at one point precisely, one needs a free parameter, which
may be introduced in the theory with some new particles, e.g. with masses ∼TeV (this
happens in particular in models with low-energy supersymmetry, see below). Therefore,
the most amusing is not the precise unification of couplings in an extended theory with
additional parameters but the approximate unification already in SM. It is not that
easy to keep this miraculous property and at the same time to lower the MGUT scale in
order to avoid the hierarchy v ≪ MGUT . Indeed, the addition of new particles which
affect the renormalization-group evolution either spoils the unification or, to the leading
order, does not change the MGUT scale (note that in SM, the unification occurs in the
perturbative regime and higher corrections do not change the picture significantly).
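The approximate (but not exact) crossing can be checked with one-loop SM running. The β-coefficients and the GUT-normalized inverse couplings at MZ used below are standard one-loop SM numbers assumed for this illustration, not values taken from the text, so the crossing scales should be read as orders of magnitude:

```python
import math

M_Z = 91.19  # GeV
# GUT-normalized inverse couplings at M_Z (approximate experimental values)
alpha_inv_MZ = {"U(1)_Y": 59.0, "SU(2)_L": 29.6, "SU(3)_C": 8.45}
# One-loop SM coefficients: alpha_i^-1(Q) = alpha_i^-1(M_Z) - b_i/(2*pi)*ln(Q/M_Z)
b = {"U(1)_Y": 41.0 / 10.0, "SU(2)_L": -19.0 / 6.0, "SU(3)_C": -7.0}

def crossing_scale(g1, g2):
    """Scale at which two one-loop trajectories intersect."""
    t = 2 * math.pi * (alpha_inv_MZ[g1] - alpha_inv_MZ[g2]) / (b[g1] - b[g2])
    return M_Z * math.exp(t)

q12 = crossing_scale("U(1)_Y", "SU(2)_L")
q23 = crossing_scale("SU(2)_L", "SU(3)_C")
print(f"alpha_1 = alpha_2 at ~ {q12:.1e} GeV")
print(f"alpha_2 = alpha_3 at ~ {q23:.1e} GeV")
```

The two pairwise crossings land around 10¹³ GeV and 10¹⁷ GeV: remarkably close after running over fifteen orders of magnitude, yet clearly not a single point, exactly as stated above.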

Figure 14: The energy-scale dependence of coupling constants of SM gauge interactions U (1)Y
(the full line), SU (2)W (the dashed line) and SU (3)C (the dotted line) in the
leading order.

The only possibility is to give up the perturbativity (the so-called “strong unification”
[115, 116]). In this latter approach, the addition of a large number of new fields in
full multiplets of a certain unified gauge group results in the growth of the coupling
constants at high energies; QCD ceases to be asymptotically free at energies higher
than the masses of the new particles. In the leading order, all three coupling constants
have, in this case, poles at high energies; the unification of SM couplings guarantees
that the three poles coincide and are located at MGUT . However, this leading-order
approximation has nothing to do with the real behaviour of constants in the strong-
coupling regime, so the theory may generate a new scale Ms at which α1,2,3 become
strong, this scale being an ultraviolet analog of ΛQCD . For a sufficiently large number
of additional matter fields, Ms may be sufficiently close to the electroweak scale v:
in certain cases, it might be that Ms ≪ MGUT (a nonperturbative fixed point). In
this scenario, low-energy observable values of the coupling constants appear as infrared
fixed points and do not depend on unknown details of the strong dynamics. Note that
the Grand-unified theory may have degrees of freedom very different from SM in this
case.
In the recent decade, models in which the hierarchy problem is solved by giving
up the large parameter MPl have become quite popular. This parameter is related
to the gravitational law and any attempt to change the parameter requires a change
in the Newtonian gravity. This may be achieved, for instance, if the number of space
dimensions exceeds three but, for some reason, the extra dimensions remain unseen
(see e.g. a review [117]). Indeed, assume that the extra dimensions are compact and
have a characteristic size ∼ R, where R is sufficiently small. Then it is easy to obtain
the relation
MPl² ∼ R^δ × MPl,4+δ^(2+δ), (4)
where δ is the number of extra space dimensions, MPl,4+δ is the fundamental param-
eter of the (4 + δ)-dimensional theory of gravity, while MPl now is the effective four-
dimensional Planck mass. Already in the beginning of the past century, in works by
Kaluza [118], subsequently developed by Klein [119], the possible existence of these extra

dimensions, unobservable because of small R, has been discussed. This approach as-
sumed that R ∼ 1/MPl (and therefore MPl ∼ MPl,4+δ ) and has become well-known and
popular in the second half of the 20th century in the context of various models of string
theory, which, however, have not resulted in successful phenomenological applications
so far. We will discuss, in a little more detail, another approach which allows one to make R
larger without problems with phenomenology. It is based on the idea of localization of
observed particles and interactions in a 4-dimensional manifold of a (4 + δ)-dimensional
spacetime [120, 121, 122].
From the field-theoretical point of view, the localization of a (4 + δ)-dimensional
particle means that the field describing this particle satisfies an equation of motion
with variables related to the observed four dimensions (call them xµ , µ = 0, 1, 2, 3)
separated from those related to δ extra dimensions (zA , A = 1, . . . δ) and the solution
for the z-dependent part is nonzero only in a vicinity (of the size ∼ ∆) of a given
point in the δ-dimensional space (without loss of generality, one may consider the point
z = 0), while the x-dependent part satisfies the usual four-dimensional equations of
motion for this field. As a result, the particles described by the field move along the
four-dimensional hypersurface corresponding to our world and do not move away from
it to the extra dimensions for distances exceeding ∆. This may happen if the particles
are kept on the four-dimensional hypersurface by a force from some extended object
which coincides with the hypersurface. This soliton-like object is often called a brane,
hence the expression “braneworld”. The readers of Physics Uspekhi may find a more
detailed description of this mechanism in [117].
The localisation of light (massless in the first approximation) scalars and fermions in
four dimensions, based on the topological properties of the brane⁴, implies that many
direct experimental bounds on the size of extra dimensions in a Kaluza–Klein-like
model now restrict the region ∆ accessible to the observed particles, instead of the
size R of the extra dimension. In [124], it has been suggested to use this possibility,
for R ≫ ∆, to remove, according to Eq. (4), the large fundamental scale MPl and the
corresponding hierarchy. It has been pointed out that in this class of models, R is
bound from above mostly by nonobservation of deviations from the Newtonian gravity
at short distances; experiments currently exclude such deviations only down to scales
of order 50 µm [125] (it was ∼ 1 mm at the moment when the model was suggested). This
allows, according to Eq. (4), to have MPl,4+δ ∼TeV, that is almost of the same order
as v. Models of this class are well studied from the phenomenological point of view
but have two essential theoretical drawbacks. The first one is related to the apparent
absence of a reliable mechanism of localization of gauge fields in four dimensions. The
only known field-theoretical mechanism for that [126] is based on some assumptions
about the behaviour of a multidimensional gauge theory in the strong-coupling regime.
Though these assumptions look realistic, they currently cannot be considered as well-
justified. The second difficulty is aesthetic and is related to the appearance of a new
dimensionful parameter R: the hierarchy v ≪ MPl happens to be simply reformulated
in terms of a new unexplained hierarchy 1/R ≪ MPl,4+δ .
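How large R must be to bring MPl,4+δ down to a TeV follows directly from Eq. (4); the sketch below assumes MPl,4+δ = 1 TeV and drops all O(1) factors, converting from GeV⁻¹ to metres with ħc ≈ 1.97 × 10⁻¹⁶ GeV·m:

```python
M_Pl = 1.2e19      # GeV, effective four-dimensional Planck mass
M_star = 1.0e3     # GeV, assumed fundamental (4+delta)-dimensional gravity scale
HBARC = 1.973e-16  # GeV*m, conversion from GeV^-1 to metres

radii = {}
for delta in (1, 2, 3):
    # Eq. (4): M_Pl^2 ~ R^delta * M_star^(2+delta)  =>  solve for R
    R = (M_Pl**2 / M_star**(2 + delta)) ** (1.0 / delta) * HBARC
    radii[delta] = R
    print(f"delta = {delta}: R ~ {R:.1e} m")
```

One extra dimension would require R of astronomical size and is excluded outright, while δ = 2 gives R in the millimetre range, right at the frontier of the short-distance gravity tests mentioned above.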
To a large extent, these difficulties are overcome in somewhat more complicated
models, in which the spacetime cannot be presented as a direct product of our four-
dimensional Minkowski space and compactified extra dimensions [127, 128, 129]. The
principal difference of this approach from the one discussed above is that the gravita-
tional field of the brane in extra dimensions is not neglected. For δ = 1 and in the limit
of a thin brane, one obtains the usual five-dimensional general-relativity equations.

⁴ Note that recently, a fully analogous mechanism of localisation in one- or two-dimensional manifolds
has been tested experimentally for a number of solid-state systems (the quantum Hall effect, topological
superconductors and topological insulators, graphene); see e.g. [123].
These equations have, in particular, solutions with four-dimensional Poincaré invariance.
The metric in these solutions depends exponentially on the extra-dimensional
coordinate (the so-called anti-de Sitter space),

ds² = exp(−2k|z|)dx² − dz², (5)
where ds2 and dx2 are the squares of the five-dimensional and usual four-dimensional
(Minkowski) intervals, respectively. For a finite size zc of the fifth dimension, the
relation between the fundamental scales is now
MPl ∼ exp(kzc )MPl,5 .
If fundamental dimensionful parameters of the five-dimensional gravity satisfy MPl,5 ∼
k ∼ v, one may [129] explain the hierarchy v/MPl for zc ≈ 37/k; that is, instead of a
fine tuning with a precision of 10⁻¹⁶, one now needs to tune the parameters only up to
∼ 0.1. It is interesting that in models of this kind with two or more extra dimensions,
it is possible [130] to localize gauge fields on the brane in the weak-coupling regime,
contrary to the case of the factorizable geometry.
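The mildness of the required tuning in the warped case can be checked directly: the size kzc needed to generate the full hierarchy is only a logarithm (MPl,5 ∼ k ∼ v is assumed, as in the text; the exact value depends on the choice of MPl):

```python
import math

M_Pl = 1.2e19  # GeV, four-dimensional Planck mass
v = 246.0      # GeV, the assumed five-dimensional scale M_Pl,5 ~ k ~ v

# M_Pl ~ exp(k*z_c) * M_Pl,5  =>  k*z_c = ln(M_Pl / M_Pl,5)
kzc = math.log(M_Pl / v)
print(f"k*z_c ~ {kzc:.1f}")  # an O(40) number instead of a 10^-16 tuning
```

The result is consistent with the zc ≈ 37/k quoted above, up to the choice of MPl.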
2. A completely different approach to the problem of stabilization of the gauge
hierarchy is to add new fields which cancel quadratic divergencies in expressions for
the running SM parameters. The best-known realization of this approach is based on
supersymmetry (see e.g. reviews [131, 132, 133]), which provides for the cancellation of
divergencies due to the opposite signs of fermionic and bosonic loops in Feynman diagrams.
The requirement of supersymmetry is very restrictive for the mass spectrum of
particles described by the theory. Namely, together with the observed particles, their
superpartners, that is particles with the same masses and different spins, should be
present. The absence of scalar particles with masses of leptons and quarks and of
fermions with masses of gauge bosons means that unbroken supersymmetry does not
exist in Nature. It has been shown, however, that it is possible to break supersymmetry
while keeping the cancellation of quadratic divergencies. This breaking is called “soft”
and naturally results in massive superpartners.
In the minimal supersymmetric extension of SM (MSSM; see e.g. [133]), each of
the SM fields has a superpartner with a different spin: the Higgs boson corresponds to
a fermion, higgsino; matter-field fermions correspond to scalar squarks and sleptons;
gauge bosons correspond to fermions which transform in the adjoint representation of
the gauge group and are called gauginos (in particular, gluino for SU (3)C , wino and
zino for the W and Z bosons, bino for the hypercharge U (1)Y and photino for the
electromagnetic gauge group U (1)EM ). For the theory to be self-consistent (absence
of anomalies related to the higgsino loops), and also to generate fermion masses in a
supersymmetric way, the second Higgs doublet is introduced, which is absent in SM.
The cancellation of quadratic divergencies may be easily seen in Feynman diagrams:
in the leading order, closed fermion loops have the overall minus sign and cancel the
contributions from loops of their superpartner bosons. This cancellation is precise
as long as the masses of particles and their superpartners are equal; otherwise the
contributions differ by an amount proportional to the difference between squared masses
of superpartners, ∆m². The condition of stability of the gauge hierarchy then requires
that (g²/16π²) ∆m² ≲ v², where g is the coupling constant in the vertex of the corresponding
loop (the maximal, g ∼ 1, coupling constant is that of the top quark). We arrive at an
important conclusion which motivates in part the current interest in phenomenological
supersymmetry: if the problem of stabilization of the gauge hierarchy is solved by

supersymmetry, then the superpartner masses cannot exceed a few TeV, which means
that they might be experimentally found in the nearest future.
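The quoted "few TeV" follows from the stability condition in one line; a minimal evaluation with the largest relevant coupling, the top-quark vertex with g ∼ 1:

```python
import math

v = 246.0  # GeV, electroweak scale
g = 1.0    # largest relevant coupling (the top-quark Yukawa)

# Stability of the hierarchy: (g^2/(16*pi^2)) * dm^2 <~ v^2
# => mass splitting between partners dm <~ 4*pi*v/g
dm_max = 4 * math.pi * v / g
print(f"dm <~ {dm_max / 1000:.1f} TeV")
```

The loop factor 1/(16π²) is what lets superpartners sit a full order of magnitude above v, around 3 TeV, without destabilizing the hierarchy.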
The MSSM lagrangian, in the limit of unbroken supersymmetry, satisfies all symme-
try requirements of SM, including the conservation of the lepton and baryon numbers.
At the same time, the SM gauge symmetries do not forbid, for this set of fields, certain
interaction terms which violate the lepton and baryon numbers. The coefficients at
these terms should be very small in order to satisfy experimental constraints, for in-
stance, those related to the proton lifetime. It is usually assumed that these terms are
forbidden by an additional global symmetry U (1)R . When supersymmetry is broken,
this U (1)R breaks down to a discrete Z2 symmetry called R parity. With respect to the
R parity, all SM particles carry charges +1 while all their superpartners carry charges
−1. The R-parity conservation leads to the stability of the lightest superpartner (see
Sec. 3.2).
The soft supersymmetry-breaking terms are introduced in the MSSM lagrangian
explicitly. They include usual mass terms for gaugino and scalars as well as trilinear
interactions of the scalar fields. In addition to the SM parameters, about 100 indepen-
dent real parameters are therefore introduced. In general, these new couplings with
arbitrary parameters may result in nontrivial flavour physics. The absence of flavour-
changing neutral currents and of processes with nonconservation of leptonic quantum
numbers, as well as limits from the CP violation, narrow the allowed region of the
parameter space significantly.
One may note the following characteristic features of the phenomenological super-
symmetry.
(1). The coupling-constant unification at a high energy scale becomes more precise
as compared to SM, if superpartners have masses ∼ v as required for the stability of
the gauge hierarchy.
(2). In the same regime, the gauge desert between ∼ 10³ GeV and ∼ 10¹⁶ GeV is
still present.
(3). In MSSM, there is a rather restrictive bound on the mass of the lightest
Higgs boson. In the leading approximation of perturbation theory, it is MH < MZ .
The account of loop corrections allows one to relax it slightly, but in most realistic models
MH < 150 GeV is predicted. The absence of a light Higgs boson discussed in Sec. 4.1
is a much more serious problem for supersymmetric theories than for SM.
(4). The phenomenological model described above explains the stability of the
gauge hierarchy but not its origin. The small parameter v/M , where M = MGUT or
M = MPl , does not require tuning in every order of perturbation theory but should be
introduced in the model by hand; that is, it can neither be derived nor expressed through
a combination of numbers of order one. At the same time, if the supersymmetry
breaking is moderate, as required to solve the quadratic-divergency problem, it may be
explained dynamically and related to nonperturbative effects which become important
at a characteristic scale of

Λ ∼ exp(−O(1/g²)) M,

where g is some coupling constant. If g is small, then the supersymmetry-breaking
scale is also small, Λ ≪ M. In a number of realistic models it is possible to get,
up to powers of the coupling constants, v ∼ Λ dynamically (by means of radiative
corrections) and to explain therefore the origin of the gauge hierarchy. However, in
the MSSM framework, there is no place for nonperturbative effects of the required
scale: these effects are relevant only for QCD, with Λ ∼ ΛQCD ≪ v. The dynamical
supersymmetry breaking should take place in a new sector, introduced expressly for this

Figure 15: Constraints on the MSSM parameters [134] in one popular scenario (see text).
The allowed region is the narrow strip which can be seen in the zoomed panel.

purpose and containing a new strongly coupled gauge theory with its own set of matter
fields. No sign of this sector is seen in experiments and one consequently supposes that
the interaction between the SM (or MSSM) fields and this sector is rather weak and
becomes significant only at high energies, unreachable in the present experiments. This
interaction is responsible for the soft terms, that is for mediation of the supersymmetry
breaking from the invisible sector to the MSSM sector. One distinguishes the gravity
mediation (at Planck energies) and the gauge mediation of supersymmetry breaking.
Gravity-mediated and gauge-mediated models have quite different phenomenology.
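The exponential form of the scale Λ quoted above is the standard dimensional-transmutation mechanism. As a sketch (with a generic one-loop beta-function coefficient b > 0 as an illustrative input, not taken from the text), the one-loop running of a gauge coupling already produces it:

```latex
% One-loop running of a gauge coupling g(\mu):
\frac{\mathrm{d}g}{\mathrm{d}\ln\mu} = -\frac{b}{16\pi^2}\,g^3
\qquad\Longrightarrow\qquad
\frac{1}{g^2(\mu)} = \frac{1}{g^2(M)} + \frac{b}{8\pi^2}\,\ln\frac{\mu}{M} .
% The coupling becomes strong, g \sim 1, at the scale where 1/g^2(\mu) \to 0:
\Lambda \simeq M \exp\!\left(-\frac{8\pi^2}{b\,g^2(M)}\right)
\sim \exp\left(-O(1/g^2)\right) M .
```

Thus a moderately small coupling g(M) already generates an exponentially small scale Λ ≪ M, which is the sense in which the hierarchy may be produced dynamically.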
We see that MSSM, with addition of a sector which breaks supersymmetry dy-
namically and of a certain interaction between this hidden sector and the observable
fields, may explain the origin and stability of the gauge hierarchy, if the masses of su-
perpartners are not very high (≲ TeV). Note that the searches for supersymmetry in
accelerator experiments put serious constraints on the low-energy supersymmetry. Al-
ready the fact that superpartners have not been seen at LEP implied that a significant
part of the theoretically allowed MSSM parameter space was excluded experimentally.
Subsequent results of Tevatron and especially the first LHC data squeeze the allowed
region of parameters significantly, so that for “canonical” supersymmetry, only a very
narrow and not fully natural region of possible superpartner mass remains allowed. In
Fig. 15, theoretical and experimental (as of summer 2011) constraints on the MSSM
parameters are plotted for one rather natural and popular scenario of gravity-mediated
supersymmetry breaking. The masses of all scalar superpartners at the MGUT energy
scale are equal to m0 in this scenario, while masses of all fermionic superpartners are
M1/2 . Their ratios to the supersymmetric mixing parameter of the Higgs scalars, µ, are
given in the plot. In a scenario which explains the gauge hierarchy, the MSSM param-
eters and the Z-boson mass should be of the same order; for instance, in the model
which corresponds to the illustration, the following relation holds:

MZ² ≃ 0.2 m0² + 1.8 M1/2² − 2µ².

The LHC bound, M1/2 ≳ 420 GeV, results in the requirement of not fully natural
cancellations, since 1.8 M1/2² ≳ 40 MZ². Together with the absence of a light Higgs boson
discussed in Sec. 4.1, this “little hierarchy” problem makes the approach based on su-
persymmetry less motivated than it looked some time ago, though there exist variations
of supersymmetric models where this difficulty is overcome.
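The size of these cancellations can be checked with a quick numerical sketch, using only the LHC bound and the coefficient quoted in the text, together with the measured Z-boson mass:

```python
# Numerical illustration of the "little hierarchy" tension: the relation
#   MZ^2 ~ 0.2 m0^2 + 1.8 M12^2 - 2 mu^2
# must hold with M12 >= 420 GeV, while MZ is fixed by experiment.
MZ = 91.19    # Z-boson mass, GeV
M12 = 420.0   # LHC lower bound on M_1/2, GeV

lhs = MZ**2            # ~ 8.3e3 GeV^2
term = 1.8 * M12**2    # ~ 3.2e5 GeV^2

# The single term 1.8*M12^2 exceeds MZ^2 by a factor of ~40, so the
# remaining contributions must cancel it to about one part in 40.
tuning = term / lhs
print(f"1.8 M12^2 / MZ^2 = {tuning:.0f}")
```

This reproduces the factor of about 40 mentioned in the text as the required level of cancellation.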
3. The Higgs field may be a pseudo-Goldstone boson. The Goldstone theorem guar-
antees a massless (even with the account of radiative corrections!) scalar particle for
each generator of a broken global symmetry. A weak explicit violation of this symmetry
allows one to give a small mass to this scalar, yielding the so-called pseudo-Goldstone boson.
The same mechanism results in a small but nonzero mass of some composite particles
in a strongly-interacting theory (for instance, of the π meson). A direct application of
this approach to the Higgs boson is not possible because the interaction of a pseudo-
Goldstone particle with other fields contains derivatives and is very different from the
SM interactions. Realistic models of this kind with large coupling constants and with
interactions without derivatives, at the same time free from quadratic divergencies,
are called the “Little Higgs models” (see e.g. [135] and references therein). Diagram
by diagram, the absence of quadratic divergencies occurs due to complicated cancella-
tions of contributions of a number of particles with masses of order TeV, in particular
of additional massive scalars. Note that to reconcile a large number of new particles
with experimental constraints, in particular with those from the precision electroweak
measurements, the model requires significant complications.
4. Composite models: besides the Little Higgs models, a composite Higgs scalar is
considered in a number of other constructions, see e.g. [136]. In some rather popular
models with composite quarks and leptons, the SM matter fields, together with the
Higgs boson (or even without it) represent low-energy degrees of freedom of a strongly
coupled theory, like hadrons may be considered as low-energy degrees of freedom of
QCD. The mass scales of the theory, v in particular, are determined by the scale Λ
at which the running coupling constant of the strongly-coupled theory becomes large,
analogously to ΛQCD . The hierarchy Λ  MPl is now determined by the evolution
of couplings in the fundamental theory. These models generalize, to some extent, the
technicolour models, having more freedom in their construction at the price of even more
complications in the quantitative analysis. Note that (at least) in some supersymmetric
gauge theories, low-energy degrees of freedom may include also gauge fields, so in
principle, one may consider models in which all SM particles are composite (see e.g.
[116, 136]). On the other hand, the correspondence between strongly coupled four-
dimensional models and weakly-coupled five-dimensional theories (see Sec. 5.3) may
open prospects for a quantitative study of composite models. It might even happen that
the approaches to the gauge-hierarchy problem, based on assumptions of the extra space
dimensions, are equivalent to the approaches which invoke strongly coupled composite
models. As in other approaches, to explain the hierarchy, the scale Λ should not exceed
significantly the electroweak scale v, so that the LHC constraints on compositeness of
quarks and leptons (roughly Λ ≳ (4 . . . 5) TeV) may again be problematic.
Conclusion. All known scenarios which explain the origin and stability of the
gauge hierarchy without extreme fine tuning, predict new particles and/or interactions
at the energy scale not far above the electroweak scale. Absence of experimental signs of
these particles, especially with the account of the first LHC data, questions the ability
of these scenarios to solve the hierarchy problem. If the LHC finds the Higgs scalar
but does not confirm the predictions of any of the models discussed above, nor finds
signs of some other, not yet invented, mechanism, then one would have to reconsider
the question of the naturalness of the fine tuning. A fundamentally different position,
based on the anthropic principle, is seriously discussed but lies beyond the scope of

Figure 16: Masses of the charged SM fermions. The area of each circle is proportional to the
mass of the corresponding particle.

our consideration.

4.3 The fermion mass hierarchy.


As has already been pointed out, the SM fermionic fields, quarks and leptons,
comprise three generations, that is, three sets of particles with identical interactions but
with very different masses (see Fig. 16 for a pictorial illustration). The hierarchy of
these masses is one of the biggest puzzles of particle physics. Indeed, for instance,
the electron (me = 0.511 MeV), the muon (mµ = 105.7 MeV) and the tau lepton
(mτ = 1777 MeV) carry identical gauge quantum numbers. For quarks, it is con-
venient to determine the mass matrix whose diagonal elements determine the masses
of the quarks of three generations with identical interactions while combinations of
non-diagonal elements provide for the possibility of mixing between generations. The
hierarchical structure appears both in the diagonal elements (which differ by orders of
magnitude) and in the off-diagonal ones (the mixing is suppressed). In the SM
framework, neutrinos are strictly massless and the mixing of charged leptons is absent, but
the same hierarchical structure is seen in the set of masses of charged leptons.
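Using the charged-lepton masses just quoted, the intergeneration hierarchy can be stated as simple ratios:

```python
# Charged-lepton masses (MeV), as quoted in the text.
m_e, m_mu, m_tau = 0.511, 105.7, 1777.0

# Successive mass ratios span roughly two orders of magnitude within a
# set of particles carrying identical gauge quantum numbers.
r1 = m_mu / m_e     # ~ 207
r2 = m_tau / m_mu   # ~ 17
print(f"m_mu/m_e   = {r1:.0f}")
print(f"m_tau/m_mu = {r2:.1f}")
```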
As we have discussed in Sec. 2, the experiments of the past decade not only estab-
lished confidently the fact of the neutrino oscillations (pointing therefore to nonzero
neutrino masses and giving the first laboratory indication to the incompleteness of
SM) but also opened the possibility of a quantitative study of neutrino masses and of
the mixing in the leptonic sector. It is interesting that the neutrino masses and the
leptonic mixings also have the hierarchical structure, but it is very different from the
corresponding hierarchy in the quark sector: contrary to the suppressed quark mix-
ings, the leptonic mixing is maximal; the hierarchy of neutrino masses is at the same
time moderate. A modern theory which successfully explains the fermion masses should

Figure 17: A model with extra space dimensions which explains the mass hierarchy.

motivate both hierarchical structures and explain why they are different.
Meanwhile, even without the neutrino sector, the intergeneration mass hierarchy
is very difficult to explain. A natural idea is to suppose that there is an extra global
symmetry which relates the fermionic generations to each other and which is sponta-
neously broken; however, this approach is not successful because it implies the existence
of a massless Goldstone boson, the so-called familon, whose parameters are strictly
constrained by experiments [6].
A model of fermion masses should explain only the origin of the hierarchy: its
stability is provided automatically by the fact that all radiative corrections to the
fermion-Higgs Yukawa constants, to which the fermion masses are proportional, depend
on the energy logarithmically, that is weakly; this does not, however, make the issue
significantly less complicated.
An explanation of the hierarchy may be obtained in a model with extra space dimen-
sions (Fig. 17), in which a single generation of particles in six-dimensional spacetime
effectively describes three generations in four dimensions [137, 138]. Each multidimen-
sional fermionic field has three linearly independent solutions which are localized on
the four-dimensional hypersurface and have different behaviour close to the brane. De-
noting as r, θ the polar coordinates in two extra dimensions and considering the brane
at r = 0, one gets for the three solutions at r → 0,

u0 ∼ const = r⁰e^{i0θ}, u1 ∼ r¹e^{iθ}, u2 ∼ r²e^{i2θ}.

The Higgs scalar has a vacuum expectation value v(r) which depends on r and is
nonzero only in the immediate vicinity of the brane. The effective observable fermion

masses are proportional to the overlap integrals

mi ∝ ∫ dr dθ v(r) |ui(r, θ)|²

of the coordinate-dependent vacuum expectation value v and the extra-dimensional parts
of the fermionic wave functions which correspond to the three localized solutions
(i = 0, 1, 2 enumerates the three generations of fermions). One can see from Fig. 17 that the
resulting mi are hierarchically different. Therefore, in this model the mass hierarchy
follows from the linear independence of eigenfunctions of the Dirac operator in a partic-
ular external field. The same model automatically describes the required structure of
neutrino masses and mixings [139]. Presently, this model is the only known one in which
the hierarchies of families of both charged fermions and neutrinos are obtained on
common grounds. Note that, contrary to other multidimensional models (e.g. [140]),
in this model the number of free parameters is smaller than the number of parameters
it describes.
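The way such overlap integrals generate a hierarchy can be illustrated with a toy numerical model; the Gaussian profile for v(r) and the exponential cut-off of the u_i at large r are assumptions made here purely for illustration, not taken from [137, 138]:

```python
import math

# Toy model of the overlap integrals m_i ∝ ∫ dr dθ v(r) |u_i(r,θ)|².
# Assumptions (illustrative only): v(r) is a narrow Gaussian of width
# sigma localized at the brane r = 0, and near the brane u_i ~ r^i
# (as in the text), cut off at large r by exp(-r).
sigma = 0.1

def overlap(i, rmax=5.0, n=200_000):
    """Riemann-sum estimate of ∫ v(r) |u_i(r)|² r dr (θ integrated out)."""
    dr = rmax / n
    total = 0.0
    for k in range(1, n + 1):
        r = k * dr
        v = math.exp(-(r / sigma) ** 2)        # Higgs profile near the brane
        u2 = r ** (2 * i) * math.exp(-2 * r)   # |u_i|² ~ r^{2i} near r = 0
        total += v * u2 * r * dr               # polar measure r dr
    return total

m = [overlap(i) for i in range(3)]
# The three overlaps come out hierarchically different, m0 >> m1 >> m2:
print(f"m0/m1 = {m[0]/m[1]:.0f},  m1/m2 = {m[1]/m[2]:.0f}")
```

With a narrow v(r), each extra power of r in u_i suppresses the overlap by roughly a factor of σ², so the three effective masses differ by orders of magnitude, mimicking the intergeneration hierarchy.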
Compared to the hierarchy of masses of particles with identical interactions from
different generations, the question of the difference of masses of particles within a
generation is much easier. For instance, the difference between masses of the τ lepton
and the b and t quarks may be explained by different (because of different quantum
numbers) renormalization-group evolution of the Yukawa couplings, so that at the
Grand-unification scale these constants are equal while at low energies they are different.

5 Theoretical challenges in the description of hadrons.


5.1 Problems of the perturbative QCD.
In this section, we discuss the question about the practical applicability of the quantum
field theory to the description of interactions with large coupling constants, and in
particular to the low-energy limit of QCD. It would not be an exaggeration to say
that most of the theoretical achievements in the quantum field theory in the past two
decades were related to this problem. Before proceeding to the discussion of these
achievements, let us note that despite a significant progress, the problem of description
of strong interactions at low energies in terms of QCD is not solved, so the development
of the corresponding methods remains one of the basic tasks of the quantum field theory.
Recall that QCD, which describes the strong interaction at high energies, is a gauge
theory with the gauge group SU (3)C and Nf = 6 fermions, quarks, which transform
under its fundamental representation, and the same number of antiquarks transforming
under the conjugated representation. A peculiarity of the model is that the asymptotic
states, in terms of which the quantum theory is constructed, do not coincide with the
fundamental fields in terms of which the Lagrangian is written, that is with fermions
(quarks) and gauge bosons (gluons). On the contrary, the observable particles do not carry the
SU (3)C quantum numbers (this phenomenon is called confinement). The observable
strongly interacting particles are hadrons, whose classification and interactions allow
one to interpret them as bound states of quarks. At the same time, the theory which
describes interaction of quarks, QCD, is unable to calculate properties of these bound
states. Intuitively, it seems possible to relate confinement and formation of hadrons
with the energy dependence of the QCD gauge coupling constant, which grows as the
energy decreases (that is, as the distance increases; the flip side of the so-called asymptotic
freedom) and becomes large, αs ∼ 1, at the scale ΛQCD ∼ 150 MeV: when the distance

Figure 18: Electromagnetic pion form factor [142]: experimental data versus theoretical cal-
culations, perturbative (QCD, the dashed line) and nonperturbative (full lines
representing working models which are not derived from QCD). Up to the energy
scale ∼ 2 GeV, there are no signs of approaching the perturbative regime.

between quarks is increased, the force between them increases as well, and maybe this
force binds them into hadrons. This picture is, however, not fully consistent because at
αs ≳ 1 the perturbative expansion ceases to work and the true energy dependence of the
coupling constant is unknown. Indeed, there exist examples of theories with asymptotic
freedom but without confinement [141].
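The growth of the coupling towards low energies can be sketched with the standard one-loop running formula; the choice nf = 3 and Λ = 0.15 GeV (the scale quoted above) are illustrative inputs, not results from the text:

```python
import math

# One-loop running of the strong coupling,
#   alpha_s(Q) = 2*pi / (b0 * ln(Q/Lambda)),  b0 = 11 - 2*nf/3.
# Illustrative inputs: nf = 3 light flavours, Lambda = 0.15 GeV.
LAMBDA = 0.15        # GeV
b0 = 11 - 2 * 3 / 3  # = 9 for nf = 3

def alpha_s(Q):
    """One-loop strong coupling at scale Q (GeV), valid for Q >> Lambda."""
    return 2 * math.pi / (b0 * math.log(Q / LAMBDA))

# The coupling grows as the energy decreases towards Lambda_QCD:
for Q in (100.0, 2.0, 0.5):
    print(f"alpha_s({Q:>5} GeV) = {alpha_s(Q):.2f}")
```

At 100 GeV the coupling is small and perturbation theory is reliable; already at a few hundred MeV it approaches unity and the expansion loses meaning, which is the regime discussed in this section.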
To understand the nature of confinement and to describe properties of hadrons
from the first principles (and, in the end, to answer whether QCD is applicable to the
description of hadrons), one requires field-theoretical methods which do not make
use of an expansion in powers of the coupling constants (non-perturbative methods).
It is natural to assume (and it was assumed for a long time) that the perturbative
QCD has to describe well the physics of strong interactions at characteristic energies
above few hundred MeV, because the coupling constant becomes large at ∼ 150 MeV.
A number of recent experimental results related to the measurement of the form factors
of π mesons question the applicability of perturbative methods at considerably higher
momentum transfer (a few GeV). In general, form factors are the coefficients by which
the true amplitude of a process with composite or extended particles involved differs
from the same amplitude calculated for point-like particles with the same interaction.
These coefficients are determined by the internal structure of particles (for instance,
by the distribution of the electric charge); their particular form depends on the process
considered and on the value of the square of the momentum transfer, Q2 . A full theory
describing the interaction which keeps the particles in the bound state should allow
for derivation of form factors from the first principles. The results of the experimental
determination of the form factors of π mesons in various processes are given in
Figs. 18, 19. One may see that perturbative QCD experiences some difficulties in
explaining the experiment at momentum transfers ≲ 4 GeV.
Approaches to nonperturbative description of QCD may be divided into two classes:
(1) calculations in QCD beyond the perturbation theory (the only available method here
is the numerical calculation of the functional integral on the lattice) and (2) construction

Figure 19: The transition form factor of the π meson, which describes the process π⁰ → γγ:
experimental data [143] versus calculations of perturbative QCD. QCD predicts
the behaviour Q²F(Q²) ∼ const (the horizontal full line); at least up to
√(Q²) ∼ 4 GeV, the experiment points to Q²F(Q²) ∼ (Q²)^0.5 (the dotted line).

of an effective theory in terms of degrees of freedom which correspond to observable


particles. In the latter case the main unsolved question is, as a rule, to justify the
connection of the effective theory to QCD. To some extent, progress in this direction
has become possible within the concept of dual theories discussed below.

5.2 The lattice results.


The Feynman functional integral is a formally strict approach to the quantization of
fields, equivalent to other approaches in the domain of applicability of the perturbation
theory. It is natural to suppose that in the nonperturbative domain, this method also
reproduces the results which would be obtained within the standard frameworks if the
means to get them existed. Numerical calculation of the functional integral is possible
in lattice calculations in which the continuous and infinite spacetime is replaced by
a finite discrete lattice (see e.g. [144]). In modern calculations, lattices of size 32³ × 64,
that is, 32 points in each of the space coordinates and 64 points in time, are used. For
physics applications, it is very important that the gauge invariance may be defined in
the lattice theory in a strict way.
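The strict lattice implementation of gauge invariance mentioned here is textbook material rather than specific to [144]: gauge fields live on the links between lattice sites as group elements, and the action is built from plaquettes,

```latex
% Link variables U_\mu(x) \in SU(N); the ordered product around an
% elementary square (plaquette) transforms covariantly:
U_{\mu\nu}(x) = U_\mu(x)\, U_\nu(x+\hat\mu)\,
                U_\mu^{\dagger}(x+\hat\nu)\, U_\nu^{\dagger}(x) ,
% so the Wilson gauge action,
S = \beta \sum_{x,\;\mu<\nu}
    \left( 1 - \frac{1}{N}\,\mathrm{Re}\,\mathrm{Tr}\, U_{\mu\nu}(x) \right),
\qquad \beta = \frac{2N}{g^2} ,
% is exactly gauge invariant at finite lattice spacing and reduces to
% the Yang-Mills action in the continuum limit.
```

Gauge invariance thus holds exactly at any lattice spacing, unlike Lorentz invariance, which is recovered only in the continuum limit.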
One of the first serious achievements of the lattice field theory was a discovery that
the lattice model with symmetries and field content of QCD exhibits confinement [145].
Subsequent works have allowed one to refine which particular field configurations are
responsible for confinement; the work on this question continues.
The lattice approach allows one to calculate the masses and decay constants of
hadrons, and in recent years significant progress in this direction has been achieved
(see Fig. 20). The most precise results to date [146] are obtained in the so-called
“2+1” parametrization, in which the masses of the u and s quarks are free parameters, the
d-quark mass is assumed to be equal to that of the u quark and the contributions of
heavy c, b and t quarks are neglected. Besides these two parameters (mu = md and ms ),
there is one more, the physical length which corresponds to a unit step of the lattice.
To determine the masses of hadrons, these three parameters should be specified, so in

Figure 20: Results of the lattice calculations of the hadron masses. Masses of the π and K
mesons and of the Ω baryon are taken as input parameters. The calculations have
been performed in the three-quark approximation, mu = md ≠ ms. The histogram
gives the experimentally measured values of the masses [6]; the points (with the
error-bar rectangles) represent the results of the calculations [146].

real calculations one assumes that the masses of, say, the π and K mesons and the Ω baryon
are known while all other masses and decay constants are expressed through them. One might try
to fix masses of heavier particles and to calculate those of the lightest ones, but for a
confident calculation of masses of light hadrons a large lattice is required. Currently,
the mass of the π meson may be calculated only up to an order of magnitude in this
way.
At high temperatures, one expects a transition to a state in which quarks cannot
be confined in hadrons, that is, a phase transition. In reality, these conditions appear in
heavy-ion collisions at high-energy colliders; probably, they also took place in the very early
Universe. By means of the lattice methods, the existence of this phase transition has
been demonstrated, its temperature has been determined and the dependence of the order
of the phase transition on the quark masses has been studied [147, 148].
It is an open theoretical question to prove that the continuum limit of a lattice
field theory exists (that is the physical results do not depend on the way in which
the lattice size tends to infinity and the lattice step tends to zero) and coincides with
QCD. It may happen that this proof is impossible in principle unless one finds an
alternative way to work with QCD at strong coupling. However, there exists a series
of arguments suggesting that the lattice theory indeed describes QCD (first of all, it
is the fact that the lattice calculations reproduce experimental results). At the same
time, theoretically, the difference between the lattice and continuum theories is large;
for instance, configurations which are topologically stable in the continuum theory, the
instantons, which determine the structure of the vacuum in nonabelian gauge theories,
are not always stable on the lattice; the lattice description of chiral fermions (automatic
in a continuum theory) requires complicated constructions, etc.

5.3 Dual theories: supersymmetric duality and holography.


In the past two decades, in attempts to relate low-energy models of strong interactions
to QCD, theorists created a number of successful descriptions of the dynamics of theories
with large coupling constants in terms of other theories, in which the perturbation

theory works. These theories, called dual to each other, have coupling constants g1 and
g2 , for which g1 ∼ 1/g2 ; the knowledge of the Green functions of one theory allows one
to calculate, following known rules, the Green functions of the other. Note that the
theory dual to QCD has not been constructed up to now.
The simplest example of duality (see e.g. [149]) is a theory of the electromagnetic
field with magnetic charges. The Maxwell equations in vacuum are invariant with
respect to the exchange of the electric field E and the magnetic field B:

E ↦ B, B ↦ −E. (6)

This duality breaks down in the presence of electric charges and currents. It may
be restored, however, if one assumes that sources of the other kind exist in Nature,
namely magnetic charges and currents which correspond to their motion. The self-
consistency of the theory requires the Dirac quantization condition: the unit electric
charge e and the unit magnetic charge ẽ have to satisfy the relation eẽ = 2π. The
charge e is the coupling constant of the usual electrodynamics while the magnetic
charge ẽ is the coupling constant of the theory of interaction of magnetic charges which
is obtained from electrodynamics by the duality transformation (6). Therefore, the
weak coupling of electric charges, e ≪ 1, corresponds to the strong coupling of magnetic
ones, ẽ = 2π/e ≫ 1.
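A one-line numerical illustration of this weak-strong inversion (the value of e is taken from the physical fine-structure constant, an input assumed here rather than given in the text):

```python
import math

# Dirac quantization: e * e_mag = 2*pi, so a weak electric coupling
# maps onto a strong magnetic one.
e = math.sqrt(4 * math.pi / 137.0)  # electric charge in natural units, ~0.30
e_mag = 2 * math.pi / e             # dual magnetic charge, ~21

print(f"e = {e:.2f},  e_mag = {e_mag:.1f}")
```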
The electromagnetic duality is based on the geometrical properties of abelian gauge
fields which cannot be directly transferred to the nonabelian case, which is the most
interesting phenomenologically. Similar but much more complicated dualities
appear in supersymmetric nonabelian gauge theories. The best known one is the “elec-
tromagnetic” duality in SU (2) supersymmetric theory with two supercharges (N = 2)
which is related to the names of Seiberg and Witten [150]. From the particle-physics
point of view, this model is a SU (2) gauge theory with scalar and fermionic fields
transforming under the adjoint representation of the gauge group, whose interaction is
invariant under special symmetry. For this model, the effective theory has been calcu-
lated which describes the interaction of light composite particles at low energies and
the correspondence has been given between the effective low-energy and fundamental
degrees of freedom. Like QCD, the fundamental theory is asymptotically free and is
in the strong-coupling regime at low energies; the effective theory describes weakly
interacting composite particles.
The success of the Seiberg-Witten model gave rise to a hope that the low-energy
effective theory for a nonsupersymmetric gauge model with strong coupling, for instance
for QCD, may be obtained from the problem already solved by means of addition of
supersymmetry-breaking terms to the Lagrangians of both the fundamental and the dual
theories. The first step in this direction was to consider N = 1 supersymmetric gauge
theories. Earlier, starting from mid-1980s, a number of exact results have been obtained
in these theories by making use of the analytical properties of the effective action
(governed by supersymmetry) [151]. In contrast with the case of N = 2 supersymmetry,
this is insufficient for the reconstruction of the full effective theory, but the models
dual to supersymmetric gauge theories with different gauge groups and matter content
have been suggested [152]. Contrary to the N = 2 case, it is impossible to prove the
duality here, but the conjecture withstood all checks carried out. Moreover, it has
been shown that the addition of small soft breaking terms in the Lagrangians of N = 1
theories corresponds to a controllable soft supersymmetry breaking in dual models [153].
Unfortunately, one may prove that with the increase of the supersymmetry-breaking
parameters (for instance, when superpartner masses tend to infinity, so the N = 1

theory becomes QCD), a phase transition happens and the dual description ceases to
work, so a straightforward application of this approach to QCD is not possible [154].
Also, it is worth noting that the approach does not allow for a quantitative description
of dynamics at intermediate energies, when the coupling constants of dual theories are
both large. Nevertheless, these methods themselves, as well as the physical intuition
based on their application, have played an important role in the development of other
modern approaches to the study of dynamics of strongly-coupled theories.
One of the theoretically most beautiful and practically most promising approaches
to the analysis of dynamics of strong interactions at low and intermediate energies is
the so-called holographic approach. Its idea is that the dual theories may be for-
mulated in spacetime of different dimensions, in such a way that, for instance, the
four-dimensional dynamics of a theory with large coupling constant is equivalent to the
five-dimensional dynamics of another theory which is weakly coupled (in a way simi-
lar to the two-dimensional description of a three-dimensional object with a hologram).
The best-known realization of this approach is based on the AdS/CFT correspondence
[155, 156], a practical realization of the duality between a strongly coupled gauge the-
ory with a four-dimensional conformal invariance (CFT = conformal field theory) and
a multidimensional supergravity with weak coupling constant. The four-dimensional
conformal symmetry includes the Poincaré invariance supplemented by dilatations and
inversions. An example of a nontrivial four-dimensional conformal theory with large
coupling constant g is the N = 4 supersymmetric Yang-Mills theory with the gauge
group SU (Nc ) which, in the limit Nc → ∞, g²Nc ≫ 1, appears to be dynamically
equivalent to a certain supergravity theory living on the ten-dimensional AdS5 × S5
manifold, where AdS5 is the (4+1)-dimensional space with the anti-de-Sitter metric
(5) and S5 is the five-dimensional sphere (the S5 factor is almost irrelevant in appli-
cations, hence the name AdS/CFT correspondence). In the limit considered, these
two models are equivalent. To proceed with phenomenological applications, one has
to break the conformal invariance. As a result, the theory has fewer symmetries, so the
results proven by making use (direct or indirect) of these symmetries are downgraded to
conjectures. Nevertheless, this not fully strict approach (called sometimes AdS/QCD)
brings interesting phenomenological results.
An example is provided by a five-dimensional gauge theory defined on a finite
interval in the z coordinate of the AdS5 space (other geometries of the extra dimensions
are also considered). For the SU (2) × SU (2) gauge group and a special matter set one
gets an effective theory with the QCD symmetries. The series of Kaluza–Klein states
corresponds to the sequence of mesons whose masses and decay constants may therefore
be calculated directly in the five-dimensional theory. This approach has been successful: it
allows one to calculate various physical observables (in particular, the π-meson form factor
discussed above) which agree reasonably with data. A disadvantage of the method is
that the duality between QCD and the five-dimensional effective theory is not proven.
As a result, the choice of the latter is somewhat arbitrary. An indisputable advantage
of this approach is its phenomenological success achieved without a large number of
tuning parameters, as well as the possibility to calculate observables for intermediate
energies and not only in the zero-energy limit. One may hope that in the future, a low-
energy effective theory for QCD might be derived within the framework of this approach.

6 Conclusions.
The Standard Model of particle physics has given an excellent description of almost all
data obtained at accelerators for several decades. At the same time, the results of
both a number of non-accelerator experiments (neutrino oscillations) and astrophysical
observations cannot be explained within the framework of SM and undoubtedly point to
its incompleteness. A more complete theory, yet to be constructed, should allow for
a derivation of the SM parameters and for an explanation of their theoretically not fully
natural values. The main unsolved problem of SM itself is to describe the dynamics of
gauge theories at strong coupling, which would allow one to apply QCD to the description
of hadrons at low and intermediate energies.
One may hope that in the next few years, the particle theory will get additional
experimental information both from the Large Hadron Collider, a powerful accelera-
tor which is bound to explore the entire range of energies related to the electroweak
symmetry breaking, and from numerous experiments of smaller scales (in particular,
those studying neutrino oscillations, rare processes, etc.) and astrophysical observa-
tions. Possibly, this information will allow a successful extension of the Standard
Model to be constructed already in the coming decade.
This work was born (and grew up) from a review lecture given by the author at the
Physics department of the Moscow State University. I am indebted to V. Belokurov,
who suggested converting this lecture into a printed text, read the manuscript carefully
and discussed many points. I thank V. Rubakov and V. Troitsky for attentive reading
of the manuscript and numerous discussions, M. Vysotsky and M. Chernodub for
useful discussions related to particular topics, and A. Strumia for his kind permission
to use Fig. 15. The work was supported in part by the RFBR grants 10-02-01406 and
11-02-01528, by the FASI state contract 02.740.11.0244, by the grant of the President
of the Russian Federation NS-5525.2010.2 and by the “Dynasty” foundation.

References
[1] Krasnikov N V, Matveev V A New physics at the Large Hadron Collider Moscow,
Krasand, 2011 (in Russian); Krasnikov N V, Matveev V A Phys. Usp. 47 643
(2004)
[2] Burgess C P, Moore G D The standard model: A primer, Cambridge University
Press, 2006
[3] Cheng T P, Li L F Gauge Theory Of Elementary Particle Physics, Oxford, Claren-
don, 1984
[4] Gorbunov D S, Rubakov V A Introduction to the theory of the early universe:
hot big bang theory, World Scientific, 2011
[5] Kobayashi M Phys. Usp. 52 12 (2009)
[6] Nakamura K et al. [Particle Data Group] J. Phys. G 37 075021 (2010)
[7] Giunti C, Kim C W Fundamentals of Neutrino Physics and Astrophysics, Oxford
University Press 2007
[8] Bilenky S M Phys. Usp. 46 1137 (2003)

[9] Akhmedov E K Phys. Usp. 47 117 (2004)
[10] Kudenko Yu G Phys. Usp. 54 549 (2011)
[11] Evans J J arXiv:1107.3846 [hep-ex]
[12] Pontecorvo B Sov. Phys. JETP 6 429 (1957)
[13] Pontecorvo B Sov. Phys. JETP 7 152 (1957)
[14] Maki Z, Nakagawa M, Sakata S Prog. Theor. Phys. 28 870 (1962)
[15] Pontecorvo B Sov. Phys. JETP 26 984 (1968)
[16] Gribov V N, Pontecorvo B Phys. Lett. B 28 493 (1969)
[17] Bilenky S M, Pontecorvo B Sov. J. Nucl. Phys. 24 316 (1976)
[18] Bilenky S M, Pontecorvo B Lett. Nuovo Cim. 17 569 (1976)
[19] Eliezer S, Swift A R Nucl. Phys. B 105 45 (1976)
[20] Fritzsch H, Minkowski P Phys. Lett. B 62 72 (1976)
[21] Mikheev S P, Smirnov A Y Sov. J. Nucl. Phys. 42 913 (1985)
[22] Mikheev S P, Smirnov A Y Nuovo Cim. C 9 17 (1986)
[23] Wolfenstein L Phys. Rev. D 17 2369 (1978)
[24] Davis R J, Harmer D S, Hoffman K C Phys. Rev. Lett. 20 1205 (1968)
[25] Hirata K S et al. [KAMIOKANDE-II Collaboration] Phys. Rev. Lett. 63 16 (1989)
[26] Abazov A I et al. [SAGE Collaboration] Phys. Rev. Lett. 67 3332 (1991)
[27] Hampel W et al. [GALLEX Collaboration] Phys. Lett. B 447 127 (1999)
[28] Ashie Y et al. [Super-Kamiokande Collaboration] Phys. Rev. Lett. 93 101801
(2004)
[29] Hirata K S et al. [KAMIOKANDE-II Collaboration] Phys. Lett. B 205 416 (1988)
[30] Casper D et al. Phys. Rev. Lett. 66 2561 (1991)
[31] Allison W W M et al. Phys. Lett. B 391 491 (1997)
[32] Ambrosio M et al. [MACRO Collaboration] Phys. Lett. B 434 451 (1998)
[33] Fukuda Y et al. [Super-Kamiokande Collaboration] Phys. Rev. Lett. 82 2644
(1999)
[34] Ahmad Q R et al. [SNO Collaboration] Phys. Rev. Lett. 89 011301 (2002)
[35] Abe S et al. [KamLAND Collaboration] Phys. Rev. Lett. 100 221803 (2008)
[36] Aharmim B et al. [SNO Collaboration] arXiv:1109.0763

[37] Bellini G et al. [Borexino Collaboration] arXiv:1104.1816 [hep-ex]
[38] Ashie Y et al. [Super-Kamiokande Collaboration] Phys. Rev. D 71 112005 (2005)
[39] Takeuchi Y et al. [Super-Kamiokande Collaboration] talk at Neutrino-2010,
Athens, 14–19 June 2010
[40] Ahn M H et al. [K2K Collaboration] Phys. Rev. D 74 072003 (2006)
[41] Michael D G et al. [MINOS Collaboration] Phys. Rev. Lett. 97 191801 (2006)
[42] Adamson P et al. [The MINOS Collaboration] Phys. Rev. Lett. 106 181801 (2011)
[43] Agafonova N et al. [OPERA Collaboration] Phys. Lett. B 691 138 (2010)
[44] Fogli G L et al. Phys. Rev. Lett. 101 141801 (2008)
[45] Abe K et al. [T2K Collaboration] Phys. Rev. Lett. 107 041801 (2011)
[46] Adamson P et al. [MINOS Collaboration] arXiv:1108.0015 [hep-ex]
[47] Fogli G L et al. arXiv:1106.6028 [hep-ph]
[48] Aguilar A et al. [LSND Collaboration] Phys. Rev. D 64 112007 (2001)
[49] Church E D et al. Phys. Rev. D 66 013001 (2002)
[50] Aguilar-Arevalo A A et al. [The MiniBooNE Collaboration] Phys. Rev. Lett. 105
181801 (2010)
[51] Djurcic Z, talk at 13th International Workshop on Neutrino Factories, Super
Beams and Beta Beams, Geneva, 1–6 August 2011
[52] Mention G et al. Phys. Rev. D 83 073006 (2011)
[53] Aguilar-Arevalo A A et al. [MiniBooNE Collaboration] Phys. Rev. Lett. 103
111801 (2009)
[54] Thomas J, talk at Lepton-Photon 2011, Mumbai, 22–27 August 2011
[55] Abe K et al. [Kamiokande Collaboration] arXiv:1109.1621
[56] Anselmann P et al. [GALLEX Collaboration] Phys. Lett. B 342 440 (1995);
Kaether F et al. Phys. Lett. B 685 47 (2010)
[57] Abdurashitov D N et al. [SAGE Collaboration] Phys. Rev. Lett. 77 4708 (1996);
Abdurashitov D N et al. [SAGE Collaboration] Phys. Rev. C 73 045805 (2006)
[58] Giunti C, Laveder M Phys. Rev. C 83 065504 (2011)
[59] Giunti C, Laveder M Phys. Rev. D 82 113009 (2010)
[60] Aguilar-Arevalo A A et al. [MiniBooNE Collaboration] Phys. Rev. Lett. 102
101802 (2009)
[61] Lobashev V M et al. Phys. Lett. B 460 227 (1999)

[62] Aguilar-Arevalo A A et al. [MiniBooNE Collaboration] arXiv:1109.3480 [hep-ex]
[63] Adam T et al. [OPERA Collaboration] arXiv:1109.4897 [hep-ex]
[64] Strumia A Phys. Lett. B 539 91 (2002)
[65] Maltoni M et al. Nucl. Phys. B 643 321 (2002)
[66] Akhmedov E, Schwetz T JHEP 1010 115 (2010)
[67] Murayama H, Yanagida T Phys. Lett. B 520 263 (2001)
[68] Bogolyubov N N, Shirkov D V Introduction to the theory of quantized fields,
Intersci. Monogr. Phys. Astron. 3 1 (1959)
[69] Tsukerman I S Phys. Usp. 48 825 (2005); Tsukerman I S arXiv:1006.4989 [hep-ph]
[70] Diaz J S, Kostelecky A arXiv:1108.1799 [hep-ph]
[71] Engelhardt N, Nelson A E, Walsh J R Phys. Rev. D 81 113001 (2010)
[72] Kopp J, Machado P A N, Parke S J Phys. Rev. D 82 113002 (2010)
[73] Schwetz T arXiv:0805.2234 [hep-ph]
[74] Yasuda O arXiv:1012.3478 [hep-ph]
[75] Aseev V N et al. arXiv:1108.5034 [hep-ex]
[76] Kraus C et al. Eur. Phys. J. C 40 447 (2005)
[77] Hannestad S et al. JCAP 1008 001 (2010)
[78] Gorbunov D S, Rubakov V A Introduction to the theory of the early universe:
Cosmological perturbations and inflationary theory, World Scientific, 2011
[79] Rubakov V A Phys. Usp. 42 1193 (1999)
[80] Rubakov V A Phys. Usp. 54 633 (2011)
[81] Sakharov A D JETP Lett. 5 24 (1967)
[82] Rubakov V A, Shaposhnikov M E Phys. Usp. 39 461 (1996)
[83] Rubin V C, Thonnard N, Ford W K Astrophys. J. 238 471 (1980)
[84] Begeman K G Astron. Astrophys. 223 47 (1989)
[85] The digitized sky survey (DSS), in [88].
[86] Zwicky F Astrophys. J. 86 217 (1937)
[87] Limousin M et al. Astrophys. J. 668 643 (2007); http://www.dark-cosmology.dk
[88] The Multimission Archive at the Space Telescope Science Institute (MAST),
http://archive.stsci.edu/ . STScI is operated by the Association of Universities
for Research in Astronomy, Inc., under NASA contract NAS5-26555.

[89] The Chandra Data Archive (CDA), http://asc.harvard.edu/cda/ .
[90] Clowe D et al. Astrophys. J. 648 L109 (2006)
[91] Bradac M et al. Astrophys. J. 652 937 (2006)
[92] Alcock C et al. [MACHO Collaboration] Astrophys. J. 542 281 (2000); Tisserand
P et al. [EROS-2 Collaboration] Astron. Astrophys. 469 387 (2007)
[93] Riess A G et al. [Supernova Search Team Collaboration] Astron. J. 116 1009
(1998)
[94] Perlmutter S et al. [Supernova Cosmology Project Collaboration] Astrophys. J.
517 565 (1999)
[95] Perlmutter S Physics Today 56 (4) 53 (2003)
[96] Riess A G, Press W H, Kirshner R P Astrophys. J. 473 88 (1996)
[97] Perlmutter S et al. [Supernova Cosmology Project Collaboration] Astrophys. J.
483 565 (1997)
[98] Amanullah R et al. Astrophys. J. 716 712 (2010); data for Fig. 11 are taken from
http://supernova.lbl.gov/Union/
[99] Jullo E et al. Science 329 924 (2010)
[100] Komatsu E et al. [WMAP Collaboration] Astrophys. J. Suppl. 192 18 (2011)
[101] Marinoni C, Buzzi A Nature 468 539 (2010)
[102] Khoury J, Weltman A Phys. Rev. D 69 044026 (2004)
[103] Linde A D Particle physics and inflationary cosmology, Harwood Academic Pub-
lishers, 1990
[104] Barate R et al. [LEP Working Group for Higgs boson searches, ALEPH, DELPHI,
L3 and OPAL Collaborations] Phys. Lett. B 565 61 (2003)
[105] Sharma V et al. [CMS Collaboration], talk at Lepton-Photon 2011, Mumbai,
22–27 August 2011
[106] Nisati A et al. [ATLAS Collaboration], talk at Lepton-Photon 2011, Mumbai,
22–27 August 2011
[107] Verzocchi M et al. [CDF and D0 Collaborations], talk at Lepton-Photon 2011,
Mumbai, 22–27 August 2011
[108] Baak M et al. arXiv:1107.0975
[109] Grojean C Phys. Usp. 50 1 (2007)
[110] Lane K arXiv:hep-ph/0202255
[111] Manton N S Nucl. Phys. B 158 141 (1979)
[112] Csaki C et al. Phys. Rev. D 69 055006 (2004)

[113] Rubakov V A Phys. Usp. 50 390 (2007)
[114] Atwood D, Gupta S K, Soni A arXiv:1104.3874 [hep-ph]
[115] Ghilencea D, Lanzagorta M, Ross G G Phys. Lett. B 415 253 (1997)
[116] Rubakov V A, Troitsky S V arXiv:hep-ph/0001213
[117] Rubakov V A Phys. Usp. 46 211 (2003)
[118] Kaluza T Sitzungsber. Preuss. Akad. Wiss. Berlin, Math.-Phys. Kl. (1) 966 (1921)
[119] Klein O Z. Phys. 37 895 (1926)
[120] Akama K Lecture Notes Phys. 176 267 (1983)
[121] Rubakov V A, Shaposhnikov M E Phys. Lett. B 125 136 (1983)
[122] Visser M Phys. Lett. B 159 22 (1985)
[123] von Klitzing K, Nobel lecture (1985); Fu L, Kane C L Phys. Rev. Lett. 100 096407
(2008); Ghaemi P, Wilczek F arXiv:0709.2626; Bergman D L, Le Hur K Phys.
Rev. B 79 184520 (2009); Volovik G E The Universe in a helium droplet, Int.
Ser. Monogr. Phys. 117 (2006)
[124] Arkani-Hamed N, Dimopoulos S, Dvali G Phys. Lett. B 429 263 (1998)
[125] Kapner D J et al. Phys. Rev. Lett. 98 021101 (2007)
[126] Dvali G, Shifman M Phys. Lett. B 396 64 (1997)
[127] Rubakov V A, Shaposhnikov M E Phys. Lett. B 125 139 (1983)
[128] Gogberashvili M Mod. Phys. Lett. A 14 2025 (1999)
[129] Randall L, Sundrum R Phys. Rev. Lett. 83 3370 (1999)
[130] Oda I Phys. Lett. B 496 113 (2000)
[131] Vysotsky M I, Nevzorov R B Phys. Usp. 44 919 (2001)
[132] Gorbunov D S, Dubovsky S L, Troitsky S V Phys. Usp. 42 623 (1999)
[133] Kazakov D I arXiv:hep-ph/0012288
[134] Strumia A JHEP 1104 073 (2011)
[135] Schmaltz M, Tucker-Smith D Ann. Rev. Nucl. Part. Sci. 55 229 (2005)
[136] Gherghetta T arXiv:1008.2570 [hep-ph].
[137] Libanov M, Troitsky S Nucl. Phys. B 599 319 (2001)
[138] Frere J-M, Libanov M, Troitsky S Phys. Lett. B 512 169 (2001)
[139] Frere J-M, Libanov M, Ling F S JHEP 1009 081 (2010)
[140] Dvali G R, Shifman M A Phys. Lett. B 475 295 (2000)

[141] Iwasaki Y et al. Phys. Rev. Lett. 69 21 (1992)
[142] Krutov A F, Troitsky V E, Tsirova N A Phys. Rev. C 80 055210 (2009)
[143] Aubert B et al. [The BABAR Collaboration] Phys. Rev. D 80 052002 (2009)
[144] Di Giacomo A Lattice gauge theory, in: Encyclopedia of Mathematical Physics,
Academic Press, Oxford (2006)
[145] Creutz M Phys. Rev. D 21 2308 (1980)
[146] Aoki S et al. [PACS-CS Collaboration] Phys. Rev. D 81 074503 (2010)
[147] Kajantie K, Montonen C, Pietarinen E Z. Phys. C 9 253 (1981)
[148] Aoki Y et al. Nature 443 675 (2006)
[149] Tsun T S Electric–magnetic duality, in: Encyclopedia of Mathematical Physics,
Academic Press, Oxford (2006)
[150] Seiberg N, Witten E Nucl. Phys. B 426 19 (1994) [Erratum-ibid. 430 486 (1994)]
[151] Affleck I, Dine M, Seiberg N Nucl. Phys. B 241 493 (1984)
[152] Seiberg N Nucl. Phys. B 435 129 (1995)
[153] Evans N J, Hsu S D H, Schwetz M Phys. Lett. B 355 475 (1995)
[154] Aharony O et al. Phys. Rev. D 52 6157 (1995)
[155] Maldacena J M Adv. Theor. Math. Phys. 2 231 (1998) [Int. J. Theor. Phys. 38
1113 (1999)]
[156] Witten E Adv. Theor. Math. Phys. 2 253 (1998)
