Neural network approach to parton distributions fitting
Introduction
The requirements of precision physics at hadron colliders have recently led to rapid improvement in the techniques for determining the structure of the nucleon. Factorization plays a crucial role in this game: it ensures that the parton structure of the nucleon can be extracted from a process with only one initial proton (say, Deep Inelastic Scattering at HERA), and then used as an input for a process where two initial protons are involved (Drell–Yan at the LHC). In the QCD-improved parton model the DIS structure function of the nucleon can be written as

F₂(x, Q²) = Σᵢ Cᵢ(x, αₛ(Q²)) ⊗ fᵢ(x, Q²),     (1)

where x = Q²/(2p·q), Q² = −q², q = k − k′, and p, k and k′ are the momenta of the initial nucleon, the incoming lepton, and the scattered lepton, respectively; the Cᵢ are the perturbatively calculable coefficient functions, and the fᵢ the quark and gluon distributions that describe the nonperturbative dynamics, the so-called Parton Distribution Functions (PDFs).
The extraction of a PDF from experimental data is not trivial, even if it is a well-established task. In order to do it we have to evolve the PDFs to the scale of the data, perform the x-convolution, add theoretical uncertainties (resummation, nuclear corrections, higher twist, heavy quark thresholds, …), and then deconvolute in order to obtain a function of x at a common scale Q₀².
Recently, it has been pointed out that the uncertainty associated with a PDF set is crucial [1], [2], [3]. The uncertainty on a PDF is given by the probability density P[f] in the space of functions f(x), that is, the measure we use to perform the functional integral that gives us the expectation value

⟨F[f]⟩ = ∫ 𝒟f P[f] F[f],

where F[f] is an arbitrary functional of f(x). Thus, when we extract a PDF we want to determine an infinite-dimensional object (a function) from a finite set of data points, and this is a mathematically ill-posed problem.
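In practice, once an ensemble of fitted replicas is available, a functional integral of this kind is estimated as a plain average over the ensemble. A minimal Python sketch with a toy replica ensemble and a toy functional (the power-law replicas, grid, and numbers are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble standing in for N_rep fitted replicas f_k(x): here each
# "replica" is just a power law x**a with a slightly fluctuating exponent.
x = np.linspace(1e-3, 1.0, 200)
n_rep = 100
replicas = [x ** (0.5 + 0.05 * rng.standard_normal()) for _ in range(n_rep)]

def functional(f):
    """An arbitrary functional F[f]: here a trapezoidal integral of f over x."""
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(x))

values = np.array([functional(f) for f in replicas])
mean = values.mean()        # <F[f]> estimated as the replica average
error = values.std(ddof=1)  # its uncertainty as the replica spread
```

The same replica average works for any observable, however nonlinear, which is what makes the Monte Carlo representation of P[f] convenient.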
The standard approach is to choose a simple functional form f(x; λ₁, …, λₙ) with enough free parameters λᵢ, and to fit the parameters by minimizing χ². Several difficulties arise: the errors and correlations of the parameters require at least a fully correlated analysis of the data errors; error propagation to observables is difficult, since many observables are nonlinear or nonlocal functionals of the parameters; and the theoretical bias due to the choice of parametrization is difficult to assess (its effects can be large if the data are not precise or hardly compatible).
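For comparison, the standard approach can be sketched as a χ² fit of a fixed functional form to pseudo-data. The ansatz A xᵃ (1−x)ᵇ, the data points, and the uncorrelated errors below are illustrative assumptions, not the parametrization of any particular global fit:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Pseudo-data for a structure-function-like quantity (purely illustrative).
xdat = np.linspace(0.05, 0.9, 20)
sigma = np.full_like(xdat, 0.02)
truth = 2.0 * xdat**0.5 * (1 - xdat) ** 3
data = truth + sigma * rng.standard_normal(xdat.size)

# Fixed functional form f(x; A, a, b) = A x^a (1-x)^b with free parameters.
def model(p, x):
    A, a, b = p
    return A * x**a * (1 - x) ** b

def chi2(p):
    return np.sum(((data - model(p, xdat)) / sigma) ** 2)

fit = minimize(chi2, x0=[1.0, 0.3, 2.0], method="Nelder-Mead")
```

The fitted λᵢ come with correlated uncertainties, and propagating them to a nonlinear observable already requires linearization or further sampling, which is precisely the difficulty described above.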
Here, we present an alternative approach to this problem. First, we show our technique applied to the determination of structure functions. This is the easiest case, since no evolution is required, only data fitting, so it is a good testing ground for the technique. Then, we show how this approach can be extended to the determination of the PDFs.
Section snippets
Structure functions
The strategy presented in Refs. [4], [5] to address the problem of parametrizing deep inelastic structure functions is a combination of two techniques: a Monte Carlo sampling of the experimental data and a neural network training on each data replica.
The Monte Carlo sampling of the experimental data is performed by generating N_rep replicas of the original experimental points,

F_i^(art)(k) = (1 + r_N^(k) σ_N) (F_i^(exp) + r_i^(k) σ_i^stat + Σ_l r_{l,i}^(k) σ_{l,i}^sys),   k = 1, …, N_rep,

where σ_N, σ_i^stat and σ_{l,i}^sys are the normalization, statistical and systematic uncertainties, and the r^(k) are Gaussian random numbers with the same correlations as the corresponding experimental errors.
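A minimal sketch of this replica generation, with toy central values and uncertainties (the numbers are illustrative; only uncorrelated statistical errors and a common normalization error are included here, while correlated systematics would be handled analogously):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data set: central values and uncertainties (illustrative numbers).
f_exp = np.array([0.41, 0.38, 0.33, 0.25])     # measured points F_i^(exp)
sig_stat = np.array([0.01, 0.01, 0.02, 0.02])  # statistical errors
sig_norm = 0.02                                # relative normalization error

n_rep = 1000
replicas = np.empty((n_rep, f_exp.size))
for k in range(n_rep):
    r_norm = rng.standard_normal()              # one number per replica
    r_stat = rng.standard_normal(f_exp.size)    # one number per data point
    replicas[k] = (1 + r_norm * sig_norm) * (f_exp + r_stat * sig_stat)
```

By construction, the mean and standard deviation over the replica ensemble reproduce the experimental central values and total uncertainties as N_rep grows.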
Parton distributions
The strategy presented in the above section can be used to parametrize parton distributions as well, provided one now takes into account Altarelli–Parisi QCD evolution.
Now neural networks are used to parametrize the PDFs at a reference scale Q₀². We choose an architecture with two inputs, two hidden layers with two neurons each, and one output, the value of the PDF at (x, Q₀²). The training on each replica is performed only with a genetic algorithm, since we have a nonlocal function to be minimized (see Eqs. (1)…
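A toy version of this setup can be sketched as follows. Only the 2–2–2–1 architecture is taken from the text; the (x, log x) input pair, the sigmoid activations, the toy target, and the bare-bones mutation-plus-selection genetic algorithm are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

sizes = [2, 2, 2, 1]  # 2 inputs, two hidden layers of 2 neurons, 1 output
n_par = sum((a + 1) * b for a, b in zip(sizes[:-1], sizes[1:]))  # weights+biases

def forward(params, x):
    """Evaluate the network; feeding it (x, log x) is an assumption."""
    h = np.stack([x, np.log(x)], axis=-1)
    i = 0
    for a, b in zip(sizes[:-1], sizes[1:]):
        W = params[i:i + a * b].reshape(a, b); i += a * b
        bias = params[i:i + b]; i += b
        h = 1.0 / (1.0 + np.exp(-(h @ W + bias)))  # sigmoid activation
    return h[..., 0]

# Toy target the network should reproduce (stands in for one data replica).
xg = np.linspace(0.05, 0.9, 15)
target = xg * (1 - xg)
sigma = 0.01

def chi2(params):
    return np.sum(((forward(params, xg) - target) / sigma) ** 2)

# Bare-bones genetic algorithm: keep the fittest, mutate them, repeat.
pop = rng.standard_normal((30, n_par))
scores = np.array([chi2(p) for p in pop])
initial_best = scores.min()
for generation in range(300):
    elite = pop[np.argsort(scores)[:10]]              # 10 fittest survive
    mutants = (elite[rng.integers(0, 10, size=20)]
               + 0.1 * rng.standard_normal((20, n_par)))  # mutated offspring
    pop = np.vstack([elite, mutants])
    scores = np.array([chi2(p) for p in pop])
best_chi2 = scores.min()
```

Since the elite is carried over unchanged at each generation, the best χ² can only decrease; no gradient of the (nonlocal) figure of merit is ever needed, which is why this kind of minimizer suits the PDF case.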
References (13)
- et al., Phys. Lett. B (2004)
- et al., JHEP (2004)
- W.K. Tung, AIP Conference Proceedings, vol. 753, 2005, p. …
- et al., JHEP (2002)
- et al., JHEP (2005)
- C. Peterson, T. Rognvaldsson, LU-TP-91-23, Lectures given at the 1991 CERN School of Computing, Ystad, …