Abstract
Observation of the diphoton decay mode of the recently discovered Higgs boson and measurement of some of its properties are reported. The analysis uses the entire dataset collected by the CMS experiment in proton-proton collisions during the 2011 and 2012 LHC running periods. The data samples correspond to integrated luminosities of 5.1 fb⁻¹ at 7 TeV and 19.7 fb⁻¹ at 8 TeV. A clear signal is observed in the diphoton channel at a mass close to 125 GeV with a local significance of 5.7 σ, where a significance of 5.2 σ is expected for the standard model Higgs boson. The mass is measured to be 124.70 ± 0.34 GeV, and the best-fit signal strength relative to the standard model prediction is 1.14 +0.26 −0.23. Additional measurements include the signal strength modifiers associated with different production mechanisms, and hypothesis tests between spin-0 and spin-2 models.
Introduction
In 2012 the ATLAS and CMS Collaborations announced the observation [1, 2] of a new boson with a mass, m_H, of about 125 GeV and properties consistent, within uncertainties, with expectations for a standard model (SM) Higgs boson. The Higgs boson is the particle predicted to exist as a consequence of the spontaneous symmetry breaking mechanism acting in the electroweak sector of the SM [3–5]. This mechanism was first suggested nearly fifty years ago [6–11], and introduces a complex scalar field, which also gives masses to the fundamental fermions through a Yukawa interaction. Results using the full available dataset have recently been published by CMS [12–19], and by ATLAS [20–25].
The diphoton decay channel provides a clean final-state topology that allows the mass of the decaying object to be reconstructed with high precision. With the discovery of a low-mass Higgs boson in the diphoton channel in mind, the electromagnetic calorimeter performance was a design priority for CMS. The diphoton decay is mediated by loop diagrams containing charged particles. The top quark loop and the W boson loop diagrams dominate the decay amplitude, though they contribute with opposite sign. The branching fraction is small, reaching a maximum value of 0.23 % at m_H ≈ 125 GeV and falling steeply to values below 0.1 % above 150 GeV [26]. As a consequence the search reported in this paper is limited to the mass range 110 < m_H < 150 GeV. Despite the small branching fraction and the presence of a large diphoton continuum background, the diphoton decay mode provides an expected signal significance for a 125 GeV SM Higgs boson that is among the highest of all the decay modes.
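The high-precision mass reconstruction mentioned above uses only the two photon energies and their opening angle. The following is a minimal Python sketch of that kinematics (illustrative only, not the CMS reconstruction code; the photon energies and directions here are hypothetical inputs):

```python
import math

def diphoton_mass(e1, eta1, phi1, e2, eta2, phi2):
    """Invariant mass of two massless photons from their energies and
    directions: m^2 = 2 E1 E2 (1 - cos(alpha))."""
    def p3(e, eta, phi):
        pt = e / math.cosh(eta)  # transverse momentum of a massless particle
        return (pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta))
    p1, p2 = p3(e1, eta1, phi1), p3(e2, eta2, phi2)
    cos_alpha = sum(a * b for a, b in zip(p1, p2)) / (e1 * e2)
    return math.sqrt(max(0.0, 2.0 * e1 * e2 * (1.0 - cos_alpha)))
```

Because the mass depends on the opening angle only through (1 − cos α), both the energy resolution and the angular (vertex) resolution enter the mass resolution, which motivates the vertex identification discussed in Sect. 5.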
This paper presents the analysis performed on the full dataset collected in 2011 and 2012, reconstructed with the final detector calibration values, in collisions at the Large Hadron Collider (LHC), with an integrated luminosity of 5.1 fb⁻¹ at a centre-of-mass energy of 7 TeV (herein referred to as the “7 TeV dataset”) and 19.7 fb⁻¹ at 8 TeV (the “8 TeV dataset”). The results supersede those previously reported by CMS for this decay mode [27, 28].
The primary production mechanism of the Higgs boson at the LHC is gluon-gluon fusion (ggH) [29] with additional smaller contributions from vector boson fusion (VBF) [30] and production in association with a W or Z boson (VH) [31] or a top quark pair (ttH) [32, 33]. Events from specific production mechanisms are identified and classified by the presence of additional objects in the final state. Requiring the presence of two forward jets, in addition to the photon pair, favours events produced by the VBF mechanism, while event classes designed to preferentially select VH or ttH production require the presence of muons, electrons, missing transverse energy from neutrinos, or jets arising from the hadronization of b quarks. To achieve the best sensitivity, the remaining events, and also the dijet events selected as having a VBF signature, are further separated using multivariate classifiers that provide measures of their probability to be signal rather than background. The signal is measured by performing a simultaneous fit to the diphoton invariant mass distributions in the various event classes. The signal model is derived from simulation, while the background is obtained from the fit to data. A very large sample of events is available in which a Z boson decays to a pair of electrons; treating the electron showers in these events as if they were from photons allows precise and detailed knowledge to be obtained concerning the accuracy of the simulation of the signal, specifically the simulation of the energy reconstruction and selection of photons, and the simulation of the selection and classification of diphoton events.
With respect to analyses of this decay mode previously reported by CMS there are refinements in methodology, which are described in the main body of the paper. In addition, the analysis uses an improved intercalibration of the electromagnetic calorimeter channels and an improved energy regression algorithm to correct the clustered energy, resulting in better energy resolution. The simulation of the signal and Z boson samples is also improved. The changes in the energy-equivalent noise in the electromagnetic calorimeter during the data-taking period are simulated, and a significantly increased time window is used to simulate the effect of deposited energy coming from interactions in earlier bunch crossings.
The paper is organized as follows. After a brief description of the CMS detector and event reconstruction in Sect. 2 and of the data and simulated samples in Sect. 3, the reconstruction and identification of photons is detailed in Sect. 4. The issue of identifying the diphoton vertex is covered in Sect. 5. In Sect. 6 the event classification is described. The section first describes the construction of a multivariate event classifier which takes as input quantities associated with the two photons, and then goes on to describe the tagging of events by the presence of objects in the final state, in addition to the photon pair, that give the event a signature characteristic of one of the production processes. It concludes by detailing the use of two multivariate event classifiers to additionally subdivide into classes both the untagged events, and the events tagged as coming from the VBF process. Sections 7 and 8 describe, respectively, the signal and background models used in the statistical procedures which provide the results of the analysis, and Sect. 9 discusses the systematic uncertainties taken into account in those procedures. Section 10 outlines three alternative analyses that use specific variations of methodology that provide corroboration of particular aspects of the main analysis. Finally, in Sect. 11 the results of the measurements of the Higgs boson production and its properties are presented and discussed.
CMS detector
The central feature of the CMS apparatus is a superconducting solenoid, 13 m in length and with an inner diameter of 6 m, which provides an axial magnetic field of 3.8 T. The bore of the solenoid is instrumented with both the central tracker and the calorimeters. The steel flux-return yoke outside the solenoid hosts gas ionization detectors used to identify and reconstruct muons.
The CMS experiment uses a right-handed coordinate system, with the origin at the nominal interaction point, the x axis pointing to the centre of the LHC, the y axis pointing up (perpendicular to the LHC plane), and the z axis along the anticlockwise-beam direction. The polar angle θ is measured from the positive z axis and the azimuthal angle φ is measured in the x–y plane. Transverse energy, denoted by E_T, is defined as the product of the energy and sin θ, with θ being measured with respect to the nominal interaction point. Charged-particle trajectories are measured by the silicon pixel and strip tracker, with full azimuthal coverage within |η| < 2.5, where the pseudorapidity is defined as η = −ln[tan(θ/2)]. A lead tungstate crystal electromagnetic calorimeter (ECAL) and a brass/scintillator hadron calorimeter (HCAL) surround the tracking volume and cover the region |η| < 3.0. The ECAL barrel extends to |η| < 1.48 while the ECAL endcaps cover the region 1.48 < |η| < 3.0. A lead/silicon-strip preshower detector is located in front of the ECAL endcap in the region 1.653 < |η| < 2.6. The preshower detector includes two planes of silicon sensors measuring the x and y coordinates of the impinging particles. A steel/quartz-fibre Cherenkov forward calorimeter extends the calorimetric coverage to |η| < 5.0. In the region |η| < 1.74, the HCAL cells have widths of 0.087 in both η and φ. In the η–φ plane, and for |η| < 1.48, the HCAL cells map on to 5×5 ECAL crystal arrays to form calorimeter towers projecting radially outwards from points slightly offset from the nominal interaction point. In the endcap, the ECAL arrays matching the HCAL cells contain fewer crystals.
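The kinematic quantities defined above follow directly from the polar angle θ. As a quick illustration (a sketch, not CMS software):

```python
import math

def pseudorapidity(theta):
    # eta = -ln(tan(theta/2)), with theta measured from the positive z axis
    return -math.log(math.tan(theta / 2.0))

def transverse_energy(energy, theta):
    # E_T = E * sin(theta)
    return energy * math.sin(theta)
```

A particle emitted perpendicular to the beam (θ = π/2) has η = 0 and E_T = E, while η grows rapidly as the direction approaches the beam axis.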
Calibration of the ECAL is achieved by exploiting the φ-symmetry of the energy flow, and by using photons from π0 and η decays, and electrons from W and Z decays [34]. Changes in the transparency of the ECAL crystals due to irradiation during the LHC running periods and their subsequent recovery are monitored continuously, and corrected for, using light injected from a laser system [34].
The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a fixed time interval of less than 4 μs. The high-level trigger processor farm further decreases the event rate from around 100 kHz to around 400 Hz, before data storage.
A more detailed description of the CMS detector can be found in Ref. [35].
Reconstruction of the photons used in this analysis is described in Sect. 4, and uses a clustering of the energy recorded in the ECAL, known as a “supercluster”, which may be extended in the φ direction to form an extended cluster or group of clusters.
The global event reconstruction (also called particle-flow event reconstruction) consists of reconstructing and identifying each particle with an optimized combination of all subdetector information [36, 37]. In this process, the identification of the particle type (photon, electron, muon, charged hadron, neutral hadron) plays an important role in the determination of the particle direction and energy. Photons are identified as ECAL energy clusters not linked to the extrapolation of any charged-particle trajectory to the ECAL. Electrons are identified as a primary charged-particle track associated with ECAL energy clusters corresponding to this track’s extrapolation to the ECAL and to possible bremsstrahlung photons emitted along the way through the tracker material. Muons are identified as a track in the central tracker consistent with either a track or several hits in the muon system, associated with less energy in the calorimeters than would be deposited by a charged hadron or electron. Charged hadrons are identified as charged-particle tracks neither identified as electrons, nor as muons. Finally, neutral hadrons are identified as HCAL energy clusters not linked to any charged hadron trajectory, or as ECAL and HCAL energy excesses with respect to the expected energy deposited by a matching charged hadron.
The energy of photons used in the global event reconstruction is directly obtained from the ECAL measurement. The energy of electrons is determined from a combination of the track momentum at the main interaction vertex, the corresponding ECAL cluster energy, and the energy sum of all bremsstrahlung photons attached to the track. The energy of muons is obtained from the corresponding track momentum. The energy of charged hadrons is determined from a combination of the track momentum and the corresponding ECAL and HCAL energy, calibrated for the nonlinear response of the calorimeters. Finally, the energy of neutral hadrons is obtained from the corresponding calibrated ECAL and HCAL energies.
For each event, hadronic jets are clustered from these reconstructed particles using the infrared- and collinear-safe anti-kT algorithm [38] with a size parameter of 0.5. The jet momentum is determined as the vectorial sum of all particle momenta in the jet, and the scale is found in the simulation to be within 5–10 % of the true momentum over the whole transverse momentum spectrum and detector acceptance. Jet energy corrections are derived from simulation, and are confirmed with in situ measurements using the energy balance of dijet and photon+jet events [39]. The jet energy resolution typically amounts to 15 % (8 %) at 10 (100) GeV, to be compared to about 40 % (12 %) obtained when the calorimeters alone are used for jet clustering.
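The essence of the anti-kT procedure is its distance measure, d_ij = min(1/p_Ti², 1/p_Tj²) ΔR_ij²/R² between objects and d_iB = 1/p_Ti² to the beam. Below is a simplified O(n³) sketch of the clustering loop (illustrative only; it uses a naive p_T-weighted axis recombination without φ wrap-around, whereas real analyses use the FastJet package):

```python
import math

def antikt_cluster(particles, R=0.5):
    """Toy anti-kT clustering; particles are (pt, eta, phi) tuples.
    Returns the clustered jets as (pt, eta, phi) tuples."""
    objs = [list(p) for p in particles]
    jets = []
    while objs:
        # beam distance d_iB = 1/pt^2: smallest for the hardest object
        ib = max(range(len(objs)), key=lambda i: objs[i][0])
        dmin, merge = 1.0 / objs[ib][0] ** 2, None
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                dphi = abs(objs[i][2] - objs[j][2])
                dphi = min(dphi, 2.0 * math.pi - dphi)
                dr2 = (objs[i][1] - objs[j][1]) ** 2 + dphi ** 2
                dij = min(1.0 / objs[i][0] ** 2, 1.0 / objs[j][0] ** 2) * dr2 / R ** 2
                if dij < dmin:
                    dmin, merge = dij, (i, j)
        if merge is None:            # nothing closer than the beam: promote a jet
            jets.append(tuple(objs.pop(ib)))
        else:                        # recombine the closest pair (pt-weighted axis)
            i, j = merge
            pt_i, pt_j = objs[i][0], objs[j][0]
            pt = pt_i + pt_j
            objs[j] = [pt,
                       (pt_i * objs[i][1] + pt_j * objs[j][1]) / pt,
                       (pt_i * objs[i][2] + pt_j * objs[j][2]) / pt]
            objs.pop(i)
    return jets
```

The 1/p_T² weighting makes hard particles cluster first, so anti-kT jets grow outward from the hardest particles and have regular, cone-like shapes.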
To identify jets originating from the hadronization of bottom quarks, the combined secondary vertex b-tagging algorithm [40] is employed. The algorithm tags jets from b-hadron decays by identifying their displaced decay vertex. The working point of the tagging algorithm used provides an efficiency for identifying b-quark jets of about 70 % and a misidentification probability for jets from light quarks and gluons of about 1 %.
The missing transverse energy vector is taken as the negative vector sum of all reconstructed particle candidate transverse momenta in the global event reconstruction, and its magnitude is referred to as E_T^miss.
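The definition above can be sketched in a few lines (illustrative only; the particle list of (p_T, φ) pairs is a hypothetical input):

```python
import math

def missing_et(particles):
    """E_T^miss magnitude from the negative vector sum of particle transverse
    momenta. particles: iterable of (pt, phi)."""
    px = -sum(pt * math.cos(phi) for pt, phi in particles)
    py = -sum(pt * math.sin(phi) for pt, phi in particles)
    return math.hypot(px, py)
```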
Data sample and simulated events
The events used in the analysis were selected by diphoton triggers with asymmetric transverse energy thresholds and complementary photon selections. One selection requires a loose calorimetric identification based on the shape of the electromagnetic shower and loose isolation requirements on the photon candidates, while the other requires only that the photon candidate has a high value of the R9 shower-shape variable. High trigger efficiency is maintained by allowing both photons to satisfy either selection. The R9 variable is defined as the energy sum of the 3×3 crystals centred on the most energetic crystal in the supercluster divided by the energy of the supercluster. Photons that convert before reaching the calorimeter tend to have wider showers and lower values of R9 than unconverted photons. To cover the entire data-taking period two trigger threshold configurations are used: E_T > 26 (18) GeV on the leading (trailing) photon, and E_T > 36 (22) GeV. The measured trigger efficiency is above 99 % for events satisfying the diphoton preselection required for events entering the analysis, as described in Sect. 4.
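The R9 definition above is a simple ratio; the sketch below computes it from a hypothetical crystal map keyed by (iη, iφ) indices (illustrative only):

```python
def r9(crystals, seed, supercluster_energy):
    """R9: energy in the 3x3 crystal array centred on the seed crystal,
    divided by the supercluster energy. `crystals` maps (ieta, iphi) -> energy."""
    e3x3 = sum(crystals.get((seed[0] + di, seed[1] + dj), 0.0)
               for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return e3x3 / supercluster_energy
```

An unconverted photon deposits nearly all of its energy in the 3×3 array (R9 close to 1), while a converted photon spreads energy in φ outside it, lowering R9.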
The Monte Carlo (MC) simulation of detector response employs a detailed description of the CMS detector, and uses Geant4 version 9.4 (patch 03) [41]. Simulated events include simulation of the multiple interactions taking place in each bunch crossing and are weighted to reproduce the distribution of the number of interactions in data. They thus simulate the effects of pileup, i.e. the presence of signals from multiple interactions, in multiple bunch crossings, in each recorded event. The interactions used to simulate pileup are generated with the same versions of pythia [42], 6.424 or 6.426, that are used for other purposes as described below. The pythia tunes used for the underlying event activity are Z2 and Z2* for the 7 and 8 TeV samples, respectively [43]. Simulated Higgs boson signal events are used both for training of multivariate discriminants and to construct the signal model used in the statistical procedures employed to extract the results. Sufficient samples have been produced to ensure that the samples of simulated signal events used for construction of the signal model (Sect. 7) are not used for training the multivariate discriminants. The MC signal event samples for the ggH and VBF processes are obtained using the next-to-leading-order (NLO) matrix-element generator powheg (version 1.0) [44–48] interfaced with pythia. For the 7 TeV samples, events are weighted so that the transverse momentum spectrum of Higgs bosons produced by the ggH process agrees with the next-to-next-to-leading-logarithm + NLO distribution computed by hqt (version 1.0) [49–51]. At 8 TeV, powheg has been tuned following the recommendations of the LHC Higgs Cross Section Working Group [52] and reproduces the hqt spectrum. The ggH process cross section is reduced by 2.5 % for all values of m_H to account for the interference with nonresonant diphoton production [53].
For the VH and ttH processes pythia is used alone; the processes are generated at leading order, and higher-order diagrams are accounted for only by pythia’s “parton showering” model. The SM Higgs boson cross sections and branching fractions used are taken from Ref. [54]. Samples used for the testing of spin hypotheses were generated with leading-order accuracy by jhugen [55, 56], interfaced to pythia.
Simulated samples of Z → e⁺e⁻, Z → μ⁺μ⁻, and Z → μ⁺μ⁻γ events used for comparison with data, and for the derivation of energy scale and resolution smearing corrections, are generated with MadGraph, sherpa, and powheg [57], allowing comparisons to be made between the different generators.
Simulated background samples are used only for training multivariate discriminants and defining selection and classification criteria. The background is simulated using a combination of samples. At 7 TeV the diphoton processes are simulated using a combination of MadGraph 5 [58] interfaced to pythia for processes apart from the gluon-fusion box diagram, and pythia alone for the box diagram. At 8 TeV the diphoton continuum processes involving two prompt photons are simulated using sherpa 1.4.2 [59]. The sherpa samples give a noticeably improved description of diphoton continuum events accompanied by one or two jets, and enable training of a more effective multivariate discriminant in the case of diphoton-plus-dijet events. The remaining processes, in which one of the photon candidates arises from misidentified jet fragments, are simulated using pythia alone; the cross sections of these processes are scaled by K-factors derived from CMS measurements [60, 61].
Photon reconstruction and identification
Photon candidates for the analysis are reconstructed from energy deposits in the ECAL using algorithms that constrain the superclusters in η and φ to the shapes expected from electrons and photons with high E_T. The algorithms make no hypothesis as to whether the particle originating from the interaction point is a photon or an electron; when reconstructed in this way, electrons from Z → e⁺e⁻ events provide measurements of the photon trigger, reconstruction, and identification efficiencies, and of the photon energy scale and resolution. The clustering algorithms achieve a rather complete (about 95 %) collection of the energy of photons and electrons, even those that undergo conversion and bremsstrahlung in the material in front of the ECAL. In the barrel region, superclusters are formed from five-crystal-wide strips in η, centred on the locally most energetic crystal (seed), and have a variable extension in φ. In the endcaps, where the crystals are arranged according to an x–y rather than an η–φ geometry, matrices of 5×5 crystals, which may partially overlap and are centred on a locally most energetic crystal, are summed if they lie within a narrow φ road. The photon candidates are required to be within the fiducial region |η| < 2.5, excluding the barrel-endcap transition region 1.44 < |η| < 1.57, where the photon reconstruction is suboptimal. The fiducial region requirement is applied to the supercluster position in the ECAL, i.e. the value of η is calculated with respect to the origin of the coordinate system. The exclusion of the barrel-endcap transition region ensures complete clustering of the accepted showers in either the ECAL barrel or endcaps.
About half of the photons convert in the material upstream of the ECAL. If the resulting charged particle tracks originate sufficiently close to the interaction point so as to pass through three or more tracking layers, conversion track pairs may be reconstructed and matched to the photon candidate.
Photon energy
The photon energy is computed from the signals recorded by the ECAL. In the region covered by the preshower detector (1.653 < |η| < 2.6) the signals recorded in it are also considered. In order to obtain the best energy resolution, the calorimeter signals are calibrated and corrected for several detector effects [34]. The variation of crystal transparency during the run is continuously monitored and corrected for using a factor based on the measured change in response to the light from the laser system, with the response for each crystal being computed approximately every 40 minutes. The single-channel response of the ECAL is equalized exploiting the φ-symmetry of the energy flow, the mass constraint on the energy of the two photons in π0 and η decays, and the momentum constraint on the energy of isolated electrons from W- and Z-boson decays. Finally, the containment of the shower in the clustered crystals, the shower losses for photons that convert in the material upstream of the calorimeter, and the effects of pileup, are corrected using a multivariate regression technique. The photon energy response distribution is parameterized by a function with a Gaussian core and two power-law tails, an extended form of the Crystal Ball function [62]. The regression provides a per-photon estimate of the parameters of the function, and therefore a prediction of the distribution of the ratio of true energy to uncorrected supercluster energy. The most probable value of this distribution is taken as the corrected photon energy. The width of the Gaussian core is further used as a per-photon estimator of the energy uncertainty.
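A response function of this type, a Gaussian core joined continuously to power-law tails on both sides, can be sketched as follows (an unnormalized double-sided Crystal Ball-like shape; the parameter names aL, nL, aR, nR are illustrative, not those used in the analysis):

```python
import math

def double_crystal_ball(x, mean, sigma, aL, nL, aR, nR):
    """Unnormalized Gaussian core with power-law tails starting aL (aR)
    standard deviations below (above) the mean."""
    t = (x - mean) / sigma
    if t < -aL:                                  # low-side power-law tail
        A = math.exp(-0.5 * aL * aL)             # matches the core at t = -aL
        return A * ((nL / aL) / ((nL / aL) - aL - t)) ** nL
    if t > aR:                                   # high-side power-law tail
        A = math.exp(-0.5 * aR * aR)
        return A * ((nR / aR) / ((nR / aR) - aR + t)) ** nR
    return math.exp(-0.5 * t * t)                # Gaussian core
```

The tails fall off polynomially rather than exponentially, so the function assigns far more probability than a pure Gaussian to photons with large energy losses, e.g. from conversions.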
The regression input variables are a collection of shower shape variables including the R9 of the supercluster, the ratio of the energy in the 5×5 crystal array centred on the seed crystal to the uncorrected supercluster energy, the energy-weighted η-width and φ-width of the supercluster, and the ratio between the hadronic energy behind the supercluster and the electromagnetic energy of the cluster. The global η coordinate of the supercluster is included, and for the barrel the global φ coordinate and the η and φ coordinates of the seed cluster with respect to the crystal centre are also included. In the endcap, the ratio of preshower energy to raw supercluster energy is included. Finally, the number of primary vertices and the median energy density ρ [63] in the event are included in order to allow for the correction of residual energy scale effects due to pileup.
A multistep procedure has been implemented to correct the energy scale in data, and to determine the parameters of Gaussian smearing to be applied to showers in simulated events so as to reproduce the energy resolution seen in data. First, the energy scale in data is equalized with that in simulated events, and residual long-term drifts in the response are corrected, using Z → e⁺e⁻ decays in which the electron showers are reconstructed as photons. The data are corrected as a function of the time at which they were taken, using 8 epochs in the 7 TeV dataset and 51 epochs in the 8 TeV dataset. Following this, the photon energy resolution predicted by the simulation is made more realistic by adding a Gaussian smearing determined from the comparison between the Z → e⁺e⁻ line-shape in data and in simulated events. The amount of smearing required is extracted differentially in |η| (two bins in the barrel and two in the endcap) and R9 (two bins). In the fits from which the required amount of smearing is extracted, the data energy scale is allowed to float, and a residual scale correction for the data is extracted in the same eight bins. A sufficient number of events is available in the 8 TeV data to allow a third step, in which the energy scale for the ECAL barrel is further corrected in 20 bins defined by ranges in |η|, R9, and E_T, and the smearing magnitude is allowed to have an energy dependence; the additional energy resolution is parameterized as the quadratic sum of a constant term and a term falling as the inverse square root of the energy, with the relative magnitude of the two components extracted from the fits.
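The additional smearing applied to simulated photon energies can be sketched as follows. The numerical terms below are placeholders for illustration, not the measured CMS values:

```python
import math
import random

def smear_energy(e_true, const_term=0.01, stoch_term=0.05, rng=None):
    """Gaussian smearing with a relative resolution given by the quadratic sum
    of a constant term and a stochastic term falling as 1/sqrt(E).
    const_term and stoch_term are illustrative placeholders."""
    rng = rng or random.Random(1234)
    sigma_rel = math.hypot(const_term, stoch_term / math.sqrt(e_true))
    return e_true * (1.0 + rng.gauss(0.0, sigma_rel))
```

Applied to a simulated sample, this broadens the reconstructed Z (or Higgs) line-shape without shifting its peak, which is exactly the role of the smearing step described above.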
Figure 1 shows the invariant mass of electron pairs reconstructed in Z → e⁺e⁻ events in the 8 TeV data, and in simulated events, in which the electron showers are reconstructed as photons, with the full set of corrections to the data and smearings of the simulated energies applied. The selection applied to the diphoton candidates is the same, apart from the inversion of the electron veto, as is applied to diphoton candidates entering the analysis (as described in Sect. 6). There is excellent agreement between the data and the simulation in the core of the distributions. A slight discrepancy is present in the low-mass tail in the endcaps, where the Gaussian smearing is insufficient to account for some noticeable non-Gaussian energy loss. The mass peaks are shifted from the true Z-boson mass, both in data and simulation, because the electron showers are reconstructed as photons.
Photon preselection
The continuum background to the H → γγ process is mainly due to prompt diphoton production, with a reducible contribution from γ+jet and dijet processes in which at least one of the objects reconstructed as a photon comes from a jet. Typically these photon candidates come from one or more neutral mesons that carry a substantial fraction of the total jet p_T and are thus relatively isolated from hadronic activity in the detector. In the transverse momentum range of interest, the photons from neutral pion decays are rather collimated and are reconstructed as a single photon. In the events used for the analysis, i.e. after all selection and classification criteria are applied, MC simulation predicts that about 70 % of the total background is due to the irreducible prompt diphoton production.
The photons entering the analysis are required to satisfy preselection criteria similar to, but slightly more stringent than, the trigger requirements. These consist of
p_T(γ1) > m_γγ/3 and p_T(γ2) > m_γγ/4, where p_T(γ1) and p_T(γ2) are the transverse momenta of the leading (in p_T) and subleading photons, respectively,
a selection on the hadronic leakage of the shower, measured as the ratio of energy in HCAL cells behind the supercluster to the energy in the supercluster,
a loose selection based on isolation and the shape of the shower,
an electron veto, which removes the photon candidate if its supercluster is matched to an electron track with no missing hits in the innermost tracker layers, thus excluding almost all Z → e⁺e⁻ events.
The selection requirements are applied with different stringency in four categories defined to match the different selections used in the trigger. The four categories are shown in Table 1.
Table 1. Photon preselection efficiencies in the four photon categories, measured in data and in simulation, and their ratio

| Preselection category | ε_data (%) | ε_MC (%) | ε_data/ε_MC |
|---|---|---|---|
| **7 TeV dataset** | | | |
| Barrel; R9 > 0.90 | 98.7 ± 0.3 | 99.1 | 0.996 ± 0.003 |
| Barrel; R9 ≤ 0.90 | 96.2 ± 0.5 | 96.7 | 0.995 ± 0.006 |
| Endcap; R9 > 0.90 | 99.1 ± 0.9 | 98.2 | 1.008 ± 0.009 |
| Endcap; R9 ≤ 0.90 | 96.1 ± 1.5 | 95.6 | 1.005 ± 0.018 |
| **8 TeV dataset** | | | |
| Barrel; R9 > 0.90 | 98.8 ± 0.3 | 98.6 | 0.999 ± 0.003 |
| Barrel; R9 ≤ 0.90 | 95.7 ± 0.6 | 96.1 | 0.995 ± 0.006 |
| Endcap; R9 > 0.90 | 98.4 ± 0.9 | 97.9 | 1.005 ± 0.009 |
| Endcap; R9 ≤ 0.90 | 95.5 ± 1.7 | 94.5 | 1.011 ± 0.018 |
The efficiency of the photon preselection is measured in data using a “tag-and-probe” technique [64]. The efficiency of all preselection criteria, except the electron veto requirement, is measured using Z → e⁺e⁻ events. The efficiency for photons to satisfy the electron veto requirement is measured using Z → μ⁺μ⁻γ events, in which the photon is produced by final-state radiation and which provide a more than 99 % pure source of prompt photons. The ratio of the photon efficiency measured in data to that found in simulated events, ε_data/ε_MC, is consistent with unity in all categories. The complete set of efficiencies, in data and in simulated events, and the ratios ε_data/ε_MC, are shown in Table 1. The systematic uncertainty in the measurement is included in both the efficiencies and the ratio. The statistical uncertainties in the efficiencies measured in simulated events are negligible. The measured ratios are used to correct the simulated signal sample, and the associated uncertainties are taken into account as systematic uncertainties in the signal extraction procedure. For photons in simulated Higgs boson events the efficiency of the preselection criteria in the four categories ranges from 92 to 99 %.
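The ratios and uncertainties quoted in Table 1 follow from simple error propagation; a sketch, assuming uncorrelated uncertainties and (as stated above) a negligible statistical uncertainty on the simulated efficiency:

```python
def scale_factor(eff_data, err_data, eff_mc, err_mc=0.0):
    """Data/MC efficiency ratio and its propagated uncertainty
    (relative uncertainties added in quadrature)."""
    r = eff_data / eff_mc
    rel = ((err_data / eff_data) ** 2 + (err_mc / eff_mc) ** 2) ** 0.5
    return r, r * rel
```

For the first 7 TeV row of Table 1 this reproduces 98.7/99.1 ≈ 0.996 with an uncertainty of about 0.003.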
Photon identification
A boosted decision tree (BDT), implemented using the tmva [65] framework, is trained to separate prompt photons from photon candidates resulting from misidentification of jet fragments passing the preselection requirements. The following variables are used as inputs to the photon identification BDT:
Lateral shower shape variables, six of which use data from the ECAL crystals, and one of which measures the shower spread in the preshower detector (where it is present). The shape variables obtained in the MC simulation are compared to those observed in Z → e⁺e⁻ and Z → μ⁺μ⁻γ data samples. No significant differences are observed.
Isolation variables, based on the particle-flow algorithm [37], using sums of the p_T of photons, and of charged hadrons, within regions of ΔR < 0.3 around the candidate, where ΔR = √((Δη)² + (Δφ)²). Two charged-hadron isolation variables are used: one that considers charged hadrons coming from the vertex chosen for the event (described in Sect. 5), and one that is the largest of all such sums among those made for each reconstructed vertex. The second variable is effective when a photon candidate originating from misidentification of jet fragments comes from a vertex other than the chosen one.
The median energy density per unit area in the event, ρ. This variable is introduced to allow the BDT classifier to take into account the pileup dependence of the isolation variables.
The pseudorapidity and energy of the supercluster corresponding to the reconstructed photon. These variables are introduced to allow the dependence of the shower topology and isolation variables on η and energy to be taken into account.
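As an illustration of the isolation sums listed above, the charged-hadron component can be sketched as follows (the cone size matches the ΔR < 0.3 definition given above; the hadron list is a hypothetical input):

```python
import math

def charged_iso(photon_eta, photon_phi, hadrons, cone=0.3):
    """Scalar pT sum of charged hadrons within Delta R < cone of the photon,
    with Delta R = sqrt(Delta eta^2 + Delta phi^2)."""
    total = 0.0
    for pt, eta, phi in hadrons:
        dphi = abs(phi - photon_phi)
        dphi = min(dphi, 2.0 * math.pi - dphi)   # wrap phi into [0, pi]
        if math.hypot(eta - photon_eta, dphi) < cone:
            total += pt
    return total
```

The worst-vertex variant described above would simply be the maximum of this sum evaluated with the hadrons re-associated to each reconstructed vertex in turn.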
Figure 2 shows the photon identification BDT score of the lower-scoring photon of diphoton pairs with an invariant mass, m_γγ, in the range 100 < m_γγ < 180 GeV, for events passing the preselection in the 8 TeV dataset, and for simulated background events (histogram with shaded error bands showing the statistical uncertainty). The tall histogram on the right corresponds to simulated Higgs boson signal events. Although the simulated background events are used only for training the BDT, the agreement of their BDT score distribution with that in data is good. The bump seen in both distributions at a BDT score slightly above 0.1 corresponds to events in which both photons are prompt and, therefore, signal-like.
The agreement between data and simulation for photon identification is assessed using electrons from Z → e⁺e⁻ decays, photons from Z → μ⁺μ⁻γ decays, and the highest-p_T photon in diphoton events with m_γγ > 160 GeV, in which the relative magnitude of the contribution from misidentified jet fragments is small. Figure 3 shows a comparison of the photon identification BDT score for electron showers reconstructed as photons in the barrel, for data and MC simulated events. The events must pass all the preselection requirements, but with the electron veto condition inverted. The systematic uncertainty assigned to the photon identification BDT score is shown as a band, and corresponds to a shift of ±0.01 in the score. The comparison is made for the 8 TeV dataset, and is shown for two sets of events with different numbers of primary vertices, N_vtx, to demonstrate the independence of the result from effects coming from pileup. The differences between the distributions for the data and the simulation fall within the assigned systematic uncertainties for both the lower-pileup and higher-pileup sets of events, and the difference between the distributions in the two sets is negligible.
Diphoton vertex
The mean number of interactions per bunch crossing is 9 in the 7 TeV dataset and 21 in the 8 TeV dataset. In the longitudinal direction, z, the interaction vertices, built from the reconstructed tracks, have a distribution with an rms spread of about 6 (5) cm in the 7 (8) TeV dataset.
The diphoton mass resolution has contributions from the resolution of the measurement of the photon energies and of the measurement of the angle between the two photons. If the vertex from which the photons originate is known to within about 10 mm, then the experimental resolution on the angle between them makes a negligible contribution to the mass resolution. Thus, if the diphoton is associated with the charged particle vertex corresponding to the interaction in which it originated, then the mass resolution is entirely dominated by the photon energy resolution, since the longitudinal coordinate of the charged particle vertices is known to a precision much better than 10 mm.
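The claim that a ~10 mm vertex precision makes the angular contribution negligible can be checked numerically. The sketch below recomputes m_γγ with photon directions taken from fixed ECAL impact points and a variable assumed vertex z (an illustrative toy: central barrel photons, a barrel radius of roughly 129 cm assumed, energies chosen to give 125 GeV):

```python
import math

def mgg_with_vertex(z_vtx_cm, hits, energies, r_barrel_cm=129.0):
    """Diphoton mass with photon directions from ECAL impact points and an
    assumed vertex z. hits: list of (z_ecal_cm, phi); energies in GeV."""
    p = []
    for (z_hit, phi), e in zip(hits, energies):
        theta = math.atan2(r_barrel_cm, z_hit - z_vtx_cm)
        p.append((e * math.sin(theta) * math.cos(phi),
                  e * math.sin(theta) * math.sin(phi),
                  e * math.cos(theta)))
    e_tot = sum(energies)
    px, py, pz = (sum(c) for c in zip(*p))
    return math.sqrt(max(0.0, e_tot ** 2 - px ** 2 - py ** 2 - pz ** 2))
```

For two central, back-to-back 62.5 GeV photons, displacing the assumed vertex by 1 cm shifts the mass by only a few MeV, far below the per-mille-level energy resolution, consistent with the statement above.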
Diphoton vertex identification
No charged particle tracks result from photons that do not convert, so the diphoton vertex is identified indirectly, using the kinematic properties of the diphoton system and its correlations with the kinematic properties of the recoiling tracks. If either of the photons converts, the direction of the resulting tracks can provide additional information.
Three discriminating variables are calculated for each reconstructed primary vertex: the sum of the squared transverse momenta of the charged-particle tracks associated with the vertex, and two variables that quantify the vector and scalar pT balance between the diphoton system and the charged-particle tracks associated with the vertex. The three variables are:
Σ_i |p⃗T,i|²,
−Σ_i p⃗T,i · (p⃗T,γγ/|p⃗T,γγ|), and
(|Σ_i p⃗T,i| − pT,γγ)/(|Σ_i p⃗T,i| + pT,γγ),
where the sums run over the transverse momentum vectors, p⃗T,i, of the charged tracks associated with the vertex, and p⃗T,γγ is the transverse momentum vector of the diphoton system. In addition, if either photon is associated with any charged-particle tracks that have been identified as resulting from conversion, then a further variable is used, defined as follows. An estimate of the primary vertex longitudinal position, z_e, is obtained from the conversion track(s), and the additional variable is defined as the pull between z_e and the longitudinal position of the reconstructed vertex, z_vtx: |z_e − z_vtx|/σ_z, where σ_z is the uncertainty in z_e. The variables are used as the inputs to a multivariate system based on a BDT to choose the reconstructed vertex to be associated with the diphoton system.
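The three track-based variables can be sketched in a few lines; the function and argument names below are illustrative, not taken from the analysis code:

```python
import numpy as np

def vertex_variables(track_pts, diphoton_pt):
    """Compute the three vertex-discriminating variables.

    track_pts   : (N, 2) array of transverse-momentum vectors (px, py)
                  of charged tracks associated with the vertex.
    diphoton_pt : (2,) transverse-momentum vector of the diphoton system.
    """
    track_pts = np.asarray(track_pts, dtype=float)
    diphoton_pt = np.asarray(diphoton_pt, dtype=float)

    # 1) sum of squared track transverse momenta
    sum_pt2 = float(np.sum(np.sum(track_pts**2, axis=1)))

    # 2) vector pT balance: projection of the summed track pT onto the
    #    (reversed) diphoton direction; large when the tracks recoil
    #    against the diphoton system
    dipho_dir = diphoton_pt / np.linalg.norm(diphoton_pt)
    pt_bal = -float(np.sum(track_pts @ dipho_dir))

    # 3) scalar pT asymmetry between the track recoil and the diphoton
    recoil = float(np.linalg.norm(track_pts.sum(axis=0)))
    dipho = float(np.linalg.norm(diphoton_pt))
    pt_asym = (recoil - dipho) / (recoil + dipho)

    return sum_pt2, pt_bal, pt_asym
```

For a correctly chosen vertex the recoiling tracks balance the diphoton system, so pt_bal is large and pt_asym is close to zero.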
The vertex finding efficiency, defined as the efficiency for the chosen vertex to be within 10 mm of the true vertex location, has been measured using Z → μμ events. The performance of the algorithm is evaluated after re-reconstruction of the vertices following removal of the muon tracks, so that the event mimics a diphoton event. The use of tracks from a converted photon to locate the vertex is validated using γ + jet events. In both cases the ratio of the efficiency measured in data to that measured in MC simulation is within 1 % of unity when viewed as a function of the number of vertices in the event. When viewed as a function of the Z-boson pT, the deviation of the ratio from unity increases to a few percent at low pT. The measured ratio as a function of the Z-boson pT is used as a correction to the vertex finding efficiency in simulated Higgs boson signal events. The vertex finding efficiency for a Higgs boson of mass 125 GeV, integrated over its pT spectrum, is computed to be 85.4 (79.6) % in the 7 (8) TeV dataset. Figure 4 shows the efficiency with which a diphoton system is assigned to a vertex reconstructed within 10 mm of the true diphoton vertex in simulated Higgs boson events in the 8 TeV dataset, as a function of the transverse momentum of the diphoton system.
Per-event vertex probability
A second vertex-related multivariate discriminant has been designed to estimate, event by event, the probability for the vertex assignment to be within 10 mm of the diphoton interaction point. This, in conjunction with the event-by-event estimate of the energy resolution of each photon, is used to estimate the diphoton mass resolution for each individual event, and this estimate is used in the event classification, as described in Sect. 6. The inputs of the vertex probability BDT are:
the values of the vertex identification BDT output for the three most likely vertices in the event,
the total number of reconstructed vertices in the event,
the transverse momentum of the diphoton system,
the distances between the chosen vertex and the second- and third-best vertices,
the number of photons with an associated conversion track or tracks.
The vertex probability BDT is tested with simulated signal events as shown in Fig. 4, and the performance in data is tested using Z → μμ events. Validation of the vertex probability BDT for events in which conversion tracks are present is achieved using events in which one or more conversion tracks are reconstructed. The probability to identify a close-enough vertex (vertex probability) has a linear relationship with the vertex probability BDT score, the parameters of which are obtained from a fit using a sample of simulated signal events. Figure 5 shows the distribution of the vertex probability estimate, obtained from the BDT score, in Z → μμ events. The charged-particle tracks belonging to the muon pair are used to identify the vertex, and are then removed from the event before re-reconstructing the vertices and passing them to the vertex identification and the vertex probability BDTs. The pT of the dimuon pair is used in the BDT calculation in place of the diphoton pT. The vertex identified by the muons is taken to be the true vertex, so that the vertex assignment BDT either chooses that (right) vertex or a wrong one. The vertex probability estimates in data (points) are compared to MC simulation (histograms), separately for events in which the vertex assignment BDT assigns the right vertex and for those in which it assigns a wrong vertex.
Event classification
The analysis uses events with two photon candidates satisfying the preselection requirements (described in Sect. 4.3) with an invariant mass, mγγ, in the range 100 < mγγ < 180 GeV, and with pT(γ1) > mγγ/3 and pT(γ2) > mγγ/4. In the rare case of multiple diphoton candidates, the one with the highest scalar sum of the photon transverse momenta is selected. The use of pT thresholds scaled by mγγ prevents the distortion of the low end of the mγγ spectrum that results if a fixed threshold is used. An additional requirement is applied on the photon identification BDT scores of both photons, which must exceed a minimum value (see Fig. 2). This requirement retains more than 99 % of simulated signal events fulfilling the other analysis selection requirements, while removing about 24 % of events in data. The requirements listed above are referred to as the “full diphoton preselection”.
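The scaled-threshold selection can be illustrated with a small sketch; the function name and default values are illustrative, assuming a 100–180 GeV mass window and pT thresholds of mγγ/3 and mγγ/4 for the leading and subleading photons:

```python
def passes_kinematics(pt_lead, pt_sublead, m_gg,
                      lead_frac=1.0 / 3.0, sublead_frac=1.0 / 4.0,
                      m_lo=100.0, m_hi=180.0):
    """Scaled-threshold selection: cutting on pT/m_gg rather than on a
    fixed pT avoids sculpting the low end of the m_gg spectrum."""
    if not (m_lo <= m_gg <= m_hi):
        return False
    return pt_lead > lead_frac * m_gg and pt_sublead > sublead_frac * m_gg
```

Because the thresholds scale with the candidate mass, an event near the bottom of the mass window is not penalised relative to one near the top.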
To achieve the best analysis performance, the events are separated into classes based on both their mass resolution and their relative probability to be due to signal rather than background. The first step in the classification of the events involves the extraction of those tagged by the presence of objects in the final state, in addition to the photon pair, that give the event a signature characteristic of one of the production processes. The remaining untagged events, which constitute the majority (99 %) of the events used in the analysis, are classified according to a variable constructed using multivariate techniques.
The classification procedure, which is described in detail below, results in 11 event classes for the 7 TeV dataset and 14 for the 8 TeV dataset. The event classes, and the expected number of SM Higgs boson events and estimated background in those classes, are set out later, in Table 3, together with the composition of the expected SM Higgs boson signal in terms of the production processes, and the diphoton mass resolution expected for the signal in each of the classes. To ensure that the classes are mutually exclusive, events are tested against the class selection requirements in a fixed order as described in Sect. 6.4.
Table 3.
Dataset | Event class | Total signal (mH = 125 GeV) | ggH (%) | VBF (%) | WH (%) | ZH (%) | ttH (%) | σ_eff (GeV) | σ_HM (GeV) | Bkg. (events/GeV)
---|---|---|---|---|---|---|---|---|---|---
7 TeV, 5.1 fb⁻¹ | Untagged 0 | 5.8 | 79.8 | 9.9 | 6.0 | 3.5 | 0.8 | 1.11 | 0.98 | 11.0
 | Untagged 1 | 22.7 | 91.9 | 4.2 | 2.4 | 1.3 | 0.2 | 1.27 | 1.09 | 69.5
 | Untagged 2 | 27.1 | 91.9 | 4.1 | 2.4 | 1.4 | 0.2 | 1.78 | 1.40 | 135
 | Untagged 3 | 34.1 | 92.1 | 4.0 | 2.4 | 1.3 | 0.2 | 2.36 | 2.01 | 312
 | VBF dijet 0 | 1.6 | 19.3 | 80.1 | 0.3 | 0.2 | 0.1 | 1.41 | 1.17 | 0.5
 | VBF dijet 1 | 3.0 | 38.1 | 59.5 | 1.2 | 0.7 | 0.4 | 1.65 | 1.32 | 3.5
 | VH tight ℓ | 0.3 | – | – | 77.2 | 20.6 | 2.2 | 1.61 | 1.31 | 0.1
 | VH loose ℓ | 0.2 | 3.6 | 1.1 | 79.1 | 15.2 | 1.0 | 1.63 | 1.32 | 0.2
 | VH ETmiss | 0.3 | 4.5 | 1.1 | 41.5 | 44.6 | 8.2 | 1.60 | 1.14 | 0.2
 | VH dijet | 0.4 | 27.1 | 2.8 | 43.7 | 24.3 | 2.1 | 1.54 | 1.24 | 0.5
 | ttH tags | 0.2 | 3.1 | 1.1 | 2.2 | 1.3 | 92.3 | 1.40 | 1.13 | 0.2
8 TeV, 19.7 fb⁻¹ | Untagged 0 | 6.0 | 75.7 | 11.9 | 6.9 | 3.6 | 1.9 | 1.05 | 0.79 | 4.7
 | Untagged 1 | 50.8 | 85.2 | 7.9 | 4.0 | 2.4 | 0.6 | 1.19 | 1.00 | 120
 | Untagged 2 | 117 | 91.1 | 4.7 | 2.5 | 1.4 | 0.3 | 1.46 | 1.15 | 418
 | Untagged 3 | 153 | 91.6 | 4.4 | 2.4 | 1.4 | 0.3 | 2.04 | 1.56 | 870
 | Untagged 4 | 121 | 93.1 | 3.6 | 2.0 | 1.1 | 0.2 | 2.62 | 2.14 | 1400
 | VBF dijet 0 | 4.5 | 17.8 | 81.8 | 0.2 | 0.1 | 0.1 | 1.30 | 0.94 | 0.8
 | VBF dijet 1 | 5.6 | 28.5 | 70.5 | 0.6 | 0.2 | 0.2 | 1.43 | 1.07 | 2.7
 | VBF dijet 2 | 13.7 | 43.8 | 53.2 | 1.4 | 0.8 | 0.8 | 1.59 | 1.24 | 22.1
 | VH tight ℓ | 1.4 | 0.2 | 0.2 | 76.9 | 19.0 | 3.7 | 1.63 | 1.24 | 0.4
 | VH loose ℓ | 0.9 | 2.6 | 1.1 | 77.9 | 16.8 | 1.5 | 1.60 | 1.16 | 1.2
 | VH ETmiss | 1.8 | 16.3 | 2.7 | 34.4 | 35.4 | 11.1 | 1.68 | 1.17 | 1.3
 | VH dijet | 1.6 | 30.3 | 3.1 | 40.6 | 23.4 | 2.6 | 1.31 | 1.06 | 1.0
 | ttH lepton | 0.5 | – | – | 1.6 | 1.6 | 96.8 | 1.34 | 1.03 | 0.2
 | ttH multijet | 0.6 | 4.1 | 0.9 | 0.8 | 0.9 | 93.3 | 1.34 | 1.03 | 0.6
Multivariate event classifier
A multivariate event classifier, the diphoton BDT, is constructed to satisfy the following criteria:
- The diphoton BDT should assign a high score to events that have
- good diphoton mass resolution,
- high probability of being signal rather than background.
- The classifier should not select events according to the mass of the diphoton system relative to the particular mass of the Higgs boson signal used for training.
The classifier incorporates a per-event estimate of the diphoton mass resolution, the identification BDT scores of both photons, and the kinematic properties of the diphoton system, except for mγγ itself. To avoid any dependence on mγγ, the transverse momenta and the mass resolution estimates are divided by mγγ.
The complete list of variables used in the BDT is the same as in previous versions of the analysis [28]: the scaled photon transverse momenta, pT(γ1)/mγγ and pT(γ2)/mγγ, the pseudorapidities of both photons, the photon identification BDT scores of both photons, the cosine of the angle between the two photons in the transverse plane, the expected relative diphoton mass resolutions under the hypotheses of selecting the correct or a wrong interaction vertex, and the probability of selecting the correct vertex.
The diphoton mass resolution depends on several factors: the location of the associated energy deposits in the calorimeter; whether or not one or both photons converted in the detector volume in front of the calorimeter; and the probability that the true diphoton vertex has been identified. Events in which one of the photons has a low identification BDT score are more likely to be due to background processes. The Higgs signal-to-background ratio varies with the kinematic properties of the diphoton system, mainly through the pseudorapidities of the photons (highest when both are in the barrel) and through pT,γγ (highest for large pT,γγ). The BDT is trained using a simulated signal sample with a mass near the centre of the mass range of the analysis. The relative abundance of events from the different production processes in the sample is set according to the expectations for a SM Higgs boson with that mass.
The multivariate classifier assigns a score to each event. It has been verified that selecting simulated background events with high diphoton BDT score does not result in any peak in the diphoton invariant mass distribution of the selected events. Figure 6 shows, for the 8 TeV dataset, how the BDT performs on simulated SM signal events with mH = 125 GeV, and on data satisfying the full diphoton preselection. The classifier score has been transformed such that the sum of signal events from all processes has a uniform, flat, distribution. This transformation assists visualization of the performance of the BDT. The outlined histogram, following the data points, is for simulated background events. The vertical dashed lines indicate the boundaries of the untagged event classes, the determination of which is described in Sect. 6.3. Given that the data are completely dominated by background events, it can be seen that the signal-to-background ratio increases substantially with the classifier score, and that the VBF, VH, and ttH processes tend to achieve high scores, due to their significantly harder pT spectrum [66, 67].
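The flattening transformation is a probability integral transform built from the signal's empirical score distribution; a minimal sketch (names illustrative):

```python
import numpy as np

def flatten_score(signal_scores):
    """Build a monotonic map that makes the *signal* score distribution
    uniform on [0, 1]; the same map is then applied to data and
    background, so their transformed shapes show how they differ from
    the signal."""
    grid = np.sort(np.asarray(signal_scores, dtype=float))

    def transform(x):
        # empirical CDF of the signal scores: the fraction of signal
        # events with a score at or below x
        return np.searchsorted(grid, x, side="right") / len(grid)

    return transform
```

Applied to the signal sample itself, the transformed scores are uniform by construction; background then piles up at low transformed values.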
Figure 7 shows a comparison of the transformed classifier score for Z → ee events in data and in MC simulation, in which for both cases the electrons are reconstructed as photons. The electron showers must satisfy the full diphoton preselection requirements with the electron veto condition inverted. The classifier score has been subjected to the same transformation as in Fig. 6. The score for Z → ee events peaks at low values whilst Higgs boson signal events have a flat distribution, reflecting the differences between the two types of event, but sufficient numbers of events are present even at high values of the classifier score to enable the agreement between data and MC simulation to be adequately tested there. The good agreement between MC simulation and data for Z → ee events constitutes an important check that the modeling of the BDT input variables and their correlations in the simulation of the Higgs boson signal is accurate. The simulated events have been weighted so that the Z-boson pT distribution matches that observed in data. The band indicates the systematic uncertainty resulting from propagating to the diphoton BDT event classifier both the uncertainty in the photon identification BDT score (a shift of 0.01 in the score) and the uncertainty in the per-photon estimate of the energy resolution (a scaling of its value by 10 %). Since the magnitudes of these two uncertainties were chosen to cover the discrepancies between data and simulation in the tails of the distributions of the two variables, the resulting uncertainty in the diphoton BDT event classifier is, if anything, slightly overestimated.
Events tagged by exclusive signatures
Selections enriched in Higgs boson production mechanisms other than ggH can be made by requiring, in addition to the diphoton pair, the presence of other objects which provide signatures of the production mechanism. Higgs bosons produced by VBF are accompanied by a pair of jets separated by a large rapidity gap. Those resulting from the VH production mechanism may be accompanied by one or more charged leptons, large missing transverse energy (ETmiss), or jets from the decay of the W or Z boson. Those resulting from ttH production are, as a result of the decay of the top quarks, accompanied by b quarks, and may be accompanied by charged leptons or additional jets.
The tagging of dijet events, targeting VBF production, significantly increases the overall sensitivity of the analysis and the precision of the measured signal strength, and increases the sensitivity to deviations of the Higgs boson couplings from their expected values. The tagging aimed at the VH process increases the sensitivity to deviations of the couplings to vector bosons, and the ttH tagging further probes the compatibility of the observed signal with a SM Higgs boson.
The pT spectrum of Higgs bosons produced by the VBF, VH, and ttH processes is significantly harder than that of Higgs bosons produced by ggH, or of background diphotons. This results in a harder leading-photon pT spectrum. In the tagged-class selections, advantage is taken of this difference by raising the pT requirement on the leading photon.
Dijet-tagged event selection and BDT classifiers for VBF production
Vector boson fusion production results in two forward jets, originating from the two scattered quarks. Separating events tagged by the presence of dijets compatible with the VBF process into specific event classes not only increases the separation between signal and background, it also increases the separation between signal production processes. In the purest VBF dijet-tagged class the signal is expected to have a contribution of only 18 % from ggH production. A loose preselection of dijet events is defined and a dijet BDT is trained to separate VBF signal from diphoton background using samples of MC events satisfying this dijet preselection. Signal events from ggH satisfying the dijet preselection are included as background in the training. Details of the dijet preselection and the BDT input variables are given below. A further, “combined”, BDT is then trained. This BDT has only three input variables: the score of the dijet BDT, the score of the diphoton BDT, and the transverse momentum of the diphoton system divided by its mass, pT,γγ/mγγ. Events for the VBF dijet-tagged classes are selected, from those satisfying the loose dijet preselection, by placing a minimum requirement on their combined BDT score, and the selected events are then classified using that score.
The dijet preselection is applied to diphoton events satisfying the full diphoton preselection and requires the leading (in pT) and subleading jets in the event, within |η| < 4.7, to have pT > 30 and 20 GeV, respectively, and the jet pair to have an invariant mass mjj > 250 GeV. The pseudorapidity requirement (|η| < 4.7) is more restrictive than the full detector acceptance (|η| < 5), to avoid the use of jets for which the energy corrections are large and less reliable, and is found to decrease the signal acceptance by 2 %. Additionally, the pT threshold of the leading photon is raised, requiring pT(γ1) > mγγ/2 for VBF dijet-tagged events.
The jet energy measurement is calibrated to correct for detector effects using samples of dijet, γ + jet, and Z + jet events [39]. The energy from pileup interactions and from the underlying event is also included in the reconstructed jets. This energy is subtracted using an η-dependent transverse momentum density calculated with the jet areas technique [63, 68, 69], evaluated on an event-by-event basis. Particles produced in pileup interactions may be clustered into jets of relatively large pT, referred to as pileup jets. These pileup jets are largely removed using selection criteria based on the width of the jet or the compatibility of the tracks in a jet with the primary vertex [70]. Finally, jets within ΔR < 0.5 of either of the photons are rejected, to exclude the possibility of photons having been included in the reconstruction of the jet.
The variables used in the dijet BDT are the scaled transverse momenta of the photons, pT(γ1)/mγγ and pT(γ2)/mγγ, the transverse momenta of the leading and subleading jets, the dijet invariant mass, mjj, the difference between the pseudorapidities of the jets, Δη_jj, the difference between the average pseudorapidity of the two jets and the pseudorapidity of the diphoton system, |η_γγ − (η_j1 + η_j2)/2| [71], and the absolute difference in azimuthal angle between the diphoton system and the dijet system, Δφ(γγ, jj). Because of the large theoretical uncertainty in the cross section, due to higher-order contributions, for the ggH process accompanied by two jets in the region where Δφ(γγ, jj) is very close to π [54, 72], the maximum value of the variable is restricted to π − 0.2; events with Δφ(γγ, jj) > π − 0.2 are treated as if the value were π − 0.2.
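Two of the less standard inputs, the Zeppenfeld-style variable and the capped azimuthal separation, can be sketched as follows; the function name is illustrative, and the cap is passed in as a parameter rather than hard-coded:

```python
import math

def dijet_variables(eta_j1, eta_j2, eta_gg, phi_gg, phi_jj, dphi_cap):
    """Illustrative computation of two dijet-BDT inputs."""
    # Zeppenfeld-style variable: diphoton pseudorapidity relative to
    # the mean pseudorapidity of the two tagging jets
    zeppenfeld = eta_gg - 0.5 * (eta_j1 + eta_j2)

    # azimuthal separation between the diphoton and dijet systems,
    # folded into [0, pi] and saturated at dphi_cap so the BDT is
    # insensitive to the theoretically uncertain back-to-back region
    dphi = abs(phi_gg - phi_jj)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return zeppenfeld, min(dphi, dphi_cap)
```

Saturating the variable rather than cutting on it keeps the events in the sample while removing the BDT's ability to exploit the poorly modelled region.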
Lepton-, dijet-, and ETmiss-tagged event classes for VH production
The selection requirements for the classes aimed at selecting events produced by the VH process have been obtained by minimizing the expected uncertainty in the measured signal strength of the process, using data in control regions to estimate the background and MC signal samples to estimate the signal efficiency. Four classes are defined: events with a muon or an electron are separated into two classes, according to whether or not there is significant ETmiss or another lepton in the event; a third class selects events with two or more jets; and the fourth class consists of events with large ETmiss. The leading photon in the events selected for the lepton classes and for the ETmiss-tagged class is required to satisfy a raised scaled-pT threshold; for the dijet-tagged VH class the requirement is tighter still.
Muons are reconstructed with the particle-flow algorithm and are required to be within |η| < 2.4. A tight selection is applied, based on the quality of the track and the number of hits in the tracker and muon spectrometer. A strict match between the tracker and the muon spectrometer segments is also applied to reduce the contamination from muons produced in decays of hadrons and from beam halo interactions. Finally, a loose particle-flow isolation requirement is applied.
Electrons are identified as clusters of energy deposited in the ECAL matched to tracks. Electron candidates are required to have an ECAL supercluster within the same fiducial region as for photons. Electron identification is based on a multivariate technique [14]. The electron track has to fulfil requirements on the transverse and longitudinal impact parameter with respect to the electron vertex and cannot have more than one missing hit in the innermost layers of the tracker. Electrons from conversions are excluded as described in Ref. [73] and a loose particle-flow isolation requirement is applied.
The tightly selected lepton class (“VH tight ℓ”) is characterised by the full signature of a leptonically decaying W or Z boson, and requires, in addition to the electron or muon, the presence of ETmiss or another lepton of the same flavour as the first and with opposite sign. For the lepton-plus-ETmiss signature, the pT of the lepton is required to be greater than 20 GeV. For the dilepton signature the lepton pT requirement is relaxed, but the invariant mass of the pair is required to be between 70 and 110 GeV. For the loose lepton class (“VH loose ℓ”) only a single electron or muon is required, but additional requirements are made to reduce background from leptonic decays of W or Z bosons with initial- or final-state radiation: muons and electrons are required to be separated from the closest photon, and the invariant mass of electron-photon pairs is required to be more than 10 GeV away from the Z-boson mass. In addition, a conversion veto is applied to the electrons to reduce the number of electrons originating from photon conversions.
Events selected for the dijet-tagged VH class are required to have a pair of jets with pT > 40 GeV, within the region |η| < 2.4, and with an invariant mass in a window around the W- and Z-boson masses, 60 < mjj < 120 GeV; additional jets may also be present. The pT of the diphoton system is required to exceed a minimum threshold. The selection also exploits the expected angular distribution of the diphoton pair with respect to the dijet pair from the vector boson decay. The angle, θ*, that the diphoton system makes, in the diphoton-dijet centre-of-mass frame, with respect to the direction of motion of the diphoton-dijet system in the lab frame is computed. The distribution of |cos θ*| for signal events coming from VH production is rather flat, whereas background and signal events from ggH production result in distributions strongly peaked at |cos θ*| = 1. Consequently |cos θ*| < 0.5 is required.
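The angle θ* is obtained by boosting the diphoton four-momentum into the diphoton-dijet centre-of-mass frame; a sketch using a generic Lorentz boost, with four-vectors as (E, px, py, pz) and illustrative names:

```python
import math
import numpy as np

def cos_theta_star(p_gg, p_jj):
    """cos(theta*) of the diphoton system in the diphoton--dijet
    centre-of-mass frame, measured with respect to the lab-frame
    direction of motion of the combined system (assumed not at rest)."""
    p_gg = np.asarray(p_gg, dtype=float)
    p_jj = np.asarray(p_jj, dtype=float)
    tot = p_gg + p_jj

    # velocity of the combined system in the lab frame
    beta = tot[1:] / tot[0]
    b2 = float(beta @ beta)
    gamma = 1.0 / math.sqrt(1.0 - b2)

    # boost the diphoton three-momentum into the centre-of-mass frame
    bp = float(beta @ p_gg[1:])
    p_cm = p_gg[1:] + ((gamma - 1.0) * bp / b2 - gamma * p_gg[0]) * beta

    # angle between the boosted diphoton momentum and the boost axis
    return float(p_cm @ beta) / (np.linalg.norm(p_cm) * math.sqrt(b2))
```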
For the ETmiss tag, additional selection criteria are applied on the azimuthal angular separation between the diphoton system and the ETmiss direction, Δφ(γγ, ETmiss), and between the diphoton system and the leading jet in the event. Discrepancies between data and simulated events in the direction and magnitude of the ETmiss vector have been studied in detail and a set of corrections derived, some of which need to be applied to simulated events, and others to data. The corrected ETmiss is required to satisfy ETmiss > 70 GeV.
In addition to the requirements described above, a minimum requirement is also made on the diphoton BDT classifier score for entry into the event classes tagging VH production. The severity of the requirement is optimized for each class: 0.17 for the two lepton-tagged classes, 0.62 for the ETmiss-tagged class, and 0.76 for the VH dijet-tagged class, where the numerical scale is the classifier score shown in Figs. 6 and 7.
Event classes tagged for ttH production
The production of Higgs bosons in association with top quarks (ttH) has a small cross section, so the cross section times branching fraction for the decay to photons is only about 0.3 fb at NLO. Therefore, only a handful of events are expected in the full dataset. To maximize the signal efficiency, event selections are devised that collect both leptonic and hadronic decays of the top quarks, defining a lepton-tagged and a multijet-tagged event class.
As for the VH event classes, the selection requirements for the classes aimed at selecting events produced by the ttH process have been obtained by minimizing the expected uncertainty in the measured signal strength of the process, using data in control regions to estimate the background and MC signal samples to estimate the signal efficiency. The leading photon is required to satisfy pT(γ1) > mγγ/2. Jets must pass a minimum pT requirement, and both classes require the presence of at least one b-tagged jet. The lepton tag is then defined by requiring at least one more jet in the event and at least one electron or muon, and the multijet tag is defined by the requirement of at least four more jets in the event and no lepton. Requirements are also made on the minimum diphoton BDT classifier score for entry into the two classes tagging ttH production: 0.17 for the lepton class, and 0.48 for the multijet class, where the numerical scale is the classifier score shown in Figs. 6 and 7. For the 7 TeV dataset the events in the two classes are combined after selection to form a single event class.
Classification of VBF dijet-tagged and untagged events
Classes for the VBF dijet-tagged events and the untagged events are defined using the scores of the classification BDTs: the combined dijet-diphoton BDT score is used to select and define the dijet-tagged classes, and the diphoton BDT score defines the untagged class into which the untagged events are placed. The BDT score requirements that constitute the event class boundaries are set by an optimization procedure, using simulated event samples, aimed at minimizing the expected uncertainty in the signal strength. To avoid biases, the simulated events are divided into three non-overlapping sets, which are then used only for the training of the BDTs, or the optimization of event class boundaries, or to model the signal in the extraction of the final results. The number of available simulated events limits the statistical precision in the optimization procedure. The small number of simulated events for some background processes, in which one or more of the photon candidates result from misidentified jet fragments, results in a very uneven and spiky distribution of the event classifier scores for the simulated background in the range of BDT scores where these processes contribute. For the event class boundary optimization procedure, the event classifier BDT scores are therefore smoothed, using an adaptive-width Gaussian smoothing implemented in the RooFit package [74]. Differences in performance of less than about 2 % are indistinguishable from statistical fluctuations and are regarded as insignificant.
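Adaptive-width Gaussian smoothing can be illustrated with a simple two-pass kernel estimate, in the spirit of (but not identical to) the RooFit implementation: a fixed-width pilot density is built first, and each event's kernel is then narrowed where the pilot density is high; all names and the bandwidth rule are illustrative:

```python
import numpy as np

def adaptive_gaussian_smooth(scores, weights, grid, rho=1.0):
    """Adaptive-width Gaussian kernel density estimate on `grid`."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n = weights.sum()

    # pass 1: fixed-width pilot estimate at each event position,
    # with a Silverman-like global bandwidth
    sigma0 = rho * scores.std() * len(scores) ** (-0.2)
    pilot = np.array([
        np.sum(weights * np.exp(-0.5 * ((s - scores) / sigma0) ** 2))
        for s in scores]) / (n * sigma0 * np.sqrt(2.0 * np.pi))

    # pass 2: per-event widths, narrow where the pilot density is high
    widths = sigma0 * np.sqrt(pilot.mean() / pilot)

    dens = np.zeros_like(grid, dtype=float)
    for s, w, h in zip(scores, weights, widths):
        dens += w * np.exp(-0.5 * ((grid - s) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return dens / n
```

The widening of kernels in sparsely populated score regions is what suppresses the spikes from individual high-weight simulated events.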
As a result of the optimization procedure, four untagged event classes and two VBF dijet-tagged classes are defined for the 7 TeV dataset. For the 8 TeV dataset five untagged and three dijet-tagged classes are defined. Events that fail the requirement on the combined dijet-diphoton BDT score to enter the VBF dijet-tagged classes may enter other event classes. Untagged events that have a diphoton BDT score less than the lower boundaries of the untagged classes in the two datasets are not used in the final statistical analysis. The goal of the optimization setting the diphoton BDT score requirements, which define the untagged classes, is to minimize the expected uncertainty in the overall signal strength measurement. The goal of the optimization for the setting of the combined dijet-diphoton BDT score boundaries, which define the VBF dijet-tagged classes, is to minimize the expected uncertainty in the signal strength associated with the VBF production mechanism. When optimizing the boundaries for the 7 TeV dataset, for which the number of MC background events available is particularly limited, the number of dijet-tagged classes is limited to two and the lower boundary of the lowest dijet-tagged class is fixed so that the same efficiency times acceptance is obtained for VBF signal events as in the 8 TeV dataset.
Figure 8 shows the combined dijet-diphoton BDT score for events satisfying the dijet preselection in 8 TeV data, and for simulated signal events from the four production processes. The outlined histogram is for simulated background events; the shaded error bands on the histogram show the statistical uncertainty in the simulation. The VBF dijet-tagged class boundaries used for the 8 TeV dataset are shown by vertical dashed lines. The classifier score is transformed such that signal events produced by the VBF process have a uniform, flat, distribution across the full range of the score. This allows the visualization of the extent to which signal events produced by the VBF process are favoured over background (which predominates in the data), and over signal events produced by other processes. Events with scores below the lower boundary fail the VBF dijet-tagged selection, but remain candidates for inclusion in other classes.
The lower boundary on the untagged event class with the lowest signal-to-background ratio controls the total number of events used in the analysis and the overall signal efficiency times acceptance of the analysis (see Fig. 6). The boundary excludes events with very low score in the diphoton BDT for which the background is poorly modelled by MC simulation. Exclusion of these events has the advantage of allowing a better assessment of the expected sensitivity of the analysis, but the exact placement of the boundary is of little consequence.
It is found that, within the statistical uncertainty described above, it makes no difference if the optimization goal is the expected overall uncertainty in signal strength, the expected significance of the signal, or the expected uncertainty in the measured signal strength associated with the VBF production mechanism. It is also found that the performance maxima that fix the event class boundaries are rather shallow, so that the boundaries can be moved without significantly changing the expected performance. Adding further event classes for either the untagged or the VBF dijet-tagged events does not significantly improve the expected performance.
The overall efficiency times acceptance for SM Higgs boson events with mH = 125 GeV is 49.3 % (48.6 %) in the 8 (7) TeV analysis. Investigating the properties of the simulated signal events in the untagged classes reveals, as expected, that the best untagged class (“untagged 0”) contains events in which the diphoton system has high pT, while the second best class (“untagged 1”) is dominated by events in which both photons are unconverted and situated in the central barrel region of the ECAL.
Procedure of classification
In total there are 14 event classes for the analysis of the 8 TeV dataset and 11 for the analysis of the 7 TeV dataset. To ensure that the classes are mutually exclusive, events are tested against the class selection requirements in a fixed order: first the production-signature tagged classes, ranked by expected signal-to-background ratio, then the untagged classes. Once selected, events are no longer candidates for inclusion in other classes. The ordering is that shown in Table 2, which lists the classes together with their key selection requirements.
Table 2.
Label | No. of classes (7 TeV) | No. of classes (8 TeV) | Main requirements
---|---|---|---
ttH lepton tag | | 1 | ≥1 b-tagged jet + ≥1 electron or muon
VH tight ℓ tag | 1 | 1 | [e or μ, plus ETmiss or a second, opposite-sign lepton] or [e or μ pair with 70 < m(ℓℓ) < 110 GeV]
VH loose ℓ tag | 1 | 1 | single e or μ, with radiation-rejection requirements
VBF dijet tag 0–2 | 2 | 3 | ≥2 jets; classified using combined diphoton-dijet BDT
VH ETmiss tag | 1 | 1 | large corrected ETmiss
ttH multijet tag | | 1 | ≥1 b-tagged jet + ≥4 more jets
VH dijet tag | 1 | 1 | jet pair compatible with W/Z decay, and |cos θ*| < 0.5
Untagged 0–4 | 4 | 5 | the remaining events, classified using the diphoton BDT
For the 7 TeV dataset, events in the ttH lepton tag and multijet tag classes are selected first, and combined to form a single event class.
Signal model
A parametric signal model is constructed separately for each event class and for each production mechanism from fits to the simulated invariant mass distributions, after applying the corrections determined from comparisons of data and simulation in Z → ee and Z → μμγ events, for nine values of mH in the range 110–150 GeV, at 5 GeV intervals. The two possible cases of diphoton vertex identification, correct vertex and wrong (misidentified) vertex, are fitted separately. Good descriptions of the distributions, including the tails, are achieved using a sum of Gaussian functions, where the means are not required to be identical. The fits are first performed on one MC sample to determine the number of Gaussian functions to be used and the starting values of their parameters for the subsequent fits to the other eight samples. As many as five Gaussian functions are used, although in most cases two or three suffice for a good fit. Signal models for intermediate values of mH are obtained by linear interpolation of the fitted parameters.
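The construction can be sketched as a normalised sum of Gaussians evaluated from a parameter list, plus linear interpolation of those parameters between the fitted mass points; the data structures and names here are illustrative:

```python
import numpy as np

def gaussian_sum(x, params):
    """Evaluate a normalised sum of Gaussians.
    params: list of (fraction, mean, sigma); fractions sum to 1."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for frac, mu, sig in params:
        out += frac * np.exp(-0.5 * ((x - mu) / sig) ** 2) \
               / (sig * np.sqrt(2.0 * np.pi))
    return out

def interpolate_params(mh, anchors):
    """Linear interpolation of the fitted shape parameters between the
    simulated mass points. anchors: {mH: [(frac, mu, sigma), ...]}."""
    masses = sorted(anchors)
    lo = max(m for m in masses if m <= mh)
    hi = min(m for m in masses if m >= mh)
    if lo == hi:
        return anchors[lo]
    t = (mh - lo) / (hi - lo)
    return [tuple((1.0 - t) * a + t * b for a, b in zip(pl, ph))
            for pl, ph in zip(anchors[lo], anchors[hi])]
```

Interpolating the parameters, rather than the histograms, keeps the model smooth and normalised at every intermediate mass hypothesis.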
Table 3 shows the number of expected signal events from a SM Higgs boson with , as well as the background density at that mass, for each of the event classes in the 7 and 8 TeV datasets. The background estimate is obtained from a fit to the data, as described in Sect. 8, and is given as the differential rate (events/GeV) at . The table also shows the fraction of each Higgs boson production process (as predicted by MC simulation), as well as the mass resolution, measured both as half the width of the narrowest interval containing 68.3 % of the invariant mass distribution, , and as the full width at half maximum of the distribution divided by 2.35, .
It can be seen that in all classes, since the tails of the signal mass distribution are always somewhat larger, relative to the width of the core of the distribution, than would be the case for a Gaussian shape. Untagged events with the best mass resolution are assigned to the best event classes. Even ignoring the improving mass resolution, and considering a window wide enough to include all the signal events, the signal-to-background ratio improves by an order of magnitude in going from the worst to the best untagged class, a significantly larger variation than the change in resolution. The highest signal-to-background ratios are achieved in the tagged classes, many of which also achieve high purity with respect to contamination from the ggH process.
The mass resolution achieved has improved significantly with respect to analyses of this decay mode previously reported by CMS [28], owing to improved intercalibration of the ECAL, complemented by the improved supercluster energy correction regression described in Sect. 4.1. For events in which both photons are in the barrel, the has been reduced by around 5 % in the 7 TeV data, and by more than 20 % in the 8 TeV data. When at least one photon is in the endcap region, the has been reduced by around 20 % in the 7 TeV data, and by more than 30 % in the 8 TeV data. The reduction in , representing the core of the distribution, is slightly larger, generally an additional 5 %, when compared to .
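The effective-resolution measure described above, half the width of the narrowest interval containing 68.3 % of the distribution, can be computed directly from a sample. This is a minimal sketch using a toy Gaussian sample, for which the estimator should return roughly the true width:

```python
import math
import random

def sigma_eff(masses, fraction=0.683):
    """Half-width of the narrowest interval containing `fraction` of the entries."""
    xs = sorted(masses)
    n = len(xs)
    k = int(math.ceil(fraction * n))           # entries the interval must contain
    # slide a window of k consecutive entries and keep the narrowest one
    widths = [xs[i + k - 1] - xs[i] for i in range(n - k + 1)]
    return min(widths) / 2.0

random.seed(1)
# Toy diphoton mass sample: Gaussian of width 1.5 GeV around 125 GeV
sample = [random.gauss(125.0, 1.5) for _ in range(20000)]
resolution = sigma_eff(sample)                 # close to 1.5 for a pure Gaussian
```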
Statistical methodology
To extract a result or measurement, a simultaneous binned maximum-likelihood fit to the diphoton invariant mass distributions in all the event classes is performed over the range . Binned fits are used for speed of computation, and the chosen bin size, 250 MeV, is sufficiently small compared to the mass resolution that no information is lost: it has been verified that a binned fit with this bin size gives the same result as an unbinned fit. The signal model is derived from MC simulation after applying the corrections determined from data/MC comparisons of and events, as described in the previous section. The background is evaluated by fitting the distribution in data, without reference to the MC simulation. The likelihood to be evaluated in a signal-plus-background fit is thus
1 |
where comprises those parameters of the signal, such as or the signal strength, that are allowed to vary in the fit, is the parametric signal model, and the background fit function.
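A toy version of the binned signal-plus-background likelihood fit can be sketched as below; the bin contents and the grid scan over the signal strength are illustrative only, not the models used in the analysis:

```python
import math

def nll(n_obs, s_exp, b_exp, mu):
    """Binned Poisson negative log-likelihood for a signal-strength fit:
    the expected count in bin i is mu*s_i + b_i (constant ln(n!) terms dropped)."""
    total = 0.0
    for n, s, b in zip(n_obs, s_exp, b_exp):
        lam = mu * s + b
        total += lam - n * math.log(lam)
    return total

# Hypothetical bins: observed counts, fixed background model, signal template
n_obs = [52, 60, 75, 64, 55]
b_exp = [50.0, 52.0, 54.0, 56.0, 58.0]
s_exp = [1.0, 6.0, 14.0, 6.0, 1.0]

# Scan the signal strength on a grid and keep the minimum (best-fit value)
mus = [i / 100.0 for i in range(0, 301)]
mu_hat = min(mus, key=lambda m: nll(n_obs, s_exp, b_exp, m))
```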
The chosen test statistic, used to determine how signal- or background-like the data are, is based on the profile likelihood ratio. Systematic uncertainties are incorporated into the analysis via nuisance parameters and treated according to the frequentist paradigm. A description of the general methodology can be found in Refs. [75, 76]. Unless stated otherwise, the results presented here are obtained using asymptotic formulae [77], including updates introduced in the RooStats package [78].
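For a single counting experiment, the asymptotic relation between the profile-likelihood test statistic and the significance [77] reduces to a simple closed form; the counts used here are purely illustrative:

```python
import math

def asymptotic_z(n, b):
    """Approximate local significance for one counting experiment with known
    background b: q0 = -2 ln(L(0)/L(mu_hat)) and Z = sqrt(q0) in the
    asymptotic approximation."""
    if n <= b:
        return 0.0                      # no excess, zero significance
    q0 = 2.0 * (n * math.log(n / b) - (n - b))
    return math.sqrt(q0)

# e.g. 15 events observed on an expected background of 5
z = asymptotic_z(15, 5.0)
```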
It is important that the choice of background fit function does not bias the estimate of background obtained from the fit for any signal mass hypothesis, , in the range of the search.
A change has been made with respect to the method, described in Ref. [28], used to obtain previous results. Previously, a single fit function was chosen for each class after a study of the potential bias in the estimated background, and the potential bias of the chosen function was required to be negligible: the number of degrees of freedom of the fit was increased until the bias became at least five times smaller than the statistical uncertainty in the number of fitted events in a mass window corresponding to the full width at half maximum of the corresponding signal model, for any mass in the range .
For the results reported in this paper a method, the discrete profiling method, has been developed [79] to treat the uncertainty associated with the choice of the background fit function in a manner similar to the systematic uncertainties associated with the measurements. The choice of the function used to fit the background in any particular event class is included as a discrete nuisance parameter in the likelihood function used to extract the result. All reasonable families of functions should be considered, although in practice it is found that the choice needs to be made among functions in the same families as were previously considered: exponentials, power-law functions, polynomials in the Bernstein basis, and Laurent series. When performing either a background-only fit or a signal-plus-background fit, all functions in these families are tried when minimizing twice the negative logarithm of the likelihood, with a penalty term added to account for the number of free parameters in the fitting function.
The penalized likelihood function, , for a single fixed background fitting function, , is defined as
2 |
where is the unpenalized likelihood function, is the number of free parameters in , and is a constant. When measuring a quantity, , the likelihood ratio, , is used:
3 |
where the numerator represents the maximum of given , achieved for the best-fit values of the nuisance parameters, , and a particular background function, . The denominator corresponds to the global maximum of , where , , and . Choosing the functional form of the background that maximizes for any particular value of yields confidence intervals on that can only be wider than those obtained using the single fixed functional form from the global best fit, .
Two values of , which sets the magnitude of the penalty for increasing the number of free parameters in the fit, have been tested in detail. The values of and can be justified, respectively, by the -value and the Akaike information criterion [80]. It is found in tests made with pseudo-experiments that with a value of the method gives consistently good coverage and negligible bias.
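The selection of the penalized minimum over the discrete set of candidate background functions can be sketched as follows; the fit outcomes are hypothetical numbers chosen for the example:

```python
def envelope_min(fit_results, c=0.5):
    """Discrete profiling: among candidate background functions, pick the one
    minimizing the penalized statistic 2NLL + c * n_params [79].
    fit_results: dict name -> (minimized two_nll, number of free parameters)."""
    def penalized(item):
        two_nll, n_par = item[1]
        return two_nll + c * n_par
    best = min(fit_results.items(), key=penalized)
    return best[0], penalized(best)

# Hypothetical fit outcomes for one event class (numbers are illustrative only):
# a more flexible function fits better but pays a larger penalty
fits = {
    "exp1":       (105.2, 2),
    "pow1":       (104.9, 2),
    "bernstein4": (101.8, 5),
    "laurent2":   (104.1, 3),
}
name, val = envelope_min(fits, c=0.5)
```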
In order to test coverage and bias, we generate pseudo-data. To do so we must first fit the data, thus facing a problem similar to, but not to be confused with, the original problem of choosing the background fit function in the analysis. The method used to generate pseudo-data is as follows. For each event class in turn, functions from each of the families used in the discrete profiling method, listed above, are fitted to the data. In each family, the number of degrees of freedom (number of exponentials, number of terms in the series, degree of the polynomial, etc.) is increased until the fit to data with N+1 degrees of freedom shows no significant improvement over the fit with N degrees of freedom ( obtained from the F-distribution [81]). At that point, the function with N degrees of freedom is retained as representative of that family of functions. For each event class, the fits to the data with the retained representative functions for that class are used to generate pseudo-background distributions.
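The degree-selection loop can be sketched as below. Note one simplification relative to the paper: instead of a p-value from the F-distribution, this sketch treats the change in chi-square for one extra parameter as chi-square distributed with one degree of freedom, so p > 0.05 corresponds to a fixed threshold of 3.84:

```python
def choose_n_dof(chi2_by_dof, threshold=3.84):
    """Increase the number of degrees of freedom until adding one more no
    longer improves the fit significantly. The paper uses an F-distribution
    p-value [81]; here a delta-chi2 < 3.84 threshold (chi2(1), p > 0.05)
    stands in as a simplification.
    chi2_by_dof: best-fit chi2 for N = 1, 2, 3, ... degrees of freedom."""
    for n in range(len(chi2_by_dof) - 1):
        if chi2_by_dof[n] - chi2_by_dof[n + 1] < threshold:
            return n + 1          # retain N (1-based) as the representative
    return len(chi2_by_dof)

# Hypothetical chi2 values: large gain going from 1 to 2 terms, marginal beyond
n_retained = choose_n_dof([210.0, 152.3, 150.1, 149.8])
```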
The discrete profiling method is applied to pseudo-experiments in which signals having a range of strengths, from half to twice that of the SM, are added to the pseudo-background. The tests have demonstrated that the discrete profiling method provides good coverage of the uncertainty associated with the choice of the function, for all the functions considered as generators of background, and provides an estimate of the signal strength with negligible bias. The criterion used for this is similar and approximately equivalent to that used previously [28]: the median of the distribution of the pull on the signal strength, , should be less than 0.14. This value is chosen because satisfying this criterion ensures that any underestimation of the uncertainty in the signal strength is less than 1 %.
The distributions in the 25 event classes in the 7 and 8 TeV data samples, together with the results of a simultaneous fit of the signal-plus-background model, are shown in Figs. 9–16. The distribution of the combined event classes is shown in Sect. 11. The distributions are labeled with the and integrated luminosity of the combined datasets, reflecting the fact that the signal-plus-background fit is a simultaneous fit to the 25 event classes. Data points are drawn for all bins, including those in which there are no events. The error bars are calculated using the Garwood procedure [82] to provide correct coverage of the Poisson uncertainty. The and uncertainty bands shown for the background component of the fit include the uncertainty due to the choice of function and the uncertainty in the fitted parameters, and are computed from the variation in pseudo-experiments of the fitted background yield in bins corresponding to those used to display the data. These bands do not include the Poisson uncertainty that must be added when the full uncertainty in the number of background events in any given mass range is estimated. The fit is performed on the data from all event class distributions simultaneously, with a single overall value of the signal strength free to vary in the fit.
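The Garwood construction [82] gives exact central Poisson intervals. A minimal implementation by numerical inversion of the Poisson tail probabilities (rather than the usual closed form in terms of chi-square quantiles) might look like:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam + i * math.log(lam) - math.lgamma(i + 1))
               for i in range(k + 1)) if lam > 0 else 1.0

def garwood_interval(n, cl=0.683):
    """Central Garwood (exact Poisson) interval for an observed count n."""
    alpha = 1.0 - cl

    def solve(f, lo, hi):
        # bisection: f is True at small lam, False at large lam
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return 0.5 * (lo + hi)

    # lower edge: lam such that P(X >= n | lam) = alpha/2 (0 when n == 0)
    low = 0.0 if n == 0 else solve(
        lambda l: 1.0 - poisson_cdf(n - 1, l) < alpha / 2, 0.0, n + 1.0)
    # upper edge: lam such that P(X <= n | lam) = alpha/2
    up = solve(lambda l: poisson_cdf(n, l) > alpha / 2,
               float(n), n + 10.0 * math.sqrt(n + 1) + 10.0)
    return low, up
```

For a bin with zero observed events this yields a lower edge at zero and a nonzero upper edge, which is why error bars are drawn for empty bins as well.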
Systematic uncertainties
The uncertainty related to the background modelling, and how it is handled, has been discussed in the previous section. The systematic uncertainties related to the signal model are described below. A useful measure of the relative importance of the various systematic uncertainties can be obtained by tabulating their contributions to the total uncertainty in the final results for the best-fit signal strength and the best-fit mass. This is done in Tables 7 and 8 in Sect. 11 where the results of the analysis are discussed.
Table 7.
Source of uncertainty | Uncertainty in |
---|---|
PDF and theory | 0.11 |
Shower shape modelling (Sect. 9) | 0.06 |
Energy scale and resolution | 0.02 |
Other | 0.04 |
All syst. uncert. in the signal model | 0.13 |
Statistical | 0.21 |
Total | 0.25 |
Table 8.
Source of uncertainty | Uncertainty in ( ) |
---|---|
Imperfect simulation of electron–photon differences | 0.10 |
Linearity of the energy scale | 0.10 |
Energy scale calibration and resolution | 0.05 |
Other | 0.04 |
All systematic uncertainties in the signal model | 0.15 |
Statistical | 0.31 |
Total | 0.34 |
The systematic uncertainties assigned to all events are:
PDF and theory uncertainties: the theory systematic uncertainties in the production cross section and in the diphoton branching fraction follow the recommendations of the LHC Higgs Cross Section Working Group [54, 83]. As can be seen in Table 7, these uncertainties make the largest contribution to the uncertainty in the signal strength, and are dominated by the uncertainty in the ggH process cross section, coming both from missing higher orders and from the parton distribution functions. The effect of these theory uncertainties on the overall acceptance and on the classification of the accepted events is included by varying the and rapidity distributions of the simulated Higgs boson events within the theory uncertainties.
Integrated luminosity: the luminosity uncertainty is estimated as described in Refs. [84, 85], and amounts to a 2.2 % (2.6 %) uncertainty in the signal yield in the 7 (8) TeV dataset.
Vertex-finding efficiency: the uncertainty in the vertex-finding efficiency is taken from the uncertainty in the measurement of the corresponding data/MC scale factor obtained using events. We assign an additional 1 % uncertainty in the vertex-finding efficiency, related to the amount of activity resulting in charged particle tracks in signal events, which is derived by varying the pythia underlying event tunes in ggH events. Since the vertex-finding efficiency varies considerably with , there is an uncertainty in the overall efficiency coming from the uncertainty in the signal distribution, leading to a further uncertainty of 0.2 % to be added to the uncertainty in the data/MC scale factor for both the 7 and 8 TeV datasets.
Trigger efficiency: the uncertainty in the trigger efficiency is extracted from events using a tag-and-probe technique. Rescaling is used to take into account the difference in the distributions of electrons and photons. The uncertainty value obtained is slightly less than 1 %, but an uncertainty of 1 % has been assigned.
The systematic uncertainties related to individual photons are:
Photon energy scale uncertainty resulting from electron/photon differences: an important source of uncertainty in the energy scale of photons is the imperfect modelling of the difference between electrons and photons in the MC simulation, the most important cause of which is an imperfect description of the material between the interaction point and the ECAL. Studies of electron bremsstrahlung, photon conversion vertices, and the multiple scattering of pions suggest a deficit of material in the simulation. Although the deficit is almost certainly located in specific structures and localized regions, a hypothesis supported by the studies, the data/MC discrepancies are slightly smaller than would be caused by a 10 % uniform deficit of material in the region and a 20 % uniform deficit for . The resulting uncertainty in the energy scale has been assessed using simulated samples in which the tracker material is increased uniformly by 10 and 20 %, and an uncertainty, with differing magnitude in eight bins (: three barrel and one endcap, and : two bins), is assigned to photon energies. The systematic uncertainty in the energy scale ranges from 0.03 % in the central ECAL barrel up to 0.3 % in the outer endcap. Two nuisance parameters, one for and one for the remainder of the range used in the analysis, are introduced to model this uncertainty, which is fully correlated between the 7 and 8 TeV datasets. Another difference between data and simulation, relevant to electron-photon differences, is the modelling of the varying fraction of scintillation light reaching the photodetector as a function of the longitudinal depth in the crystal at which it was emitted. Ensuring adequate uniformity was a major accomplishment of the lead tungstate crystal development, achieved by depolishing one face of each barrel crystal, but an uncertainty in the degree of uniformity achieved remains [86, 87].
In addition, the uniformity is modified by the radiation-induced loss of transparency of the crystals. The effect of this uncertainty, including the effect of radiation-induced transparency loss, has been simulated. It results in a difference in the energy scale between electrons and unconverted photons that is not present in the standard simulation. The magnitude of the uncertainty in the photon energy scale is 0.04 % for photons with and 0.06 % for those with , but the signs of the energy shifts are opposite, and the two anti-correlated uncertainties result in an uncertainty of about 0.015 % in the mass scale. A further small uncertainty is added to account for imperfect electromagnetic shower simulation by Geant4 version 9.4.p03. A simulation made with an improved shower description, using the Seltzer–Berger model for the bremsstrahlung energy spectrum [88], changes the energy scale for both electrons and photons. The much smaller changes in the difference between the electron and photon energy scales, although mostly consistent with zero, are interpreted as a limitation on our knowledge of the correct simulation of the showers, leading to a further uncertainty of 0.05 %.
Energy scale nonlinearity: possible differences between MC simulation and data in the extrapolation from shower energies typical of electrons from decays to those typical of photons from decays have been investigated with data samples, by binning the events according to the scalar sum of the of the two electron showers, and by studying electron showers in events in which the electron is also measured by the tracker. The differential nonlinearity in the measurement of photon energies has an effect of up to 0.1 % on the diphoton mass scale for diphoton masses close to . In the best untagged event class, in which the diphoton transverse momentum is particularly high, the effect is up to 0.2 %. The uncertainties are not completely correlated between the 7 and 8 TeV datasets, since the energy response regression (Sect. 4.1), which would be strongly implicated in any nonlinearity, uses independent sets of regression weights for the two datasets. Moreover, -dependent scale corrections have been applied at 8 TeV for barrel photons, while the corrections at 7 TeV are not -dependent. Studies suggest that there may be as much as 20 % correlation between the uncertainties in the energy scale nonlinearities in the 7 and 8 TeV datasets, and this correlation is included in the implementation of the uncertainties. This uncertainty makes a significant contribution to the uncertainty in the measured Higgs boson mass, as can be seen in Table 8.
Measuring and correcting the energy scale in data, and the energy resolution in simulation: the energy scale and resolution in data are measured with electrons from decays. The statistical uncertainties in the measurements are small, but the methodology, described in Sect. 4.1, gives rise to a number of systematic uncertainties related to the imperfect agreement between data and MC simulation. These are estimated and accounted for in the same eight bins (4 bins in and 2 bins in ) as are used to derive the scale corrections and the resolution smearings for simulated events. The uncertainties range from 0.05 % for unconverted photons in the ECAL central barrel to 0.1 % for converted photons in the ECAL outer endcaps. In addition, for the barrel region, the uncertainty in the energy dependence of the Gaussian smearing applied to the simulation is also accounted for. The energy dependence of the smearing is controlled by a parameter that shares the smearing between a constant term and a term proportional to , and the uncertainty pertains to this sharing. Finally, there is an overall uncertainty accounting for a possible misdescription of the line-shape in simulation.
Photon identification BDT score, and estimate of the per-photon energy resolution: the uncertainties in these two quantities are discussed together since they are studied in the same way, and the dominant underlying cause of the observed differences between data and simulation is, almost certainly, the imperfect simulation of the shower shape, despite the fact that no obvious differences between data and simulation can be observed when the shower shape variables are examined individually. The combined contribution of the uncertainties in these two quantities dominates the experimental contribution to the systematic uncertainty in the signal strength, and is labeled “shower shape modelling” in Table 7. The agreement between data and simulation is examined for photon candidates that are electron showers reconstructed as photons in events, photons in events, and leading photons in preselected diphoton events where . It is found that, among the input variables to the diphoton BDT, only the distributions of the photon identification BDT score and the per-photon energy resolution estimate show significant differences between data and simulation. A variation of 0.01 in the photon identification BDT score, together with an uncertainty in the per-photon energy resolution estimate, parameterized as a rescaling of the resolution estimate by 10 % about its nominal value, fully covers the differences observed in all three of the above data samples.
Photon preselection efficiency: the uncertainty in the photon preselection efficiency is taken as the uncertainty in the data/MC preselection efficiency scale factors, which are measured using events with a tag-and-probe technique (see Table 1).
The effect of the single-photon uncertainties is propagated to the diphoton quantities: diphoton efficiency, diphoton mass scale, and diphoton mass resolution. For instance, to obtain the magnitude of the mass-scale uncertainty resulting from a particular photon energy uncertainty, which may apply only to certain photons (such as barrel photons with ), the energy of the photons in simulated signal events to which the uncertainty applies is shifted by the single-photon uncertainty. The resulting shift of the mean of the diphoton mass distribution in each event class is determined. This shift corresponds to the effect of the single-photon energy uncertainty on the diphoton mass scale and may be different for each event class. The effects of the single-photon uncertainties on the diphoton selection efficiency and the diphoton mass resolution are determined in a similar way.
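The propagation of a single-photon energy-scale shift to the diphoton mass follows from the diphoton mass formula: shifting one photon's energy by a fraction delta moves the mass by approximately delta/2. The energies, opening angle, and shift below are arbitrary example values:

```python
import math

def diphoton_mass(e1, e2, opening_angle):
    """Invariant mass of two photons: m = sqrt(2*E1*E2*(1 - cos(theta)))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

# Shift one photon's energy by +0.3 % (a hypothetical single-photon scale shift):
# the mass scales as sqrt(1 + delta), i.e. moves by roughly delta/2
e1, e2, theta = 60.0, 70.0, 2.0     # energies in GeV, opening angle in radians
delta = 0.003
m0 = diphoton_mass(e1, e2, theta)
m1 = diphoton_mass(e1 * (1 + delta), e2, theta)
rel_shift = m1 / m0 - 1.0           # about 0.0015, i.e. delta/2
```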
The sources of systematic uncertainty for the event classes targeting specific production modes are:
Uncertainties in jet requirements: the largest uncertainty related to the tagging of production processes is a theory uncertainty, concerning the probability of producing additional jets in gluon-fusion Higgs boson production. The Stewart–Tackmann procedure [72], recommended by the LHC Higgs Cross Section Working Group [54], has been used to quantify the uncertainty in the yield of ggH events in the VBF dijet-tagged classes. The resulting uncertainty agrees comfortably with our previous estimate [28], derived by varying the underlying event tunes in ggH events produced by pythia, and that method is retained to estimate the uncertainty associated with additional jet production in the yield of ggH events in the multijet-tagged class. There is a further contribution to the uncertainty in the yield of ggH events in the multijet-tagged class arising from the uncertainty in the probability of gluon splitting to , which is estimated from the discrepancy observed between data and powheg simulation in the fraction of additional b-tagged jets in samples of +jets events, where the pair is identified by the presence of two charged leptons in the final state. Additionally, since few events from the simulated ggH signal samples are selected for the multijet-tagged class, there is a contribution due to the limited sample size. For the VBF dijet-tagged classes, the VH dijet-tagged class, and the multijet-tagged class there is an uncertainty in the effect of the algorithm used to reject jets from pileup (in the 8 TeV dataset only). Further small contributions come from the uncertainties in the jet energy scale and resolution corrections.
Lepton identification efficiency: for both electrons and muons, the uncertainty in the identification efficiency is computed by varying the data/simulation efficiency scale factor by its uncertainty. The resulting differences in the selection efficiency for the event classes tagged by leptons range from 0.2 to 0.5 %, depending on the event category, and are taken as systematic uncertainties.
selection efficiency: systematic uncertainties due to reconstruction are estimated both in signal events in which real is expected (such as in production) and in the other Higgs production mechanisms. For WH events the uncertainty is estimated by applying or not applying the corrections, and taking the resulting difference in efficiency, 2.6 %, as a systematic uncertainty. For the other processes, ggH, VBF, and , what is uncertain is the fraction of events in the tail of the distribution. This is evaluated by comparing diphoton data and simulated events in control samples enriched in +jet events, which have a distribution similar to that of the Higgs boson signal events. The systematic uncertainty amounts to 4 %.
b-tagging efficiency: the uncertainty in the b-tagging efficiency used in the selection for the -tagged classes is evaluated by varying the measured data/simulation b-tagging efficiency scale factors within their uncertainties. The resulting uncertainty in the signal yield is 1.3 % in the lepton-tagged class and 1.1 % in the multijet-tagged class.
Alternative analyses
Three alternative analyses are performed using particular variations of methodology, which help to provide verification of different aspects of the analysis described in the previous sections.
Cut-based analysis
The first of these, the “cut-based” analysis described in Ref. [28], does not use multivariate techniques for selection or classification of events. Photon identification is performed by dividing photons into four mutually exclusive categories depending on whether the photon is in the barrel or endcap, and on whether or not it has . The identification selection requirements are then particular to the category, and use a subset of the discriminating variables that are used in the multivariate photon identification described in Sect. 4.3.
Four mutually exclusive diphoton event classes are constructed by splitting the events according to the same categorization criteria as are used for single photons in the photon identification. Subsequently these four classes are each split according to the transverse momentum of the diphoton system. The four event classes are
- 0: Both photons are in the barrel and have .
- 1: Both photons are in the barrel and at least one of them fails the requirement of .
- 2: At least one photon is in the endcap and both photons have .
- 3: At least one photon is in the endcap and at least one of them fails the requirement .
Photons with a high value of the variable are predominantly unconverted and have a better energy resolution than those with a lower value, and photon candidates with a high value of are also less likely to arise from misidentification of jet fragments. Similarly, photons in the barrel have better energy resolution and are more likely to be signal photons. Thus, the classification serves a similar purpose to the one using the BDT event classifier: events with good diphoton mass resolution, resulting from photons with good energy resolution, and with a better signal-to-background ratio, are grouped together. Each of the four event classes is then split into two according to the transverse momentum of the diphoton system. Since the spectrum resulting from Higgs bosons produced by the VBF, VH, or processes is significantly harder than that of the diphoton background, this separation improves the sensitivity of the analysis by increasing the expected signal-to-background ratio in the high- event classes. The magnitude of the improvement in sensitivity is about 5 %, and depends only weakly on the precise value of the threshold chosen. To avoid modification of the shape of the invariant mass spectrum by the threshold, the classification uses the ratio , with a threshold value of 0.32, corresponding to at .
Event classes tagged by signatures of VBF, VH, and production are also included in the cut-based analysis. The event classes tagged for VH and production are defined in exactly the same way as described in Sect. 6.2, except that the minimum requirements on the diphoton BDT scores are replaced by the cut-based photon identification requirements. A dijet tag is defined to select signal events produced by the VBF process by requiring a pair of jets satisfying requirements on the same variables as are used by the main analysis in the dijet BDT described in Sect. 6.2.1. These selection requirements are listed in Table 4. The tagged events are subdivided into two classes depending on whether they additionally satisfy tighter requirements on the of the second jet and the dijet mass, .
Table 4.

| Variable | Requirement |
|---|---|
| | 0.5 |
| | 25 |
| | 30 |
| | 20 |
| | 3 |
| | 2.5 |
| | 250 |
| | 2.6 |
Signal and background models are constructed in the same way as in the main analysis and are fitted to the distributions. Since this analysis does not use multivariate techniques for event selection or for event classification, it provides some degree of cross-checking on their use in the main analysis.
Sideband background model analysis
The second alternative analysis approach, the “sideband background model” analysis described in Ref. [28], uses the same multivariate techniques as the standard analysis to select the events, but employs a very different procedure to model the background. For any given mass hypothesis, , a signal region is defined as the range centred on . A contiguous set of sidebands is defined in the mass distribution on either side of the signal region, from which the background is extracted. Each sideband is defined to have the same width of relative to the diphoton mass corresponding to its centre. A total of eight sidebands are defined, four on either side of the signal region. Six of them are used to obtain the background estimate; on each side, the sideband adjacent to the signal region is left unused in order to avoid signal contamination.
The result is extracted by counting events in the signal region, in bins that are defined using two-dimensional (2D) distributions of the diphoton BDT score and the diphoton mass in the form , where and is the Higgs boson mass hypothesis. The distributions, for simulated signal and background events, are in the form of histograms, and after applying a smoothing algorithm to them, seven event bins are defined for the untagged events by defining regions ranked by signal-to-background ratio in the 2D plane. For the tagged events, the event bins correspond to the tagged classes described in Sect. 6.2.
The overall normalization of the background model is obtained from a parametric fit to the inclusive mass spectrum, with the signal region excluded from the fit; the small uncertainty associated with the choice of function in this single fit is easily accounted for. The number of events in each event bin is obtained from the data in each of the six sidebands. It is assumed that, for any sideband, the fraction of events in each bin is a linear function of the sideband's central mass, and that there is negligible signal contamination in the sidebands. These assumptions have been verified within the assigned systematic uncertainties. The sideband background analysis does not rely on a parametric fit to the distribution to model the background shape in the signal region, and thus provides a valuable cross-check of the background modelling used in the main analysis.
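The assumed linear dependence of the bin fractions on the sideband central mass can be sketched with a straight-line least-squares fit; the sideband masses and fractions below are invented for illustration:

```python
def fit_line(xs, ys):
    """Least-squares straight line y = a + b*x through the points (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical fraction of sideband events falling in one event bin, for the
# six sidebands used (central masses in GeV); extrapolate to the signal region
centres = [112.0, 116.0, 120.0, 130.0, 134.0, 138.0]
fractions = [0.131, 0.128, 0.126, 0.119, 0.117, 0.114]
a, b = fit_line(centres, fractions)
f_signal = a + b * 125.0   # expected background fraction at the hypothesis mass
```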
Dijet 2D analysis
The third alternative analysis, the “dijet 2D” analysis, uses a different method for extracting the signal produced by the VBF production process. The dijet invariant mass, , of the pair of jets that accompany a Higgs boson produced by the VBF mechanism tends to be larger than that of jet pairs found either in background events or in events produced by the ggH process. The analysis takes advantage of this by extracting the VBF signal with a parametric 2D fit of signal and background in the (, ) plane. The initial selection of events makes a requirement on the photon identification BDT score (Sect. 4.3). Dijet-tagged events are required to satisfy the same requirements as for the VBF dijet tag in the cut-based analysis, shown in Table 4. The invariant mass of the dijet pair is required to satisfy , and the selected events in the 7 and 8 TeV datasets are divided into two and four event classes, respectively, based solely on the estimated diphoton mass resolution. The remaining events, not selected for the VBF dijet-tagged classes, are classified in the same way as in the main analysis. The 2D fit is applied to the events in the dijet-tagged classes using parametric 2D signal and background models; the signal in the other event classes is extracted using a one-dimensional fit to the distribution, as in the main analysis. This analysis offers an alternative approach to extracting the VBF signal, which provides most of the sensitivity in the measurement of vector-boson-initiated production.
Results
Figure 17 shows the m_γγ distribution of the combined data in the 7 and 8 TeV samples, together with the sum of the signal-plus-background fits to the 25 event classes, which results in a best-fit mass of 124.7 GeV. The uncertainty bands shown on the background component of the fit include the uncertainty due to the choice of function and the uncertainty in the fitted parameters. These bands do not contain the Poisson uncertainty, which must be included when the full uncertainty in the number of background events in any given mass range is estimated. The excess of events over the background expectation visible near 125 GeV can be seen more clearly after subtraction of the background component, shown in the lower plot.
Significance of the signal and its strength
The local p-value quantifies the probability for the background to produce a fluctuation as large as, or larger than, the apparent signal observed, within a specified search range and uncorrected for the “look-elsewhere effect” [89]. Figure 18 shows the local p-value, in the mass range 110–150 GeV, calculated separately for the 7 and 8 TeV datasets as well as for their combination. Lines indicating the p-values expected for a SM Higgs boson, for the three cases, are also shown. The values of expected significance have been calculated using the background expectation obtained from the signal-plus-background fit, the so-called post-fit expectation. The post-fit model corresponds to the parametric bootstrap described in the statistics literature [90, 91], and includes information gained in the fit regarding the values of all parameters, including the best-fit mass.
The significance at the minimum of the local p-value, at 124.7 GeV, is 5.7σ, where a local significance of 5.2σ is expected for the SM Higgs boson. To better visualize the excess of events with respect to the background expectation, and its significance, the diphoton mass spectrum is plotted with each event used in the analysis weighted by a factor depending on the event class in which it falls. The weight is proportional to S/(S+B), where S and B are the numbers of expected signal and background events, respectively, counted in a mass window corresponding to ±1σ_eff and centred on the best-fit mass. The background is calculated from the signal-plus-background fit. The motivation for this choice of weights is explained in Ref. [92]. The weighted data, the weighted signal model, and the weighted background model are normalized such that the integral of the weighted signal model matches the number of signal events obtained from the best fit. The resulting distribution, and the corresponding background-subtracted spectrum, are shown in Fig. 19.
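The S/(S+B) weighting can be sketched as follows; the per-class yields S and B are invented for illustration and are not the values used in the paper:

```python
import numpy as np

# Hypothetical per-class expected yields S and B, counted in a +-1 sigma_eff
# window around the best-fit mass; the numbers are invented for illustration.
S = np.array([12.0, 30.0, 8.0, 2.5])
B = np.array([150.0, 900.0, 40.0, 4.0])

weights = S / (S + B)   # weight applied to every event in a given class

# Purer classes count for more in the visualization; the normalization is
# then fixed so the weighted signal integral matches the fitted signal yield.
weighted_signal = weights * S
norm = S.sum() / weighted_signal.sum()
```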
The signal strength is quantified by μ = σ/σ_SM, where σ denotes the production cross section times the relevant branching fractions, relative to the SM expectation. In Fig. 20 the combined best-fit signal strength is shown as a function of the Higgs boson mass hypothesis, both for the standard analysis (top) and for the cut-based analysis (bottom). The two analyses agree well across the entire mass range. In addition to the signal around 125 GeV, both analyses see a small upward fluctuation around 150 GeV, whose maximum local significance occurs slightly beyond the mass range of our analysis.
The best-fit signal strength for the main analysis, when the value of m_H is treated as an unconstrained parameter in the fit, is σ/σ_SM = 1.14 +0.26 −0.23, with the corresponding best-fit mass being 124.7 GeV. The values of the best-fit signal strength, derived separately for the 7 and 8 TeV datasets, are listed in Table 5. The corresponding values for the cut-based analysis and for the sideband background model analysis are shown in Table 6, together with the expected uncertainties and the result of the main analysis.
Table 5.
| m̂_H (GeV) | σ̂/σ_SM
---|---|---
7 TeV | 124.2 |
8 TeV | 124.9 |
Combined | 124.7 |
Table 6.
Expected | Observed | |
---|---|---|
Main analysis | ||
Cut-based analysis | ||
Sideband bkg. model analysis |
The uncertainty in the signal strength may be separated into statistical and systematic contributions, with the latter further divided into those having, or not, a theoretical origin; the statistical contribution includes all uncertainties in the background modelling. The separation of contributions can be taken further: Table 7 lists a finer breakdown of the contributions to the systematic uncertainty, in which the 81 nuisance parameters in the analysis are grouped according to their physical origin, as relevant to the signal strength uncertainty.
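Assuming the contributions combine in quadrature, the separation can be illustrated with hypothetical numbers (not the paper's values):

```python
import math

# Illustrative decomposition assuming the contributions add in quadrature;
# the numbers are hypothetical, not the paper's values.
total = 0.26   # full uncertainty in the signal strength from the nominal fit
stat = 0.21    # uncertainty with all nuisance parameters fixed

syst = math.sqrt(total**2 - stat**2)   # all systematic sources combined

theo = 0.10    # contribution from theory-related nuisance parameters
exp = math.sqrt(syst**2 - theo**2)     # remaining (experimental) part
```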
In Fig. 21 the best-fit signal strength is shown for each event class in the combined 7 and 8 TeV datasets, with the mass fixed in the fits. The horizontal bars indicate the uncertainties in the values, and the vertical line and band indicate the best-fit signal strength in the combined fit to the data and its uncertainty. The signal-plus-background fit for the VH tight-lepton-tagged class in the 7 TeV dataset, when performed alone, does not converge, because this class contains no events in the region of m_γγ where the signal is expected. No value for the signal strength in this class is shown in the figure. The probability of the values for the 24 remaining classes being compatible with the overall best-fit signal strength is 74 %.
Mass measurement
The four main Higgs boson production mechanisms can be associated with either fermion couplings (ggH and ttH) or vector boson couplings (VBF and VH). To make the measurement of the mass of the observed resonance less model dependent, the signal strengths of the production processes involving the Higgs boson coupling to fermions and of those involving the coupling to vector bosons are allowed to vary independently. The two signal strength modifiers are denoted μ_ggH,ttH and μ_VBF,VH. Figure 22 (top) shows the resulting scan of the negative-log-likelihood ratio, defined in Equation 3, as a function of the mass hypothesis, where μ_ggH,ttH and μ_VBF,VH are treated as unconstrained parameters in the fit, giving the mass of the observed boson as m_H = 124.70 ± 0.34 GeV.
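The reading of such a likelihood scan can be sketched with a toy parabolic likelihood; the best-fit value and width below are taken from the quoted combined result purely to set the scale:

```python
import numpy as np

# Toy scan of q(m_H) = -2 ln[ L(m_H) / L_max ], modelled here as a parabola
# purely for illustration; the best-fit mass and width set the scale only.
m_scan = np.linspace(123.0, 126.5, 701)
m_best, sigma = 124.7, 0.34
q = ((m_scan - m_best) / sigma) ** 2

# The 68% CL interval is read off where the scan crosses q = 1.
inside = m_scan[q < 1.0]
interval = (inside.min(), inside.max())
```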
Figure 22 (bottom) shows a map of the value of the negative-log-likelihood ratio in a two-dimensional scan of the (μ, m_H) plane. Here only a single signal strength modifier is allowed to vary, thus requiring μ_ggH,ttH = μ_VBF,VH, and the mass measured is unchanged. If the mass is measured in the 7 and 8 TeV datasets separately, the values are found to be consistent. The uncertainty in the measured mass can be separated into statistical and systematic contributions: m_H = 124.70 ± 0.31 (stat) ± 0.15 (syst) GeV. Systematic uncertainties from theory play a negligible role. However, the effect of interference between the ggH signal and the continuum diphoton background produced via quark loops has not been taken into account. This interference is expected to result in a downward shift of the observed mass [93, 94]. Taking the parameterization given in Ref. [94], we expect a shift of less than 20 MeV in our analysis.
The calibration of the energy scale is achieved using Z → e⁺e⁻ events as a reference, as described in Sect. 4.1. Systematic uncertainties related to individual photons, described in Sect. 9, are propagated to the signal model, where they result in uncertainties in the signal peak position and width. The three main sources of systematic uncertainty in the energy scale that contribute to the uncertainty in the measured mass are shown in Table 8, where the contributions of the 81 nuisance parameters in the analysis are grouped according to their physical origin, as relevant to the mass uncertainty. The largest contributions are due to the possible imperfect simulation of (i) differences in detector response to electrons and photons, arising from a number of factors discussed in Sect. 9, and (ii) the energy scale nonlinearity in the extrapolation from the Z-boson mass to the Higgs boson mass. A further contribution comes from the uncertainties in the setting of the energy scale itself, that is, in the procedure and methodology of using the measured invariant mass in Z → e⁺e⁻ events in which the electron showers are reconstructed as photons. Other sources of systematic uncertainty contribute little.
Additional possible sources of uncertainty have been investigated and found to be negligible: a possible bias related to the choice of background parameterization, which has been studied using pseudo-experiments and found to produce an effect of less than 10 MeV; the effect of the switch of preamplifier when very large signals, in the barrel and in the endcaps, are digitized using a preamplifier with lower gain; and the effect of imperfect simulation of signals from interactions in earlier bunch crossings.
Production mechanisms and coupling modifiers
Figure 23 shows the 1σ and 2σ contours, computed as the variations around the likelihood maximum, for the signal strength modifiers μ_ggH,ttH and μ_VBF,VH. The best-fit values of these signal strength modifiers, when both are allowed to vary and m_H is treated as an unconstrained parameter in the fit, are tabulated in Table 9, together with the expected uncertainty in each signal strength modifier.
Table 9.
Expected | Observed | |
---|---|---|
If the signal strengths of all four production processes are allowed to vary independently in the fit, the values measured for each process are compatible with the expectations for a SM Higgs boson, as shown in Fig. 24. The signal mass, common to all four processes, is treated as an unconstrained parameter in the fit. The horizontal bars indicate the uncertainties in the values. For comparison, the VBF signal strength obtained by the dijet 2D analysis is compatible with the result of the main analysis shown in the plot. Table 10 shows the four signal strengths observed, with the contributions to their uncertainties separated into statistical and systematic components. The systematic uncertainty has been separated, where feasible, into the contributions from theoretical uncertainties and other (experimental) uncertainties.
Table 10.
Process | Total | Stat | Syst (theo) | Syst (exp)
---|---|---|---|---
ggH | 0.34 | 0.30 | 0.13 | 0.09
VBF | 0.73 | 0.69 | 0.20 | 0.15
VH | 0.97 | 0.97 | | 0.08
ttH | 2.2 | 2.1 | | 0.4
Various parameterizations of the couplings can be used to further test the compatibility of the observed new particle with the predictions for a SM Higgs boson [54]. Figure 25 shows two-dimensional likelihood scans of κ_V versus κ_f (top) and κ_γ versus κ_g (bottom). The variables κ_V and κ_f are, respectively, the coupling modifiers of the new particle to vector bosons and to fermions, while κ_γ and κ_g are the effective coupling modifiers to photons and to gluons; all four variables are expressed relative to the SM expectations. For each scan a fixed value of m_H is used, and it has been verified that allowing m_H to vary produces an indistinguishable result. The best-fit points in both planes are compatible with the SM expectation.
Decay width
It is possible to set a limit on the decay width of the observed signal, albeit a limit far in excess of the SM expectation of about 4 MeV for m_H = 125 GeV. To accommodate a nonzero natural width, the Gaussian components used in the signal model of the SM analysis, in which the signal width is assumed to be negligible compared to the detector resolution, are replaced by an analytic convolution of a Breit–Wigner distribution (modelling the decay width) with a Gaussian distribution (modelling the detector resolution).
A profile likelihood estimator is used to calculate upper limits on the width of the observed boson, whilst allowing the Higgs boson mass to vary in the fit. Figure 26 shows a scan of the negative-log-likelihood ratio as a function of the observed particle’s decay width for the combined 7 and 8 TeV dataset. The observed (expected) upper limit on the width is found to be 2.4 (3.1) GeV at a 95 % confidence level (CL).
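The Breit–Wigner ⊗ Gaussian line shape can be sketched numerically; the grid convolution below stands in for the analytic convolution used in the fit, and the width and resolution values are illustrative only:

```python
import numpy as np

# Grid convolution of a Breit-Wigner (natural width Gamma) with a Gaussian
# (detector resolution), standing in for the analytic convolution used in
# the fit. The width and resolution values are illustrative only.
m = np.linspace(115.0, 135.0, 2001)           # GeV, 0.01 GeV spacing
m_h, gamma_h, sigma_det = 125.0, 1.0, 1.5     # GeV

# Unit-normalized (non-relativistic) Breit-Wigner centred on m_h.
bw = (gamma_h / (2.0 * np.pi)) / ((m - m_h) ** 2 + (gamma_h / 2.0) ** 2)

# Gaussian resolution kernel on a symmetric grid, normalized to unit sum.
kernel_x = np.linspace(-10.0, 10.0, 2001)
gauss = np.exp(-0.5 * (kernel_x / sigma_det) ** 2)
gauss /= gauss.sum()

signal_shape = np.convolve(bw, gauss, mode="same")   # BW convolved with Gaussian
```

The convolution broadens the peak and lowers its maximum relative to the bare Breit–Wigner, which is what makes the fitted width sensitive to any excess of the observed peak width over the detector resolution.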
Search for additional Higgs-boson-like states
To search for a possible additional Higgs-boson-like state, X, in the mass range 110–150 GeV, the observed signal around 125 GeV is added to the background model and its mass and signal strength are allowed to vary in the fit. An additional, independent signal model is introduced as a second Higgs boson, for which exclusion limits are calculated using the modified frequentist method and the CLs criterion [95, 96]. In order to set limits for the combined 7 and 8 TeV datasets it is necessary to make an assumption about the ratio of the cross sections of the new state at 7 and 8 TeV. By expressing the limit in terms of the SM cross section times branching fraction we implicitly assume that the ratio is that of the SM. The resulting exclusion limit is shown in Fig. 27. Once sufficiently far from 125 GeV, the limit obtained is the same as when searching for a single SM Higgs boson. The shading indicates a window with a width of 10 GeV, centred at the best-fit mass, where the expected sensitivity to a second Higgs boson is severely degraded by the presence of the already observed state.
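The CLs criterion can be illustrated with toy distributions of a test statistic; the Gaussian toy shapes and the observed value below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of the CLs criterion: compare an observed test statistic
# with its distributions under the signal+background and background-only
# hypotheses. The Gaussian toy distributions and q_obs are arbitrary.
q_sb = rng.normal(0.0, 1.0, 100_000)   # toys under signal + background
q_b = rng.normal(2.0, 1.0, 100_000)    # toys under background only
q_obs = 1.5

cl_sb = np.mean(q_sb >= q_obs)   # p-value of the s+b hypothesis
cl_b = np.mean(q_b >= q_obs)     # p-value of the b-only hypothesis
cls = cl_sb / cl_b               # the CLs ratio

# The signal hypothesis would be excluded at 95% CL if CLs < 0.05.
excluded = cls < 0.05
```

Dividing by CL_b protects against excluding a signal to which the analysis has no real sensitivity, which is why the modified frequentist method is preferred over a plain CL_s+b test in this kind of search.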
A further case of particular interest is a second state that couples only to fermions, as arises, for example, in the alignment limit of some two-Higgs-doublet models [97]. We also examine the case where the second state couples only to bosons at tree level. Figure 28 shows the exclusion limits obtained when the observed signal near 125 GeV is added to the background model, its mass and signal strength are allowed to vary in the fit, and an additional state is produced (top) only by the gluon-fusion process, or (bottom) only by the VBF and VH processes. The limits are given in terms of the SM cross section times branching fraction for those processes. Even for the VBF and VH processes, which have lower cross sections, an additional state with SM-like signal strength is excluded or disfavoured over much of the mass range.
The shaded regions in Figs. 27 and 28, where the expected sensitivity to a second Higgs boson is severely degraded by the presence of the already observed state, are probed by a dedicated search that uses the high resolution of the diphoton channel to provide sensitivity to a pair of states separated by only a few GeV. The signal model is re-parameterized with two signals, having masses m_H and m_H + Δm. The relative strength of the two signals, parameterized by the variable x, is allowed to vary, such that the two signals are modulated by x·μ and (1−x)·μ respectively, where μ is the total signal strength and x is the fraction of signal contained in the state lower in mass. A two-dimensional scan of Δm and x is obtained, while allowing both μ and m_H to vary as free parameters in the fit. Figure 29 shows the expected (upper plot) and observed (lower plot) negative-log-likelihood ratio in the (Δm, x) plane. Sensitivity is expected in regions where Δm is close to, or greater than, the experimental mass resolution and where the two signal strengths are similar. The black cross shows the best-fit value, and the lines correspond to the 1σ and 2σ uncertainty contours for the SM (i.e. a single state). A region of the parameter space is disfavoured at more than 2σ: that where the ratio of the signal strengths is between 0.2 and 0.8 and the mass difference is greater than values ranging between 2.5 and 4 GeV, depending on the ratio of the signal strengths. The somewhat asymmetrical shape of the excluded region, and the position of the best-fit value, reflect the slightly asymmetrical mass peak seen in Fig. 19, which is also visible in the figures showing the local p-value and the exclusion limit as a function of mass.
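The two-state signal model can be sketched as a normalized mixture; the masses, splitting, resolution, and fraction x below are hypothetical:

```python
import numpy as np

def gaussian(m, mean, sigma=1.5):
    """Unit-normalized Gaussian used as a stand-in for the signal shape."""
    return np.exp(-0.5 * ((m - mean) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Two near-degenerate states: mu * [x * S(m_low) + (1 - x) * S(m_low + dm)],
# with x the fraction of signal in the lower-mass state (values illustrative).
m = np.linspace(115.0, 135.0, 2001)
mu, x, m_low, delta_m = 1.0, 0.6, 124.0, 3.0

two_state = mu * (x * gaussian(m, m_low) + (1.0 - x) * gaussian(m, m_low + delta_m))

# The mixture conserves the total yield: it still integrates to mu.
total_yield = two_state.sum() * (m[1] - m[0])
```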
Testing spin hypotheses
The Landau–Yang theorem forbids the direct decay of a spin-1 particle into a pair of photons [98, 99]. However, it is of interest to compare the hypothesis of a spin-2 “graviton-like” model with minimal couplings, 2⁺_m [55], to that of the spin-0 SM-Higgs-boson-like model, 0⁺. As the 2⁺_m model is just one of many possible realizations of the spin-2 tensor structure, an attempt has been made to make the analysis as model independent as possible. Tests have been performed for hypotheses in which the resonance is produced entirely by gluon fusion (gg), in which it is produced entirely by quark-antiquark annihilation (qq̄), and for cases in which it is produced by a mixture of the two processes. The cosine of the scattering angle in the Collins–Soper frame, cos θ*_CS [100], is used to discriminate between the two hypotheses. The angle is defined, in the diphoton rest frame, as that between the collinear photons and the line that bisects the acute angle between the colliding protons:
\[ \cos\theta^{*}_{\mathrm{CS}} \;=\; \frac{2\left(E^{\gamma_2}\, p_z^{\gamma_1} - E^{\gamma_1}\, p_z^{\gamma_2}\right)}{m_{\gamma\gamma}\,\sqrt{m_{\gamma\gamma}^{2} + \left(p_{\mathrm{T}}^{\gamma\gamma}\right)^{2}}} \qquad (4) \]
where E^γ1 and E^γ2 are the energies of the leading and subleading photons, p_z^γ1 and p_z^γ2 are the z components of their momenta, and m_γγ and p_T^γγ are the invariant mass and transverse momentum of the diphoton system. In the rest frame of a spin-0 boson the decay photons are isotropic, so, before the acceptance requirements, the distribution of |cos θ*| is uniform under the 0⁺ hypothesis. In general this is not the case for the decay of a spin-2 particle.
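A minimal sketch of this computation from the photon four-momenta, assuming the standard Collins–Soper expression for massless photons (the function name and the (px, py, pz, E) tuple convention are illustrative):

```python
import math

# Collins-Soper cosine for two massless photons, with four-momenta given
# as (px, py, pz, E) tuples.
def cos_theta_star_cs(p1, p2):
    px1, py1, pz1, e1 = p1
    px2, py2, pz2, e2 = p2
    px, py, pz, e = px1 + px2, py1 + py2, pz1 + pz2, e1 + e2
    m2 = e * e - px * px - py * py - pz * pz   # diphoton invariant mass squared
    pt2 = px * px + py * py                    # diphoton transverse momentum squared
    return 2.0 * (e2 * pz1 - e1 * pz2) / (math.sqrt(m2) * math.sqrt(m2 + pt2))
```

For a diphoton system at rest this reduces to the ordinary polar cosine of the decay axis: photons emitted along the beam axis give ±1 and photons emitted transversely give 0.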
To increase the sensitivity, the events are categorized using the same four diphoton event classes used in the cut-based analysis, described in Sect. 10.1, but without the additional classification based on p_T^γγ used there. Within each diphoton class, the events are binned in |cos θ*| to discriminate between the different spin hypotheses. The events are thus split into 20 event classes, four diphoton classes with five |cos θ*| bins each, for both the 7 and 8 TeV datasets, giving a total of 40 event classes.
Although the acceptance times efficiency, A × ε, varies considerably as a function of |cos θ*|, this variation is, for gluon-fusion production, independent of the spin-parity models tested. This is also true in the restricted ranges of pseudorapidity and R9 defined by the diphoton classes, which allows the extraction of the signal yield in bins of |cos θ*| in a reasonably model-independent way. Figure 30 shows A × ε for 0⁺ (all SM production modes), 2⁺_m (gluon fusion), and 2⁺_m (qq̄ production) as a function of |cos θ*|, as calculated for the 8 TeV dataset. The bin boundaries are shown by vertical dashed lines. The value of A × ε for the 2⁺_m models divided by A × ε for the SM is shown below, where the bands indicate the spread of values among the four diphoton classes. The ratio is flat, independent of |cos θ*|, except at the highest values, where the relative contribution from SM VBF production is significant. The events in the region where the ratio falls from its flat level are collected in a separate bin, and the bin boundaries for the remaining events are chosen to maintain approximately the same event yield in each bin.
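The equal-yield binning can be sketched with quantiles; the fixed upper boundary at 0.75 and the uniform toy event sample below are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical |cos(theta*)| values for selected events. The highest bin is
# fixed (here at 0.75, an assumed boundary) and the remaining range is split
# into four bins holding roughly equal numbers of events via quantiles.
cts = rng.uniform(0.0, 1.0, 10_000)

fixed_edge = 0.75
low = cts[cts < fixed_edge]
inner_edges = np.quantile(low, [0.25, 0.5, 0.75])
bin_edges = np.concatenate([[0.0], inner_edges, [fixed_edge, 1.0]])

counts, _ = np.histogram(cts, bins=bin_edges)
```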
Figure 31 shows histograms of the expected signal strength, relative to the SM expectation, in the five |cos θ*| bins for the SM 0⁺ model and for two 2⁺_m models: one in which the resonance is produced entirely by gluon fusion (gg), and one in which it is produced entirely by quark-antiquark annihilation (qq̄). The expected values in the five bins are obtained by constructing a representative pseudo-data model in which the overall signal strength has been set to that obtained by fitting the model in question, plus background, to the data. When generating pseudo-experiments for a particular model, the values of all the free parameters, including the signal nuisance parameters, the background shape parameters, and the overall signal strength, are set to their best-fit values obtained by fitting the model in question to the data with a single overall signal strength. The post-fit expected value of the signal strength for the SM signal model is thus that observed when simultaneously fitting the 40 event classes with a single signal strength. The observed values in the five bins shown in the figure are obtained from a simultaneous fit of the SM-signal-plus-background model to the 40 event classes, with five signal strength variables (one for each bin) and a common mass allowed to vary.
The separation between the two models is extracted using a test statistic defined as twice the negative logarithm of the ratio of the likelihoods for the 2⁺_m signal-plus-background hypothesis and the 0⁺ signal-plus-background hypothesis, obtained from a simultaneous fit of all forty event classes together: q = −2 ln(L_{2⁺_m}/L_{0⁺}). The test is made under the assumption that the state is produced entirely by gluon fusion, or entirely by quark-antiquark annihilation, or by three intermediate mixtures of gg- and qq̄-produced spin-2 states. The fraction of the spin-2 state produced by qq̄ annihilation is parameterized by the variable f_qq̄, so that the total signal plus background, S + B, is given by
\[ S + B \;=\; \mu \left[ f_{q\bar{q}}\, S_{q\bar{q}} + \left(1 - f_{q\bar{q}}\right) S_{gg} \right] + B \qquad (5) \]
where S_gg is the gg-produced signal, S_qq̄ the qq̄-produced signal, μ is a signal strength modifier, and B is the background. Figure 32 shows the values of the test statistic as a function of f_qq̄. Table 11 gives the expected and observed values of 1 − CL_s, which measures the extent to which the spin-2 model is disfavoured, for different values of f_qq̄. The hypothesis of the signal being 2⁺_m is disfavoured for all values of f_qq̄ tested. When the state is produced entirely by gluon fusion, the hypothesis is disfavoured with a value of 94 % (92 % expected). When produced entirely by qq̄ annihilation, it is disfavoured with a value of 85 % (83 % expected). Intermediate mixtures, where there is less sensitivity to distinguish between the models, are somewhat less disfavoured.
Table 11.
f_qq̄ | Expected | Observed
---|---|---
0 | 0.92 | 0.94 |
0.25 | 0.78 | 0.83 |
0.50 | 0.64 | 0.71 |
0.75 | 0.69 | 0.75 |
1 | 0.83 | 0.85 |
Summary
We report the observation of the diphoton decay mode of the recently discovered Higgs boson and measurements of some of its properties. The analysis uses the entire dataset collected by the CMS experiment in proton-proton collisions during the 2011 and 2012 LHC running periods. The data samples correspond to integrated luminosities of 5.1 fb⁻¹ at √s = 7 TeV and 19.7 fb⁻¹ at 8 TeV. The selected events are subdivided into classes designed to enhance the overall sensitivity and to increase the sensitivity to individual Higgs boson production mechanisms, and the results of the search in all classes are reported.
A clear signal is observed in the diphoton channel at a mass of 124.7 GeV with a local significance of 5.7σ, where a significance of 5.2σ is expected for the standard model Higgs boson. The mass is measured to be 124.70 ± 0.34 GeV, and the best-fit signal strength relative to the standard model prediction is 1.14 +0.26 −0.23. The best-fit values of the signal strength modifiers associated with the ggH and ttH production mechanisms, and with the VBF and VH mechanisms, are both found to be compatible with the standard model expectation.
A direct upper limit on the natural width of the state is set at 2.4 GeV (3.1 GeV expected) at a 95 % confidence level, and additional SM-like Higgs bosons are excluded at a 95 % confidence level in a large fraction of the mass range between 110 and 150 GeV. The SM spin-0 hypothesis for the observed state is compared to a graviton-like spin-2 hypothesis with minimal couplings. The hypothesis of the signal being 2⁺_m is disfavoured; when the state is produced entirely by gluon fusion, the hypothesis is disfavoured with a value of 94 % (92 % expected).
All the results are compatible with the expectations from a standard model Higgs boson.
Acknowledgments
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucléaire et de Physique des Particules / CNRS, and Commissariat à l’Énergie Atomique et aux Énergies Alternatives / CEA, France; the Bundesministerium für Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Innovation Office, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the 
Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Korean Ministry of Education, Science and Technology and the World Class University program of NRF, Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding Agencies (CINVESTAV, CONACYT, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Fundação para a Ciência e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretaría de Estado de Investigación, Desarrollo e Innovación and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation. Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. 
Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l’Industrie et dans l’Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of Foundation for Polish Science, cofinanced from European Union, Regional Development Fund; the Compagnia di San Paolo (Torino); and the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF.
References
- 1.ATLAS Collaboration, Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. Phys. Lett. B 716, 1 (2012). doi:10.1016/j.physletb.2012.08.020. arXiv:1207.7214
- 2.CMS Collaboration, Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC. Phys. Lett. B 716, 30 (2012). doi:10.1016/j.physletb.2012.08.021. arXiv:1207.7235
- 3.S.L. Glashow, Partial-symmetries of weak interactions. Nucl. Phys. 22, 579 (1961). doi:10.1016/0029-5582(61)90469-2
- 4.S. Weinberg, A model of leptons. Phys. Rev. Lett. 19, 1264 (1967). doi:10.1103/PhysRevLett.19.1264
- 5.A. Salam, in Weak and Electromagnetic Interactions. ed. by N. Svartholm. Elementary Particle Physics: Relativistic Groups And Analyticity. Proceedings Of The Eighth Nobel Symposium. Almqvist & Wiskell, Stockholm (1968), p. 367
- 6.F. Englert, R. Brout, Broken symmetry and the mass of gauge vector mesons. Phys. Rev. Lett. 13, 321 (1964). doi:10.1103/PhysRevLett.13.321
- 7.P.W. Higgs, Broken symmetries, massless particles and gauge fields. Phys. Lett. 12, 132 (1964). doi:10.1016/0031-9163(64)91136-9
- 8.P.W. Higgs, Broken symmetries and the masses of gauge bosons. Phys. Rev. Lett. 13, 508 (1964). doi:10.1103/PhysRevLett.13.508
- 9.G.S. Guralnik, C.R. Hagen, T.W.B. Kibble, Global conservation laws and massless particles. Phys. Rev. Lett. 13, 585 (1964). doi:10.1103/PhysRevLett.13.585
- 10.P.W. Higgs, Spontaneous symmetry breakdown without massless bosons. Phys. Rev. 145, 1156 (1966). doi:10.1103/PhysRev.145.1156
- 11.T.W.B. Kibble, Symmetry breaking in non-Abelian gauge theories. Phys. Rev. 155, 1554 (1967). doi:10.1103/PhysRev.155.1554
- 12.CMS Collaboration, Search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to bottom quarks. Phys. Rev. D 89, 012003 (2014). doi:10.1103/PhysRevD.89.012003. arXiv:1310.3687
- 13.CMS Collaboration, Measurement of Higgs boson production and properties in the WW decay channel with leptonic final states. JHEP 01, 096 (2014). doi:10.1007/JHEP01(2014)096. arXiv:1312.1129
- 14.CMS Collaboration, Measurement of the properties of a Higgs boson in the four-lepton final state. Phys. Rev. D 89, 092007 (2014). doi:10.1103/PhysRevD.89.092007. arXiv:1312.5353
- 15.CMS Collaboration, Evidence for the 125 GeV Higgs boson decaying to a pair of τ leptons. JHEP 05, 104 (2014). doi:10.1007/JHEP05(2014)104. arXiv:1401.5041
- 16.CMS Collaboration, Search for the standard model Higgs boson produced in association with a top-quark pair in pp collisions at the LHC. JHEP 05, 145 (2013). doi:10.1007/JHEP05(2013)145. arXiv:1303.0763
- 17.CMS Collaboration, Study of the mass and spin-parity of the Higgs boson candidate via its decays to Z boson pairs. Phys. Rev. Lett. 110, 081803 (2013). doi:10.1103/PhysRevLett.110.081803. arXiv:1212.6639
- 18.CMS Collaboration, Search for a Higgs boson decaying into a Z and a photon in pp collisions at √s = 7 and 8 TeV. Phys. Lett. B 726, 587 (2013). doi:10.1016/j.physletb.2013.09.057. arXiv:1307.5515
- 19.CMS Collaboration, Search for invisible decays of Higgs bosons in the vector boson fusion and associated ZH production modes. Eur. Phys. J. C. (2014, submitted). arXiv:1404.1344
- 20.ATLAS Collaboration, Measurements of Higgs boson production and couplings in diboson final states with the ATLAS detector at the LHC. Phys. Lett. B 726, 88 (2013). doi:10.1016/j.physletb.2013.08.010. arXiv:1307.1427
- 21.ATLAS Collaboration, Evidence for the spin-0 nature of the Higgs boson using ATLAS data. Phys. Lett. B 726, 120 (2013). doi:10.1016/j.physletb.2013.08.026. arXiv:1307.1432
- 22.ATLAS Collaboration, Measurement of the Higgs boson mass from the H → γγ and H → ZZ* → 4ℓ channels with the ATLAS detector using 25 fb⁻¹ of pp collision data. Phys. Rev. D. (2014, submitted). arXiv:1406.3827
- 23.ATLAS Collaboration, Search for Higgs boson decays to a photon and a Z boson in pp collisions at √s = 7 and 8 TeV with the ATLAS detector. Phys. Lett. B 732, 8 (2014). doi:10.1016/j.physletb.2014.03.015. arXiv:1402.3051
- 24.ATLAS Collaboration, Search for invisible decays of a Higgs boson produced in association with a Z boson in ATLAS. Phys. Rev. Lett. 112, 201802 (2014). doi:10.1103/PhysRevLett.112.201802. arXiv:1402.3244
- 25.ATLAS Collaboration, Measurement of Higgs boson production in the diphoton decay channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Phys. Rev. D. (2014, submitted). arXiv:1408.7084
- 26. S. Actis, G. Passarino, C. Sturm, S. Uccirati, NNLO computational techniques: the cases H → γγ and H → gg. Nucl. Phys. B 811, 182 (2009). doi:10.1016/j.nuclphysb.2008.11.024
- 27. CMS Collaboration, Search for the standard model Higgs boson decaying into two photons in pp collisions at √s = 7 TeV. Phys. Lett. B 710, 403 (2012). doi:10.1016/j.physletb.2012.03.003. arXiv:1202.1487
- 28. CMS Collaboration, Observation of a new boson with mass near 125 GeV in pp collisions at √s = 7 and 8 TeV. JHEP 06, 081 (2013). doi:10.1007/JHEP06(2013)081. arXiv:1303.4571
- 29. H.M. Georgi, S.L. Glashow, M.E. Machacek, D.V. Nanopoulos, Higgs bosons from two-gluon annihilation in proton–proton collisions. Phys. Rev. Lett. 40, 692 (1978). doi:10.1103/PhysRevLett.40.692
- 30. R.N. Cahn, S.D. Ellis, R. Kleiss, W.J. Stirling, Transverse momentum signatures for heavy Higgs bosons. Phys. Rev. D 35, 1626 (1987). doi:10.1103/PhysRevD.35.1626
- 31. S.L. Glashow, D.V. Nanopoulos, A. Yildiz, Associated production of Higgs bosons and Z particles. Phys. Rev. D 18, 1724 (1978). doi:10.1103/PhysRevD.18.1724
- 32. R. Raitio, W.W. Wada, Higgs-boson production at large transverse momentum in quantum chromodynamics. Phys. Rev. D 19, 941 (1979). doi:10.1103/PhysRevD.19.941
- 33. Z. Kunszt, Associated production of heavy Higgs boson with top quarks. Nucl. Phys. B 247, 339 (1984). doi:10.1016/0550-3213(84)90553-4
- 34. CMS Collaboration, Energy calibration and resolution of the CMS electromagnetic calorimeter in pp collisions at √s = 7 TeV. JINST 8, P09009 (2013). doi:10.1088/1748-0221/8/09/P09009. arXiv:1306.2016
- 35. CMS Collaboration, The CMS experiment at the CERN LHC. JINST 3, S08004 (2008). doi:10.1088/1748-0221/3/08/S08004
- 36. CMS Collaboration, Particle-flow event reconstruction in CMS and performance for jets, taus, and MET. CMS Phys. Anal. Summ. CMS-PAS-PFT-09-001 (2009)
- 37. CMS Collaboration, Commissioning of the particle-flow event reconstruction with the first LHC collisions recorded in the CMS detector. CMS Phys. Anal. Summ. CMS-PAS-PFT-10-001 (2010)
- 38. M. Cacciari, G.P. Salam, G. Soyez, The anti-k_t jet clustering algorithm. JHEP 04, 063 (2008). doi:10.1088/1126-6708/2008/04/063
- 39. CMS Collaboration, Determination of jet energy calibration and transverse momentum resolution in CMS. JINST 6, P11002 (2011). doi:10.1088/1748-0221/6/11/P11002
- 40. CMS Collaboration, Identification of b-quark jets with the CMS experiment. JINST 8, P04013 (2013). doi:10.1088/1748-0221/8/04/P04013. arXiv:1211.4462
- 41. S. Agostinelli et al. (GEANT4 Collaboration), GEANT4—a simulation toolkit. Nucl. Instrum. Meth. A 506, 250 (2003). doi:10.1016/S0168-9002(03)01368-8
- 42. T. Sjöstrand, S. Mrenna, P.Z. Skands, PYTHIA 6.4 physics and manual. JHEP 05, 026 (2006). doi:10.1088/1126-6708/2006/05/026. arXiv:hep-ph/0603175
- 43. CMS Collaboration, Measurement of the underlying event activity at the LHC with √s = 7 TeV and comparison with √s = 0.9 TeV. JHEP 09, 109 (2011). doi:10.1007/JHEP09(2011)109. arXiv:1107.0330
- 44. P. Nason, A new method for combining NLO QCD with shower Monte Carlo algorithms. JHEP 11, 040 (2004). doi:10.1088/1126-6708/2004/11/040
- 45. S. Frixione, P. Nason, C. Oleari, Matching NLO QCD computations with parton shower simulations: the POWHEG method. JHEP 11, 070 (2007). doi:10.1088/1126-6708/2007/11/070
- 46. S. Alioli, P. Nason, C. Oleari, E. Re, A general framework for implementing NLO calculations in shower Monte Carlo programs: the POWHEG BOX. JHEP 06, 043 (2010). doi:10.1007/JHEP06(2010)043
- 47. S. Alioli, P. Nason, C. Oleari, E. Re, NLO Higgs boson production via gluon fusion matched with shower in POWHEG. JHEP 04, 002 (2009). doi:10.1088/1126-6708/2009/04/002
- 48. P. Nason, C. Oleari, NLO Higgs boson production via vector-boson fusion matched with shower in POWHEG. JHEP 02, 037 (2010). doi:10.1007/JHEP02(2010)037
- 49. G. Bozzi, S. Catani, D. de Florian, M. Grazzini, The q_T spectrum of the Higgs boson at the LHC in QCD perturbation theory. Phys. Lett. B 564, 65 (2003). doi:10.1016/S0370-2693(03)00656-7
- 50. G. Bozzi, S. Catani, D. de Florian, M. Grazzini, Transverse-momentum resummation and the spectrum of the Higgs boson at the LHC. Nucl. Phys. B 737, 73 (2006). doi:10.1016/j.nuclphysb.2005.12.022
- 51. D. de Florian, G. Ferrera, M. Grazzini, D. Tommasini, Transverse-momentum resummation: Higgs boson production at the Tevatron and the LHC. JHEP 11, 064 (2011). doi:10.1007/JHEP11(2011)064
- 52. LHC Higgs Cross Section Working Group, Handbook of LHC Higgs Cross Sections: 2. Differential Distributions. CERN Report CERN-2012-002 (2012). doi:10.5170/CERN-2012-002. arXiv:1201.3084
- 53. L.J. Dixon, M.S. Siu, Resonance-continuum interference in the diphoton Higgs signal at the LHC. Phys. Rev. Lett. 90, 252001 (2003). doi:10.1103/PhysRevLett.90.252001
- 54. LHC Higgs Cross Section Working Group, Handbook of LHC Higgs Cross Sections: 3. Higgs Properties. CERN Report CERN-2013-004 (2013). doi:10.5170/CERN-2013-004. arXiv:1307.1347
- 55. Y. Gao et al., Spin determination of single-produced resonances at hadron colliders. Phys. Rev. D 81, 075022 (2010). doi:10.1103/PhysRevD.81.075022
- 56. S. Bolognesi et al., On the spin and parity of a single-produced resonance at the LHC. Phys. Rev. D 86, 095031 (2012). doi:10.1103/PhysRevD.86.095031
- 57. E. Re, NLO corrections merged with parton showers for Z+2 jets production using the POWHEG method. JHEP 10, 031 (2012). doi:10.1007/JHEP10(2012)031
- 58. J. Alwall et al., MadGraph 5: going beyond. JHEP 06, 128 (2011). doi:10.1007/JHEP06(2011)128
- 59. T. Gleisberg et al., Event generation with SHERPA 1.1. JHEP 02, 007 (2009). doi:10.1088/1126-6708/2009/02/007. arXiv:0811.4622
- 60. CMS Collaboration, Measurement of the production cross section for pairs of isolated photons in pp collisions at √s = 7 TeV. JHEP 01, 133 (2012). doi:10.1007/JHEP01(2012)133. arXiv:1110.6461
- 61. CMS Collaboration, Measurement of the differential dijet production cross section in proton–proton collisions at √s = 7 TeV. Phys. Lett. B 700, 187 (2011). doi:10.1016/j.physletb.2011.05.027. arXiv:1104.1693
- 62. M. Oreglia, A study of the reactions ψ′ → γγψ. PhD thesis, Stanford University (1980). SLAC Report SLAC-R-236
- 63. M. Cacciari, G.P. Salam, Pileup subtraction using jet areas. Phys. Lett. B 659, 119 (2008). doi:10.1016/j.physletb.2007.09.077
- 64. CMS Collaboration, Measurement of the inclusive W and Z production cross sections in pp collisions at √s = 7 TeV with the CMS experiment. JHEP 10, 132 (2011). doi:10.1007/JHEP10(2011)132. arXiv:1107.4789
- 65. H. Voss, A. Höcker, J. Stelzer, F. Tegenfeldt, TMVA: Toolkit for Multivariate Data Analysis with ROOT, in XIth International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT) (2007), p. 40. arXiv:physics/0703039
- 66. R.N. Cahn, S. Dawson, Production of very massive Higgs bosons. Phys. Lett. B 136, 196 (1984). doi:10.1016/0370-2693(84)91180-8
- 67. G. Altarelli, B. Mele, F. Pitolli, Heavy Higgs production at future colliders. Nucl. Phys. B 287, 205 (1987). doi:10.1016/0550-3213(87)90103-9
- 68. M. Cacciari, G.P. Salam, G. Soyez, The catchment area of jets. JHEP 04, 005 (2008). doi:10.1088/1126-6708/2008/04/005
- 69. M. Cacciari, G.P. Salam, G. Soyez, FastJet user manual (2011). arXiv:1111.6097
- 70. CMS Collaboration, Pileup jet identification. CMS Phys. Anal. Summ. CMS-PAS-JME-13-005 (2013)
- 71. D.L. Rainwater, R. Szalapski, D. Zeppenfeld, Probing color singlet exchange in Z + two jet events at the CERN LHC. Phys. Rev. D 54, 6680 (1996). doi:10.1103/PhysRevD.54.6680
- 72. I.W. Stewart, F.J. Tackmann, Theory uncertainties for Higgs mass and other searches using jet bins. Phys. Rev. D 85, 034011 (2012). doi:10.1103/PhysRevD.85.034011
- 73. CMS Collaboration, Studies of tracker material. CMS Phys. Anal. Summ. CMS-PAS-TRK-10-003 (2010)
- 74. W. Verkerke, D.P. Kirkby, The RooFit toolkit for data modeling, in Proceedings, 13th International Conference on Computing in High-Energy and Nuclear Physics (CHEP 2003). SLAC-R-636 (2003). arXiv:physics/0306116
- 75. ATLAS and CMS Collaborations, LHC Higgs Combination Group, Procedure for the LHC Higgs boson search combination in Summer 2011. Technical Report ATL-PHYS-PUB-2011-11, CMS-NOTE-2011/005, CERN (2011)
- 76. CMS Collaboration, Combined results of searches for the standard model Higgs boson in pp collisions at √s = 7 TeV. Phys. Lett. B 710, 26 (2012). doi:10.1016/j.physletb.2012.02.064. arXiv:1202.1488
- 77. G. Cowan, K. Cranmer, E. Gross, O. Vitells, Asymptotic formulae for likelihood-based tests of new physics. Eur. Phys. J. C 71, 1554 (2011). doi:10.1140/epjc/s10052-011-1554-0
- 78. L. Moneta et al., The RooStats project, in 13th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2010). SISSA (2010). http://pos.sissa.it/archive/conferences/093/057/ACAT2010_057.pdf. arXiv:1009.1003
- 79. P.D. Dauncey, M. Kenzie, N. Wardle, G.J. Davies, Handling uncertainties in background shapes: the discrete profiling method (2014, to be submitted). arXiv:1408.6865
- 80. H. Akaike, A new look at the statistical model identification. IEEE Trans. Autom. Control 19, 716 (1974). doi:10.1109/TAC.1974.1100705
- 81. E.W. Weisstein, F-distribution (2014). From MathWorld—a Wolfram web resource
- 82. F. Garwood, Fiducial limits for the Poisson distribution. Biometrika 28, 437 (1936)
- 83. LHC Higgs Cross Section Working Group, Handbook of LHC Higgs Cross Sections: 1. Inclusive Observables. CERN Report CERN-2011-002 (2011). doi:10.5170/CERN-2011-002. arXiv:1101.0593
- 84. CMS Collaboration, Absolute calibration of the luminosity measurement at CMS: Winter 2012 update. CMS Phys. Anal. Summ. CMS-PAS-SMP-12-008 (2012)
- 85. CMS Collaboration, CMS luminosity based on pixel cluster counting—Summer 2013 update. CMS Phys. Anal. Summ. CMS-PAS-LUM-13-001 (2013)
- 86. R. Paramatti (CMS ECAL group), Crystal properties in the electromagnetic calorimeter of CMS. AIP Conf. Proc. 867, 245 (2006). doi:10.1063/1.2396960
- 87. E. Auffray, Overview of the 63000 PWO barrel crystals for CMS ECAL production. IEEE Trans. Nucl. Sci. 55, 1314 (2008). doi:10.1109/TNS.2007.913935
- 88. S.M. Seltzer, M.J. Berger, Transmission and reflection of electrons by foils. Nucl. Instrum. Meth. 119, 157 (1974). doi:10.1016/0029-554X(74)90747-2
- 89. E. Gross, O. Vitells, Trial factors for the look elsewhere effect in high energy physics. Eur. Phys. J. C 70, 525 (2010). doi:10.1140/epjc/s10052-010-1470-8
- 90. B. Efron, Bootstrap methods: another look at the jackknife. Ann. Stat. 7, 1 (1979). doi:10.1214/aos/1176344552. See "Remark K"
- 91. S.M.S. Lee, G.A. Young, Parametric bootstrapping with nuisance parameters. Stat. Prob. Lett. 71, 143 (2005). doi:10.1016/j.spl.2004.10.026
- 92. R. Barlow, Event classification using weighting methods. J. Comp. Phys. 72, 202 (1987). doi:10.1016/0021-9991(87)90078-7
- 93. S.P. Martin, Shift in the LHC Higgs diphoton mass peak from interference with background. Phys. Rev. D 86, 073016 (2012). doi:10.1103/PhysRevD.86.073016
- 94. L.J. Dixon, Y. Li, Bounding the Higgs boson width through interferometry. Phys. Rev. Lett. 111, 111802 (2013). doi:10.1103/PhysRevLett.111.111802
- 95. T. Junk, Confidence level computation for combining searches with small statistics. Nucl. Instrum. Meth. A 434, 435 (1999). doi:10.1016/S0168-9002(99)00498-2
- 96. A.L. Read, Presentation of search results: the CLs technique. J. Phys. G 28, 2693 (2002). doi:10.1088/0954-3899/28/10/313
- 97. G.C. Branco et al., Theory and phenomenology of two-Higgs-doublet models. Phys. Rep. 516, 1 (2012). doi:10.1016/j.physrep.2012.02.002
- 98. L.D. Landau, On the angular momentum of a two-photon system. Dokl. Akad. Nauk Ser. Fiz. 60, 207 (1948)
- 99. C.N. Yang, Selection rules for the dematerialization of a particle into two photons. Phys. Rev. 77, 242 (1950). doi:10.1103/PhysRev.77.242
- 100. J.C. Collins, D.E. Soper, Angular distribution of dileptons in high-energy hadron collisions. Phys. Rev. D 16, 2219 (1977). doi:10.1103/PhysRevD.16.2219