We present a measurement of the top quark mass using a data sample with an integrated luminosity of 1.8 fb^-1. We use events where both of the W bosons from ttbar decay leptonically. We observe 124 candidate events. Events are reconstructed using the Neutrino Weighting Algorithm to obtain the reconstructed top mass m_t^reco. Probability density functions for m_t^reco are constructed using kernel density estimation (KDE) for a set of Pythia Monte Carlo samples. We use local polynomial smoothing (LPS) to obtain the value of the probability density function for an arbitrary top mass on a per-event basis. We measure 172.0 +5.0 -4.9 (stat) +/- 3.6 (syst) GeV/c^2 = 172.0 +6.1 -6.0 GeV/c^2.
We design the selection to accept ttbar events where both W bosons decay into an electron or muon and a neutrino. We use the W+jets dataset, which is triggered on a central electron or central muon. The selection criteria are summarised as follows:
- Two leptons (e or mu) with pT > 20 GeV; at least one lepton has to be isolated
- Two jets with transverse energy > 15 GeV; jets are corrected for differences in response between calorimeter regions and for calorimeter nonlinearities
- Missing transverse energy > 25 GeV
- Z-veto incorporating a missing-ET-significance cut
- Missing ET > 50 GeV if a lepton is closer than 20° in azimuth to the missing ET vector
- HT > 200 GeV
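As an illustration, the cuts above can be sketched as a simple event filter. The dict-based event record and its field names are hypothetical, the Z-veto with its missing-ET-significance criterion is omitted for brevity, and HT is taken here as the scalar sum of lepton pT, jet ET, and missing ET (an assumption, not a definition from this note):

```python
import math

def passes_selection(ev):
    """Sketch of the dilepton selection on a hypothetical event record."""
    leptons = [l for l in ev["leptons"] if l["pt"] > 20.0]
    if len(leptons) != 2 or not any(l["isolated"] for l in leptons):
        return False
    jets = [j for j in ev["jets"] if j["et"] > 15.0]  # corrected jet ET
    if len(jets) < 2:
        return False
    met, met_phi = ev["met"], ev["met_phi"]
    if met < 25.0:
        return False
    # Tighter MET cut if any lepton is within 20 degrees in azimuth of the MET
    for l in leptons:
        dphi = abs((l["phi"] - met_phi + math.pi) % (2.0 * math.pi) - math.pi)
        if math.degrees(dphi) < 20.0 and met < 50.0:
            return False
    # HT: scalar sum of lepton pT, jet ET, and MET (assumed definition)
    ht = sum(l["pt"] for l in leptons) + sum(j["et"] for j in jets) + met
    return ht > 200.0
```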
We use the Neutrino Weighting Algorithm (NWA) to reconstruct events. In the dilepton channel there are not enough measured quantities to fully constrain the event, due to the presence of two neutrinos in the final state. We therefore integrate over the neutrino pseudorapidities, taking their distribution from the Monte Carlo simulation. The algorithm proceeds as follows:
- Assume a value of the top mass.
- Choose a particular jet to b-quark assignment (there are two possibilities).
- Assume the neutrino pseudorapidities.
- Using the world-average masses of the W boson, b quark, and leptons, solve for the px and py of each neutrino. Solutions might not exist for the assumed values of the top quark mass and η; when a solution exists, there are two solutions for each neutrino.
- Form four weights by comparing each combination of solutions to the measured missing transverse energy with a Gaussian weight. Since the correct combination is not known, sum the four weights.
- Integrate over η1 and η2, obtaining the weight for the assumed top mass. The integration distribution for the neutrino pseudorapidities is taken from the ttbar Monte Carlo and is a Gaussian with width approximately 1. The integration is performed by summing over a grid of η values with 0.2 spacing.
- Obtain the weight corresponding to the other jet to b-quark assignment.
- Sum the two weights. This weight quantifies the probability that the true top mass is the assumed one.
- Scan the top mass in steps of 3 GeV.
- Find the maximum weight, as well as the maximum weights of the two jet to b-quark assignments separately.
- Repeat the scan successively around the maxima until a step size of 0.03 GeV is reached.
- The assumed top mass which yields the highest weight is taken as the reconstructed top mass m_t^reco.
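A minimal numeric sketch of the Gaussian comparison to the measured missing ET and the sum over solution combinations, assuming the kinematic (px, py) solutions for each neutrino have already been obtained; the solver itself and the MET resolution value used here are not specified in this note:

```python
import numpy as np

def met_weight(met_x, met_y, nu1, nu2, sigma_met=15.0):
    """Gaussian weight comparing one combination of neutrino (px, py)
    solutions to the measured missing transverse energy.
    sigma_met is an assumed MET resolution, not a value from the note."""
    dx = met_x - (nu1[0] + nu2[0])
    dy = met_y - (nu1[1] + nu2[1])
    return float(np.exp(-(dx**2 + dy**2) / (2.0 * sigma_met**2)))

def nwa_weight(met_x, met_y, solutions1, solutions2):
    """Sum the Gaussian weights over all (up to four) combinations of the
    two neutrinos' solutions, since the correct pairing is unknown."""
    return sum(met_weight(met_x, met_y, s1, s2)
               for s1 in solutions1 for s2 in solutions2)
```

The η integration then sums these weights over a grid of (η1, η2) values with 0.2 spacing, weighted by the Monte Carlo η distribution.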
Kernel Density Estimation:
We use a non-parametric, kernel-density-estimate-based approach to form probability density functions from fully simulated Pythia MC. The probability for an event with an observable x is given by the linear sum of contributions from all entries in the MC:

f(x) = (1/n) Σ_i (1/h_i) K((x - x_i)/h_i)

Here, f(x) is the probability to observe x given some MC sample with known mass and JES (or the background). The kernel function K is a normalized function that assigns varying probability to a measurement at x depending on its distance from x_i. The smoothing parameter h determines the width of the kernel: larger values of h smooth out the density estimate, while smaller values keep most of the probability weight near x_i. We use an adaptive method in which h_i = h(f(x_i)). Near the peak of the distribution we use less smoothing; in the tails, where statistics are poor and we are sensitive to statistical fluctuations, we use a larger amount of smoothing.
The figures below show the distribution of m_t^reco for signal for several masses of the top quark, with the PDF calculated using the KDE method overlaid.
In order to properly normalize our density estimates, we define hard boundary cuts on our density estimates and apply the same cuts to the data, ensuring that the integral of our estimates is 1.0. We require that m_t^reco lies within the (100, 320) GeV window.
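The adaptive KDE described above can be sketched as a two-pass estimate: a fixed-bandwidth pilot sets per-point bandwidths that shrink near the peak and grow in the tails. The specific bandwidth prescription h_i ∝ f_pilot(x_i)^(-1/2) is an illustrative, standard choice, not necessarily the exact one used in this analysis:

```python
import numpy as np

def kde(x, sample, h):
    """Fixed-bandwidth Gaussian KDE evaluated at points x."""
    z = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(sample) * h * np.sqrt(2.0 * np.pi))

def adaptive_kde(x, sample, h0):
    """Two-pass adaptive KDE: a pilot estimate sets per-point bandwidths
    h_i = h0 * sqrt(g / f_pilot(x_i)), with g the geometric mean of the
    pilot, so tails get more smoothing than the peak."""
    pilot = kde(sample, sample, h0)
    g = np.exp(np.mean(np.log(pilot)))
    local_h = h0 * np.sqrt(g / pilot)
    z = (x[:, None] - sample[None, :]) / local_h[None, :]
    dens = (np.exp(-0.5 * z**2) / local_h[None, :]).sum(axis=1)
    return dens / (len(sample) * np.sqrt(2.0 * np.pi))
```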
The major backgrounds for the dilepton channel are the Drell-Yan process, diboson production, and fakes, where a jet mimics a lepton.
The Drell-Yan background is notoriously hard to model because the signal selection uses a Z-veto. We use more than 50 'matched' Alpgen+Pythia samples which cover the on-peak and off-peak regions as well as associated light-flavour and heavy-flavour jet production. We remove events with heavy-flavour jets generated by Pythia showering from the light-flavour samples and some heavy-flavour samples.
We model the fake background using data. We select events from the W+jets dataset requiring one isolated lepton. We apply a dilepton veto to eliminate ttbar events. We require that a lepton object likely to be a fake be present. All other selection criteria are applied. The remaining events are reconstructed with the NWA and form the fake background shape.
The expected numbers of events for signal and background are shown in the table below.
We calculate a KDE estimate for each of the background subsamples to account for the available statistics of each subsample.
The background distribution of m_t^reco with the overlaid PDF is shown below.
Likelihood and Local Polynomial Smoothing
We minimize the extended likelihood with respect to the top mass and the signal and background expectations to obtain the measurement as well as its statistical uncertainty. The form of the likelihood is shown below:

L(m_t, n_s, n_b) = [ e^-(n_s+n_b) (n_s+n_b)^N / N! ] × exp[ -(n_b - n_b^0)^2 / (2 σ_{n_b^0}^2) ] × Π_{i=1}^{N} [ n_s P_s(m_i^reco; m_t) + n_b P_b(m_i^reco) ] / (n_s + n_b)

where n_s and n_b are the signal and background expectations, N is the number of events in the data, P_s is the signal probability density function, and P_b is the background probability density function. The first term in the likelihood captures the possibility of Poisson fluctuations in the number of observed events. The second term in the product expresses the Gaussian constraint on the background expectation; we use the a priori estimate n_b^0 and its uncertainty σ_{n_b^0} to improve sensitivity. Shape information is used in the third term, where the probability density functions discern between signal and background events and extract the mass information.
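The corresponding negative log-likelihood can be sketched as follows, dropping the constant log N! term. The per-event PDF values P_s and P_b are taken here as given arrays; in the analysis they come from the KDE and local-polynomial-smoothing machinery:

```python
import numpy as np

def nll(ns, nb, nb0, sigma_nb0, ps, pb):
    """Extended negative log-likelihood (up to an additive constant).
    ns, nb : signal and background expectations
    nb0    : a priori background estimate, with uncertainty sigma_nb0
    ps, pb : per-event signal/background PDF values at the tested top mass
    """
    ps, pb = np.asarray(ps), np.asarray(pb)
    N = len(ps)
    mu = ns + nb
    poisson = mu - N * np.log(mu)                      # -log Poisson, no N! term
    constraint = (nb - nb0)**2 / (2.0 * sigma_nb0**2)  # Gaussian bkg constraint
    shape = -np.sum(np.log((ns * ps + nb * pb) / mu))  # per-event shape term
    return float(poisson + constraint + shape)
```

In practice this is minimized over (m_t, n_s, n_b); scanning m_t while minimizing over the yields gives the likelihood profile.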
Kernel density estimation only allows calculation of the probability density function at the values of the top mass where Monte Carlo samples are available. To evaluate the PDF at an arbitrary M_top for each event we use local polynomial smoothing: a quadratic polynomial is fitted to the values of the PDF calculated with the KDE method. Points near the required value are given higher weight than points farther away; the de-weighting is performed using a 'tricube' function with a width of 15 GeV. The value of the quadratic fit at the required M_top is used as the value of the PDF.
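A sketch of this local quadratic fit with tricube de-weighting, assuming the KDE values are available at a grid of MC mass points:

```python
import numpy as np

def tricube(u):
    """Tricube weight: (1 - |u|^3)^3 for |u| < 1, zero otherwise."""
    a = np.clip(1.0 - np.abs(u)**3, 0.0, None)
    return a**3

def lps_eval(m_query, m_grid, pdf_vals, width=15.0):
    """Evaluate the per-event PDF at an arbitrary top mass by a weighted
    quadratic fit to the KDE values at the available MC mass points.
    width=15 GeV matches the tricube width quoted in the note."""
    w = tricube((m_grid - m_query) / width)
    # Weighted least-squares fit of a quadratic in (m - m_query);
    # the fitted intercept is the smoothed PDF value at m_query.
    X = np.vander(m_grid - m_query, 3, increasing=True)  # columns: 1, dm, dm^2
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ pdf_vals)
    return float(beta[0])
```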
To ensure that the method is unbiased and that the estimate of the statistical uncertainty is valid, we perform ensemble tests. We repeatedly draw events from the signal and background model, mimicking possible variations of the signal and background numbers that may occur in data. A mass measurement is performed on each of these pseudo-datasets. Knowing the M_top of the sample from which the signal events were drawn, we can form residuals (M_top_fitted - M_top_MC) and pulls ((M_top_fitted - M_top_MC)/returned uncertainty). Ideal performance would yield residual and pull distributions centered at 0, with a pull width of 1. The results of the ensemble tests are shown below:
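The logic of such an ensemble test can be illustrated with a deliberately simple toy, where the "fit" is just the sample mean with its standard error, standing in for the full likelihood fit:

```python
import numpy as np

def ensemble_test(m_true=172.0, sigma=5.0, n_events=50, n_trials=500, seed=1):
    """Toy ensemble test: draw pseudo-datasets at a known top mass,
    'fit' each one (here: sample mean +/- error on the mean), and
    collect residuals and pulls."""
    rng = np.random.default_rng(seed)
    residuals, pulls = [], []
    for _ in range(n_trials):
        pseudo = rng.normal(m_true, sigma, n_events)
        m_fit = pseudo.mean()
        err = pseudo.std(ddof=1) / np.sqrt(n_events)
        residuals.append(m_fit - m_true)
        pulls.append((m_fit - m_true) / err)
    return np.array(residuals), np.array(pulls)
```

An unbiased method gives residuals centered on zero and a pull distribution of unit width; a pull width above 1 signals an underestimated uncertainty and motivates scaling the quoted error.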
The contributions to the systematic uncertainty are shown below. The total systematic uncertainty is 3.6 GeV/c^2 and is dominated by the jet energy scale uncertainty.
Fit and results
We perform the likelihood fit to the data and obtain 172.0 +5.0 -4.9 (stat) GeV/c^2. The quoted uncertainty is already scaled up by 2%. The fitted signal and background expectations are n_s = 87.3 +12.8 -12.3 and n_b = 36.1 +/- 6.6. The likelihood profile is shown below; at each point in the graph the likelihood is minimized with respect to n_s and n_b.
The reconstructed top mass distribution from the data, with the fitted signal and background templates overlaid, is shown below.
We test the probability of obtaining the uncertainty observed in the data using ensemble tests, drawing pseudo-data from the Monte Carlo sample at M_top = 172.0 GeV/c^2.
Several cross-checks were performed. We fit the full dataset without the background constraint term, obtaining the same central value with a slightly larger uncertainty: 172.0 +/- 5.0 GeV/c^2 (symmetrized, unscaled). The fitted signal and background expectations are n_s = 81.5 +/- 21.4 and n_b = 42.5 +/- 20.4.
We also split the data based on dilepton type. Results are summarized in the table below.
Wojciech Fedorko for the TMT
Last modified August 10, 2007