\newpage

\section{\label{sub:NN}Neural Network Analysis}

\subsection{\label{sub:Variables}Variables for NN training}

\noindent Following the same procedure as in the p17 analysis, we determine the signal
and background content of the preselected sample, enhance the signal-to-background ratio,
and from this measure the cross section.
In p17, an artificial neural network (NN) based on topological characteristics of the event was used to
extract the signal from a background-enriched region. As before, the criteria used in choosing the variables were
discriminating power and the absence of correlation with the $\tau$ candidate. The following variables were considered:

\begin{itemize}
\item \textit{\textbf{$H_{T}$}} - The scalar sum of all jet $p_{T}$'s (here and below including $\tau$ lepton candidates). 
For $H_{T}$ values above $\sim$ 200 GeV we observe a dominance of signal over background.

\item \textit{\textbf{$\not\!\! E_{T}$ significance}} - Computed from the calculated resolutions of 
the physical objects (jets, electrons, muons and unclustered energy) \cite{p17_note,METsig}. 
It was chosen and optimized due to its good signal-background discrimination power.

\item \textit{\textbf{Aplanarity}} \cite{p17topo} - the normalized momentum tensor is defined as

\begin{center}
\begin{equation}
{\cal M}_{ab} \equiv \frac{\sum_{i}p_{ia}p_{ib}}{\sum_{i}p^{2}_{i}}
\label{tensor}
\end{equation}
\end{center}

\noindent where $p_{i}$ is the momentum vector of object $i$
and the index $i$ runs over all the jets and the $W$. From the diagonalization of $\cal M$ we obtain three eigenvalues
$\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}$ with the constraint $\lambda_{1} + \lambda_{2} + \lambda_{3} = 1$.
The aplanarity is defined as {$\cal A$} = $\frac{3}{2}\lambda_{3}$ and measures the flatness of an event.
It takes values in the range $0 \leq {\cal A} \leq 0.5$. 
It was chosen for the NN because large values of {$\cal A$} correspond to more spherical events,
such as $t\bar{t}$ events, which are typical of cascade decays of heavy objects. On the other hand,
both QCD and $W + \mbox{jets}$ events tend to be more collinear, since jets in these events are primarily due to
initial state radiation.

\item \textit{\textbf{Sphericity}} \cite{p17topo} - Defined as {$\cal S$} = $\frac{3}{2}(\lambda_{2} + \lambda_{3})$,
with range $0 \leq {\cal S} \leq 1.0$, sphericity is a measure of the summed $p^{2}_{\perp}$ with respect to the event axis
(a numerical sketch of the computation of $\cal A$ and $\cal S$ is given after this list).
More isotropic events have {$\cal S$} $\approx 1$ while less isotropic ones have {$\cal S$} $\approx 0$. 
Sphericity is a good discriminator since $t\bar{t}$ events are very isotropic, as is typical of the decays of heavy objects,
while both QCD and $W + \mbox{jets}$ events are less isotropic, because jets in these events come 
primarily from initial state radiation.

\item \textit{\textbf{Top and $W$ mass likelihood}} - a $\chi^{2}$-like variable,
$L\equiv\left(\frac{M_{3j}-m_{t}}{\sigma_{t}}\right)^{2}+\left(\frac{M_{2j}-M_{W}}{\sigma_{W}}\right)^{2}$,
where $m_{t}, M_{W},\sigma_{t},\sigma_{W}$ are the top and $W$ masses (172.4
GeV and 81.02 GeV respectively) and resolution values (19.4 GeV and 8.28
GeV respectively). $M_{3j}$ and $M_{2j}$ are the invariant masses
of 3- and 2-jet combinations, respectively. We choose the combination that minimizes $L$ (see the sketch below).

\item \textit{\textbf{Centrality}}, defined as $\frac{H_{T}}{H_{E}}$, where $H_{E}$ is the sum
of the energies of the jets. It is used as a discrimination variable since high values ($\sim$ 1.0) are 
more signal-dominated while low values ($\sim$ 0) are more background-dominated.

\item \textit{\textbf{$\cos(\theta^{*})$}} - The angle between the beam axis and the 
highest-$p_T$ jet in the rest frame of all the jets in the event. $t\bar{t}$ events tend to have
lower values of $\cos(\theta^{*})$ ($\sim$ 0), which motivated its choice.

\item \textit{\textbf{$M_{jj\tau}$}} - The invariant mass of all jets and $\tau$s in the event.

\end{itemize}
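
The following minimal Python sketch (illustrative only, not part of the analysis code; the input format of the four-vectors and the treatment of the reconstructed $W$ are assumptions made here for concreteness) shows how $H_{T}$, centrality, and the eigenvalue-based aplanarity and sphericity of Eq. \ref{tensor} can be computed:

\begin{verbatim}
# Illustrative sketch only: event-shape variables from lists of
# (E, px, py, pz) four-vectors.  "jets" is assumed to include the tau
# candidates; "w" is the reconstructed W entering the momentum tensor.
import numpy as np

def pt(v):
    return np.hypot(v[1], v[2])

def event_shape_variables(jets, w):
    h_t = sum(pt(j) for j in jets)              # scalar sum of jet p_T
    h_e = sum(j[0] for j in jets)               # scalar sum of jet energies
    centrality = h_t / h_e

    # Normalized momentum tensor M_ab = sum_i p_ia p_ib / sum_i |p_i|^2,
    # where i runs over the jets and the W (the tensor defined above)
    p = np.array([j[1:] for j in jets] + [w[1:]], dtype=float)
    m = p.T @ p / np.sum(p * p)
    lam = np.sort(np.linalg.eigvalsh(m))[::-1]  # lambda1 >= lambda2 >= lambda3

    aplanarity = 1.5 * lam[2]                   # A = 3/2 lambda_3
    sphericity = 1.5 * (lam[1] + lam[2])        # S = 3/2 (lambda_2 + lambda_3)
    return h_t, centrality, aplanarity, sphericity
\end{verbatim}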

The chosen variables are ultimately a consequence of the method employed in this
analysis: use events from the QCD-enriched 
loose-tight sample to model QCD events in the signal-rich sample, and use
a b-tag veto sample as an independent control sample to check the validity of such 
background modeling. Plots of all variables described above are found in Appendix \ref{app:discri_var}.
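
The top and $W$ mass likelihood described above can be sketched as follows (illustrative only; the assumption that the 2-jet pair is taken from within the chosen 3-jet combination is made here for concreteness):

\begin{verbatim}
# Illustrative sketch only: chi^2-like top/W mass likelihood L,
# minimized over jet combinations.
from itertools import combinations
import numpy as np

M_TOP, M_W = 172.4, 81.02      # GeV, mass values quoted in the text
SIG_TOP, SIG_W = 19.4, 8.28    # GeV, resolutions quoted in the text

def inv_mass(vectors):
    e, px, py, pz = np.sum(np.array(vectors, dtype=float), axis=0)
    return np.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def mass_likelihood(jets):
    """jets: list of (E, px, py, pz) four-vectors; returns minimal L."""
    best = float("inf")
    for three in combinations(jets, 3):
        m3j = inv_mass(three)
        for two in combinations(three, 2):
            m2j = inv_mass(two)
            l = ((m3j - M_TOP) / SIG_TOP) ** 2 + ((m2j - M_W) / SIG_W) ** 2
            best = min(best, l)
    return best
\end{verbatim}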

%\clearpage

\subsection{\label{sub:NN-variables}Topological NN}
For training the Neural Network we used the Multilayer Perceptron algorithm, as described in 
\cite{MLPfit}. As explained earlier in Section \ref{sub:Results-of-the}, the first 1400000 
events in the ``loose-tight'' sample were used as background 
for the NN training for taus of Types 1 and 2, and the first 600000 events of the same sample for the NN training for Type 3 taus.
In both cases 1/3 of the Alpgen sample of $t\bar{t} \rightarrow \tau +jets$ was used for NN training and 2/3 of it
for the measurement.
When performing the measurement later on (Section \ref{sub:xsect}) we pick the tau with the highest $NN(\tau)$
in the signal sample as the tau candidate, while taus in the loose-tight sample are picked at
random, since all of them are regarded as fake taus by being below the cut $NN(\tau)$ = 0.7 (a sketch of this selection is given below). By doing this 
we expect to avoid any bias when selecting real taus for the measurement.
Figures \ref{fig:nnout_type2_training} and \ref{fig:nnout_type3_training} show the 
effect of each of the chosen topological NN input variables on the final output.

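A minimal sketch of this tau-candidate choice, assuming each candidate carries its tau-NN output as an attribute named nn (an assumed layout, for illustration only):

\begin{verbatim}
# Illustrative sketch only: tau-candidate choice for the measurement.
import random

NN_TAU_CUT = 0.7   # candidates below this value are treated as fakes

def pick_tau(taus, loose_tight_sample):
    if loose_tight_sample:
        # loose-tight sample: all candidates fail the NN(tau) cut,
        # so one of them is picked at random
        fakes = [t for t in taus if t.nn < NN_TAU_CUT]
        return random.choice(fakes)
    # signal sample: take the candidate with the highest NN(tau)
    return max(taus, key=lambda t: t.nn)
\end{verbatim}
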
Figures \ref{fig:nnout_type2} and \ref{fig:nnout_type3} show the NN output resulting from the training
described above. It is evident from both figures that high values of the NN output correspond to 
the signal-enriched region. 



\begin{figure}[h]
\includegraphics[scale=0.49]{plots/SetI_NNout_SM_type2_tauQCD.eps}
\caption{Training of topological Neural Network output for Type 1 and 2 $\tau$ channel combined. 
Upper left: relative impact of each of the input variables; upper right: relative weights
of the synaptic connections of the trained network;
lower left: convergence curves; lower right: the output distribution of signal and background
test samples after training.}
\label{fig:nnout_type2_training}
\end{figure}

%\newpage


\begin{figure}[h]
\includegraphics[scale=0.49]{plots/SetI_NNout_SM_type3_tauQCD.eps}
\caption{Training of topological Neural Network output for Type 3 $\tau$ channel. 
Upper left: relative impact of each of the input variables; upper right: relative weights
of the synaptic connections of the trained network;
lower left: convergence curves; lower right: the output distribution of signal and background
test samples after training.}
\label{fig:nnout_type3_training}
\end{figure}

\begin{figure}[h]
\includegraphics[scale=0.5]{CONTROLPLOTS/Std_TypeI_II/nnout.eps}
\caption{The topological Neural Network output for Type 1 and 2 $\tau$ channel.}
\label{fig:nnout_type2}
\end{figure}

\newpage

\begin{figure}[t]
\includegraphics[scale=0.5]{CONTROLPLOTS/Std_TypeIII/nnout.eps}
\caption{The topological Neural Network output for Type 3 $\tau$ channel.}
\label{fig:nnout_type3}
\end{figure}


\subsection{\label{sub:NN-optimization}NN optimization}
One difference between the present analysis and the previous p17 one is that we performed a NN optimization along with a
$\not\!\! E_{T}$ significance optimization. Previously a cut of $>$ 3.0 was applied to the $\not\!\! E_{T}$ significance 
at the preselection stage, and it was then included as one of the variables for NN training. This time we 
chose to optimize it, since it is still a good variable for signal-background discrimination (Figure \ref{fig:metl_note}).
It is important to stress that after the optimization we performed the 
analysis with the optimized $\not\!\! E_{T}$ significance cut
applied when doing both $\tau$ and b ID (Section \ref{sub:Results-of-the}), i.e.
after the preselection, where no $\not\!\! E_{T}$ significance cut was applied. 
We then went back and reprocessed (preselected) all MC samples with the optimized cut. Both results,
with the $\not\!\! E_{T}$ significance cut applied during and after preselection, were identical. 
We therefore chose to present this analysis with the cut applied at the preselection 
level in order to have a consistent cut flow throughout the analysis (Section \ref{sub:Preselection}).


\begin{figure}[h]
\includegraphics[scale=0.5]{plots/metl_allEW.eps}
\caption{$\not\!\! E_{T}$ significance distribution for signal and backgrounds.}
\label{fig:metl_note}
\end{figure}


\newpage

Below we describe how we split this part of the analysis into two steps:

\begin{enumerate}
\item {\bf Set optimization:} We applied a ``reasonable'' cut on the $\not\!\! E_{T}$ significance of $\geq$ 4.0 and 
varied the set of variables going into the NN training.
\item {\bf $\not\!\! E_{T}$ significance optimization:} After choosing the best set based on the lowest RMS of the
figure of merit used (see Eq. \ref{merit}), we then optimized the $\not\!\! E_{T}$ significance cut
(a skeleton of this two-step procedure is sketched below).
\end{enumerate}
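
A minimal skeleton of this two-step procedure (illustrative only; the function rms_of_merit, standing for the full chain of NN training plus the ensemble tests of Eq. \ref{merit}, is assumed here):

\begin{verbatim}
# Illustrative skeleton of the two-step optimization.
# rms_of_merit(variable_set, met_sig_cut) is an assumed stand-in for
# NN training followed by the ensemble tests described below.

def optimize(variable_sets, met_sig_cuts, rms_of_merit):
    # Step 1: fixed "reasonable" MET-significance cut, scan the sets
    best_set = min(variable_sets, key=lambda s: rms_of_merit(s, 4.0))
    # Step 2: with the best set, scan the MET-significance cut
    best_cut = min(met_sig_cuts, key=lambda c: rms_of_merit(best_set, c))
    return best_set, best_cut

# e.g. optimize(all_sets, [3.0, 3.5, 4.0, 4.5, 5.0], rms_of_merit)
\end{verbatim}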

The following sets of variables were considered for the NN training:
\begin{itemize}
 \item \textit{\textbf{Set 1}} : {$H_{T}$},  aplan (aplanarity), Mjjtau ($M_{jj\tau}$)
 \item \textit{\textbf{Set 2}} : {$H_{T}$},  aplan, cent (centrality)
 \item \textit{\textbf{Set 3}} : {$H_{T}$},  aplan, spher (sphericity)
 \item \textit{\textbf{Set 4}} : {$H_{T}$}, cent, spher
 \item \textit{\textbf{Set 5}} : aplan, cent, spher
 \item \textit{\textbf{Set 6}} : {$H_{T}$}, aplan, Mjjtau, spher
 \item \textit{\textbf{Set 7}} : {$H_{T}$}, aplan, Mjjtau, cent
 \item \textit{\textbf{Set 8}} : {$H_{T}$}, aplan, Mjjtau, costhetastar ($\cos(\theta^{*})$)
 \item \textit{\textbf{Set 9}} : {$H_{T}$}, aplan, Mjjtau, cent, spher
 \item \textit{\textbf{Set 10}} : {$H_{T}$}, aplan, Mjjtau, cent, costhetastar
 \item \textit{\textbf{Set 11}} : {$H_{T}$}, aplan, Mjjtau, spher, costhetastar
 \item \textit{\textbf{Set 12}} : METsig ($\not\!\! E_{T}$ significance), {$H_{T}$}, aplan, Mjjtau
 \item \textit{\textbf{Set 13}} : METsig, {$H_{T}$}, aplan, cent
 \item \textit{\textbf{Set 14}} : METsig, {$H_{T}$}, aplan, spher
 \item \textit{\textbf{Set 15}} : METsig, {$H_{T}$}, cent, spher
 \item \textit{\textbf{Set 16}} : METsig, {$H_{T}$}, aplan
 \item \textit{\textbf{Set 17}} : METsig, {$H_{T}$}, Mjjtau
 \item \textit{\textbf{Set 18}} : METsig, aplan, Mjjtau
 \item \textit{\textbf{Set 19}} : METsig, {$H_{T}$}, cent
 \item \textit{\textbf{Set 20}} : METsig, {$H_{T}$}, aplan, Mjjtau, cent
 \item \textit{\textbf{Set 21}} : METsig, {$H_{T}$}, aplan, cent, spher
 \item \textit{\textbf{Set 22}} : METsig, {$H_{T}$}, aplan, Mjjtau, spher
 \item \textit{\textbf{Set 23}} : METsig, {$H_{T}$}, aplan, Mjjtau, costhetastar
 \item \textit{\textbf{Set 24}} : METsig, Mjjtau, cent, spher, costhetastar
 \item \textit{\textbf{Set 25}} : METsig, {$H_{T}$}, cent, spher, costhetastar
 \item \textit{\textbf{Set 26}} : METsig, aplan, cent, spher, costhetastar
 \item \textit{\textbf{Set 27}} : METsig, {$H_{T}$}, aplan, cent, costhetastar
 \item \textit{\textbf{Set 28}} : {$H_{T}$}, aplan, topmassl (top and $W$ mass likelihood)
 \item \textit{\textbf{Set 29}} : {$H_{T}$}, aplan, Mjjtau, topmassl
 \item \textit{\textbf{Set 30}} : {$H_{T}$}, aplan, Mjjtau, cent, topmassl
 \item \textit{\textbf{Set 31}} : {$H_{T}$}, aplan, Mjjtau, costhetastar, topmassl
 \item \textit{\textbf{Set 32}} : METsig, {$H_{T}$}, topmassl, aplan, Mjjtau
 \item \textit{\textbf{Set 33}} : METsig, spher, costhetastar, aplan, cent
% \item \textit{\textbf{Set XXXIV}} : metl, spher, Mjjtau, topmassl, ktminp
 \end{itemize}

The p17 analysis tried only three different sets among hundreds of possible combinations. We believe that the 
33 sets tested above suffice to give an optimal result.
The criteria used for deciding which variables should be used are as follows:
\begin{itemize}
 \item Use no more than 5 variables, to keep the NN simple and stable; more variables require larger training samples.
 \item We want to use the METsig variable, since it provides the best discrimination.
 \item We do not want to use highly correlated variables, such as $H_{T}$ and the jet $p_{T}$'s, in the same NN.
% \item We can not use tau-based variables. 
 \item We want to use variables with high discriminating power.
\end{itemize}

In order to decide which of these 33 choices is optimal, we created an ensemble of 
20000 pseudo-datasets, each containing events randomly (according to a Poisson distribution) picked 
from the QCD, EW and $\ttbar$ templates. Each of these datasets was treated like real data, meaning that all 
the cuts were applied and the shape fit of the topological event NN was performed. The QCD templates for the fit were made from the same 
``loose-tight $\tau$ sample'' from which the QCD component of the ``data'' was drawn. 
We used the following quantity as the figure of merit:

\begin{equation}
f = \displaystyle \frac{(N_{fit} - N_{true})}{N_{true}}
\label{merit}
\end{equation}


\noindent where $N_{fit}$ is the number of $t\bar{t}$ pairs given by 
the fit and $N_{true}$ is the number of $t\bar{t}$ pairs drawn from the Poisson distribution. 
In both the set and the $\not\!\! E_{T}$ significance optimizations, the lowest RMS of $f$ was used 
to select the best configuration in each case.


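A minimal sketch of these ensemble tests, assuming per-process arrays of NN-output values as templates and an externally provided shape fit (both assumptions made here for illustration):

\begin{verbatim}
# Illustrative sketch only: ensemble test for one analysis configuration.
import numpy as np

rng = np.random.default_rng()

def ensemble_rms(templates, expected, fit_ttbar, n_pseudo=20000):
    """templates: per-process arrays of NN-output values;
    expected: per-process expected yields;
    fit_ttbar: assumed stand-in for the template shape fit."""
    merits = []
    for _ in range(n_pseudo):
        pseudo, n_true = [], 0
        for process, values in templates.items():
            n = rng.poisson(expected[process])         # Poisson yield
            pseudo.append(rng.choice(values, size=n))  # draw n events
            if process == "ttbar":
                n_true = n
        n_fit = fit_ttbar(np.concatenate(pseudo))      # shape fit of NN output
        merits.append((n_fit - n_true) / n_true)       # f of Eq. (merit)
    merits = np.asarray(merits)
    return merits.std(), merits.mean()
\end{verbatim}
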
The plots showing the results of the set optimization are found in Appendix \ref{app:set_opt} and are summarized 
in Table \ref{setopt_table} below, where the RMS and mean are given for each set. The number in parentheses after each set ID gives the number of 
hidden nodes used in the NN training.

\begin{table}[htbp]
\begin{tabular}{|c|r|r|} \hline
Set of variables  & \multicolumn{1}{c|}{RMS}    & \multicolumn{1}{c|}{mean} \\ \hline

\hline


Set1(6)      &  \multicolumn{1}{c|}{0.1642}   &  \multicolumn{1}{c|}{0.0265}\\ \hline

Set2(6)      &  \multicolumn{1}{c|}{0.1840}     &  \multicolumn{1}{c|}{0.0054}\\ \hline

Set3(6)      &  \multicolumn{1}{c|}{0.1923}   &  \multicolumn{1}{c|}{0.0060}\\ \hline

Set4(6)      &  \multicolumn{1}{c|}{0.1978}   &  \multicolumn{1}{c|}{0.0175}\\ \hline

Set5(6)      &  \multicolumn{1}{c|}{0.2385}     &  \multicolumn{1}{c|}{0.0022}\\ \hline

Set6(8)     &  \multicolumn{1}{c|}{0.1687}   &  \multicolumn{1}{c|}{0.0115}\\ \hline

Set7(8)     &  \multicolumn{1}{c|}{0.1667}   &  \multicolumn{1}{c|}{0.0134}\\ \hline

Set8(10)     &  \multicolumn{1}{c|}{0.1668}     &  \multicolumn{1}{c|}{0.0162}\\ \hline

Set9(10)     &  \multicolumn{1}{c|}{0.1721}     &  \multicolumn{1}{c|}{0.0102}\\ \hline

Set10(10)     &  \multicolumn{1}{c|}{0.1722}     &  \multicolumn{1}{c|}{0.0210}\\ \hline

Set11(10)      &  \multicolumn{1}{c|}{0.1716}   &  \multicolumn{1}{c|}{0.0180}\\ \hline

Set12(8)     &  \multicolumn{1}{c|}{0.1662}     &  \multicolumn{1}{c|}{0.0039}\\ \hline

Set13(8)     &  \multicolumn{1}{c|}{0.1819}     &  \multicolumn{1}{c|}{0.0018}\\ \hline

Set14(8)     &  \multicolumn{1}{c|}{0.1879}     &  \multicolumn{1}{c|}{0.0019}\\ \hline

Set15(8)     &  \multicolumn{1}{c|}{0.1884}     &  \multicolumn{1}{c|}{-0.0004}\\ \hline

Set16(6)     &  \multicolumn{1}{c|}{0.1912}     &  \multicolumn{1}{c|}{0.0034}\\ \hline

Set17(6)     &  \multicolumn{1}{c|}{0.1768}     &  \multicolumn{1}{c|}{0.0074}\\ \hline

Set18(6)     &  \multicolumn{1}{c|}{0.2216}     &  \multicolumn{1}{c|}{-0.0030}\\ \hline

Set19(6)     &  \multicolumn{1}{c|}{0.1921}     &  \multicolumn{1}{c|}{0.0015}\\ \hline

Set20(10)     &  \multicolumn{1}{c|}{0.1620}     &  \multicolumn{1}{c|}{0.0262}\\ \hline

Set21(10)     &  \multicolumn{1}{c|}{0.1753}     &  \multicolumn{1}{c|}{0.0010}\\ \hline

Set22(10)     &  \multicolumn{1}{c|}{0.1646}     &  \multicolumn{1}{c|}{0.0086}\\ \hline

Set23(10)     &  \multicolumn{1}{c|}{0.1683}     &  \multicolumn{1}{c|}{0.0132}\\ \hline

Set24(10)     &  \multicolumn{1}{c|}{0.2053}     &  \multicolumn{1}{c|}{0.0122}\\ \hline

Set25(10)     &  \multicolumn{1}{c|}{0.1906}     &  \multicolumn{1}{c|}{0.0038}\\ \hline

Set26(10)     &  \multicolumn{1}{c|}{0.2130}     &  \multicolumn{1}{c|}{0.0028}\\ \hline

Set27(10)     &  \multicolumn{1}{c|}{0.1859}     &  \multicolumn{1}{c|}{0.0004}\\ \hline

Set28(6)     &  \multicolumn{1}{c|}{0.1910}     &  \multicolumn{1}{c|}{-0.0022}\\ \hline

Set29(8)     &  \multicolumn{1}{c|}{0.1587}     &  \multicolumn{1}{c|}{0.0214}\\ \hline

Set30(10)     &  \multicolumn{1}{c|}{0.1546}     &  \multicolumn{1}{c|}{0.0148}\\ \hline

Set31(10)     &  \multicolumn{1}{c|}{0.1543}     &  \multicolumn{1}{c|}{0.0203}\\ \hline

Set32(10)     &  \multicolumn{1}{c|}{0.1468}     &  \multicolumn{1}{c|}{0.0172}\\ \hline

Set33(10)     &  \multicolumn{1}{c|}{0.2201}     &  \multicolumn{1}{c|}{0.0081}\\ \hline

%Set34(10)     &  \multicolumn{1}{c|}{0.1955}     &  \multicolumn{1}{c|}{0.0184}\\ \hline
\end{tabular}
\caption{Results of the set optimization with $\not\!\! E_{T}$ significance $\geq$ 4.0 applied to all sets.
The number in parentheses refers to the number of hidden nodes in each case.}
\label{setopt_table} 
\end{table}

From Table \ref{setopt_table} we see that Set 32 has the lowest RMS; we therefore chose it
as the set to be used in the $\not\!\! E_{T}$ significance optimization, whose results are
shown in Appendix \ref{app:metl_opt} and summarized in Table \ref{metlopt_table} below.

\begin{table}[htbp]
\begin{tabular}{|c|r|r|r|r|} \hline
Set 32  & Number of hidden nodes & $\not\!\! E_{T}$ significance cut & RMS    & \multicolumn{1}{c|}{mean} \\ \hline

\hline


%Set6(10)      &  \multicolumn{1}{c|}{1.0} &  \multicolumn{1}{c|}{0.2611}   \\ \hline

%Set6(10)      &  \multicolumn{1}{c|}{1.5} &  \multicolumn{1}{c|}{0.2320}   \\ \hline

%Set6(10)      &  \multicolumn{1}{c|}{2.0} &  \multicolumn{1}{c|}{0.2102}   \\ \hline

%Set6(10)      &  \multicolumn{1}{c|}{2.5} &  \multicolumn{1}{c|}{0.2021}   \\ \hline

1     &  \multicolumn{1}{c|}{10} &  \multicolumn{1}{c|}{3.0} &  \multicolumn{1}{c|}{0.1507}   &  \multicolumn{1}{c|}{0.0157}\\ \hline

2      &  \multicolumn{1}{c|}{10} &  \multicolumn{1}{c|}{3.5} &  \multicolumn{1}{c|}{0.1559}   &  \multicolumn{1}{c|}{0.0189}\\ \hline

3     &  \multicolumn{1}{c|}{10} &  \multicolumn{1}{c|}{4.0} &  \multicolumn{1}{c|}{0.1468}   &  \multicolumn{1}{c|}{0.0172}\\ \hline

4     &  \multicolumn{1}{c|}{10} &  \multicolumn{1}{c|}{4.5} &  \multicolumn{1}{c|}{0.1511}   &  \multicolumn{1}{c|}{0.0153}\\ \hline

5     &  \multicolumn{1}{c|}{10} &  \multicolumn{1}{c|}{5.0} &  \multicolumn{1}{c|}{0.1552}   &  \multicolumn{1}{c|}{0.0205}\\ \hline

%Set6(10)     &  \multicolumn{1}{c|}{5.5} &  \multicolumn{1}{c|}{0.4008}   \\ \hline
\end{tabular}
\caption{Results of the $\not\!\! E_{T}$ significance optimization, varying the $\not\!\! E_{T}$ significance cut
for Set 32 with 10 hidden nodes.}
\label{metlopt_table} 
\end{table}



Combined results from Tables \ref{setopt_table} and \ref{metlopt_table} show that the best configuration found
was Set 32 with $\not\!\! E_{T}$ significance $\geq$ 4.0. Therefore, this was the 
configuration used to perform the cross section measurement. Figure \ref{fig:METsig_RMS} shows the variation of the RMS as a function 
of the $\not\!\! E_{T}$ significance cut applied.

\begin{figure}[h]
\includegraphics[scale=0.35]{plots/METsig-RMS.eps}
\caption{RMS as a function of the $\not\!\! E_{T}$ significance cut applied.}
\label{fig:METsig_RMS}
\end{figure}


%\clearpage


In order to check the validity of our ensemble test procedure, it is instructive to plot both the 
distribution of the predicted number of $t\bar{t}$ events and the so-called ``pull'', defined in Equation 
\ref{pull} below:

\begin{equation}
p = \displaystyle \frac{(N_{fit}-N_{true})}{\sigma_{fit}}
\label{pull}
\end{equation}

\noindent where $\sigma_{fit}$ is the uncertainty on the number of $t\bar{t}$ pairs given by the fit.

Figures \ref{fig:gaus_ttbar} and \ref{fig:pull} show both aforementioned distributions.

From Figure \ref{fig:gaus_ttbar} we see good agreement between the number of $t\bar{t}$ pairs
initially set in the ensemble and the measured value, and Figure \ref{fig:pull} shows a Gaussian
shape, indicating a good behaviour of the fit uncertainties in the ensembles.

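A minimal sketch of the pull computation, assuming each ensemble result is stored as a tuple (N_fit, sigma_fit, N_true) (an assumed layout, for illustration only):

\begin{verbatim}
# Illustrative sketch only: pulls for the ensemble tests, Eq. (pull).
import numpy as np

def pulls(results):
    """results: array-like of (n_fit, sigma_fit, n_true) per pseudo-dataset."""
    r = np.asarray(results, dtype=float)
    return (r[:, 0] - r[:, 2]) / r[:, 1]   # (N_fit - N_true) / sigma_fit

# A well-behaved fit gives a pull distribution with mean near 0
# and width near 1.
\end{verbatim}
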
\begin{figure}[t]
\includegraphics[scale=0.40]{plots/gaus_ttbar.eps}
\caption{Distribution of the output ``measurement'' for an ensemble with 116.9 $\ttbar$ events.}
\label{fig:gaus_ttbar}
\end{figure}

\begin{figure}[b]
\includegraphics[scale=0.40]{plots/pull1-40.eps}
\caption{The ensemble test's pull.}
\label{fig:pull}
\end{figure}


%\newpage




\clearpage

