Optimization of W+jets sensitivity to NLO electroweak corrections at the LHC

by Bayley Cornish

mentor: Hugo Beauchamin, Physics and Astronomy; funding source: Summer Scholars student fund

[Poster image: Bayley-Cornish-Tufts-Summer-Scholars-Poster]

Particle physics is a weird one. It is very difficult to directly observe the physics you want to investigate, because the particles produced are extremely high energy and thus extremely unstable: they quickly decay into a mess of other particles, which recombine and re-decay over and over until they finally hit the detector. The initial particles you actually want to investigate have to be reconstructed from the energies and trajectories of the shower of final-state particles measured in your detector. When different initial particles produce very similar, or even identical, final-state particles, it becomes hard to distinguish and identify the physics and the particles you want to investigate. To separate the ‘signal’ of the physics you want to investigate from the ‘background’, you need to understand the specifics of the final-state particles you want to exclude. A basic illustration of this is applying cuts to particle physics data: if you know that your particles of interest are particularly massive or high energy, the decay products of those particles will also be high energy, so you can prune a lot of background out of your data by only looking at high-energy products.
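To make the idea of a cut concrete, here is a minimal C++ sketch (C++ being the language most of my analysis code is in). The Event struct, the numbers, and the 100 GeV threshold are all made up for illustration; real analyses work with full reconstructed four-vectors from the detector.

```cpp
#include <iostream>
#include <vector>

// Hypothetical minimal event record: just the transverse momentum (GeV)
// of the leading reconstructed object. Real analyses carry much more.
struct Event {
    double leadingPt;  // GeV
};

int main() {
    // Toy sample: a mix of low-energy "background" and high-energy "signal".
    std::vector<Event> events = {{12.3}, {250.0}, {8.7}, {310.5}, {45.0}};

    const double ptCut = 100.0;  // GeV; threshold chosen purely for illustration
    std::vector<Event> selected;
    for (const Event& e : events) {
        if (e.leadingPt > ptCut) {
            selected.push_back(e);  // keep only high-energy candidates
        }
    }

    std::cout << selected.size() << " of " << events.size()
              << " events pass the pT > " << ptCut << " GeV cut\n";
    return 0;
}
```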

The purpose of my research is to investigate next-to-leading-order (NLO) electroweak corrections, as these form a hole in our scientific knowledge that keeps us from precisely understanding the background of particle accelerator experiments, especially at higher energies. To investigate these electroweak corrections, I’m using data produced through Monte Carlo simulations. Using simulated data is important because it’s a way of quickly and easily probing the limits of the current scientific models, providing a baseline prediction that can later be compared with and confirmed by real experimental data.

The problem with trying to measure the effects of the electroweak corrections is that particle collisions of the type being carried out at the Large Hadron Collider (LHC) at CERN are exceedingly complicated. There is a large amount of systematic uncertainty stemming from fundamental uncertainties in the theory and the physics itself, as well as experimental systematic uncertainties. Having a precise measure of electroweak corrections requires reducing, or at least understanding, these systematic uncertainties.

As explained in the poster, the systematic uncertainties are primarily dealt with by taking a ratio of related physics processes. We hope that this ratio will cancel out much of the systematic uncertainty while leaving the effects of the electroweak correction intact. Even after the ratio has been taken, there is a lot of work to be done in identifying and quantifying the systematic uncertainties that remain.
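To show why a ratio helps, here is a toy C++ sketch. The cross-section values, and the choice of a W+jets to Z+jets ratio, are assumptions for illustration only; the point is just that a systematic effect which multiplies both processes the same way drops out of the ratio.

```cpp
#include <iostream>

int main() {
    // Toy cross sections (arbitrary units) for two related processes,
    // e.g. W+jets and Z+jets. The numbers are invented for illustration.
    double sigmaW = 1200.0;
    double sigmaZ = 400.0;

    // A correlated systematic shift (say, a 10% luminosity or jet-energy
    // scale effect) multiplies both predictions the same way...
    double systShift = 1.10;
    double sigmaW_shifted = sigmaW * systShift;
    double sigmaZ_shifted = sigmaZ * systShift;

    // ...so it cancels exactly in the ratio.
    std::cout << "nominal ratio: " << sigmaW / sigmaZ << '\n';
    std::cout << "shifted ratio: " << sigmaW_shifted / sigmaZ_shifted << '\n';

    // Uncorrelated uncertainties do NOT cancel, which is why the remaining
    // systematics still have to be identified and quantified.
    return 0;
}
```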

Once the arduous work of modelling the effects of systematic uncertainties is complete, actually measuring the effects of the electroweak corrections is just a matter of comparing the predictions with and without the electroweak correction, while verifying that the difference between the two is significantly larger than the estimated uncertainties.
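In code, that final comparison might look like the toy check below. The ratio values and the uncertainty are invented for illustration; a real analysis would build these from the full uncertainty model.

```cpp
#include <cmath>
#include <iostream>

int main() {
    // Made-up numbers purely for illustration: a predicted ratio with and
    // without the NLO electroweak correction, and an estimated total
    // (statistical + systematic) uncertainty on that ratio.
    double ratioNoEW       = 3.00;
    double ratioWithEW     = 2.85;
    double totalUncertainty = 0.03;

    double delta = std::fabs(ratioWithEW - ratioNoEW);
    double significance = delta / totalUncertainty;  // rough "number of sigma"

    std::cout << "correction size: " << delta
              << ", significance: " << significance << " sigma\n";
    if (significance > 5.0) {
        std::cout << "the EW correction would be clearly measurable\n";
    } else {
        std::cout << "the uncertainties still swamp the correction\n";
    }
    return 0;
}
```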

I’m actually slightly disappointed that I don’t have more results to share for this poster session. There is a lot of work that needs to be done before any ‘real’ results can actually be taken. A lot of what I’ve been doing is creating statistical samples with the Monte Carlo simulations, writing code to build the distributions of variables that we might want to investigate, and working on a list of cuts that would provide a clean and useful dataset. Once we’ve confirmed that the cuts are appropriate, I can then change the parameters of the models used to run the simulations. These parameters represent a fundamental uncertainty in our knowledge of the physics theories. Hypothetically, if the ratio calculation were entirely successful in cancelling out all of the systematic uncertainty, we would expect to see no effect when we change the QCD models and parameters. This is what I will be working on immediately in the next few weeks. It’s just a lot of coding. A lot of C++. Some would say too much C++. I am part of that some.
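For a flavor of the kind of C++ involved, here is a minimal sketch of the distribution-building step. The little Histogram class stands in for what a framework like ROOT provides in the real analysis code, and all the values are illustrative only.

```cpp
#include <iostream>
#include <vector>

// A minimal fixed-bin histogram, standing in for a real framework class
// (e.g. ROOT's TH1 family) used in the actual analysis.
class Histogram {
public:
    Histogram(int nBins, double lo, double hi)
        : nBins_(nBins), lo_(lo), hi_(hi), counts_(nBins, 0) {}

    void fill(double x) {
        if (x < lo_ || x >= hi_) return;  // ignore out-of-range values
        int bin = static_cast<int>((x - lo_) / (hi_ - lo_) * nBins_);
        ++counts_[bin];
    }

    void print() const {
        double width = (hi_ - lo_) / nBins_;
        for (int i = 0; i < nBins_; ++i) {
            std::cout << "[" << lo_ + i * width << ", "
                      << lo_ + (i + 1) * width << "): "
                      << counts_[i] << '\n';
        }
    }

private:
    int nBins_;
    double lo_, hi_;
    std::vector<int> counts_;
};

int main() {
    // Toy "Monte Carlo" values of some kinematic variable, in GeV.
    std::vector<double> values = {35.0, 120.0, 75.5, 210.0, 95.0, 180.0};

    Histogram h(4, 0.0, 200.0);  // 4 bins from 0 to 200 GeV
    for (double v : values) h.fill(v);
    h.print();
    return 0;
}
```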

I’m not sure if images can be included in the blog post, but the image below illustrates how complex particle collisions can be, as explained here: https://www.semanticscholar.org/paper/Introduction-to-parton-shower-event-generators-Hoche/085e658fe0e5b313c84e4dcb2e6c75aba63315d6

The red blob in the center represents the hard collision, surrounded by a tree-like structure representing Bremsstrahlung as simulated by parton showers. The purple blob indicates a secondary hard scattering event. Parton-to-hadron transitions are represented by light green blobs, dark green blobs indicate hadron decays, while yellow lines signal soft photon radiation.

There are dozens of variables that go into each vertex of the diagram, each with its own uncertainty. Separating the uncertainties present in these other physics processes from the electroweak corrections is a difficult task.
