A group of Tufts high-energy physicists is taking part in the next big venture in the world of science: the Large Hadron Collider (LHC), located near Geneva, Switzerland. The LHC is a global initiative in particle physics led by the European Organization for Nuclear Research (CERN).

Scientists at CERN use the world’s largest and most complex scientific instruments to study what happens when the basic constituents of matter – the fundamental elementary particles – collide. By studying the data gathered from the collisions, scientists hope to learn about the basic laws of nature that have shaped our Universe since the beginning of time and that will determine its fate. The questions being explored include the origin of mass, extra dimensions of space, microscopic black holes, and evidence for dark matter candidates in the Universe.

Tufts Department of Physics faculty Austin Napier and Krzysztof Sliwa, post-doctoral research associates Simona Rolli and Sharka Todorova-Nova, and graduate students Samuel Hamilton and Jeffrey Wetter are working alongside researchers from Boston University, Brandeis, Harvard and MIT on the ATLAS Experiment, short for “A Toroidal LHC ApparatuS,” one of the two large general-purpose detectors at the LHC. ATLAS will observe and record the results of collisions between two beams of protons moving in opposite directions in the LHC accelerator, each beam traveling at nearly the speed of light and reaching the highest energies ever achieved on Earth.

Photo: path of the LHC near Geneva.

Tufts researchers, in parallel with physicists from around the world working on ATLAS, will analyze the data produced by the experiment, trying to identify and understand the particles emerging from collisions and the processes that created them. Since the truly interesting processes are very rare, a very large number of collisions will have to be examined. As Professor Sliwa explains, “the experiment is planning to collect about 1 PB (petabyte = 10^15 bytes) of data per year, and since the ATLAS detector is very complex, a lot of CPU and disk storage will be needed to perform analyses of this data.”
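To give a feel for that scale, the short Python sketch below converts the quoted 1 PB per year into an average rate. It is a back-of-envelope unit conversion only; nothing beyond the 1 PB/year figure comes from ATLAS.

    # Rough estimate of the average data rate implied by collecting
    # about 1 PB of ATLAS data per year (the figure quoted above).
    # Everything else here is a plain unit conversion, not an ATLAS spec.

    PETABYTE = 10**15                  # bytes
    SECONDS_PER_YEAR = 365 * 24 * 3600

    data_per_year = 1 * PETABYTE
    avg_rate = data_per_year / SECONDS_PER_YEAR

    print(f"Average rate: {avg_rate / 10**6:.1f} MB/s")            # ~31.7 MB/s
    print(f"Per day:      {data_per_year / 365 / 10**12:.1f} TB")  # ~2.7 TB/day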

The Tufts researchers are relying on the Tufts high-performance computing research cluster to help them with this task. The Tufts research cluster is a collection of many local computers (nodes) interconnected via a high-speed network to provide a single shared resource. Its distributed processing system allows complex computations to run in parallel, with the work spread across the individual processors and their memory.
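As a simplified illustration of that idea, the Python sketch below runs independent pieces of an analysis in parallel on a single machine; on the research cluster the same pattern is spread across many nodes. The file names and the analyze_file function are hypothetical placeholders, not ATLAS code.

    # Simplified single-machine illustration of parallel processing:
    # independent chunks of data are analyzed concurrently and the
    # results are combined at the end.
    from multiprocessing import Pool

    def analyze_file(path):
        """Placeholder 'analysis' of one input file (hypothetical)."""
        # A real analysis would open the file and apply selection cuts here.
        return len(path)  # stand-in result so the sketch runs as-is

    if __name__ == "__main__":
        data_files = [f"collisions_{i:03d}.dat" for i in range(8)]  # hypothetical inputs
        with Pool(processes=4) as pool:        # analyze four files at a time
            results = pool.map(analyze_file, data_files)
        print("Combined result:", sum(results))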

By allocating local and network-based storage on the Tufts research cluster and dedicating an ATLAS gateway node that supports specialized software, the Tufts team is turning the research cluster into what ATLAS computing terminology calls a restricted Tier-3g computing center. This will enable them to compile and link ATLAS software and to prepare jobs that can then be run on the cluster or submitted to the worldwide computing grid. The research team has funds for about 12-15 TB (terabytes) of disk storage, which will be added to the Tufts research storage disk arrays for their use. The team will also leverage existing research storage shares previously used for other projects.
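The kind of storage bookkeeping this involves can be sketched in a few lines of Python; the mount point and the 15 TB quota below are illustrative assumptions, not the actual Tufts configuration.

    # Minimal sketch of checking whether a storage share has room for a
    # new dataset before staging it. The path and quota are assumptions.
    import shutil

    STORAGE_PATH = "/"        # stand-in for a hypothetical ATLAS storage share
    QUOTA_TB = 15             # upper end of the 12-15 TB allocation noted above

    usage = shutil.disk_usage(STORAGE_PATH)
    free_tb = usage.free / 10**12

    print(f"Free space on {STORAGE_PATH}: {free_tb:.2f} TB (quota {QUOTA_TB} TB)")
    if free_tb < 1.0:
        print("Less than 1 TB free: stage the next dataset elsewhere.")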

Durwood Marshall, Senior Research Computing Specialist, UIT Academic Technology
Rebecca Sholes, Senior Faculty Development Consultant, UIT Academic Technology
