Taritree and Ralitsa spent a week at Brookhaven National Lab, where they participated in a GPU hack-a-thon sponsored by the national labs and industry partners. Their team, which included collaborators from SLAC, focused on implementing a new deep convolutional neural network whose aim is to reconstruct neutrino interactions from LArTPCs after training the network on data alone. Their approach is to use a semi-supervised training objective that enforces the inherent 3D consistency within the data. However, such a network has many parameters, so loading it onto a single GPU card is not possible. We therefore worked with the mentors from BNL and NVIDIA to distribute the parameters across multiple GPU cards while also making multiple copies of the network. In doing so, Ralitsa was able to perform test training runs which confirmed that the model was learning, and at a much faster rate, thanks to our ability to process more training examples than before.
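For readers curious what "distributing the parameters on multiple GPU cards" looks like in practice, here is a minimal sketch of model parallelism in PyTorch. This is a hypothetical toy network, not the team's actual LArTPC model: the layers are simply split across two devices, and activations are moved between them in the forward pass. (The "multiple copies of the network" part would then be handled by wrapping such a model in a data-parallel scheme like `torch.nn.parallel.DistributedDataParallel`.)

```python
import torch
import torch.nn as nn

# Fall back to CPU when two GPUs are not available, so the sketch stays runnable.
if torch.cuda.device_count() >= 2:
    dev0, dev1 = torch.device("cuda:0"), torch.device("cuda:1")
else:
    dev0 = dev1 = torch.device("cpu")

class SplitConvNet(nn.Module):
    """A toy convolutional network whose parameters live on two devices."""
    def __init__(self):
        super().__init__()
        # First half of the parameters on device 0 ...
        self.front = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
        ).to(dev0)
        # ... and the second half on device 1, so neither card
        # has to hold the full set of weights.
        self.back = nn.Sequential(
            nn.Conv2d(8, 1, kernel_size=3, padding=1),
        ).to(dev1)

    def forward(self, x):
        x = self.front(x.to(dev0))
        # Transfer the activations between cards (a no-op on a single device).
        return self.back(x.to(dev1))

net = SplitConvNet()
out = net(torch.randn(4, 1, 32, 32))
print(out.shape)  # torch.Size([4, 1, 32, 32])
```

The key design point is that only the activations cross the device boundary each step, so a network too large for one card's memory can still be trained end to end.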
A fun week of coding and a great program! Many thanks to our hack-a-thon mentors, Ren Yuhui (BNL) and Kamesh Arumugam (NVIDIA), for all of their valuable advice!
For those interested, program info is here. Luckily, it seems to be an ongoing event, happening several times a year at several different national lab computing centers.