Every scientific code is different. Every organization is different. There is no one-size-fits-all solution when it comes to an optimized code base. We bring not just our expertise but our willingness to learn and find the solution that fits your situation.
We love science, and we love tackling new problems and finding new solutions! In many cases, programming decisions must be informed by an understanding of the physical problem being modeled. We are cognizant of that and will seek the right combination of collaborating with you and educating ourselves on your field as needed.
We live on our reputation. In addition to our technical results, we emphasize clear communication and reliability.
We will work with you to make sure there's never a line in your code you aren't able to understand.
We want to improve your code, infrastructure, and practices - but we don't want anyone to become reliant on us. We will always work with the guideline that, should you wish to terminate our services, the transition will be smooth and every hour of our work will have been worth it to you.
We will work together to make sure the scope and goals of the project are well-defined. That includes requirements for workflow, structure, performance, scalability, portability, and determinism.
There's no point in having fast code if it's not reliable. Everything we write will be tested and brought into a testing framework immediately.
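As a minimal sketch of what this looks like in practice - the function names and numbers here are hypothetical - each routine is written together with a test that can run under a framework like pytest or as plain assertions:

```python
# Hypothetical example: a small physics helper and the test written
# alongside it the moment the code lands, not months later.

def kinetic_energy(mass: float, velocity: float) -> float:
    """Kinetic energy E = (1/2) * m * v^2."""
    return 0.5 * mass * velocity ** 2

def test_kinetic_energy():
    # Exact values chosen so floating-point comparison is safe.
    assert kinetic_energy(2.0, 3.0) == 9.0
    assert kinetic_energy(0.0, 10.0) == 0.0

test_kinetic_energy()
```

Keeping the test next to the code means every later optimization can be checked against the same reference results.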
All code contributions are documented as we write them. This is essential for long-term and/or collaborative projects.
A key mindset for writing modern, flexible code is modularity - structuring your code as smaller components that interact with one another through interfaces rather than, for example, shared global variables. One benefit is that someone working on one part of your code doesn't have to know anything about the specific implementation of another part. This approach is also essential for writing easily understandable and debuggable code - to say nothing of parallelizing and optimizing it.
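A minimal sketch of the contrast, with hypothetical names: one component depends only on another's interface, not on shared globals or internal details.

```python
# Tightly coupled style (what we avoid): every routine silently reads
# and writes module-level globals, e.g.
#   grid_spacing = 0.1
#   def advance(): ...  # hidden dependency on grid_spacing
#
# Modular style: each component owns its state and exposes an interface.

from dataclasses import dataclass

@dataclass
class Grid:
    spacing: float

    def cell_count(self, length: float) -> int:
        return round(length / self.spacing)

class Solver:
    def __init__(self, grid: Grid):
        # The dependency is passed in explicitly, not reached for globally.
        self.grid = grid

    def steps_for(self, length: float) -> int:
        # The solver uses only Grid's interface, never its internals,
        # so Grid can be reimplemented without touching Solver.
        return self.grid.cell_count(length)

solver = Solver(Grid(spacing=0.1))
print(solver.steps_for(1.0))  # 10
```

Because Solver touches only the Grid interface, swapping in a different grid implementation - or running many solvers in parallel, each with its own grid - requires no changes to Solver itself.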
In our experience it is usually worth it to have a streamlined, readable, and conceptually straightforward code rather than introduce significant complexity for a small performance gain. In the long run, the latter approach often results in inaccessible, unstable, and non-portable code.
Backwards compatibility is not always possible, but it should be sacrificed only when necessary, and any break should be made deliberately. Specifically, such breaks should be rare and should not create a substantial burden when reproducing older simulations, etc.
All quantifiable carbon emissions produced by Jubilee due to computation and travel will be offset via contributions to community sustainability projects.
New paper out at IPDPS 2022 on optimizing plasma physics collision operators: Batched sparse iterative solvers on GPU for the collision operator for fusion plasma simulations.
Another great Exascale Computing Project (ECP) annual meeting. Presented "Performance Portability of XGC: Frontier, Aurora, and Perlmutter."
New paper out on the Cabana library, Enabling particle applications for exascale computing platforms!
The Covid-19 molecular docking simulation paper is a Gordon Bell Special Prize Finalist at SC20! Congrats to all involved.
New paper on AutoDock-GPU - ready for a billion docking calculations in search of drugs for Covid-19 - published at ACM-BCB 2020.
Accelerating PsyNeuLink, a neuropsychology simulation, at the Princeton Hackathon - 200x speedup of major kernels in days.
Helping Scripps and Oak Ridge get protein-docking code AutoDock ready for large-scale Covid-19 drug simulations!
Presentation at the SIAM Conference on Parallel Processing for Scientific Computing, Taking the Plasma Physics Code XGC to Summit and Beyond with Kokkos/Cabana.
Presentation at Supercomputing 2019, Kokkos and Fortran in the Exascale Computing Project Plasma Physics Code XGC.
Presentation at the Platform for Advanced Scientific Computing (PASC) Conference, Porting the XGC gyrokinetic code to Summit.