
Charlene Yang

The Paths to HPC series, presented in collaboration with Women in HPC, showcases the women working in high-performance computing. Our hope is that by highlighting these trailblazers—and the sometimes unique paths they followed into the field—other women will feel inspired to envision themselves in similar roles. 

Today we talk with Charlene Yang, application performance specialist at the National Energy Research Scientific Computing Center (NERSC), Berkeley Lab.

What was your path to working with HPC? 

I did my Bachelor’s degree, Master’s degree, and PhD all in Signal Processing and Wireless Communication, thinking that I might join a telecom company after my studies. However, some interactions I had with staff members at the Pawsey Supercomputing Centre changed the course of my life completely.

My PhD was on iterative algorithms for data detection and channel equalization in telecommunications, at the University of Western Australia in Perth. There was a great deal of computer simulation involved, and the workstation at the lab just didn’t seem to be churning the numbers fast enough. For each change in parameter or algorithm, I had to wait from half an hour to a few hours to get a result back—very frustrating when you have a paper deadline approaching!

The training course I attended at Pawsey changed my life and opened a new world to me, the world of high-performance computing (HPC). I realized I could parallelize my code and run it on hundreds of CPU nodes, and the speed-up I got was AMAZING.


I really enjoyed the parallel computing side of things during my PhD, so much so that I decided to pursue a job at Pawsey. I got the job, and the next three years were a period when I really immersed myself in HPC and learned how to program on different architectures, and more importantly, how to tease out every bit of performance from these giant machines.

HPC is very international, and you get to know what is happening all around the world. It was through some of the conferences that I realized that maybe I wanted to scale myself up and work on even larger supercomputers. That led me to NERSC, where I have been for the last three years, working as an application performance specialist.

What’s cool about working with HPC?

To me, what’s cool about working with HPC is the diverse and dynamic workload, and the tangible and almost immediate impact that HPC has on science. I work with researchers from a range of domains, such as physics, materials science, astronomy, and biology, and watching them tackle the world’s most challenging problems is just exhilarating. Whether it’s simulating fusion plasmas, calculating self-energy for complex materials, studying black holes, or mapping human genomes, I am just happy that I am part of this through my computational skills.


These fields all require intensive computation, and fast hardware alone just doesn’t cut it. We need efficient, parallel software solutions. As a performance specialist, I help parallelize and optimize complex scientific applications to make them run faster. There is nothing more thrilling to me than seeing my optimization actually speeding up the application by 10 times or even more. It is that sense of excitement and achievement that keeps me going every day. 

What are some of the challenges you face or have faced in taking this path?

HPC is a very multi-disciplinary domain, and many problems are only solved through collaboration between different research groups. This usually involves researchers from a particular science domain, and researchers who are more on the computational or HPC side. Sometimes it’s challenging to have efficient communication between different groups across different domains. 

Terminologies may be different, and even the same words, like ‘architecture’, could mean different things, such as hardware architecture, software architecture, or deep learning neural network architecture. I have found context and clarification can always help: explain what you mean and check if everyone is on the same page before moving on. Communication across domains may continue to be a challenge for a while, but more communication is always better than less!

Any mentors you would like to thank?

There have been many mentors and role models in my career so far, and in no particular order, I would like to thank my colleagues in Australia: Daniel Grimwood, Chris Bording, Chris Harris, and Mark Gray, and my colleagues here: Jack Deslippe, Rebecca Hartman-Baker, and Samuel Williams. They have been very supportive of my career development, and they have inspired and had a positive influence on me, both professionally and personally.

