• HPC Engineer, Scientific Solutions

    Location: US-MA-Boston
    Posted: 5/14/2018 4:05 PM
    Job ID: 2018-10690
    Category: IT/Health IT/Informatics
    Type: Full Time
    Grade: 23
  • Overview

    POSITION SUMMARY:

    Located in Boston and the surrounding communities, Dana-Farber Cancer Institute brings together world-renowned clinicians, innovative researchers, and dedicated professionals, allies in the common mission of conquering cancer, HIV/AIDS, and related diseases. Combining extremely talented people with the best technologies in a genuinely positive environment, we provide compassionate and comprehensive care to patients of all ages; we conduct research that advances treatment; we educate tomorrow's physician-researchers; we reach out to underserved members of our community; and we work with amazing partners, including other Harvard Medical School-affiliated hospitals.

    The HPC Engineer will serve Dana-Farber Cancer Institute (DFCI) and its patients within the Computational Solutions team under the Chief Health Information Officer. The successful candidate will support the production-grade development and operation of computational workloads, such as next-generation sequencing (NGS) pipelines, within a high-performance computing (HPC) environment.

    Cancer is a disease of the genome; effectively fighting it will depend on the successful application of genomics to the understanding, diagnosis, and treatment of cancer. A significant component of our HPC environment is devoted to our strategic Cancer Genomics program, which encompasses centers such as the Center for Cancer Genome Discovery (CCGD) and programs such as PROFILE, a collaboration with Brigham and Women's Hospital. These efforts facilitate genome discovery in human cancer through basic and translational research, and they generate reports of medically actionable genetic alterations in a CLIA-certified laboratory. The resulting cancer genomic profiles are used to guide patient treatment and/or stratification for clinical trials of novel anticancer agents.

    The role requires a strong technical background with hands-on experience in bringing research computational pipelines from prototype to production with regard to error handling, stability, reliability, usability, scalability, and performance. The process may include refactoring existing code and redesigning how data is processed in a scalable HPC environment. The ideal candidate understands that hardware and software go hand in hand, that only the right combination of compute resources and software components delivers the optimal result, and that a prototype computational workload can be taken off a scientist's computer and run productively on HPC resources.
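
    As a loose illustration of what "prototype to production" can look like in practice (a sketch, not part of the formal requirements), a pipeline step that a scientist runs interactively might be wrapped in a SLURM batch script with explicit resource requests, fail-fast error handling, and per-job logging. All names, paths, image tags, and resource values below are hypothetical.

        #!/bin/bash
        #SBATCH --job-name=ngs-align        # hypothetical pipeline step
        #SBATCH --cpus-per-task=8
        #SBATCH --mem=32G
        #SBATCH --time=04:00:00
        #SBATCH --output=logs/%x-%j.out     # %x = job name, %j = job ID

        # Fail fast: exit on errors, unset variables, and failed pipe stages,
        # so the scheduler reports FAILED instead of silently continuing.
        set -euo pipefail

        # Hypothetical containerized tool; pinning the image tag keeps runs reproducible.
        IMAGE="registry.example.org/ngs-tools:1.4.2"
        SAMPLE="$1"                         # sample ID passed at submission time

        # Match the tool's thread count to the cores actually granted by SLURM.
        docker run --rm \
          -v /data/"$SAMPLE":/input:ro \
          -v /results/"$SAMPLE":/output \
          "$IMAGE" \
          align --threads "$SLURM_CPUS_PER_TASK" --in /input --out /output

    Submitting a sample is then a one-liner (sbatch run_align.sh SAMPLE123), and the scheduler handles queuing, placement, and accounting.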

  • Responsibilities

    PRIMARY DUTIES AND RESPONSIBILITIES:  

    • Responsible for the productionization and reliable operation of HPC workloads (NGS pipelines and others).
    • Monitors and troubleshoots production HPC workloads and implements fixes when necessary (a SLURM-based monitoring sketch follows this list).
    • Researches, identifies, and implements optimal runtime conditions for HPC workloads.
    • Acts as a cluster-scheduler power user and primary contact for the research community.
    • Works with system administrators to align the configuration of compute resources with computational workloads.
    • Collects metrics and provides input on how to optimally utilize available compute resources.
    • Maintains full knowledge of the informatics services and resources offered by Research Computing or Partners HealthCare, and proposes solutions that optimally address concrete computational needs through a combination of services.
    • Maintains the flow of information for end-to-end service delivery and provides updates where needed.
    • Performs other related duties as assigned or needed.
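
    As a rough sketch of the monitoring work described above (job IDs and job names below are hypothetical, and the posting does not prescribe a single scheduler, though SLURM is among those listed), SLURM's own accounting tools can compare what a production workload requested against what it actually consumed:

        # Show the user's queued and running jobs.
        squeue -u "$USER"

        # For a finished job, compare requested vs. consumed resources;
        # MaxRSS vs. ReqMem and Elapsed vs. Timelimit reveal over- or under-provisioning.
        sacct -j 123456 --format=JobID,JobName,State,Elapsed,Timelimit,MaxRSS,ReqMem,ExitCode

        # Summarize the last week's failures for one workload (hypothetical job name)
        # to spot recurring problems worth fixing in the workload itself.
        sacct --name=ngs-align --state=FAILED --starttime=now-7days \
              --format=JobID,Elapsed,ExitCode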

    SUPERVISORY RESPONSIBILITIES:


    The position does not involve supervisory responsibilities.

  • Qualifications

    MINIMUM JOB QUALIFICATIONS: 

    • 3 years' experience in a DevOps or application engineering environment
    • 5 years' experience in an HPC/scientific computing environment
    • Bachelor's degree in computer science, bioinformatics, or a related field, or equivalent industry experience


    KNOWLEDGE, SKILLS, AND ABILITIES REQUIRED:

    • Good working knowledge of Java or C/C++ programming, or equivalent.
    • Ability to build software from source, including knowledge of build systems (Make, CMake, Maven, Gradle, etc.).
    • Solid knowledge of computer architectures and multi-threaded/parallel processing applications.
    • Strong knowledge of local and distributed I/O performance tuning.
    • Hands-on knowledge of network and distributed filesystems (e.g., NFS, BeeGFS) and knowledge of ZFS.
    • Extensive experience with HPC or cloud schedulers, such as GridEngine, SLURM, or LSF.
    • Fluency in at least one scripting language (Python preferred) and bash, plus knowledge of parallel shells.
    • Working knowledge of containerization (e.g., Docker).
    • Hands-on experience with system administration tasks in Linux environments.
    • Knowledge of collaboration tools such as Jira, Confluence, and SharePoint.
    • Exceptional service orientation, excellent problem-solving abilities, and keen attention to detail.
    • Excellent analytical, organizational, and time management skills.
    • Ability to work under pressure with minimal supervision in a complex environment.
    • Demonstrated ability to work effectively in a highly collaborative technical team.
    • Strong interpersonal skills: the ability to interact productively with users and colleagues of diverse seniority levels and professional backgrounds.
    • Ability to communicate technical topics to technical and non-technical audiences appropriately.
