• Data Engineer

    Location US-NC-Raleigh
    Posting date 11/15/2019 8:49 AM
    Job ID 74281
    Category Operations
  • Company description

    At Red Hat, we connect an innovative community of customers, partners, and contributors to deliver an open source stack of trusted, high-performing solutions. We offer cloud, Linux, middleware, storage, and virtualization technologies, together with award-winning global customer support, consulting, and implementation services. Red Hat is a rapidly growing company supporting more than 90% of Fortune 500 companies.

    Job summary

    The Red Hat Products and Technologies (PnT) Product Data Science team is looking for a skilled and well-rounded engineer with solid programming skills and business insight. You'll need an established set of foundational skills and the ability to learn new ones quickly. You'll also need to be motivated and able to work independently in a fast-paced and ambiguous environment. As a Data Engineer, you will be responsible for identifying and implementing team efficiency gains using in-house analytics packages, translating and manipulating large sets of data, and creating and maintaining software and tools that enable the analytics team.

    Primary job responsibilities

    • Work closely with team members and stakeholders to translate business problems into analytical projects, requirements, and solutions
    • Work cross-functionally with teams on data migration, translation, and organization initiatives
    • Translate large volumes of raw, unstructured data into highly visual and easily digestible formats
    • Manage data pipelines for predictive analytics modeling, including model life cycle management and deployment
    • Recommend ways to improve data reliability, efficiency, and quality
    • Help create, maintain, and implement tools, libraries, and systems to increase the efficiency and scalability of the team

    Required skills

    • Ability to solve problems and to test and implement new technologies and tools
    • Solid grasp of data systems and how they interact with each other
    • Solid analytical skills to determine the source and resolution of highly complex problems
    • Proficient Python programming experience
    • Excellent data manipulation skills, namely using SQL and the Python scientific stack, including pandas, NumPy, and scikit-learn
    • Ability to extract unstructured data from REST APIs, NoSQL databases, and object storage, including Red Hat Ceph Storage and Amazon S3
    • Experience with distributed computing frameworks, e.g., Dask or PySpark, is a plus
    • Experience with Linux system administration, shell scripting, and virtualization technology, including containers
    • Solid grasp of version control, e.g., Git or Apache Subversion (SVN)
    • Experience deploying applications using Platform-as-a-Service (PaaS) technologies like Red Hat OpenShift, or workflow orchestration tools like Apache Airflow, is a plus
    • Knowledge of and a desire to stay connected to the current industry landscape of computer software, programming languages, and technology
    • Bachelor's degree in computer science or software engineering with 5+ years of relevant work experience, or a master's degree with 2+ years of work experience


    Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, uniformed services, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.


    Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.
