DataOps Engineer

Location: Pittsburgh, PA

Job Type: Full Time / Permanent

As a DataOps Engineer, you play a key role in orchestrating and automating the data analytics pipeline, making it more flexible while maintaining a high level of quality. Partnering closely with our product, operations, and engineering teams, you ensure our clients can leverage their data and analytics to make critical decisions about their benefits and human capital data.

Responsibilities: 

  • Maintain processes that feed data from various vendors into the company data platform, ensuring data quality and process efficiency
  • Troubleshoot data issues, including optimizing SQL queries, ETL jobs, and analytic models
  • Drive the release cycle of our internal data products, using automation to ensure high-quality code and accelerate the pace of development
  • Collaborate with data scientists and engineers to deploy new schemas, code, and analytic models (e.g., machine learning and predictive models) into complex, mission-critical production systems; select the right tools for the job and make them work in production
  • Promote a culture of self-serve data analytics by minimizing technical barriers to data access and understanding
  • Bring a relentless focus on automation to everything you do
  • Stay current with the latest research and technology, and share your knowledge with the team

Education & Experience:

  • Bachelor’s, master’s, or doctorate degree in a related field, or an intriguing reason for not having one
  • Experience in data modeling, ETL development, and data warehousing, plus hands-on experience with data warehouse and processing technologies such as Amazon Redshift, Oracle, PostgreSQL, Hadoop, and Spark
  • Proficiency in SQL and Linux with a passion for automating everything
  • 2+ years’ industry experience working with data in a production environment
  • Curiosity and an ability to learn quickly, especially with new technologies and processes
  • A team-based, collaborative approach to all work
  • Passion for data democratization
  • Experience with Git or other version control software
  • Bonus Points:
    • Strong programming skills in a variety of languages (e.g., Python, Bash)
    • Experience with ETL tools including Pentaho, Talend, or Informatica
    • Experience with AWS technologies including Redshift, RDS, S3
    • Demonstrable skills and experience using SQL with large data sets (e.g., Oracle, SQL Server, Redshift)
    • System administration, performance tuning, and troubleshooting experience with distributed data stores (SQL data warehouses and NoSQL databases)
    • Experience with CI/CD tooling and processes
    • Experience with software testing