Data Platform Engineer
Location: Pittsburgh
Job Type: Full-Time / Permanent
Join a group of bright, talented, and kind engineers on a team with high visibility across the organization. The Data and Analytics organization is seeking a Data Platform Engineer to design, build, and support the platforms that will power the next generation of analytical and ML solutions. The ideal candidate is passionate about creating reliable, scalable platforms while driving down total cost of ownership. This is a full-time remote opportunity, and we are looking to hire immediately.
Responsibilities:
- Work with the team to identify necessary resources and automate their provisioning through CI/CD pipelines and Infrastructure as Code (IaC).
- Write custom code or scripts to automate common tasks, infrastructure, monitoring services, and test cases.
- Deploy, configure, and maintain enterprise data management solutions.
- Create meaningful dashboards with logging and alerting, and respond so that issues are captured and addressed proactively.
- Participate in an on-call rotation to provide support during and after business hours.
- Proactively review the performance and capacity of all aspects of production: code, infrastructure, data, and message processing.
- Contribute to the team's long-term technical strategy, with a particular focus on scalable, resilient architecture.
- Recommend and execute cost-saving strategies for cloud resources.
- Participate in learning activities around modern cloud architecture, design, and core development practices (communities of practice).
- Increase business acumen by learning about other parts of the business.
What you will bring:
- Demonstrable experience coding and scripting in languages such as Python, JavaScript, and TypeScript.
- Hands-on experience with continuous integration and continuous delivery.
- Experience with Infrastructure as Code tools such as Terraform.
- Solid technical acumen, including understanding and framing problems, planning and designing solutions, developing high-quality software, and operationalizing services.
- Experience with Kubernetes and containerization technologies such as Docker.
- Comfort with agile delivery methodologies (Scrum, SAFe) in a fast-paced, complex environment.
- Experience working closely with Security and Infrastructure groups to onboard cloud data solutions for production use.
- Strong communication skills across different mediums, with the ability to craft compelling messages that drive action and alignment.
- Natural curiosity and a tendency to get excited to dig in and understand how things work.
- 2+ years of hands-on experience with infrastructure-as-code technologies (e.g., Terraform, CloudFormation, Azure Resource Manager), source code management (e.g., GitHub), and build automation (e.g., Jenkins, GitHub Actions).
- 3+ years of experience with scripting languages such as Python or shell scripting.
- 3+ years of experience with at least one of the following cloud platforms: Microsoft Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), or others.
- Familiarity with Snowflake and the Azure ecosystem (Azure Data Lake, Azure Data Factory, Azure Databricks, Azure Storage, Cosmos DB, ADO).