Careers

Join our team of experts from all relevant fields – clinicians, technologists, and researchers – committed to accelerating life-changing innovation across the healthcare industry.

OMNY is a real-world data exchange platform that enables health systems and specialty networks to share their de-identified data sets with external parties at scale. The company’s vision is to help sustain the healthcare ecosystem through a data-driven business model while unlocking incredible innovation in the life sciences industry with real-world data from health systems. The OMNY platform ensures control, security, and data governance for both data sellers and data buyers. We rely on a foundation of talented, passionate people to help us achieve our mission of revolutionizing how data is shared and valued.
 

Data Engineer II (Healthcare Data Systems)

Atlanta

About The Position

The Mission

At OMNY Health, we are bridging the gap between clinical complexity and life-saving research. We are looking for a Data Engineer II who is passionate about the intersection of healthcare and technology. Your primary mission will be to architect and scale the pipelines that transform raw, messy clinical data into a high-fidelity, de-identified, research-ready data product. You will be a key player in managing our hybrid data landscape, extracting data from our source systems and curating it within our BigQuery and Snowflake environments. This role is about ensuring privacy at scale while maintaining the scientific utility of the data powering the next generation of medical breakthroughs.

Requirements

Key Responsibilities

  • Pipeline Development: Design, build, and maintain robust ETL/ELT pipelines to ingest structured and unstructured healthcare data into our BigQuery and Snowflake warehouses.
  • Modern Transformations: Lead the development of modular, high-performance transformations using stored procedures and dbt (data build tool) to map raw clinical data to standardized research schemas in our Common Data Model (CDM).
  • Cloud-Native Orchestration: Deploy and manage complex workflows using Argo, ensuring high availability and fault tolerance within our GCP ecosystem.
  • Automated Data Quality: Implement “trust-but-verify” frameworks using SODA to monitor clinical data integrity, ensuring every record in our research product is validated and compliant.
  • De-identification & Privacy: Implement and automate sophisticated de-identification protocols (Safe Harbor or Expert Determination methods) to ensure HIPAA compliance while preserving data longitudinality.
  • Data Modeling: Architect scalable data models (Common Data Model) that allow researchers to query complex patient journeys with ease.
  • Infrastructure: Collaborate with DevOps to manage cloud-native data infrastructure, ensuring high availability and rigorous security controls.

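The “trust-but-verify” quality checks described above can be pictured as declarative assertions evaluated against every batch of records. The sketch below is illustrative only – the record fields and check names are hypothetical, and it does not use SODA’s actual API – but it captures the idea of codifying clinical data contracts:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical lab-result records; in production these would be read
# from BigQuery or Snowflake, not an in-memory list.
records = [
    {"patient_id": "p1", "icd10": "E11.9", "result_value": 5.4},
    {"patient_id": "p2", "icd10": "", "result_value": None},
]

@dataclass
class Check:
    name: str
    predicate: Callable[[dict], bool]  # True means the record passes

checks = [
    Check("patient_id present", lambda r: bool(r["patient_id"])),
    Check("icd10 non-empty", lambda r: bool(r["icd10"])),
    # Note: a null result_value is deliberately NOT a failure here --
    # a null lab result can be a clinical signal, not a data error.
]

def run_checks(rows: list[dict], checks: list[Check]) -> dict[str, int]:
    """Return a map of check name -> count of failing rows."""
    failures = {c.name: 0 for c in checks}
    for row in rows:
        for c in checks:
            if not c.predicate(row):
                failures[c.name] += 1
    return failures

failures = run_checks(records, checks)
```

In a tool like SODA or Great Expectations, the same contracts would be expressed declaratively and enforced automatically at each pipeline run, with failures blocking promotion of the batch into the research product.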

Technical Qualifications

  • Experience: 3-5+ years in Data Engineering, with a focus on building production-grade healthcare pipelines.
  • GCP & Storage: Hands-on experience with Google Cloud Platform, specifically Cloud SQL and BigQuery.
  • Warehousing: Deep expertise in BigQuery and Snowflake architectures, including performance tuning and secure data sharing.
  • Code & Orchestration: Expert-level Python and SQL; proven experience with Argo Workflows/Events for containerized orchestration; mastery of dbt for maintaining the transformation layer.
  • Quality Assurance: Experience using SODA (or Great Expectations) to define and enforce data contracts.
  • Healthcare Domain: Familiarity with healthcare-specific data challenges (ICD-10, FHIR, or provider-specific MS SQL schemas) is a significant plus.
  • Security Mindset: Understanding of HIPAA regulations and encryption standards.


Core Competencies

  • The “Curator” Mindset: You don’t just move data; you care about its meaning. You understand that a “null” in a lab result is a clinical signal, not just a missing string.
  • Adaptability: You thrive in the “zero-to-one” phase where documentation might be thin, but the impact is massive.
  • Collaborative Spirit: You can speak “Data” to engineers and “Insight” to clinical researchers.

 

Our Tech Stack

  • Cloud: GCP
  • Databases: Cloud SQL (MS SQL Server), BigQuery, Snowflake
  • Transformation: dbt
  • Orchestration: Argo
  • Validation: SODA
  • Languages: Python, SQL


What We Offer

Why Join Us?

  • Impact: Your work directly enables researchers to find cures and improve patient outcomes.
  • Innovation: We are tackling the hardest problem in health tech: making data usable without sacrificing privacy.
  • Growth: As an early hire, you will have a front-row seat (and a steering wheel) in building our engineering culture.

Note on De-identification: Because this role focuses on a research-ready product, the candidate should be familiar with the trade-offs between data utility and privacy – specifically, how to handle dates, ZIP codes, and unique identifiers in a way that satisfies both statisticians and compliance officers.

Apply for this position