Position Overview
We are seeking a hands-on Data Engineer to design, build, and optimize our data pipeline and reporting infrastructure. This role will focus on extracting, transforming, and loading (ETL) data from our low-code platform into a SQL-based data warehouse, merging production data with archived historical data, and preparing high-quality, performant datasets for client reporting.
The ideal candidate has strong experience in Python-based ETL development, SQL data modeling, performance optimization, and Power BI administration and report development. This is a contract-to-hire position with the opportunity to transition into a long-term role.
Key Responsibilities
Data Engineering & ETL
- Design and implement ETL processes to extract data from a low-code platform into a SQL database.
- Merge production and archived historical data into unified reporting tables.
- Develop Python scripts to handle large-volume tables, including:
  - Incremental loads
  - Insert-or-update (upsert) logic
  - Change detection
  - Error handling and logging
- Ensure data integrity, consistency, and reliability across all pipelines.
- Automate data workflows and monitor for failures or performance degradation.
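For illustration, the Python scripting responsibilities above (incremental loads, upserts, change detection, error handling and logging) might look roughly like the sketch below. The table and column names (`orders`, `updated_at`) are hypothetical, and SQLite stands in for the actual warehouse:

```python
import logging
import sqlite3

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def init_db(conn: sqlite3.Connection) -> None:
    # Hypothetical reporting table; real schema will differ.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS orders (
               id INTEGER PRIMARY KEY,
               status TEXT NOT NULL,
               updated_at TEXT NOT NULL)"""
    )

def upsert_orders(conn: sqlite3.Connection, rows) -> int:
    """Incrementally upsert rows newer than the current watermark.

    `rows` is an iterable of (id, status, updated_at) tuples.
    Returns the number of rows actually loaded.
    """
    # Incremental load: only take rows past the last-seen timestamp.
    watermark = conn.execute(
        "SELECT COALESCE(MAX(updated_at), '') FROM orders"
    ).fetchone()[0]
    fresh = [r for r in rows if r[2] > watermark]  # crude change detection
    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.executemany(
                """INSERT INTO orders (id, status, updated_at)
                   VALUES (?, ?, ?)
                   ON CONFLICT(id) DO UPDATE SET
                       status = excluded.status,
                       updated_at = excluded.updated_at""",
                fresh,
            )
    except sqlite3.Error:
        log.exception("upsert failed; batch rolled back")
        raise
    return len(fresh)
```

A real pipeline would persist the watermark separately and detect changes per row (e.g. via a content hash), but the shape of the work is the same.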
Data Warehouse & Modeling
- Design and maintain a reporting-ready SQL data warehouse.
- Develop efficient data models (fact/dimension schemas where appropriate).
- Implement best practices for:
  - Referential integrity
  - Indexing strategies
  - Partitioning (where applicable)
  - Query performance optimization
- Ensure scalability as data volumes grow.
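To make the modeling practices above concrete, here is a minimal sketch of a fact/dimension schema with referential integrity and indexing. The schema (`dim_client`, `fact_billing`) and index names are hypothetical, and SQLite again stands in for the production warehouse:

```python
import sqlite3

# Minimal star-schema DDL: one dimension, one fact table keyed to it.
REPORTING_DDL = """
CREATE TABLE dim_client (
    client_key  INTEGER PRIMARY KEY,
    client_name TEXT NOT NULL
);

CREATE TABLE fact_billing (
    billing_key INTEGER PRIMARY KEY,
    client_key  INTEGER NOT NULL REFERENCES dim_client(client_key),
    billed_on   TEXT NOT NULL,
    amount      REAL NOT NULL
);

-- Index the join key and the usual date filter so client-level
-- reporting queries stay fast as the fact table grows.
CREATE INDEX ix_fact_billing_client ON fact_billing (client_key);
CREATE INDEX ix_fact_billing_date   ON fact_billing (billed_on);
"""

def build_reporting_schema(conn: sqlite3.Connection) -> None:
    """Create the schema with foreign-key enforcement turned on."""
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite: per-connection setting
    conn.executescript(REPORTING_DDL)
```

In the actual warehouse, partitioning the fact table (e.g. by billing date) would replace or complement the date index where the engine supports it.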
Reporting & Power BI
- Prepare curated datasets optimized for reporting and analytics.
- Administer Power BI workspace environments.
- Build and maintain Power BI reports and dashboards for client-facing use.