THE PLACE
ITRex - AI pioneers who build systems that actually work in the real world, not just in demos. We're 250+ people spread across the US and Europe, creating solutions for companies like P&G and Shutterstock. We keep it simple, build it right, and focus on what works.
THE PEOPLE
We're the kind of people who don't ignore messages in Slack, who jump in to help when you're stuck on a problem, and who offer solutions instead of blame when things go sideways. We believe in openness, accountability, and having each other's backs. No office politics, no hidden agendas - just people who care about doing good work together and supporting each other to get there.
THE ROLE
We are looking for a Data Engineer to join a data platform project focused on modernizing and scaling analytical data pipelines. This is a hands-on role with strong ownership, where you will contribute to both day-to-day delivery and longer-term technical direction.
Responsibilities
- Design, develop, and maintain end-to-end data pipelines: from ingestion to curated data models in a cloud data warehouse (Snowflake)
- Own and support data ingestion processes, including configuring and maintaining automated data connectors (e.g. Fivetran or similar tools)
- Contribute to the migration of existing ETL workflows to dbt, ensuring logic parity, performance improvements, and better maintainability
- Develop well-structured, modular dbt models, macros, and tests following best practices
- Participate in architectural discussions and help shape standards for data modeling, testing, and deployment
- Collaborate with business stakeholders and analytics users to translate requirements into reliable data models and metrics
- Contribute to improving data quality, observability, and documentation across the data platform
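To give a flavor of the dbt work described above, here is a minimal, hypothetical sketch of a staging model (the `raw.orders` source and column names are invented for illustration, not taken from the project):

```sql
-- models/staging/stg_orders.sql
-- Hypothetical dbt staging model: takes raw ingested data and
-- produces a curated, consistently named layer for downstream models.
with source as (

    -- reference the raw table registered as a dbt source
    select * from {{ source('raw', 'orders') }}

),

renamed as (

    select
        id                              as order_id,
        customer_id,
        cast(amount as numeric(12, 2))  as order_amount,
        created_at                      as ordered_at
    from source

)

select * from renamed
```

A companion `schema.yml` would then attach dbt's built-in `unique` and `not_null` tests to `order_id`, which is the kind of testing and documentation discipline the role calls for.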
Requirements
Technical Skills
- 4+ years of experience in Data Engineering
- Strong SQL skills with attention to performance, readability, and maintainability
- Hands-on experience with dbt is required (models, macros, tests, documentation, environments)
- Experience with ETL development and data migration projects
- Experience in developing and maintaining scalable ETL processes and cloud-based data solutions
- Understanding of Data Lakes and Data Warehousing architectures
- Experience working with data cataloging and data management tools
- Familiarity with CI/CD practices for data deployment and automated testing
Business & Collaboration
- Experience working with cross-functional teams and business customers
- Excellent communication skills, both verbal and written, with an ability to convey information clearly and concisely
- Proactive mindset with the ability to take responsibility for assigned tasks
- Collaborative and team-oriented approach in Agile environments
- Ability to work independently within a defined technical framework
- English proficiency: Upper-intermediate and above
Nice to have
- Proficiency in Python for data engineering and analytical tasks
- Familiarity with ETL tools like Fivetran
- Hands-on experience ensuring data quality
Benefits
Why people stay
First, the foundation:
- Remote flexibility: Work where and how you work best - we trust you to deliver
- Fair compensation: Competitive salary + benefits that matter (medical, wellness, learning)
Then, the growth:
- Ownership opportunities: See a problem worth solving? Own it. We back smart risks over bureaucratic safety
- AI enhancement: We leverage AI to make you faster and stronger - complementing your abilities, not replacing them
- Learning investment: English classes, professional development, well-being support
- Career progression: Real paths up, not just sideways shuffling
Finally, the people:
- Responsive teammates: No ignored Slacks, no "not my problem" attitudes
- Supportive culture: When you're stuck, people help. When things break, we fix them together
- Human connections: Regular meetups, tech talks, and actual relationships beyond work
Curious? We are too. Let's talk.