Job Overview
We're seeking a highly skilled Software Engineer to join our team, focusing on Data Integrations. In this role you will design and build reliable automation pipelines that ingest, map, and enrich large datasets from multiple external platforms.
About the Role
You will play a key role in connecting our backend systems to external data sources, keeping our data accurate, normalized, and scalable. Working closely with our Founding Engineer, you will bring new data integrations online, enhance normalization logic, and improve ingestion reliability across environments.
Key Responsibilities
* Develop and maintain automation scripts and data ingestion pipelines that pull from external dashboards and portals.
* Build data mappers that convert multiple external schemas into a unified internal data model (see the sketch after this list).
* Implement error handling, logging, and monitoring for all ingestion jobs.
* Collaborate with the engineering team to refine the database structure and enrichment logic.
* Document integration workflows to support scalability and future maintenance.
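To give a concrete sense of the mapping work, here is a minimal TypeScript sketch. Everything in it is hypothetical: the `ExternalListing` and `InternalRecord` shapes and the `mapListing` function are illustrative stand-ins, not our actual schema or code.

```ts
// Illustrative only: these shapes stand in for a real external feed
// and our real internal model.

// A record roughly as one external portal might expose it.
interface ExternalListing {
  id: string;
  price_usd?: string; // some sources send numbers as strings
  updated_at: string; // ISO timestamp
}

// The unified internal shape every source gets mapped into.
interface InternalRecord {
  sourceId: string;
  priceCents: number | null;
  updatedAt: Date;
}

// Map one external record; log and skip anything malformed rather
// than letting a single bad row fail the whole ingestion job.
function mapListing(raw: ExternalListing): InternalRecord | null {
  try {
    const price = raw.price_usd != null ? Number(raw.price_usd) : NaN;
    return {
      sourceId: raw.id,
      priceCents: Number.isFinite(price) ? Math.round(price * 100) : null,
      updatedAt: new Date(raw.updated_at),
    };
  } catch (err) {
    console.error(`mapListing failed for record ${raw.id}:`, err);
    return null;
  }
}
```

The design point the sketch makes is the one we care about in practice: mappers should isolate and log per-record failures so that one malformed row never takes down a whole batch.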
Requirements
To succeed in this position, you'll need:
* A strong background in software engineering or data engineering, preferably in backend or automation-heavy environments.
* Excellent coding skills in JavaScript/TypeScript, primarily with Node.js and NestJS.
* Strong experience integrating front-end frameworks such as Next.js with back-end services.
* Experience with web automation frameworks (Playwright, Puppeteer, or similar; see the example after this list).
* Solid understanding of SQL-based data modeling and transformation pipelines.
* Working proficiency in English.
* Strong ownership mindset and ability to deliver production-quality code with minimal oversight.
* Solid algorithms and data structures fundamentals, including hash tables, trees, stacks, queues, linked lists, DFS, and BFS.
* Hands-on software development experience with attention to scale, latency, and distributed architecture.
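To illustrate the web-automation side of the role, here is a short Playwright sketch in the same spirit: the URL, selectors, and function name are made up for the example, and it shows only the basic pattern of scraping a dashboard table in a headless browser.

```ts
import { chromium } from 'playwright';

// Hypothetical helper: pull the cell text of every row in a dashboard
// table. The selectors here are placeholders, not a real portal's DOM.
async function scrapeDashboardRows(url: string): Promise<string[][]> {
  const browser = await chromium.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle' });
    // Evaluate in the page: collect the trimmed text of each cell.
    return await page.$$eval('table tbody tr', (rows) =>
      rows.map((row) =>
        Array.from(row.querySelectorAll('td'), (cell) =>
          (cell.textContent ?? '').trim(),
        ),
      ),
    );
  } finally {
    await browser.close(); // release the browser even if scraping fails
  }
}
```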
Bonus Qualifications
We'd love to see:
* Experience with Supabase, PostgreSQL, or other modern cloud databases.
* Background in ETL pipelines, data normalization, or automation frameworks.
* Familiarity with browser automation, headless environments, or task scheduling and queue management tools (see the sketch below).
* Passion for building reliable data systems that simplify complex workflows.
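As one example of the scheduling and queue tooling we have in mind, here is a hedged sketch using BullMQ, named only as an illustration; the queue name, payload, and Redis settings are placeholders, and the `repeat.pattern` option assumes a recent BullMQ version.

```ts
import { Queue, Worker } from 'bullmq';

// Placeholder Redis connection settings.
const connection = { host: 'localhost', port: 6379 };

async function scheduleIngestion(): Promise<void> {
  // Enqueue a repeating job that re-runs on a cron schedule (hourly here).
  const ingestQueue = new Queue('ingest', { connection });
  await ingestQueue.add(
    'hourly-sync',
    { source: 'example-portal' }, // hypothetical payload
    { repeat: { pattern: '0 * * * *' } }, // every hour, on the hour
  );

  // The worker that processes each scheduled run.
  new Worker(
    'ingest',
    async (job) => {
      console.log(`ingesting from ${job.data.source}`);
      // ...fetch, map, and upsert records here
    },
    { connection },
  );
}

scheduleIngestion().catch(console.error);
```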