Software Engineer - Quality & AI Infrastructure
Company: Hirebus
Location: Remote in Brazil
Pay: $30K-$60K per year
We're hiring an experienced engineer to own two things: the quality of our codebase and the early foundations of our AI agent work.
The role
Most of your time will be spent making our existing software more reliable. You'll dig into production bugs, trace them to root cause, and write the tests that should have caught them. You'll review pull requests in a way that raises the bar without demoralizing anyone. You'll tackle the tech debt that everyone complains about in retros but nobody picks up.
The other part — and this is growing — is our AI agent work. We're building tooling, orchestration, and evaluation systems for LLM-powered features. This isn't a research project. It's production software that needs the same rigor as everything else we ship. We need someone who can bring that rigor.
What you'll do
* Debug and fix issues across the stack, in both staging and production
* Build meaningful test coverage (the kind that actually catches regressions)
* Review code thoroughly — with context, not just style nits
* Refactor the parts of the codebase that slow the team down
* Help define how we ship: observability, release process, quality gates
* Work with product and design to land features cleanly
* Contribute to our AI agent infrastructure: tool-calling, orchestration, evaluation, safety
Technical requirements
* 7+ years of professional engineering experience with a track record of shipping production SaaS products
* Expert in Node.js, React, and TypeScript with strong product architecture instincts (scalability, maintainability, speed of iteration)
* Hands-on Supabase experience: Postgres schema design, RLS policies, edge functions, and auth
* Shipped production LLM-powered features (OpenAI, Anthropic, or similar) with a working understanding of prompt engineering, evals, latency/cost tradeoffs, and guardrails
* Experience building agentic workflows, RAG systems, or tool-use integrations (MCP, function calling)
* Excellent written English and a self-directed async work style, with meaningful working-hours overlap with US Mountain Time
* Comfortable using AI-native development tools (Claude Code, Cursor, or similar) as part of daily workflow
Your first 90 days
* Month 1: You've shipped real bug fixes. You're in the PR review rotation and contributing useful feedback. You have context on the parts of the system that break most often.
* Month 2: Test coverage in your area has improved measurably. You've identified at least one chunk of tech debt worth prioritizing and started on it.
* Month 3: You've shipped a meaningful refactor. You've made your first contributions to the AI agent infrastructure — something concrete, with results we can point to.
We're not looking for someone who wants to rewrite everything from scratch. We want someone who makes the team steadily better — through reliable fixes, honest reviews, and the kind of infrastructure work that compounds over time. If you're also curious about what engineering looks like as AI agents become part of the workflow, even better.