Exciting New Career Opportunity: Software Engineer - Quality & AI Infrastructure

Company: Hirebus
Location: Remote in Brazil
Pay: $30K-$60K per year

We're hiring a mid-level engineer to own two things: the quality of our codebase and the early foundations of our AI agent work.

The role

Most of your time will be spent making our existing software more reliable. You'll dig into production bugs, trace them to root cause, and write the tests that should have caught them. You'll review pull requests in a way that raises the bar without demoralizing anyone. You'll tackle the tech debt that everyone complains about in retros but nobody picks up.

The other part — and this is growing — is our AI agent work. We're building tooling, orchestration, and evaluation systems for LLM-powered features. This isn't a research project. It's production software that needs the same rigor as everything else we ship. We need someone who can bring that rigor.

What you'll do

- Debug and fix issues across the stack, from staging through production
- Build meaningful test coverage (the kind that actually catches regressions)
- Review code thoroughly — with context, not just style nits
- Refactor the parts of the codebase that slow the team down
- Help define how we ship: observability, release process, quality gates
- Work with product and design to land features cleanly
- Contribute to our AI agent infrastructure: tool-calling, orchestration, evaluation, safety

Technical Requirements

- 7+ years of professional engineering experience with a track record of shipping production SaaS products
- Expert in Node.js, React, and TypeScript with strong product architecture instincts (scalability, maintainability, speed of iteration)
- Hands-on Supabase experience: Postgres schema design, RLS policies, edge functions, and auth
- Shipped production LLM-powered features (OpenAI, Anthropic, or similar) with a working understanding of prompt engineering, evals, latency/cost tradeoffs, and guardrails
- Experience building agentic workflows, RAG systems, or tool-use integrations (MCP, function calling)
- Excellent written English and a self-directed async work style, with meaningful overlap with US Mountain Time
- Comfortable using AI-native development tools (Claude Code, Cursor, or similar) as part of a daily workflow

Your first 90 days

Month 1: You've shipped real bug fixes. You're in the PR review rotation and contributing useful feedback. You have context on the parts of the system that break most often.

Month 2: Test coverage in your area has improved measurably. You've identified at least one chunk of tech debt worth prioritizing and started on it.

Month 3: You've shipped a meaningful refactor. You've made your first contributions to the AI agent infrastructure — something concrete, with results we can point to.

We're not looking for someone who wants to rewrite everything from scratch. We want someone who makes the team steadily better — through reliable fixes, honest reviews, and the kind of infrastructure work that compounds over time. If you're also curious about what engineering looks like as AI agents become part of the workflow, even better.