Remote Senior Software Engineer for Large Language Model Evaluation
About this role:
We are seeking a highly skilled senior software engineer to join our team in evaluating and improving the performance of Large Language Models (LLMs) on realistic software engineering tasks. This is an exciting opportunity to work at the intersection of software engineering, open-source ecosystems, and frontier AI.
Project Overview:
We are building high-quality evaluation and training datasets, derived from public GitHub repository histories, to improve how LLMs handle realistic software engineering tasks. The datasets are curated through a human-in-the-loop process.
Key Responsibilities:
* Evaluate and rank model-generated code responses for correctness, code quality, style, and efficiency.
* Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
* Maintain high consistency and objectivity across evaluations.
* Collaborate with the team to identify edge cases and ambiguities in model behavior.
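To give a feel for the structured output this work involves, here is a minimal sketch of what a single evaluation record might look like. The class name, field names, and 1-5 scoring scale are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """Hypothetical per-response evaluation record (illustrative only)."""
    response_id: str
    correctness: int   # 1-5 scale (assumed)
    code_quality: int  # 1-5 scale (assumed)
    style: int         # 1-5 scale (assumed)
    efficiency: int    # 1-5 scale (assumed)
    rationale: str     # written explanation supporting the scores

    def overall(self) -> float:
        """Unweighted average across the four axes (one possible aggregation)."""
        return (self.correctness + self.code_quality
                + self.style + self.efficiency) / 4

ev = Evaluation(
    "resp-001", 4, 5, 4, 3,
    "Correct fix and idiomatic style, but the O(n^2) loop could be O(n).",
)
print(ev.overall())  # 4.0
```

In practice the rationale is the most important field: scores alone are not actionable without a clear written justification.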
Requirements:
* 7+ years of professional software engineering experience, ideally at top-tier product companies.
* Strong fundamentals in software design, coding best practices, and debugging.
* Excellent ability to assess code quality, correctness, and maintainability.
* Proficient with code review processes and reading diffs in real-world repositories.
* Exceptional written communication skills to articulate evaluation rationale clearly.
Bonus Points:
* Experience in LLM research, developer agents, or AI evaluation projects.
* Background in building or scaling developer tools or automation systems.
Engagement Details:
* Commitment: ~20 hours/week.
* Type: Contractor.
* Duration: 1 month (potential extensions based on performance and fit).