Skilled Software Engineers Wanted
About Us
At the intersection of software engineering, open-source ecosystems, and frontier AI, we build high-quality evaluation and training datasets that improve how Large Language Models (LLMs) handle realistic software engineering tasks. Our goal is to curate verifiable software engineering challenges from public GitHub repository histories using a human-in-the-loop process.
Project Overview
* We focus on evaluating code quality, correctness, maintainability, and style.
* Our project aims to train and benchmark next-gen LLMs for real-world software development tasks.
Why This Role Is Unique
* Collaborate directly with AI researchers shaping the future of AI-powered software development.
* Work with high-impact open-source projects and evaluate how LLMs perform on real bugs, issues, and developer tasks.
* Influence dataset design that will train and benchmark next-gen LLMs.
* What a typical day looks like:
o Review and compare 3–4 model-generated code responses for each task using a structured ranking system.
o Evaluate code diffs for correctness, code quality, style, and efficiency.
o Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
o Maintain high consistency and objectivity across evaluations.
o Collaborate with the team to identify edge cases and ambiguities in model behavior.
Required Skills and Qualifications
* 7+ years of professional software engineering experience, ideally at top-tier product companies.
* Strong fundamentals in software design, coding best practices, and debugging.
* Excellent ability to assess code quality, correctness, and maintainability.
* Proficient with code review processes and reading diffs in real-world repositories.
* Exceptional written communication skills to articulate evaluation rationale clearly.
* Prior experience with LLM-generated code or evaluation work is a plus.
Engagement Details
* Commitment: ~20 hours/week.
* Type: Contractor.
* Duration: 1 month, with potential extensions based on performance and fit.