About This Project
Our mission is to empower the next generation of AI systems to reason about and work with real-world software repositories.
This project involves building high-quality evaluation and training datasets to improve how Large Language Models (LLMs) interact with realistic software engineering tasks.
We are looking for a skilled professional to collaborate directly with AI researchers on AI-powered software development, work with high-impact open-source projects, and help shape the design of datasets that will train and benchmark next-generation LLMs.
Responsibilities:
* Review and compare model-generated code responses for each task using a structured ranking system.
* Evaluate code diffs for correctness, code quality, style, and efficiency.
* Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
* Maintain high consistency and objectivity across evaluations.
* Collaborate with the team to identify edge cases and ambiguities in model behavior.
Requirements:
* 7+ years of professional software engineering experience, ideally at top-tier product companies.
* Strong fundamentals in software design, coding best practices, and debugging.
* Excellent ability to assess code quality, correctness, and maintainability.
* Proficient with code review processes and reading diffs in real-world repositories.
* Exceptional written communication skills to articulate evaluation rationale clearly.
* Prior experience evaluating LLM-generated code, or other model-evaluation work, is a plus.
Commitment and Engagement Details:
* Commitment: ~20 hours/week (partial overlap with PST working hours required)
* Type: Contractor (no medical/paid leave)
* Duration: 1 month (potential extensions based on performance and fit)