About this role
This position involves working with Large Language Models (LLMs) to evaluate and improve their performance on realistic software engineering tasks. You will be responsible for reviewing and comparing model-generated code responses, evaluating code diffs for correctness and quality, and providing detailed rationales for your decisions.
The ideal candidate will have a strong background in software engineering, preferably at top-tier product companies, and excellent written communication skills. Experience with LLM-generated code or evaluation work is a plus, but not required.
Key responsibilities
* Review and compare model-generated code responses for each task using a structured ranking system.
* Evaluate code diffs for correctness, code quality, style, and efficiency.
* Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
Requirements
* 7+ years of professional software engineering experience, ideally at top-tier product companies.
* Strong fundamentals in software design, coding best practices, and debugging.
* Excellent ability to assess code quality, correctness, and maintainability.
* Proficient with code review processes and reading diffs in real-world repositories.
* Exceptional written communication skills to articulate evaluation rationale clearly.
Preferred qualifications
* Prior experience with LLM research, developer agents, or AI evaluation projects.
* Background in building or scaling developer tools or automation systems.
Contract details
* Commitment: ~20 hours/week (partial overlap with PST working hours required)
* Type: Contractor (no medical benefits or paid leave)
* Duration: 1 month (potential extensions based on performance and fit)