We're seeking an AI/LLM Engineer to join a small team developing backend features for a Retrieval-Augmented Generation (RAG) service.
Main responsibilities include building and optimizing RAG pipelines for search across large collections of medical and scientific documents.
Key areas of focus include:
 * Semantic chunking strategies (a brief sketch follows this list)
 * Improving the automated evaluation pipeline
 * Fine-tuning LLMs for textual RAG use cases
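To illustrate the first focus area, below is a minimal sketch of one common semantic chunking approach: embed consecutive sentences and start a new chunk wherever adjacent-sentence similarity drops. The use of sentence-transformers, the model name, the naive sentence splitter, and the 0.6 threshold are illustrative assumptions, not details from this posting.

```python
# Minimal sketch of similarity-based semantic chunking.
# Assumptions (not from the posting): sentence-transformers for embeddings,
# the model name, and the 0.6 threshold are illustrative choices.
import re

import numpy as np
from sentence_transformers import SentenceTransformer


def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter; a real pipeline would use a proper tokenizer."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def semantic_chunks(text: str, model_name: str = "all-MiniLM-L6-v2",
                    threshold: float = 0.6) -> list[str]:
    """Group consecutive sentences into chunks, breaking where the
    embedding similarity of adjacent sentences falls below `threshold`."""
    sentences = split_sentences(text)
    if not sentences:
        return []

    model = SentenceTransformer(model_name)
    # Normalized embeddings so the dot product equals cosine similarity.
    embeddings = model.encode(sentences, normalize_embeddings=True)

    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        similarity = float(np.dot(embeddings[i - 1], embeddings[i]))
        if similarity < threshold:
            chunks.append(" ".join(current))
            current = []
        current.append(sentences[i])
    chunks.append(" ".join(current))
    return chunks


if __name__ == "__main__":
    sample = (
        "Aspirin inhibits platelet aggregation. It is widely used after "
        "myocardial infarction. The trial enrolled 500 patients across three sites."
    )
    for chunk in semantic_chunks(sample):
        print("-", chunk)
```

In practice the threshold, embedding model, and splitting granularity would be tuned against the automated evaluation pipeline mentioned above.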
The ideal candidate will have hands-on experience with model development, backend engineering, and deployments to non-production environments.
Collaboration with DevOps teams is crucial for production rollout.
This role requires strong technical skills and attention to detail.