Middle AI QA Engineer
Ukraine

What is the project, and why should you care?

Symphony Solutions is a Cloud and AI-driven IT company headquartered in the Netherlands. We are a premier software provider of custom iGaming, Healthcare, and Airline solutions. Devoted to delivering the highest quality of service, we offer our expertise in full-cycle software development, cloud engineering, data and analytics, AI services, digital marketing orchestration, and more. Since our founding in 2008, Symphony Solutions has served many international clients, primarily in Western Europe and North America.

We're looking for a Middle AI QA Engineer to join our young and dynamic AI team. This role is crucial in ensuring the quality, stability, and trustworthiness of our AI systems. You will be responsible for evaluating LLM outputs, testing AI agents, and automating QA workflows to streamline the development and deployment of AI-driven features.

You will be an excellent fit for this position if you have:
- 2+ years of experience in QA or testing roles (manual or automation)
- Experience testing web applications, with a strong understanding of browser dev tools
- Understanding of client-server architecture, HTTP, and REST APIs
- Experience working with JSON, Postman, Swagger, or similar tools
- Familiarity with test case management and bug tracking tools (Jira, TestRail, etc.)
- Experience writing clear, structured bug reports and documentation
- Ability to identify usability issues and ensure alignment with business requirements
- Knowledge of prompt engineering, prompt injection techniques, and hallucination detection
- English proficiency (Upper-Intermediate or higher)

Nice to Have:
- Solid understanding of how LLMs and AI agents work at a functional level (including n8n and similar tools)
- Experience working with GenAI products or AI-driven interfaces
- Experience with test automation tools or scripting (Python, Playwright, Selenium, etc.)
- Experience with RAG systems or autonomous agents
- Knowledge of performance testing or load testing tools
- Familiarity with accessibility testing principles and tools

Here are some of the things you'll be working on:
- Evaluate the output of LLM-based systems (text accuracy, coherence, relevance, hallucination detection, etc.)
- Develop and execute test plans for AI agents and workflows
- Automate QA processes using scripting (Python preferred) or low-code tools
- Identify and report defects, inconsistencies, and edge cases in AI responses
- Collaborate closely with ML Engineers, Product Managers, and Developers
- Contribute to the design of evaluation metrics and benchmark tests for GenAI-based features
- Monitor system behavior post-deployment and help triage production issues
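To give a flavor of the kind of scripted LLM-output check the role involves, here is a minimal sketch in Python. It flags a possible hallucination when an answer contains content words absent from the source context; `generate_answer` is a hypothetical stub standing in for a real model call, and the simple word-overlap heuristic is only an illustration, not a production evaluation metric.

```python
# Minimal sketch of an automated LLM-output groundedness check.
# `generate_answer` is a hypothetical stand-in for a real model API call.
import re

def generate_answer(question: str, context: str) -> str:
    # Stub model: echoes the first sentence of the context
    # (replace with a real LLM call in practice).
    return context.split(".")[0] + "."

def ungrounded_words(answer: str, context: str) -> set:
    """Content words (longer than 3 letters) in the answer that never appear in the context."""
    def tokenize(s):
        return {w for w in re.findall(r"[a-z]+", s.lower()) if len(w) > 3}
    return tokenize(answer) - tokenize(context)

context = "Symphony Solutions was founded in 2008 and is headquartered in the Netherlands."
answer = generate_answer("When was the company founded?", context)

# A grounded answer yields no ungrounded content words.
assert ungrounded_words(answer, context) == set()

# A fabricated detail is flagged for human review.
assert "paris" in ungrounded_words("The company is based in Paris.", context)
```

In real use, a check like this would run over a benchmark set of question/context pairs, with flagged answers routed to a human reviewer rather than failing the build outright.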