Senior Data Engineer
Ukraine

What is the project and why should you care?

Symphony Solutions, headquartered in Amsterdam (The Netherlands), is a fast-growing international organization providing Western European clients with high-quality IT, BPO, and consultancy services.

We are seeking a highly skilled and experienced Senior Data Engineer to join our team. This role is pivotal in building, optimizing, and scaling our data infrastructure to support advanced Machine Learning (ML) and Artificial Intelligence (AI) applications. The ideal candidate will have extensive hands-on experience with cloud AI services, particularly Amazon Bedrock or an equivalent, and expertise in AWS, SQL, Python, ETL tools, and data lake/data warehouse solutions.

You will be an excellent fit for this position if you have:
- Proven experience as a Senior Data Engineer or in a similar role
- Strong hands-on experience with Amazon Bedrock or other cloud AI/ML services (e.g., Azure AI, Vertex AI)
- Advanced proficiency in AWS services (e.g., S3, Redshift, Lambda, Glue, EMR)
- Expertise in Python and SQL for data manipulation, analysis, and scripting
- In-depth knowledge of ETL tools (e.g., Apache Airflow, AWS Glue, Talend)
- A solid understanding of data lake and data warehouse architectures
- Experience in database design and implementation
- Familiarity with building end-to-end pipelines for machine learning and AI workflows
- Strong problem-solving skills and the ability to work independently
- Excellent communication skills and the ability to collaborate with cross-functional teams

Preferred Qualifications:
- Experience with real-time data processing and streaming platforms (e.g., Apache Kafka, Kinesis)
- Knowledge of DevOps practices and CI/CD pipelines for data engineering workflows
- Familiarity with big data tools and frameworks (e.g., Spark, Hadoop)
- Certification in AWS or another cloud platform

Here are some of the things you’ll be working on:
- Design, develop, and maintain scalable data architectures, pipelines, and workflows to support ML and AI use cases
- Leverage Amazon Bedrock or other cloud AI services to deploy and manage advanced AI solutions
- Build and maintain data lakes and data warehouses, ensuring high performance, scalability, and reliability
- Develop and implement ETL processes to ingest, transform, and load data from various sources
- Optimize database performance and ensure data integrity across systems
- Design and implement database solutions tailored to business needs, following best practices in data architecture and design
- Drive initiatives to improve data quality, governance, and security