Data
Brasov (+8 more locations)

What's this role about?

* *Lead and Oversee Data Architecture:* Design, implement, and manage scalable, high-performance data architectures on AWS and Databricks platforms to support business needs.
* *End-to-End Development Lifecycle:* Participate in all phases of the application development lifecycle, from requirements gathering to deployment and maintenance.
* *Technical Leadership:* Provide technical guidance and mentorship to data engineers, ensuring best practices in coding, architecture, and overall data management.
* *Solution Design:* Collaborate with stakeholders to understand business requirements and translate them into effective, scalable data solutions. Participate in both logical and physical design phases.
* *Complex Problem Solving:* Address complex technical challenges and provide innovative solutions, ensuring robustness, scalability, and performance.
* *Quality Assurance:* Review code and enforce quality standards within the team, fostering a culture of continuous improvement.
* *Agile Collaboration:* Work as an integral part of a Scrum team, contributing to sprint planning, execution, and retrospectives.
* *Cross-functional Collaboration:* Engage in a broad range of complex technical and professional work activities, collaborating with cross-functional teams to ensure seamless data integration and management.
* *Strategic Direction:* Work under general direction within a clear framework of accountability, with substantial personal responsibility and autonomy in strategic decisions.

What skills and experience do you need?

* *Extensive Data Engineering Experience:* Proven experience as a Data Engineer in Databricks environments, with a deep understanding of Databricks data engineering tools (Unity Catalog, Delta Lake, Spark SQL, Spark Streaming, Delta Sharing).
* *AWS Expertise:* Extensive experience configuring and optimizing AWS data services (S3, Lambda, Glue, Step Functions, DynamoDB, Kinesis, Athena, EventBridge, SNS) for seamless data integration and event-driven workflows.
* *Advanced Data Pipeline Management:* Expertise in developing and maintaining data pipelines using Python within an AWS-based platform, ensuring efficient data processing and management.
* *Data Automation and Ingestion:* Strong skills in data automation and ingestion, utilizing tools and frameworks to streamline data workflows.
* *Infrastructure as Code (IaC):* Proficiency in implementing IaC using AWS CDK to manage and automate cloud resources effectively.
* *Apache Iceberg:* Experience utilizing Apache Iceberg to enhance data storage and organization for improved scalability and performance.
* *Version Control and CI/CD:* Experience managing version control and CI/CD workflows using GitHub, leveraging GitHub Actions for continuous integration and deployment.
* *Monitoring and Observability:* Experience enhancing the monitoring and observability of data pipelines using Datadog, optimizing performance and reliability.
* *Client-facing and Communication Skills:* Excellent client-facing skills, with the ability to communicate complex technical concepts to non-technical stakeholders.
* *English Proficiency:* Strong English language skills, both written and verbal.
* *Agile Methodologies:* Experience working within an Agile delivery framework, contributing to continuous improvement and efficient project delivery.
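As a rough illustration of the event-driven pipeline work described above, a minimal Lambda-style handler in Python might extract object references from an S3 put-event payload for downstream processing. The event shape follows the standard S3 notification format, but the function name and response structure here are illustrative assumptions, not part of the role description; a real pipeline would typically use boto3, Glue, or Spark for the actual processing.

```python
import json


def handler(event, context=None):
    """Illustrative Lambda-style handler: collect s3:// URIs from an
    S3 event-notification payload and return them for downstream steps.
    (Sketch only; real pipelines would hand off to boto3/Glue/Spark.)"""
    uris = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            uris.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"objects": uris})}
```

Deploying and wiring such a function to an S3 bucket or EventBridge rule is exactly the kind of resource management the role's AWS CDK (IaC) requirement covers.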


Join our team

Ready to create your own Endava story? Your journey starts here!
Life at Endava

Our global community

We’re a diverse group of people who share, create and connect over projects and beyond in the communities we live in. Working across 28 countries on six continents, we aim to be a force for good locally and turn our passions into drivers of change.

