Ref: #70343

Databricks Champion (x3)

  • Practice: Data

  • Technologies: Business Intelligence Jobs and Data Recruitment

  • Location: Almere, Netherlands

  • Type: Contract

About the Role

We are seeking a Databricks Champion to lead the adoption, optimization, and best-practice use of the Databricks platform across our organization. This individual will act as both a technical expert and strategic evangelist, empowering data engineers, analysts, and scientists to deliver scalable, high-performance data solutions.

The ideal candidate will have deep experience in big data architecture, Delta Lake, and Spark, along with a passion for mentoring teams, driving platform excellence, and fostering a data-driven culture.

Key Responsibilities

  • Champion Databricks adoption across teams by defining best practices, frameworks, and reusable components.

  • Design and implement scalable data pipelines leveraging Apache Spark, Delta Lake, and Databricks SQL.

  • Collaborate with cross-functional teams (Data Engineering, Analytics, Machine Learning, and IT) to develop robust, cloud-native data solutions.

  • Optimize platform performance, including cluster configuration, cost management, and job orchestration.

  • Lead enablement initiatives, including Databricks workshops, office hours, and documentation for engineers and analysts.

  • Evaluate and integrate new Databricks features and tools (e.g., Unity Catalog, Delta Live Tables, Model Serving).

  • Act as the primary liaison between internal users and Databricks customer success and support teams.

  • Promote a data excellence culture by advocating for data quality, governance, and automation across the data lifecycle.

Qualifications

Required:

  • 5+ years of experience in data engineering, data science, or platform engineering.

  • 2+ years of hands-on experience with Databricks (including Delta Lake and Spark optimization).

Preferred:

  • Databricks Certified Data Engineer / Data Scientist.

  • Experience implementing Unity Catalog or Delta Live Tables.

  • Background in data lakehouse or enterprise-scale analytics platforms.

  • Experience with MLflow or machine learning pipelines.

  • Strong experience with Python, SQL, and PySpark.

  • Deep understanding of cloud data architectures (AWS, Azure, or GCP).

  • Experience with CI/CD pipelines, infrastructure-as-code, and data governance.

  • Strong communication and collaboration skills — able to translate complex concepts for diverse audiences.

Attach a resume file. Accepted file types are DOC, DOCX, PDF, HTML, and TXT.