Data Engineer

Job Description

Job Title: Mid-Level Data Engineer / Data Engineer II

About the Role

We are looking for a skilled and proactive Mid-Level Data Engineer to design, build, and maintain scalable data infrastructure and pipelines. With 2–4 years of experience, the ideal candidate will work independently on data engineering tasks, optimize workflows, and collaborate cross-functionally to support analytical and operational data needs across the organization.

Experience

  • 2–4 years of hands-on experience in data engineering or software engineering with a focus on data-intensive applications

  • Proven experience building ETL/ELT pipelines and working with large-scale data systems

Key Responsibilities

  • Design, develop, and maintain robust, scalable, and efficient batch and streaming data pipelines

  • Implement and optimize ETL/ELT workflows to process data from diverse sources

  • Collaborate with analysts, data scientists, and product teams to define data requirements and deliver solutions

  • Manage data ingestion, transformation, and integration across various platforms

  • Ensure data quality, reliability, and consistency through validation and monitoring processes

  • Contribute to data modeling efforts and maintain logical and physical data models

  • Document data flows, pipeline architecture, and data-related best practices

Education

  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related technical discipline

  • Master’s degree is a plus

Skills & Tools

  • Strong proficiency in SQL and experience working with relational and columnar databases (e.g., PostgreSQL, Snowflake, Redshift)

  • Proficiency in Python (or a similar language) for scripting and data manipulation

  • Experience with cloud platforms such as AWS, Google Cloud Platform (GCP), or Azure

  • Familiarity with data pipeline orchestration tools (e.g., Apache Airflow, Luigi, Prefect)

  • Understanding of data warehousing and data lake architectures

  • Knowledge of data modeling techniques, including dimensional and normalized models

  • Hands-on experience with version control tools (e.g., Git) and working in agile environments

  • Familiarity with big data tools like Spark, Kafka, or Hive is a plus

  • Exposure to CI/CD practices for data pipeline deployment and testing

Job Details
  • Work Type: Full Time
  • Country: United Arab Emirates
  • City: Dubai
  • Job Category: Information Technology