Contract Type: Full-time
Workplace Type: On-site
Location: Riyadh
About the Role
We are seeking a highly experienced Senior Data Engineer to join our growing Data & Analytics team. This role is ideal for someone with deep expertise in Informatica and a strong hands-on background in Big Data platforms (Cloudera), real-time streaming technologies (Kafka/Confluent), and data processing frameworks (Spark, Python).

Key Responsibilities
  • Design, develop, and optimize ETL/ELT pipelines using Informatica Data Engineering (DE) and related tools.
  • Build and manage real-time streaming data solutions using Apache Kafka and Confluent.
  • Develop distributed data processing applications on Cloudera using Apache Spark (Python or Scala).
  • Automate and orchestrate workflows using Apache NiFi or similar tools.
  • Work collaboratively with data architects, analysts, and business stakeholders to translate data requirements into robust technical solutions.
  • Ensure adherence to data quality, security, and governance standards across all data pipelines.
  • Troubleshoot performance bottlenecks and integration issues in data workflows.
  • Provide technical leadership, code reviews, and mentorship to junior engineers.
  • Maintain comprehensive documentation including design specifications, data flows, and operational procedures.
  • Stay up to date with industry trends in big data, streaming, and Informatica technologies, and introduce best practices into the team.

Required Qualifications
  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
  • 7+ years of hands-on experience in data engineering, with strong proficiency in Informatica Data Engineering (DE) development and administration.
  • Deep knowledge of ETL design, data modeling, and large-scale data warehousing concepts.
  • Proficient in SQL and Python for data transformation and scripting.
  • Strong experience with Apache Kafka and Confluent Platform for building real-time data streaming solutions.
  • Solid understanding of Apache Spark for big data processing.
  • Practical experience working on the Cloudera Big Data platform (HDFS, Hive, Impala, etc.).
  • Proven ability to optimize performance for ETL and streaming pipelines.
  • Excellent analytical and problem-solving skills with a focus on scalability and maintainability.
  • Strong verbal and written communication skills with the ability to work across technical and non-technical teams.
  • Informatica certification is highly desirable.

Preferred Skills
  • Experience with Informatica Cloud, Informatica Data Quality (IDQ), or Master Data Management (MDM).
  • Familiarity with SingleStore DB or other in-memory databases.
  • Exposure to DevOps tools and practices (e.g., Git, Jenkins, CI/CD pipelines).
  • Experience working with cloud-based big data platforms (AWS, Azure, or GCP).
  • Background in Agile methodologies and experience working in cross-functional teams.

Why Join Us?
  • Work on cutting-edge big data and real-time analytics projects.
  • Collaborate with industry experts and forward-thinking teams.
  • Competitive compensation and benefits package.
  • Opportunity to lead and mentor in a high-impact role.

Term: 1-year contract, with possible extension.

