Intermediate Data Engineer

Hyperclear Tech

  • South Africa
  • Permanent
  • Full-time
  • 6 days ago
JOB TITLE: Intermediate Data Engineer

LOCATION: Stellenbosch / Johannesburg (Hybrid)

ABOUT CYBERLOGIC:

Cyberlogic is a trusted Managed Solutions Provider with offices in South Africa, Mauritius, and the UK. Serving a diverse range of clients across numerous industries, including the international maritime sector, Cyberlogic specialises in IT leadership, cyber security, cloud solutions, and business intelligence. For almost three decades, Cyberlogic has been committed to enabling digital transformation by delivering unquestionable value.

Our delivery focus has enabled us to build a national and international footprint of loyal clients who rely on us to provide transparent, open guidance to improve their processes, grow their businesses, and secure their data.

Cyberlogic is part of the Hyperclear Technology group, which boasts a diverse technology offering including robotic process automation (RPA), business process management (BPM), data analytics, and decisioning technology.

Through our non-profit, R4C (Ride for a Child), we partner with the Bright Start Education Foundation, an organisation empowering deserving learners from underprivileged communities and providing holistic support and guidance throughout their educational careers.

OUR VALUES:
  • We challenge ourselves to be more AWESOME
  • We are driven to KEEP learning and EVOLVING
  • We look beyond symptoms to identify and RESOLVE ROOT CAUSES
  • We hold each other accountable through CANDID and constructive FEEDBACK
  • We respect and care for each other and know we will only SUCCEED if we work AS A TEAM
  • We CARE deeply ABOUT the success of CYBERLOGIC
  • We FINISH WHAT WE START
  • We always GIVE OUR BEST even if it means putting in the hard yards
  • We KEEP THINGS SIMPLE
PURPOSE OF POSITION:

The Intermediate Data Engineer will design and implement Azure-based data solutions – with a strong focus on Azure Synapse Analytics – to meet client needs in a consulting environment. In this individual contributor (non-managerial) role, you will work closely with clients in industries such as logistics, agriculture, and manufacturing to understand their business requirements and translate them into effective technical data solutions.

The role requires hands-on expertise in building data pipelines, data models, and dashboards using Python, SQL, and Power BI, ensuring that data projects deliver clear value and actionable insights to the client's business. You will be a key technical resource, collaborating with cross-functional teams to drive data-driven strategies while upholding best practices in data quality and security.

KEY RESPONSIBILITIES:

Data Solution Design:
  • Gather client requirements and design end-to-end data architectures on Azure, particularly using Azure Synapse Analytics, to build scalable data warehouses and analytics solutions.
ETL Pipeline Development:
  • Develop and maintain robust data integration pipelines (ETL/ELT) using Azure services (e.g. Azure Synapse Pipelines, Azure Data Factory) to ingest, transform, and integrate data from various sources.
Data Warehousing & Modeling:
  • Implement and optimize data storage solutions, including relational databases and data lakes, and create data models (star schema, snowflake schema, etc.) that support reporting and analysis.
Coding & Data Processing:
  • Write efficient SQL queries and Python scripts to process large datasets, perform data cleaning, and ensure data integrity for analytics tasks.
Client Collaboration:
  • Work directly with client stakeholders to explain technical concepts, provide updates on project progress, and adjust solutions based on feedback. Serve as a consultant who can translate business needs into technical strategies and solutions, ensuring client expectations are met or exceeded.
Quality and Performance:
  • Ensure data accuracy, quality, and security across all solutions. Optimize system performance (e.g. query tuning in SQL, scaling Azure resources) to handle growing data volumes efficiently.
Cross-Functional Coordination:
  • Collaborate with data analysts, data scientists, and IT teams to integrate solutions into the broader data ecosystem. Coordinate with project managers and business analysts to align deliverables with business goals.
Documentation:
  • Create clear documentation for data pipelines, architectures, and processes. If needed, provide knowledge transfer or training to client IT teams on the deployed solutions.
KEY REQUIREMENTS:
  • Bachelor's degree or equivalent qualification in Computer Science, Information Systems, or a related field.
  • Approximately 3-5 years of hands-on experience in data engineering or a similar role. Experience in a consulting or client-facing environment is highly desirable, with a track record of delivering data projects to external clients.
  • Azure Proficiency: Strong proficiency with Microsoft Azure cloud data services, particularly Azure Synapse Analytics (data warehousing and Spark workloads). Experience with other Azure data components such as Azure Data Factory, Azure Data Lake Storage, and Azure Databricks is a must.
  • Programming & SQL: Advanced SQL skills (writing and optimizing complex queries, stored procedures) and solid programming ability in Python for data processing. Familiarity with PySpark or Spark SQL in the Azure Synapse environment is a plus.
  • Data Modeling & ETL: Experience in data modeling (designing relational schemas, dimensional models) and building ETL/ELT processes. Comfortable working with structured and unstructured data and implementing data transformation workflows.
  • Domain Knowledge: Exposure to data projects in logistics, agriculture, or manufacturing industries. Ability to quickly learn domain-specific data requirements and pain points in these sectors to craft relevant solutions.
  • Consulting Skills: Excellent communication and interpersonal skills. Proven ability to work with clients to gather requirements and explain technical concepts in business-friendly terms. Strong problem-solving skills with a customer-focused mindset.
  • Individual Contributor: Self-motivated and able to work independently as a hands-on engineer. Capable of managing your own tasks and deadlines effectively without the need for supervisory responsibilities (this role has no direct reports).
  • Other: Strong understanding of data governance, security, and privacy best practices in data engineering. Familiarity with version control (Git) and collaborative project tools. High attention to detail and a continuous improvement attitude.
PREFERRED SKILLS:
  • Relevant data engineering certifications are a plus. For example, the Microsoft Certified: Azure Data Engineer Associate certification (exam DP-203) is desirable, as it demonstrates validated Azure data engineering expertise.
  • Advanced Azure Tools: Experience with Azure DevOps for CI/CD pipelines in data projects, and Infrastructure-as-Code tools (Terraform/ARM templates) for deploying data infrastructure.
  • Big Data & Streaming: Familiarity with big data frameworks or services (Spark, Hadoop) and streaming data tools (Kafka, Azure Event Hub, Azure Stream Analytics) for handling large-scale or real-time data would be beneficial.
  • Additional BI Tools: Knowledge of other BI/analytics tools (e.g. Tableau) or experience with advanced analytics and data science workflows.
  • Industry Knowledge: Deeper understanding of business processes in logistics, agriculture, or manufacturing, enabling quicker translation of industry-specific requirements into technical solutions.
  • Continuous Learning: Demonstrated enthusiasm for staying updated on emerging data engineering technologies and best practices, which could include participation in conferences, courses, or contributions to open-source projects.
Should you work from home, it is your responsibility to ensure that you have uninterrupted internet connectivity and a 'work-like' environment at your home location to deliver your best in terms of performance and productivity.
