Data Engineer

GoldenRule

  • Johannesburg, Gauteng
  • Permanent
  • Full-time
Role Description: The Data Engineer will be responsible for designing, building, and maintaining robust, scalable, and efficient data pipelines and data warehousing solutions. This role is critical for ensuring high-quality data availability for trade evaluation, risk management, reporting, and analytics across the company.

Data Pipeline Development:
  • Design, build, and optimise ETL/ELT pipelines for ingesting, transforming, and loading data from various sources (e.g., Alchemy, Murex, market data feeds) into data platforms (e.g., Quintessence, Snowflake).
  • Ensure data quality, consistency, and accuracy throughout the pipelines (a minimal sketch follows below).
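For illustration only, a minimal Python sketch of the kind of ETL step described above, with a fail-fast data-quality gate. The file names, column names, and quality rules are hypothetical, and a local Parquet file stands in for the warehouse load:

    import pandas as pd

    def extract(csv_path: str) -> pd.DataFrame:
        # Extract: read a raw trade extract (hypothetical CSV export).
        return pd.read_csv(csv_path, parse_dates=["trade_date"])

    def transform(raw: pd.DataFrame) -> pd.DataFrame:
        # Transform: normalise column names and drop exact duplicates.
        df = raw.rename(columns=str.lower).drop_duplicates()
        # Data-quality gate: fail fast on missing keys or negative notionals.
        if df["trade_id"].isna().any():
            raise ValueError("Data quality check failed: null trade_id")
        if (df["notional"] < 0).any():
            raise ValueError("Data quality check failed: negative notional")
        return df

    def load(df: pd.DataFrame) -> None:
        # Load: in practice this would write to the warehouse (e.g. Snowflake
        # via its Python connector); a local Parquet file stands in here.
        df.to_parquet("trades_clean.parquet", index=False)

    if __name__ == "__main__":
        load(transform(extract("murex_trades.csv")))
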
Data Warehousing & Lake Management:
  • Design and implement data models for the data warehouse (e.g., Snowflake) to support analytical and reporting needs (a schema sketch follows below).
  • Manage data lakes and ensure efficient data storage and retrieval.
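An illustrative sketch of the dimensional modelling involved: a small star schema with one fact table and two dimensions. Table and column names are hypothetical, and SQLite stands in for Snowflake so the snippet runs anywhere:

    import sqlite3

    # Star schema: one trade fact table joined to two dimensions.
    DDL = """
    CREATE TABLE dim_instrument (
        instrument_key INTEGER PRIMARY KEY,
        isin           TEXT,
        asset_class    TEXT
    );
    CREATE TABLE dim_counterparty (
        counterparty_key INTEGER PRIMARY KEY,
        name             TEXT,
        country          TEXT
    );
    CREATE TABLE fact_trade (
        trade_id         TEXT PRIMARY KEY,
        trade_date       TEXT,
        notional         REAL,
        instrument_key   INTEGER REFERENCES dim_instrument(instrument_key),
        counterparty_key INTEGER REFERENCES dim_counterparty(counterparty_key)
    );
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(DDL)
    print([row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")])
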
Integration & API Development:
  • Develop and maintain data integration solutions, including APIs (e.g., leveraging MuleSoft) for seamless data exchange between systems (see the endpoint sketch below).
  • Support data mapping efforts for complex financial instruments and exposures.
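A minimal sketch of a read-only data API of the sort described above. Flask stands in here for the MuleSoft layer named in the posting, and the endpoint, fields, and in-memory data are hypothetical:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical stand-in for a warehouse query result.
    TRADES = {"T-1001": {"instrument": "ZAR-IRS-5Y", "notional": 25000000.0}}

    @app.get("/trades/<trade_id>")
    def get_trade(trade_id: str):
        # Serve a single trade record, or 404 if the id is unknown.
        trade = TRADES.get(trade_id)
        if trade is None:
            return jsonify(error="trade not found"), 404
        return jsonify(trade)

    if __name__ == "__main__":
        app.run(port=8080)
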
Performance Optimisation:
  • Monitor data pipeline performance, identify bottlenecks, and implement optimisation strategies (one common pattern is sketched below).
  • Ensure efficient processing of large data volumes required for risk calculations and valuations.
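One common optimisation pattern, sketched under hypothetical names: streaming a large extract in fixed-size chunks so memory use stays flat regardless of file size, with simple timing instrumentation:

    import time
    import pandas as pd

    CHUNK_ROWS = 100_000  # illustrative; tune to available memory

    def chunked_total(csv_path: str) -> float:
        # Aggregate incrementally, one chunk at a time, instead of
        # loading the whole file into memory.
        total = 0.0
        start = time.perf_counter()
        for chunk in pd.read_csv(csv_path, chunksize=CHUNK_ROWS):
            total += chunk["notional"].sum()
        print(f"processed in {time.perf_counter() - start:.2f}s")
        return total
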
Data Governance & Security:
  • Implement data governance policies within data pipelines and data platforms.
  • Ensure data security and compliance with regulatory requirements (e.g., data masking, access controls; a masking sketch follows below).
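For illustration, one way to apply irreversible masking to sensitive columns before data leaves a controlled zone. The column names are hypothetical, and a salted HMAC with a managed key would normally replace the bare hash shown here:

    import hashlib
    import pandas as pd

    SENSITIVE = ["account_number", "client_name"]  # hypothetical PII columns

    def mask(value: str) -> str:
        # One-way masking: SHA-256 digest, truncated for readability.
        return hashlib.sha256(value.encode()).hexdigest()[:12]

    def mask_frame(df: pd.DataFrame) -> pd.DataFrame:
        out = df.copy()
        for col in SENSITIVE:
            if col in out.columns:
                out[col] = out[col].astype(str).map(mask)
        return out
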
Troubleshooting & Support:
  • Provide expert-level support for data-related incidents and problems, including data discrepancies and pipeline failures (see the reconciliation sketch below).
  • Collaborate with Business Analysts and reporting teams to ensure data meets their requirements.
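A sketch of the kind of reconciliation used to chase down data discrepancies: compare a source extract against what landed in the warehouse and surface every mismatched or missing row. Both frames and the key and value columns are hypothetical:

    import pandas as pd

    def reconcile(source: pd.DataFrame, warehouse: pd.DataFrame) -> pd.DataFrame:
        # Outer-join on the business key; rows missing on either side,
        # or with differing notionals, form the discrepancy report.
        merged = source.merge(warehouse, on="trade_id", how="outer",
                              suffixes=("_src", "_wh"), indicator=True)
        return merged[(merged["_merge"] != "both") |
                      (merged["notional_src"] != merged["notional_wh"])]
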
Requirements:
  • Proven experience (5+ years) as a Data Engineer or in a similar role.
  • Strong proficiency in SQL and experience with various relational and NoSQL databases.
  • Experience with cloud data platforms (e.g., Snowflake, AWS Redshift, Google BigQuery).
  • Expertise in building and optimising ETL/ELT pipelines using tools like Apache Airflow, Talend, or custom scripting.
  • Proficiency in programming languages commonly used for data engineering (e.g., Python, Java, Scala).
  • Proficient in data modelling, dimensional data warehousing methodologies, and data lake architecture.
  • Experience with big data technologies (e.g., Spark, Hadoop) is a plus.
  • Experience in a financial services environment, particularly with financial data, is highly desirable.
