Kinross Gold Corporation

Data Engineer

đź“Ť
Toronto, Ontario, Canada
🧪
Not Applicable
Start Date ASAP

Hybrid Work Environment (3 days in office, 2 days remote with flexible hours)

Dress Code Business Casual

Location Downtown Toronto, Outside of Union Station (TTC & GO accessible)

A Great Place to Work

Who We Are

Kinross is a Canadian-based global senior gold mining company with operations and projects in the United States, Brazil, Mauritania, Chile, and Canada. Our focus on delivering value is based on our four core values of Putting People First, Outstanding Corporate Citizenship, High Performance Culture, and Rigorous Financial Discipline. Kinross maintains listings on the Toronto Stock Exchange (symbol: K) and the New York Stock Exchange (symbol: KGC).

Mining responsibly is a priority for Kinross, and we foster a culture that makes responsible mining and operational success inseparable. In 2021, Kinross committed to a greenhouse gas reduction action plan as part of its Climate Change strategy, reached approximately 1 million beneficiaries through its community programs, and recycled 80% of the water used at our sites. We also achieved record high levels of local employment, with 99% of total workforce from within host countries, and advanced inclusion and diversity targets, including instituting a Global Inclusion and Diversity Leadership Council.

Eager to know more about us? Visit the Kinross Gold Corporation website.

Purpose of Role

Reporting to the Director of Development, Integration and Analytics, the incumbent will be a key member of the IT team, focusing primarily on data engineering, but also assisting with data architecture and management processes.

This person plays a critical role in enabling the organization to leverage data effectively for decision-making and strategic initiatives by ensuring the availability, reliability, and quality of data. In-depth knowledge of data processing, data modelling, and data products for integration and visualization is required.

Job Responsibilities

Data Engineering (70%)

  • Designing and implementing data pipelines to extract, transform, and load (ETL) data from various sources into storage systems (e.g., data warehouses, data lakes). This involves understanding the data sources, defining data extraction methods, and ensuring the integrity and quality of the data throughout the process (a minimal PySpark sketch follows this list).
  • Integrating data from multiple sources and systems to create a unified view of the data landscape within the organization. This involves understanding data formats, protocols, and APIs to facilitate seamless data exchange and interoperability between systems.
  • Developing algorithms and scripts to clean, preprocess, and transform raw data into a format suitable for analysis and reporting. This may involve data normalization, denormalization, aggregation, and other data manipulation techniques.
  • Designing data models and schemas to represent the structure and relationships of the data stored in databases. This involves understanding the business requirements and data usage patterns to design models that support efficient data access and analysis.
  • Implementing data quality checks and validation processes to ensure the accuracy, completeness, and consistency of the data. This includes identifying and resolving data anomalies, errors, and discrepancies to maintain data integrity and reliability.
  • Documenting data engineering processes, workflows, and systems to facilitate knowledge sharing and collaboration within the Data and Analytics team and across the organization. This includes creating documentation for data pipelines, data models, and infrastructure configurations.
  • Providing day-to-day support and technical expertise to both technical and non-technical teams.
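
For illustration, here is a minimal sketch of the kind of extract-transform-load pipeline with quality checks these responsibilities describe, written in PySpark (a natural fit given the Databricks stack named under Required Technical Knowledge). All paths, table names, columns, and validation rules are hypothetical placeholders, not actual Kinross systems.

```python
# Minimal ETL sketch: extract raw records, apply transformations and
# quality checks, and load into a curated zone. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Extract: read raw records from a landing zone (hypothetical path).
raw = spark.read.format("parquet").load("/mnt/landing/production_readings/")

# Transform: normalize column names, cast types, and remove duplicates.
cleaned = (
    raw.withColumnRenamed("Site_Name", "site_name")
       .withColumn("reading_ts", F.to_timestamp("reading_ts"))
       .withColumn("ounces", F.col("ounces").cast("double"))
       .dropDuplicates(["site_name", "reading_ts"])
)

# Quality checks: quarantine rows that fail validation rather than
# silently dropping them, so anomalies can be reviewed and resolved.
valid_rule = F.col("ounces").isNotNull() & (F.col("ounces") >= 0)
valid = cleaned.filter(valid_rule)
quarantined = cleaned.filter(~valid_rule)

# Load: append valid rows to the curated zone and failures to a
# quarantine zone (Delta Lake is the native table format on Databricks).
valid.write.mode("append").format("delta").save("/mnt/curated/production_readings/")
quarantined.write.mode("append").format("delta").save("/mnt/quarantine/production_readings/")
```

Quarantining failed rows rather than discarding them keeps anomalies reviewable, in line with the data-quality responsibility above.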

Data Architecture and Data Management (20%)

  • Assist with management of databases and data storage solutions to store and organize structured and unstructured data effectively. This includes database design, configuration, optimization, and performance tuning to ensure efficient data retrieval and processing.
  • Assist with managing the infrastructure and resources required to support data processing and analysis, including servers, clusters, storage systems, and cloud services.
  • Collaborate with our CyberSecurity and Data Management teams to ensure data security, compliance, and governance standards are consistently met across the data platform, adhering to global data engineering standards and principles.
  • Assist with creating, documenting, and improving repeatable patterns for accessing specific data sources, such as a repeatable playbook for consuming third-party APIs (see the sketch after this list).
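
As a rough illustration of such a repeatable pattern, the sketch below pulls records from a paginated REST API with simple retry logic, suitable as the seed of a playbook. The endpoint, authentication scheme, and response shape are hypothetical assumptions.

```python
# Repeatable pattern for consuming a paginated third-party JSON API,
# with retries on transient server errors. All specifics are hypothetical.
import time
import requests

def fetch_records(base_url: str, api_key: str, page_size: int = 100):
    """Yield records from a paginated JSON API, retrying transient errors."""
    session = requests.Session()
    session.headers.update({"Authorization": f"Bearer {api_key}"})
    page = 1
    while True:
        for attempt in range(3):
            resp = session.get(base_url, params={"page": page, "per_page": page_size})
            if resp.status_code < 500:
                break
            time.sleep(2 ** attempt)  # simple exponential backoff on 5xx errors
        resp.raise_for_status()
        records = resp.json().get("results", [])
        if not records:
            return  # an empty page signals the end of the result set
        yield from records
        page += 1

# Usage (hypothetical endpoint and key):
# for record in fetch_records("https://api.example.com/v1/assays", api_key="..."):
#     process(record)
```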

Data Science (10%)

  • Assist in introducing data science concepts within the organization and play a role in all stages of the data science project life cycle, including identifying suitable project opportunities and partnering with business leaders, domain experts, and end users to build business understanding, develop data understanding, and gather requirements.
  • Prepare the exploratory data analyses, proof-of-concept modeling, and business cases necessary to generate partner consensus and internal support for potential opportunities (a brief exploratory-analysis sketch follows).
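
A minimal sketch of the kind of exploratory data analysis that precedes a proof of concept, again in PySpark; the table and column names are hypothetical placeholders.

```python
# Exploratory-analysis sketch: profile completeness and distribution
# of a dataset before any modeling. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("eda_sketch").getOrCreate()
df = spark.table("curated.production_readings")  # hypothetical table

# Overall shape and per-column null counts.
print(f"rows: {df.count()}, columns: {len(df.columns)}")
df.select([F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]).show()

# Distribution of a key measure by site, as a first look at variance.
df.groupBy("site_name").agg(
    F.count("*").alias("n"),
    F.mean("ounces").alias("mean_ounces"),
    F.stddev("ounces").alias("sd_ounces"),
).orderBy("site_name").show()
```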

Sustainability Expectations

  • Provide guidance and knowledge transfer to IT Support personnel around the world on IT/OT integration aspects for OT applications;
  • Lead a global team of administrators to adhere to a governance model and security standards for the Industrial DMZ AD domain;
  • Support Kinross’ data security requirements by ensuring that a strong internal infrastructure security posture for servers and applications is maintained at all times;
  • Keep IT/OT applications deployment and support documentation up to date. Create new documentation if not available;
  • Promote the use of the available monitoring tools and alerts;
  • Advise senior leadership in the development and implementation of specialized solutions for Kinross, or on implications of decisions relating to IT integration; act as project lead on implementation programs;
  • Continuously update and upgrade knowledge in the IT/OT integration space; build a network of external contacts to stay current on trends and best practices; act as the “go-to person” for best practices inside and outside the company.

Minimum Qualifications and Experience

  • A bachelor’s degree in computer science, statistics, information management, or a related field; or an equivalent combination of training and experience.
  • At least 4 years of post-degree experience with the design, implementation, and operationalization of large-scale data and analytics solutions.
  • A strong background in technology with experience in data engineering, data warehousing, and data product development.
  • Strong understanding of reporting and analytics tools & techniques.
  • Strong knowledge of data lifecycle management concepts.
  • Excellent documentation skills, including workflow documentation.
  • Ability to adapt to a fast-paced, dynamic work environment.

Required Technical Knowledge

  • Expertise designing and implementing ETL/ELT processes using Azure Data Factory and Databricks, including data extraction from various sources, data transformation, and loading data into target systems such as data lakes or warehouses.
  • Solid understanding of the Databricks platform, including its core components, such as Databricks Runtime, Databricks Workspace, Catalog, and Databricks CLI. Comfortable navigating the Databricks environment and performing common tasks such as creating clusters, notebooks, and jobs.
  • Experience with the Spark DataFrame API, Spark SQL, and Spark MLlib for data processing, querying, and machine learning tasks. Able to write efficient Spark code to process large volumes of data in distributed environments (an example follows this list).
  • Proficient in developing and executing notebooks using languages like Python, Scala, or SQL. Experience with notebook features such as interactive visualization, Markdown cells, and magic commands.
  • Familiarity with the integration between Databricks and other Azure services, such as Azure Blob Storage for data storage, Azure Data Lake Storage (ADLS) for data lakes, Azure SQL Database or Azure Synapse Analytics for data warehousing, Azure Key Vault for secrets management, and Azure Event Hubs.
  • Experience with source code management tooling such as Git or Azure DevOps, as well as a strong understanding of deployment pipelines using services such as GitHub Actions, Azure DevOps, or Jenkins.
  • Understanding of the basics of Azure services, including Azure Virtual Machines, Azure Storage (Blob Storage, Data Lake Storage), Azure Networking (Virtual Network, Subnetting), Azure Identity and Access Management (Azure Active Directory, Role-Based Access Control), and Azure Resource Management.
  • Solid foundation in SQL, with the ability to write and optimize queries to manipulate and analyze data efficiently.
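
To illustrate the Spark DataFrame API and Spark SQL requirements side by side, the sketch below performs the same aggregation both ways; table and column names are hypothetical.

```python
# The same aggregation via the Spark DataFrame API and Spark SQL.
# All table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark_sql_sketch").getOrCreate()
readings = spark.table("curated.production_readings")  # hypothetical table

# DataFrame API: filter early so less data is shuffled by the groupBy.
by_site_df = (
    readings.filter(F.col("ounces") > 0)
            .groupBy("site_name")
            .agg(F.sum("ounces").alias("total_ounces"))
)

# Spark SQL: the equivalent query over a temporary view.
readings.createOrReplaceTempView("readings")
by_site_sql = spark.sql("""
    SELECT site_name, SUM(ounces) AS total_ounces
    FROM readings
    WHERE ounces > 0
    GROUP BY site_name
""")

by_site_df.show()
by_site_sql.show()
```

Both forms compile to the same Spark query plan, so the choice between them is largely one of readability and team convention.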

Required Behavioral Competence

  • Communication – demonstrated strength in communicating with internal and external customers, vendors, project teams and senior management. Strong ability to build relationships, work collaboratively, and resolve problems with people at all levels of the organization.
  • Flexibility – the ability to adapt to changing conditions and priorities, to use feedback from the team and the broader organization to change course if it is deemed necessary.
  • Innovation – willingness to embrace new, improved and unconventional ways to address business and technical issues and opportunities.
  • Accountability – ownership of what is being worked on, as well as the willingness to take credit for successes and to accept and learn from failures.
  • Travel – Willingness to travel up to 10%.

Key Information

Full-time
