The Data Analytics Engineer II is responsible for the development, expansion, and maintenance of data pipelines across the ecosystem and uses programming skills to develop, customize, and manage integration tools, databases, warehouses, and analytical systems. The Data Analytics Engineer II is responsible for implementing optimal solutions to integrate, store, process, and analyze large data sets. This includes an understanding of methodology, specifications, programming, delivery, monitoring, and support standards.
Job Responsibilities:
Meets expectations of the applicable OneCHRISTUS Competencies: Leader of Self, Leader of Others, or Leader of Leaders
Responsible for analyzing and understanding data sources, participating in requirement gathering, and providing insights and guidance on data technology and data modeling best practices
Analyzes ideas and business and functional requirements to formulate a design strategy
Drafts a workable application design and coding parameters with essential functionalities
Collaborates with team members to identify and address issues by implementing viable technical solutions that are time- and cost-effective without compromising performance or quality
Develops code following industry best practices and adheres to organizational development rules and standards
Participates in the evaluation of proposed system acquisitions or solution development and provides input to the decision-making process relative to compatibility, cost, resource requirements, operations, and maintenance
Integrates software components, subsystems, facilities, and services into the existing technical systems environment; assesses the impact on other systems and works with cross-functional teams within Information Services to ensure positive project impact; and installs, configures, and verifies the operation of software components
Participates in the development of standards, design, and implementation of proactive processes to collect and report data and statistics on assigned systems
Participates in the research, design, development, and implementation of applications, databases, and interfaces using the technology platforms provided
Researches, designs, implements, and manages programs
Fixes problems arising across test cycles and continuously improves the quality of deliverables
Documents each phase of development for future reference and maintenance operations
Uses critical and analytical thinking skills and understanding of programming principles and design
Uses strong technical knowledge of Enterprise Application/Integration Design to develop systems, databases, operating systems, and Information Services
Builds stream-processing systems using solutions such as NiFi or Spark Streaming (see the first sketch following this list)
Uses intermediate-level SQL programming and query performance tuning techniques for data integration and consumption, designing for optimal performance against large data assets within transactional, MPP, and columnar architectures (see the second sketch following this list)
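As an illustration of the stream-processing work described above, here is a minimal sketch using Spark Structured Streaming. The broker address, topic name, and sink are hypothetical placeholders, not part of this role's actual environment; a real pipeline would write to a data lake table or warehouse sink.

```python
# A minimal Spark Structured Streaming sketch: read events from a
# (hypothetical) Kafka topic and count them in 5-minute windows.
# Requires the spark-sql-kafka-0-10 connector package at submit time.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("stream-ingest-sketch").getOrCreate()

# Read a stream of events from a placeholder Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "clinical-events")            # placeholder topic
    .load()
)

# Aggregate event counts into 5-minute tumbling windows; "timestamp"
# is a standard column of the Kafka source schema.
counts = events.groupBy(window(col("timestamp"), "5 minutes")).count()

# Write windowed counts to the console for demonstration purposes.
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```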
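And here is a small sketch of the kind of SQL performance tuning the role calls for: projecting only the needed columns and filtering on a partition column so the engine can prune both columns and partitions. The table and column names are hypothetical examples.

```python
# A small illustration of SQL-level query tuning in Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-tuning-sketch").getOrCreate()

# Untuned: scans every column and every partition of the table.
slow = spark.sql("SELECT * FROM warehouse.encounters")  # placeholder table

# Tuned: project only required columns and filter on the partition
# column (encounter_date), enabling column and partition pruning.
fast = spark.sql("""
    SELECT patient_id, encounter_date, total_charges
    FROM warehouse.encounters
    WHERE encounter_date >= '2024-01-01'
""")

# EXPLAIN shows the pruned scan in the physical plan.
fast.explain()
```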
Requirements:
Bachelor’s degree in computer science, engineering, math, or related field, or foreign equivalent
Advanced knowledge of designing and developing data pipelines and delivering advanced analytics, with open-source Big Data processing frameworks such as Hadoop technologies
Proven competency in programming utilizing distributed computing principles
Demonstrated knowledge of data mining techniques; relational and non-relational databases; Lambda Architecture; and the BI and analytics landscape, preferably in large-scale development environments
Demonstrated proficiency in open-source technologies such as Python, Spark, Hive, HDFS, and NiFi
Experience in data integration with ETL techniques and frameworks; Big Data querying tools such as Hive, Impala, and Spark SQL; and large-scale data lake and data warehouse implementations
Minimum five (5) years’ experience with the design, architecture, and development of enterprise-scale platforms built on open-source frameworks
Minimum three (3) years’ experience in MapReduce and Spark programming (see the sketch following this list)
Minimum three (3) years’ experience developing analytics solutions with large data sets within an OLAP and MPP architecture
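For context on the MapReduce and Spark programming requirement, a minimal sketch of MapReduce-style programming in Spark is shown below: the classic word count, expressed with the RDD map/reduce primitives. The input path is a hypothetical placeholder.

```python
# MapReduce-style word count using Spark's RDD API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mapreduce-sketch").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("hdfs:///data/notes/*.txt")  # placeholder input path

counts = (
    lines.flatMap(lambda line: line.split())   # map: emit one record per word
         .map(lambda word: (word, 1))          # map: key each word with count 1
         .reduceByKey(lambda a, b: a + b)      # reduce: sum counts per key
)

# Print a small sample of the aggregated counts.
for word, n in counts.take(10):
    print(word, n)
```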