System Engineering Big Query

Vodafone

Location:
Romania, Bucharest

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are seeking a System Engineering Specialist to support and evolve enterprise-scale Big Data platforms within Big Data Platform Services RO, based in Bucharest. This role focuses on operational management, platform optimisation, and the delivery of robust Big Data solutions that support critical IT infrastructure and business needs. The individual will contribute across the full lifecycle of IT projects, combining hands-on technical expertise with a structured, service-oriented approach aligned to global standards.

Job Responsibility:

  • Participate in all phases of IT projects, from requirements definition and analysis through to implementation, testing, and operational handover
  • Define technical requirements and evaluate alternative IT solutions aligned with platform standards and business needs
  • Design, configure, implement, and test Big Data software solutions impacting IT infrastructure
  • Develop and maintain scripts and programs at operating system and database level (e.g. Python, shell scripting, SQL) to automate recurring activities and extend platform functionality
  • Provide operational management and ongoing support for supported Big Data platforms
  • Optimise existing solutions through architectural improvements, software upgrades, and technology refresh initiatives to enhance performance, availability, and resilience
  • Investigate performance incidents, analyse root causes, and implement or propose effective remediation actions
  • Perform proactive maintenance based on monitoring tools, alerts, and notifications
  • Contribute to proof-of-concept activities and the evaluation of emerging technologies for potential adoption
  • Produce and maintain clear technical and operational documentation, including change management artefacts
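The scripting responsibilities above (Python, shell, SQL automation of recurring activities and proactive maintenance driven by monitoring alerts) can be sketched as follows. This is a minimal illustrative example, not Vodafone's actual tooling; the hostnames, metric names, and thresholds are assumptions invented for the sketch.

```python
# Illustrative sketch: flag Big Data platform nodes for proactive maintenance
# based on simple monitoring metrics. Metric names and thresholds are
# hypothetical, chosen only to demonstrate the kind of recurring automation
# the role describes.

def flag_hosts_for_maintenance(metrics, disk_threshold=85.0, load_threshold=8.0):
    """Return hostnames whose disk usage (%) or load average exceeds a threshold."""
    flagged = []
    for host, stats in metrics.items():
        disk_high = stats.get("disk_pct", 0.0) >= disk_threshold
        load_high = stats.get("load_avg", 0.0) >= load_threshold
        if disk_high or load_high:
            flagged.append(host)
    return sorted(flagged)

if __name__ == "__main__":
    # Sample metrics as a monitoring exporter might report them (invented data).
    sample = {
        "bdp-node-01": {"disk_pct": 91.5, "load_avg": 2.1},
        "bdp-node-02": {"disk_pct": 40.2, "load_avg": 9.3},
        "bdp-node-03": {"disk_pct": 55.0, "load_avg": 1.0},
    }
    print(flag_hosts_for_maintenance(sample))  # ['bdp-node-01', 'bdp-node-02']
```

In practice a script like this would be fed by the platform's monitoring tools and would open tickets or trigger remediation rather than just print a list.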

Requirements:

  • 3–5 years’ experience in infrastructure or platform administration
  • Proven hands-on experience with Big Data solutions such as BigQuery, Tableau, and Teradata
  • Confident working with Red Hat Enterprise Linux and understanding hardware architecture concepts
  • Experienced in procedural programming and shell scripting, with a structured approach to problem solving
  • Familiar with ITIL practices, cyber security principles, and a general understanding of SOX requirements
  • Highly organised, able to manage multiple priorities and deliver results under time pressure
  • A clear communicator with strong documentation skills and a customer-oriented mindset
  • Comfortable working in distributed and remote teams
  • Fluent in written and spoken English
  • Holds a university degree in a relevant field

What we offer:
  • Hybrid way of working: 2 days per week / 8 days per month
  • Medical and dental services
  • Life and hospitalization insurance
  • Dedicated employee phone subscription
  • Take control of your benefits and choose any of the following options within the budget: meal tickets, private pension, vacation vouchers, or cultural vouchers
  • Special discounts for gyms and retailers
  • Annual Company Bonus
  • Ongoing Education – we continuously invest in you to ensure you have everything needed to excel on the job and enhance your skills
  • You get to work with tried and trusted web technology
  • We let you write your own story by planning vacations: go for a trip, experience new things, have fun and enjoy your 23 days off
  • Special Paternal Program - 4 months of paid paternity leave

Additional Information:

Job Posted:
May 14, 2026

Employment Type:
Fulltime

Work Type:
Hybrid work

Similar Jobs for System Engineering Big Query

Big Data Engineer

We are looking for a Big Data Engineer that will work on the collecting, storing...
Location:
United States, St. Louis
Salary:
Not provided
Protocol Infotech
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Applications, Science, Engineering, Technology
  • 2–4 years' experience
  • Proficiency with Hadoop v2, MapReduce, HDFS
  • Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
  • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
  • Experience with Spark
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
Job Responsibility:
  • Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
  • Implementing ETL process
  • Monitoring performance and advising on any necessary infrastructure changes
  • Defining data retention policies
  • Ability to solve any ongoing issues with operating the cluster
  • Guides the development team in overall application technology design activities
What we offer:
  • Employee referral program
  • A referral fee of $1,000 will be paid if the referred candidate is hired
  • Fulltime

Principal Site Reliability Engineer

Location:
United States, Ft. Meade
Salary:
Not provided
CipherLogix
Expiration Date:
Until further notice
Requirements:
  • Fourteen (14) years experience in software development/engineering, including requirements analysis, software development, installation, integration, evaluation, enhancement, maintenance, testing, and problem diagnosis/resolution
  • Ten (10) years experience in system engineering/architecture
  • Ten (10) years experience working with products that support highly distributed, massively parallel computation needs such as HBase, Hadoop, CloudBase/Accumulo, Bigtable, Cassandra, Scality, etc.
  • At least ten (10) years experience writing software scripts using scripting languages such as Perl, Python, or Ruby for software automation
  • At least four (4) years experience managing and monitoring large Cloud System (>200 nodes). Cloud Systems Administrator or Developer Certification
  • Experience in performing and providing technical direction for the development, engineering, interfacing, integration, and testing of complete hardware/software systems to include monitoring technical health of a system, improving organizational processes, implementation of postmortem (failure) analysis and incident management
  • Ten (10) years experience in the cleared environment
  • Ten (10) years demonstrated experience developing software for one of the following: Windows, UNIX, or Linux OS
  • Knowledge and experience with developing distributed storage routing and querying algorithms
  • Experience in developing documentation required to support a program’s technical issues and training situations
  • Fulltime

Business System Analyst

As a Business System Analyst for estockgifts, Crypto and Digital Currencies unit...
Location:
Salary:
Not provided
Keyu Tech LLC.
Expiration Date:
Until further notice
Requirements:
  • A graduate or undergraduate degree in computer science, or 8+ years of industry experience in business systems analysis on big data platforms
  • Technical experience related to BI and/or data warehouse development
  • Strong working knowledge of SQL query language
  • Working knowledge of Hadoop, Map-Reduce, Hive, Pig, AI/ML
  • Ability to draw on technical background to work with both business and development teams
  • Experience with BI / EDW solutions in a large-scale enterprise environment, such as Teradata
  • Strong analytics skills throughout the product lifecycle: documentation, communication, and systems
  • Ability to engage with business users and understand their needs
  • Problem solver and innovative thinker
  • Ability to work with and across teams in an agile environment. Experience in agile project development life cycle a strong plus
Job Responsibility:
  • Collaborate with business partners to understand the business problem and translate problems into proof-of-concept products
  • Be the domain expert on Crypto and provide insights to senior leadership
  • Capture and document product requirements and partner with our engineering team to develop those products
  • With minimal guidance, develop a product strategy for the specific area and lead project execution
  • Represent the product to peers across estockgifts for feedback on new feature ideas
  • Facilitate technical discussion and provide feedback and input in architectural design
  • Fulltime

Data Engineer

At Adyen, we treat data and data artifacts as first-class citizens. They form ou...
Location:
Netherlands, Amsterdam
Salary:
Not provided
Adyen
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience working as a Data Engineer or in a similar role
  • Solid understanding of both Software and Data Engineering practices
  • Proficient in tools and languages such as: Python, PySpark, Airflow, Hadoop, Spark, Kafka, SQL, Git
  • Able to effectively communicate complex data-related concepts and outcomes to a diverse range of stakeholders
  • Capable of identifying opportunities, devising solutions, and handling projects independently
  • Experimental mindset with a ‘launch fast and iterate’ mentality
  • Skilled in promoting a data-centric culture within technical teams and advocating for setting standards and continuous improvement
Job Responsibility:
  • Collaborative Solution Development: Engage with a diverse range of stakeholders, including data scientists, analysts, software engineers, product managers, and customers, to understand their requirements and craft effective solutions
  • Quality Pipelines and Architecture: Design, develop, deploy and operate high-quality production ELT pipelines and data architectures. Integrate data from various sources and formats, ensuring compatibility, consistency, and reliability
  • Data Best Practices: Help establish and share best practices in performance, code quality, data validation, data governance, and discoverability in your team and in other teams. Participate in mentoring and knowledge sharing initiatives
  • High Quality Data and Code: Ensure data is accurate, complete, reliable, relevant, and timely. Implement testing, monitoring and validation protocols for your code and data, leveraging tools such as Pytest
  • Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and systems. Improve query performance and resource utilization to meet SLAs and performance requirements, using techniques such as Spark optimizations

Staff Performance Engineer

At Cloudera, we empower people to transform complex data into clear and actionab...
Location:
Salary:
Not provided
Cloudera
Expiration Date:
Until further notice
Requirements:
  • Strong background in systems engineering and distributed systems
  • C++, Java, or Scala expertise
  • Deep knowledge of CPU architecture, memory, networking, and I/O
  • Hands-on experience with query engines (Impala, Hive, Spark)
Job Responsibility:
  • Build and run performance benchmarks at scale (1000+ nodes)
  • Profile query engines using flame graphs, perf, and low-level debugging tools
  • Optimize execution engines, storage formats, and resource usage
  • Collaborate with developers to deliver performance-critical improvements
  • Publish performance best practices and competitive benchmarks
What we offer:
  • Generous PTO Policy
  • Support work life balance with Unplugged Days
  • Flexible WFH Policy
  • Mental & Physical Wellness programs
  • Phone and Internet Reimbursement program
  • Access to Continued Career Development
  • Comprehensive Benefits and Competitive Packages
  • Paid Volunteer Time
  • Employee Resource Groups
  • Fulltime

Senior Principal Data Platform Software Engineer

We’re looking for a Sr Principal Data Platform Software Engineer (P70) to be a k...
Location:
Salary:
239,400 – 312,550 USD / year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 15+ years in Data Engineering, Software Engineering, or related roles, with substantial exposure to big data ecosystems
  • Demonstrated experience building and operating data platforms or large‑scale data services in production
  • Proven track record of building services from the ground up (requirements → design → implementation → deployment → ongoing ownership)
  • Hands‑on experience with AWS, GCP (e.g., compute, storage, data, and streaming services) and cloud‑native architectures
  • Practical experience with big data technologies, such as Databricks, Apache Spark, AWS EMR, Apache Flink, or StarRocks
  • Strong programming skills in one or more of: Kotlin, Scala, Java, Python
  • Experience leading cross‑team technical initiatives and influencing senior stakeholders
  • Experience mentoring Staff/Principal engineers and lifting the technical bar for a team or org
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience
Job Responsibility:
  • Design, develop and own delivery of high quality big data and analytical platform solutions aiming to solve Atlassian’s needs to support millions of users with optimal cost, minimal latency and maximum reliability
  • Improve and operate large‑scale distributed data systems in the cloud (primarily AWS, with increasing integration with GCP and Kubernetes‑based microservices)
  • Drive the evolution of our high-performance analytical databases and its integrations with products, cloud infrastructures (AWS and GCP) and isolated cloud environments
  • Help define and uplift engineering and operational standards for petabyte scale data platforms, with sub‑second analytic queries and multi‑region availability (coding guidelines, code review practices, observability, incident response, SLIs/SLOs)
  • Partner across multiple product and platform teams (including Analytics, Marketplace/Ecosystem, Core Data Platform, ML Platform, Search, and Oasis/FedRAMP) to deliver company‑wide initiatives that depend on reliable, high‑quality data
  • Act as a technical mentor and multiplier, raising the bar on design quality, code quality, and operational excellence across the broader team
  • Design and implement self‑healing, resilient data platforms with strong observability, fault tolerance, and recovery characteristics
  • Own the long‑term architecture and technical direction of Atlassian’s product data platform with projects that are directly tied to Atlassian’s company-level OKRs
  • Be accountable for the reliability, cost efficiency, and strategic direction of Atlassian’s product analytical data platform
  • Partner with executives and influence senior leaders to align engineering efforts with Atlassian’s long-term business objectives
What we offer:
  • Health and wellbeing resources
  • Paid volunteer days
  • Fulltime

Vice President - Bigdata Engineer - AI & NLP

The Applications Development Technology Lead Analyst is a senior-level position ...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 13+ years of relevant experience in Apps Development or systems analysis role
  • Extensive experience in system analysis and programming of software applications
  • Experience in managing and implementing successful projects
  • Expert in coding Python in building Machine Learning and developing LLM-based applications in a professional environment
  • SQL skills with the ability to perform data interrogations
  • Proficiency in enterprise-level application development using Java 8, Scala, Oracle (or comparable database), and Messaging infrastructure like Solace, Kafka, Tibco EMS
  • Develop LLM solutions for querying structured data with natural language, including RAG architectures on enterprise knowledge bases
  • Build, scale, and optimize data science workloads, applying best MLOps practices for production
  • Lead the design and development of LLM-based tools to increase data accessibility, focusing on text-to-SQL platforms
  • Train and fine-tune LLM models to accurately interpret natural language queries and generate SQL queries
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals
  • Identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming
  • Ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets
What we offer
What we offer
  • Global Benefits
  • Best-in-class benefits to be well, live well and save well
  • Fulltime

Ab Initio Data Engineer

The Applications Development Intermediate Programmer Analyst is an intermediate ...
Location:
India, Chennai; Pune
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics)
  • Minimum 5 years of extensive experience in design, build and deployment of Ab Initio-based applications
  • Expertise in handling complex large-scale Data Lake and Warehouse environments
  • Hands-on experience writing complex SQL queries, exporting and importing large amounts of data using utilities
Job Responsibility:
  • Ability to design and build Ab Initio graphs (both continuous & batch) and Conduct>it Plans
  • Build Web-Service and RESTful graphs and create RAML or Swagger documentations
  • Complete understanding and analytical ability of Metadata Hub metamodel
  • Strong hands-on skills in multifile system level programming, debugging and optimization
  • Hands on experience in developing complex ETL applications
  • Good knowledge of RDBMS – Oracle, with ability to write complex SQL needed to investigate and analyze data issues
  • Strong in UNIX Shell/Perl Scripting
  • Build graphs interfacing with heterogeneous data sources – Oracle, Snowflake, Hadoop, Hive, AWS S3
  • Build application configurations for Express>It frameworks – Acquire>It, Spec-To-Graph, Data Quality Assessment
  • Build automation pipelines for Continuous Integration & Delivery (CI-CD), leveraging Testing Framework & JUnit modules, integrating with Jenkins, JIRA and/or Service Now
  • Fulltime