Transform environmental DNA (eDNA) datasets into high-quality, standards-compliant biodiversity data collections. Deliver operational tooling and data infrastructure, such as protocols, scripts, code, software pipelines, interfaces and guidelines. The resulting data collections will be integrated with, published on and disseminated through existing infrastructure.
Job Responsibilities:
Transform eDNA-derived data (e.g., sequence read counts, taxonomic assignments) into Darwin Core-compliant datasets (illustrated in the sketch after this list)
Implement and maintain data publication pipelines for eDNA-derived biodiversity data
Develop and optimise ETL (extract-transform-load) workflows to ensure data quality, version control, and reproducibility
Contribute to the development of software tools or scripts (Python, R, or similar) to automate data validation, transformation, and publication
Implement emerging standards and tools for eDNA data integration with the Ocean Biodiversity Information System (OBIS), the Global Biodiversity Information Facility (GBIF) and the Atlas of Living Australia (ALA)
Collaborate with research teams to interpret eDNA results and define metadata, ensuring alignment with FAIR data principles
Coordinate with biodiversity informatics platforms (e.g., GBIF IPT, OBIS) for dataset registration, publishing, and updates
Provide guidance and training to team members and stakeholders on best practices in eDNA data standards and publication workflows
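As an illustration of the Darwin Core transformation referred to above, the following is a minimal Python sketch, not a definitive implementation: it assumes a hypothetical ASV read-count table (all column and file names are illustrative) and maps it to Darwin Core Occurrence terms plus the DNA-derived data extension. A production pipeline would add taxonomic name matching, controlled-vocabulary checks and dataset metadata.

import pandas as pd

# Hypothetical input: one row per ASV per sample, e.g. exported from a DADA2 or QIIME 2 run.
# All column names below are assumptions made for the sake of the sketch.
asv_table = pd.DataFrame({
    "sample_id":    ["ST01", "ST01", "ST02"],
    "asv_sequence": ["ACGTACGT", "GGTACCAT", "ACGTACGT"],
    "read_count":   [1532, 87, 640],
    "taxon_name":   ["Engraulis encrasicolus", "Mullus barbatus", "Engraulis encrasicolus"],
    "latitude":     [43.12, 43.12, 43.55],
    "longitude":    [5.37, 5.37, 5.90],
    "event_date":   ["2024-05-14", "2024-05-14", "2024-05-15"],
})

# Core occurrence table using standard Darwin Core terms; occurrenceID must be
# unique and stable so records can be updated across dataset versions.
occurrence = pd.DataFrame({
    "occurrenceID":         [f"{s}:{i}" for i, s in enumerate(asv_table["sample_id"])],
    "eventID":              asv_table["sample_id"],
    "scientificName":       asv_table["taxon_name"],
    "eventDate":            asv_table["event_date"],
    "decimalLatitude":      asv_table["latitude"],
    "decimalLongitude":     asv_table["longitude"],
    "organismQuantity":     asv_table["read_count"],
    "organismQuantityType": "DNA sequence reads",
    "basisOfRecord":        "MaterialSample",
    "occurrenceStatus":     "present",
})

# DNA-derived data extension rows, keyed on occurrenceID so they can be linked
# back to the core records when the dataset is published.
dna_extension = pd.DataFrame({
    "occurrenceID": occurrence["occurrenceID"],
    "DNA_sequence": asv_table["asv_sequence"],
})

occurrence.to_csv("occurrence.txt", sep="\t", index=False)
dna_extension.to_csv("dna_derived_data.txt", sep="\t", index=False)

The two tab-separated files correspond to the core and extension tables of a Darwin Core Archive, the packaging that tools such as the GBIF IPT accept for publication.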
Requirements:
Bachelor's or Master's degree (or equivalent experience) in bioinformatics, marine ecology, data engineering or a related field
Familiarity with eDNA research workflows, from sample collection through to data publishing (e.g., bioinformatic pipelines such as DADA2 and QIIME 2)
Some experience building or maintaining data pipelines (e.g., using Python, R, or workflow managers like Snakemake, Dagster, Airflow, Prefect)
Strong problem-solving skills and ability to work independently as well as collaboratively
Excellent communication skills, particularly in explaining technical concepts to non-technical collaborators
Demonstrated ability to work collaboratively and harmoniously with team and project members to achieve joint goals
Nice to have:
Practical experience with Darwin Core terms, data schemas, and publishing tools (e.g., GBIF IPT)
Familiarity with software development practices (e.g., version control with Git, testing, documentation, Docker containerisation)
Experience with relational databases and data validation (a minimal validation sketch follows this list)
Experience with biodiversity data networks (GBIF, OBIS, Atlas of Living Australia, etc.)
Prior work on open-source biodiversity informatics tools
Understanding of taxonomic data management and biodiversity metadata standards (EML, Audubon Core, etc.)
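As a small illustration of the data validation mentioned above, here is a minimal sketch; it assumes the hypothetical occurrence table from the earlier example, and the file name and checks are illustrative rather than an exhaustive rule set.

import pandas as pd

REQUIRED_TERMS = ["occurrenceID", "scientificName", "eventDate",
                  "basisOfRecord", "occurrenceStatus"]

occ = pd.read_csv("occurrence.txt", sep="\t")
errors = []

# Required Darwin Core terms must be present and populated.
for term in REQUIRED_TERMS:
    if term not in occ.columns:
        errors.append(f"missing column: {term}")
    elif occ[term].isna().any():
        errors.append(f"empty values in: {term}")

# occurrenceID must be unique so records can be tracked across versions.
if "occurrenceID" in occ.columns and occ["occurrenceID"].duplicated().any():
    errors.append("duplicate occurrenceID values")

# Coordinates, where present, must fall inside valid WGS84 ranges.
if {"decimalLatitude", "decimalLongitude"} <= set(occ.columns):
    bad = (~occ["decimalLatitude"].between(-90, 90)
           | ~occ["decimalLongitude"].between(-180, 180))
    if bad.any():
        errors.append(f"{int(bad.sum())} record(s) with out-of-range coordinates")

print("\n".join(errors) if errors else "basic checks passed")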