Tucows Domains is the world’s largest wholesale domain registrar, responsible for maintaining the health, neutrality, and openness of an important but largely invisible part of the Internet: the domain name system (DNS). As part of Tucows, one of the world’s largest Internet companies, Tucows Domains has a rich history of helping make the Internet better, operating globally under the Ascio, Enom, Hover, and OpenSRS brands.

We embrace a people-first philosophy rooted in respect, trust, and flexibility. We believe that whatever works for our employees is what works best for us. That is also why the majority of our roles are remote-first: you can work from anywhere you can connect to the Internet. Today, more than one thousand people from over 20 countries are part of our team.

We’re looking for a passionate Intermediate Software Engineer specializing in Artificial Intelligence (AI) to join our growing team. In this role, you’ll help shape and build innovative AI-powered systems that transform how users interact with domain-related tools and services. You’ll work with your team of forward-thinking engineers and with colleagues across business functions to prototype, develop, and deploy intelligent solutions using open-source models and modern infrastructure.
Job Responsibilities:
Design and build AI-driven features for our domain services platform using Python and Golang
Integrate and fine-tune open-source models such as LLaMA 3.2 and similar cutting-edge architectures via tools like Ollama
Research, evaluate, and implement emerging AI technologies that align with our vision for smarter, more intuitive products and services
Collaborate with internal stakeholders and fellow engineers to rapidly prototype and iterate on machine learning and LLM-based features
Contribute to a modern AI development stack, ensuring scalability, performance, and ethical usage of models
Actively participate in the open-source ecosystem and bring relevant tools and techniques back to the team
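To give a flavor of the model-integration work described above, here is a minimal Python sketch of a client for a locally hosted model served through Ollama's HTTP API. The endpoint, model name, and parameters are illustrative assumptions for this sketch, not a description of our production stack:

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: a standard local install).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Assemble a generation request body for Ollama's /api/generate endpoint."""
    return {
        "model": model,          # e.g. a locally pulled Llama-family model
        "prompt": prompt,
        "stream": False,         # ask for one JSON object instead of a stream
        "options": {"temperature": temperature},
    }


def generate(model: str, prompt: str) -> str:
    """Send the request to a running Ollama server and return the completion text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In practice, a thin wrapper like this is often the seam where prompt templates, retries, and logging are added before a feature ships.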
Requirements:
Bachelor’s degree in Software Engineering, Computer Science, or a related field
3+ years of professional software engineering experience in production environments
Strong proficiency in Python and Golang
Solid foundation in software design principles, patterns, and service-oriented architecture
Experience contributing to scalable systems and component-level architecture
Ability to design and build RESTful APIs for model serving and AI-enabled workflows
Working knowledge of relational/SQL databases (preferably PostgreSQL) and data modeling for AI use cases
Strong understanding of modern LLM concepts, including transformer architectures and attention mechanisms
Hands-on experience adapting and deploying open-source models (e.g., LLaMA, Mistral, Mixtral) using tools like Ollama or Hugging Face Transformers
Experience with fine-tuning techniques (e.g., LoRA, QLoRA, PEFT) for domain-specific adaptation
Proficiency in prompt engineering (few-shot, chain-of-thought, structured outputs)
Familiarity with model serving patterns for efficient, scalable inference
Experience designing and implementing Retrieval-Augmented Generation (RAG) pipelines end-to-end
Hands-on experience with vector databases (e.g., pgvector, Pinecone, Weaviate)
Familiarity with embedding models, chunking strategies, and semantic search patterns
Understanding of data pipelines for ingestion, transformation, and inference result storage
Familiarity with Model Context Protocol (MCP) server design patterns
Experience with agent orchestration frameworks (e.g., LangChain, LangGraph)
Understanding of tool use, function calling, and multi-step reasoning in LLM workflows
Experience with LLM evaluation frameworks (e.g., RAGAS, promptfoo, or custom pipelines)
Familiarity with observability and tracing tools (e.g., LangSmith, Helicone)
Comfort with structured logging, metrics, and alerting for AI workloads
Experience with containerization and cloud-native deployment (preferably AWS)
Familiarity with Kubernetes or EKS for scaling model-serving workloads
Understanding of GPU considerations for inference (quantization, batching, memory trade-offs)
Active interest in the open-source AI ecosystem
Strong collaboration and communication skills across technical and business teams
Enthusiasm for emerging AI technologies with a practical, delivery-focused mindset
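Several of the requirements above (embedding models, semantic search, RAG retrieval) come down to ranking text chunks by vector similarity. A toy, self-contained Python sketch of that step, using hypothetical precomputed embeddings rather than any real embedding model or vector database:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(query_vec: list[float],
          chunks: list[tuple[str, list[float]]],
          k: int = 2) -> list[str]:
    """Return the k chunk texts whose embeddings are most similar to the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

A production RAG pipeline would swap in a real embedding model and an indexed store such as pgvector, but the retrieval contract is the same: embed the query, rank stored chunks, pass the top results to the LLM as context.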
What we offer:
Fair compensation and generous benefits
Commitment to inclusion across race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status
Reasonable accommodation for individuals with disabilities