Browse 65,894 blockchain jobs in web3 at 7,327 projects. Filter the best remote crypto jobs by salary, location, and skills.

Company | Location | Salary
Bitpanda | Barcelona, Spain | $87k - $87k
Bitpanda | Barcelona, Spain | $84k - $110k
Bitpanda | Vienna, Austria |
Bitpanda | Vienna, Austria | $85k - $90k
Bitpanda | Barcelona, Spain | $85k - $96k
Bitpanda | Vienna, Austria | $39k - $45k
Bitpanda | New York, NY, United States | $87k - $115k
Bitmex | Hong Kong, Hong Kong | $84k - $100k
Easygo | Melbourne, Australia | $81k - $111k
Optimism Unlimited | Remote | $121k - $166k
Token Metrics | Cape Town, South Africa | $90k - $120k
Token Metrics | Budapest, Hungary | $90k - $120k
Binance | Hong Kong, Hong Kong |
Circle - Referrals | Remote | $160k - $207k
Circle | Miami, FL, United States | $160k - $207k

Affine.io | Remote | $150k - $380k

About Affine

Affine is building an incentivized RL environment that pays miners for incremental improvements on tasks such as program synthesis and coding. Operating on Bittensor's Subnet 120, we've created a Sybil-proof, decoy-proof, copy-proof, and overfitting-proof mechanism that rewards genuine model improvements. Our vision is to commoditize reasoning, the highest form of intelligence, by directing and aggregating the work of a large, permissionless group of contributors on RL tasks to break the intelligence sound barrier.
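As a rough, hypothetical illustration of the pay-for-improvement idea (a toy sketch, not Affine's actual Subnet 120 scoring code), a validator could credit a miner only for the margin by which it beats the best recorded score on each task:

```python
# Toy illustration of improvement-based rewards; hypothetical, not the actual
# Affine / Bittensor Subnet 120 mechanism.

def reward(submission_scores: dict[str, float], best_so_far: dict[str, float]) -> float:
    """Credit a miner only for tasks where it beats the current best score.

    submission_scores: task_id -> the miner's score on that task
    best_so_far:       task_id -> best score any miner has achieved so far
    """
    total = 0.0
    for task_id, score in submission_scores.items():
        improvement = score - best_so_far.get(task_id, 0.0)
        if improvement > 0:
            total += improvement          # copies and regressions earn nothing
            best_so_far[task_id] = score  # raise the bar for subsequent miners
    return total
```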

Overview

We’re looking for research-minded engineers who can push the frontier of reinforcement learning, program synthesis, and reasoning agents inside Affine’s competitive RL environments. This role is about experimentation and discovery: designing new post-training methods, exploring agent architectures, and proving them in live competitive benchmarks. You’ll take cutting-edge theory (GRPO, PPO, multi-objective RL, program abduction) and turn it into working systems that miners can improve, validate, and monetize through Affine on Bittensor’s Subnet 120.
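For context, here is a minimal sketch of the kind of GRPO-style update step named above, assuming summed token log-probabilities and scalar rewards per sampled completion are already available (simplified, not a production trainer):

```python
# Minimal sketch of a GRPO-style update step; shapes and hyperparameters are
# illustrative, not a production implementation.
import torch

def grpo_loss(logprobs_new, logprobs_old, rewards, clip_eps=0.2):
    """Group-relative advantages with a PPO-style clipped ratio.

    logprobs_new / logprobs_old: (group_size,) summed token log-probs for each
                                 sampled completion of the same prompt
    rewards:                     (group_size,) scalar reward per completion
    """
    # GRPO replaces a learned value baseline with group statistics.
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    ratio = torch.exp(logprobs_new - logprobs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```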

This is a rare opportunity to help reshape how AI is trained, evaluated, and aligned in a decentralized ecosystem. The position is ideal for someone who thrives at the intersection of research and engineering—able to prototype novel algorithms quickly, evaluate them rigorously, and scale them into production pipelines that feed back into Affine’s incentive system.

Responsibilities

  • Design decentralized RL systems that incentivize miners to train, refine, and host high-quality agentic LLMs on the Bittensor subnet.


  • Develop evaluation frameworks to assess model performance, safety, and alignment, including task design, metrics, adversarial testing, and red-teaming (a minimal harness of this shape is sketched after this list).


  • Advance RL for agentic models by researching and applying cutting-edge RL and alignment techniques to improve the training–evaluation loop.


  • Prototype and scale algorithms: explore new agent architectures and post-training methods, then build reproducible pipelines for finetuning, evaluation, and data flow.


  • Contribute to live competitive benchmarks, deploying new approaches in production and ensuring the system rewards genuine intelligence gains rather than gaming.
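
As referenced in the evaluation-framework item above, a minimal sketch of what such a harness could look like for program-synthesis tasks; the `generate_code` callable and the `Task` format are assumptions for illustration:

```python
# Hedged sketch of an evaluation harness for program-synthesis tasks: run a
# candidate model on held-out prompts and score it by the fraction of hidden
# checks its code passes. `generate_code` and the Task format are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str                          # natural-language spec shown to the model
    tests: list[Callable[[str], bool]]   # hidden checks applied to the model's code

def evaluate(generate_code: Callable[[str], str], tasks: list[Task]) -> float:
    """Return the fraction of hidden checks passed across all tasks."""
    passed = total = 0
    for task in tasks:
        code = generate_code(task.prompt)
        for check in task.tests:
            total += 1
            try:
                passed += bool(check(code))
            except Exception:
                pass                     # a crashing check counts as a failure
    return passed / max(total, 1)
```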


Requirements

  • Reinforcement Learning expertise with deep knowledge and hands-on experience in RL algorithms, design, and tuning. Background in multi-agent systems, mechanism design, or RLHF is a strong plus.


  • Strong engineering skills in Python and experience building production-level ML systems with PyTorch, JAX, or TensorFlow.


  • Distributed systems experience, with comfort designing and scaling high-performance, reliable infrastructure.


  • Knowledge of LLMs and tool use, including how models interact with APIs, external tools, and function calling (a provider-agnostic tool-use sketch follows this list).


  • Advanced academic or practical background: Master’s or PhD in a relevant field, or equivalent applied research and engineering experience.
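
A provider-agnostic sketch of the tool-use loop referenced above; the `call_model` callable and its message and tool-call format are assumptions for illustration, not any specific vendor SDK:

```python
# Provider-agnostic sketch of an LLM tool-use loop. The `call_model` callable
# and its message / tool-call format are assumptions, not a vendor SDK.
import json

TOOLS = {
    "get_price": lambda symbol: {"symbol": symbol, "usd": 42.0},  # stub tool
}

TOOL_SCHEMAS = [{
    "name": "get_price",
    "description": "Look up the current USD price of a token symbol.",
    "parameters": {
        "type": "object",
        "properties": {"symbol": {"type": "string"}},
        "required": ["symbol"],
    },
}]

def run_agent(call_model, user_message: str, max_steps: int = 5) -> str:
    """Alternate between model turns and tool execution until a final answer."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = call_model(messages, tools=TOOL_SCHEMAS)   # assumed interface
        if reply.get("tool_call") is None:
            return reply["content"]                        # model answered directly
        call = reply["tool_call"]
        result = TOOLS[call["name"]](**json.loads(call["arguments"]))
        messages.append({"role": "tool", "name": call["name"],
                         "content": json.dumps(result)})
    return "stopped: no final answer within max_steps"
```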


Nice-to-Haves

  • Publications in leading AI/ML conferences (NeurIPS, ICML, ICLR, AAAI), especially in RL, game theory, AI safety, or decentralized AI.


  • Experience with virtualization and sandboxed code execution environments for safe tool use (a minimal sandbox sketch follows this list).


  • Knowledge of game theory and advanced mechanism design.


  • Contributions to significant open-source RL or LLM projects.
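
As referenced in the virtualization item above, a minimal sketch of sandboxed execution using only an OS subprocess with a timeout and resource limits (Unix-only and illustrative; real deployments would typically layer on container or microVM isolation such as gVisor or Firecracker):

```python
# Rough sketch of sandboxed code execution via an OS subprocess with a timeout
# and resource limits (Unix-only, illustrative; production setups typically add
# container/microVM isolation, seccomp filters, and network restrictions).
import resource
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 5.0, mem_bytes: int = 256 * 2**20) -> str:
    def limit_resources():
        # Applied inside the child process just before exec.
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))             # cap memory
        resource.setrlimit(resource.RLIMIT_CPU, (int(timeout_s), int(timeout_s)))  # cap CPU time

    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],   # -I: isolated mode, no user site dirs
            capture_output=True, text=True,
            timeout=timeout_s, preexec_fn=limit_resources,
        )
    except subprocess.TimeoutExpired:
        return "error: timed out"
    return proc.stdout if proc.returncode == 0 else f"error: {proc.stderr.strip()}"
```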