

Affine.io
Remote · $150k - $380k

About Affine
Affine is building an incentivized RL environment that pays miners for incremental improvements on tasks like program synthesis and coding. Operating on Bittensor's Subnet 120, we've created a sybil-proof, decoy-proof, copy-proof, and overfitting-proof mechanism that rewards genuine model improvements. Our vision is to commoditize reasoning—the highest form of intelligence—by directing and aggregating the work of a large, permissionless group on RL tasks to break the intelligence sound barrier.
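The "copy-proof" property described above can be illustrated with a toy payout rule: a miner earns only by beating the incumbent best score by a margin, so submitting a copy of the leader's model earns nothing. This is a hypothetical sketch for intuition, not Affine's actual Subnet 120 mechanism.

```python
def payout(scores, best_so_far, margin=0.01):
    """Toy improvement-only reward rule (illustrative, not Affine's mechanism).

    A miner is paid in proportion to how far it exceeds the incumbent
    best score plus a margin; matching or copying the leader pays zero.
    """
    rewards = {}
    for miner, score in scores.items():
        # Only strict improvement beyond the margin is rewarded.
        rewards[miner] = max(0.0, score - best_so_far - margin)
    return rewards
```

Under this rule a copied model scores exactly `best_so_far` and receives zero, which is the anti-copying intuition behind mechanisms that reward only genuine gains.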

Overview

We’re looking for research-minded engineers who can push the frontier of reinforcement learning, program synthesis, and reasoning agents inside Affine’s competitive RL environments. This role is about experimentation and discovery: designing new post-training methods, exploring agent architectures, and proving them in live competitive benchmarks. You’ll take cutting-edge theory (GRPO, PPO, multi-objective RL, program abduction) and turn it into working systems that miners can improve, validate, and monetize through Affine on Bittensor’s Subnet 120.
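To give a flavor of the post-training methods mentioned above: PPO (and the GRPO variant built on it) centers on a clipped surrogate objective that limits how far each update moves the policy. A minimal scalar sketch, with illustrative inputs rather than anything tied to Affine's pipelines:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """Clipped surrogate loss from PPO (Schulman et al., 2017), scalar sketch.

    Real training code operates on batched tensors (e.g. PyTorch), but the
    core idea fits in three lines: compute the probability ratio, clamp it,
    and take the pessimistic (smaller) surrogate, negated as a loss.
    """
    ratio = math.exp(logp_new - logp_old)            # pi_new(a|s) / pi_old(a|s)
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)  # clamp ratio to [1-eps, 1+eps]
    return -min(ratio * advantage, clipped * advantage)
```

When the new policy doubles an action's probability on a positive-advantage sample, the clipped term caps the surrogate at `(1 + eps) * advantage`, which is what keeps each policy update conservative.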

This is a rare opportunity to help reshape how AI is trained, evaluated, and aligned in a decentralized ecosystem. The position is ideal for someone who thrives at the intersection of research and engineering—able to prototype novel algorithms quickly, evaluate them rigorously, and scale them into production pipelines that feed back into Affine’s incentive system.

Responsibilities

  • Design decentralized RL systems that incentivize miners to train, refine, and host high-quality agentic LLMs on the Bittensor subnet.

  • Develop evaluation frameworks to assess model performance, safety, and alignment—including task design, metrics, adversarial testing, and red-teaming.

  • Advance RL for agentic models by researching and applying cutting-edge RL and alignment techniques to improve the training–evaluation loop.

  • Prototype and scale algorithms: explore new agent architectures and post-training methods, then build reproducible pipelines for finetuning, evaluation, and data flow.

  • Contribute to live competitive benchmarks, deploying new approaches in production and ensuring the system rewards genuine intelligence gains rather than gaming.


Requirements

  • Reinforcement Learning expertise with deep knowledge and hands-on experience in RL algorithms, design, and tuning. Background in multi-agent systems, mechanism design, or RLHF is a strong plus.

  • Strong engineering skills in Python and experience building production-level ML systems with PyTorch, JAX, or TensorFlow.

  • Distributed systems experience, with comfort designing and scaling high-performance, reliable infrastructure.

  • Knowledge of LLMs and tool use, including how models interact with APIs, external tools, and function calling.

  • Advanced academic or practical background: Master’s or PhD in a relevant field, or equivalent applied research and engineering experience.

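Function calling in the sense above reduces to a schema the model sees plus a dispatcher that executes the call it emits. A minimal sketch; the tool name, schema shape, and stub balance here are invented for illustration, not any real provider's API:

```python
import json

# Hypothetical tool registry: each entry pairs a JSON-schema-style
# description (shown to the model) with the function that implements it.
TOOLS = {
    "get_balance": {
        "description": "Return the balance for a wallet address.",
        "parameters": {"address": "string"},
        "fn": lambda address: {"address": address, "balance": 42.0},  # stub
    },
}

def dispatch(tool_call_json: str):
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    tool = TOOLS[call["name"]]
    return tool["fn"](**call["arguments"])
```

The agent loop then feeds the dispatcher's return value back to the model as a tool result, which is the training–evaluation surface this role would be designing tasks around.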

Nice-to-Haves

  • Publications in leading AI/ML conferences (NeurIPS, ICML, ICLR, AAAI), especially in RL, game theory, AI safety, or decentralized AI.

  • Experience with virtualization and sandboxed code execution environments for safe tool use.

  • Knowledge of game theory and advanced mechanism design.

  • Contributions to significant open-source RL or LLM projects.