Company | Location | Salary
---|---|---
Affine.io | Remote | $150k - $500k
Affine.io | Remote | $140k - $250k
Bondex | Remote |
Obinex | Los Angeles, CA, United States | $54k - $80k
Chainlink Labs | New York, NY, United States | $36k - $99k
a16z | Remote | $237k
Avalabs | Remote | $143k - $179k
Avalabs | Remote | $158k - $198k
Avalabs | Remote | $158k - $198k
Winnables | Remote | $62k - $70k
Lazer | Canada | $87k
Inmobi | Remote | $90k - $104k
Okx | Remote | $75k - $99k
AI Dogg | Remote | $72k - $100k
Wintermute | Remote | $86k - $180k
About Affine
Affine is building an incentivized RL environment that pays miners for incremental improvements on tasks like program synthesis and coding. Operating on Bittensor's Subnet 120, we've created a sybil-proof, decoy-proof, copy-proof, and overfitting-proof mechanism that rewards genuine model improvements. Our vision is to commoditize reasoning, the highest form of intelligence, by directing and aggregating the work of a large, non-permissioned group on RL tasks to break the intelligence sound barrier.
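For intuition only, here is a minimal sketch in Python of an improvement-gated payout rule of the kind described above; `EPSILON`, `payout`, and the emission-scaled reward are illustrative assumptions, not Affine's actual Subnet 120 mechanism.

```python
# Hypothetical sketch, not Affine's actual mechanism: pay a miner only for a
# strict improvement over the current frontier by some margin, so copies and
# decoys that merely match the best score earn nothing.
EPSILON = 0.01  # assumed minimum improvement margin

def payout(miner_score: float, best_score: float, emission: float) -> float:
    """Reward only genuine, non-trivial gains over the frontier score."""
    if miner_score > best_score + EPSILON:
        return emission * (miner_score - best_score)
    return 0.0
```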
Overview
We're looking for research-minded engineers who can push the frontier of reinforcement learning, program synthesis, and reasoning agents inside Affine's competitive RL environments. This role is about experimentation and discovery: designing new post-training methods, exploring agent architectures, and proving them in live competitive benchmarks. You'll take cutting-edge theory (GRPO, PPO, multi-objective RL, program abduction) and turn it into working systems that miners can improve, validate, and monetize through Affine, Bittensor's Subnet 120.
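As a concrete anchor for the post-training side, here is a minimal sketch of the group-relative advantage at the heart of GRPO, which scores several sampled responses per prompt and normalizes each against its own group rather than a learned critic; the tensor shapes and `eps` are assumptions for illustration.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantage as used in GRPO: normalize each sampled
    response's reward against the mean/std of its own group (all responses
    drawn for the same prompt), avoiding a learned value critic."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: one prompt, four sampled responses scored 0/1 by a verifier.
adv = grpo_advantages(torch.tensor([[0.0, 1.0, 1.0, 0.0]]))
print(adv)  # positive for the passing responses, negative otherwise
```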
This is a rare opportunity to help reshape how AI is trained, evaluated, and aligned in a decentralized ecosystem. The position is ideal for someone who thrives at the intersection of research and engineering: able to prototype novel algorithms quickly, evaluate them rigorously, and scale them into production pipelines that feed back into Affine's incentive system.
Responsibilities
- Design decentralized RL systems that incentivize miners to train, refine, and host high-quality agentic LLMs on the Bittensor subnet.
- Develop evaluation frameworks to assess model performance, safety, and alignment, including task design, metrics, adversarial testing, and red-teaming (a minimal metric sketch follows this list).
- Advance RL for agentic models by researching and applying cutting-edge RL and alignment techniques to improve the training-evaluation loop.
- Prototype and scale algorithms: explore new agent architectures and post-training methods, then build reproducible pipelines for finetuning, evaluation, and data flow.
- Contribute to live competitive benchmarks, deploying new approaches in production and ensuring the system rewards genuine intelligence gains rather than gaming.
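As referenced in the evaluation-frameworks item above, here is a hedged sketch of about the simplest program-synthesis metric one could field: score a candidate by the fraction of held-out tests it passes, with crashes counted as failures so error-prone submissions are not rewarded. The `pass_rate` helper and test format are hypothetical.

```python
from typing import Callable

def pass_rate(candidate: Callable[[int], int],
              tests: list[tuple[int, int]]) -> float:
    """Fraction of held-out (input, expected) pairs the candidate solves;
    exceptions count as failures so crashing submissions earn nothing."""
    passed = 0
    for x, expected in tests:
        try:
            passed += candidate(x) == expected
        except Exception:
            pass
    return passed / len(tests)

# Example: score a claimed squaring function on held-out tests.
print(pass_rate(lambda x: x * x, [(2, 4), (3, 9), (-1, 1)]))  # 1.0
```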
Requirements
- Reinforcement Learning expertise with deep knowledge and hands-on experience in RL algorithms, design, and tuning. Background in multi-agent systems, mechanism design, or RLHF is a strong plus.
- Strong engineering skills in Python and experience building production-level ML systems with PyTorch, JAX, or TensorFlow.
- Distributed systems experience, with comfort designing and scaling high-performance, reliable infrastructure.
- Knowledge of LLMs and tool use, including how models interact with APIs, external tools, and function calling (a minimal dispatch sketch follows this list).
- Advanced academic or practical background: Master's or PhD in a relevant field, or equivalent applied research and engineering experience.
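As referenced in the tool-use item above, a minimal sketch of the dispatch step in a function-calling loop; the `TOOLS` registry, message format, and `get_price` tool are placeholders rather than any specific provider's API.

```python
import json

# Hypothetical tool registry; real systems advertise these to the model via
# a JSON schema, and the model replies with a structured function call.
TOOLS = {"get_price": lambda symbol: {"symbol": symbol, "usd": 101.5}}

def handle_tool_call(message: dict) -> dict:
    """Dispatch a model-emitted function call and package the tool result
    so it can be appended to the conversation for the next model turn."""
    name = message["name"]
    args = json.loads(message["arguments"])
    result = TOOLS[name](**args)
    return {"role": "tool", "name": name, "content": json.dumps(result)}

# Example: the model asked for get_price("TAO"); execute it and reply.
print(handle_tool_call({"name": "get_price", "arguments": '{"symbol": "TAO"}'}))
```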
Nice-to-Haves
- Publications in leading AI/ML conferences (NeurIPS, ICML, ICLR, AAAI), especially in RL, game theory, AI safety, or decentralized AI.
- Experience with virtualization and sandboxed code execution environments for safe tool use (a crude isolation sketch follows this list).
- Knowledge of game theory and advanced mechanism design.
- Contributions to significant open-source RL or LLM projects.
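On the sandboxing point above, a deliberately crude sketch: run untrusted code in a separate interpreter process with a hard timeout. Production systems would layer on containers or VMs, syscall filtering, and resource limits; this shows only the shape of the idea.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 2.0) -> str:
    """Run untrusted code in a separate interpreter started in isolated
    mode ("-I") with a hard timeout; raises subprocess.TimeoutExpired on
    runaway programs. Not real isolation on its own."""
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True, text=True, timeout=timeout_s,
    )
    return proc.stdout

print(run_untrusted("print(sum(range(10)))"))  # 45
```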