| Company | Location | Salary |
|---|---|---|
| Binance | Taipei, Taiwan | |
| Zinnia | Remote | $122k - $123k |
| Okx | Remote | $98k - $150k |
| Okx | Remote | $72k - $72k |
| Layerzerolabs | Vancouver, Canada | $75k - $77k |
| Integra | Remote | $88k - $101k |
| Integra | Remote | $72k - $84k |
| Phantom | Remote | $185k - $225k |
| Mysten Labs | United States | $164k - $225k |
| Kraken | London, United Kingdom | $88k - $89k |
| Binance | Hong Kong, Hong Kong | |
| RiskPod | New York, NY, United States | $150k - $180k |
| Golfiuns | Remote | $80k - $90k |
| Okx | Remote | $88k - $119k |
| Layerzerolabs | Remote | $91k - $100k |
Binance Accelerator Program - Applied Data Scientist
About the Role
You'll work directly on the AI systems powering Binance AI Products and next-generation agentic trading features, alongside the full-time algorithm team, on real production challenges.
You own deliverables, run experiments, and ship code that matters. You build components of AI systems (agents, pipelines, evaluation tools), debug real systems, and work with engineers to ship features that reach users.
This is not a "watch and learn" program. You are expected to build, contribute, and ship.
Responsibilities
- Contribute to the design and development of LLM-powered pipelines for agentic trading, including reasoning agent components, tool-use frameworks via MCP, and automated workflow execution across crypto markets.
- Build and evaluate prompt engineering strategies, test-time scaling approaches, and retrieval architectures for crypto-native data sources: on-chain data, market feeds, news, and sentiment signals.
- Design and run model evaluation experiments: defining quality metrics for agent reasoning in financial contexts, executing benchmarks, and synthesizing results into actionable findings.
- Analyze agent performance characteristics in live trading scenarios, covering decision accuracy, latency, reliability, and adversarial robustness.
- Apply AI-native development practices using agentic coding tools as a standard part of the engineering workflow, writing and executing Python code for strategy logic, data pipelines, and agent evaluation.
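The evaluation work described above can be sketched as a minimal harness. The trace fields, metrics, and data below are hypothetical illustrations, not Binance's actual pipeline:

```python
import statistics

# Hypothetical agent trace records: each holds the agent's trading decision,
# the ground-truth label, and the end-to-end latency in milliseconds.
traces = [
    {"decision": "buy",  "expected": "buy",  "latency_ms": 210},
    {"decision": "hold", "expected": "sell", "latency_ms": 340},
    {"decision": "sell", "expected": "sell", "latency_ms": 180},
    {"decision": "buy",  "expected": "buy",  "latency_ms": 450},
]

def evaluate(traces):
    """Compute simple quality metrics over a batch of agent decisions."""
    correct = sum(t["decision"] == t["expected"] for t in traces)
    accuracy = correct / len(traces)
    latencies = [t["latency_ms"] for t in traces]
    return {
        "accuracy": accuracy,                       # decision accuracy
        "p50_latency_ms": statistics.median(latencies),  # latency midpoint
    }

print(evaluate(traces))  # → {'accuracy': 0.75, 'p50_latency_ms': 275.0}
```

A real benchmark would add many more metrics (reliability, adversarial robustness) and far larger trace sets, but the shape, decisions scored against ground truth plus latency summaries, stays the same.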
Requirements
- Currently pursuing a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related technical field.
- Expected graduation in 2026 or 2027.
- Strong Python programming skills with demonstrated AI-native development practices: you use agentic coding tools (Claude Code, Cursor, GitHub Copilot Workspace) as a core part of your workflow.
- Foundational understanding of how large language models work: attention mechanisms, prompting, and the difference between standard generation and reasoning models.
- Structured problem-solving approach and the ability to operate independently on defined tasks.
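For the attention-mechanism fundamentals mentioned above, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside transformer LLMs; the shapes and random inputs are illustrative only:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for one attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))  # 3 query positions, head dimension 4
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # → (3, 4)
```

Each output row is a convex combination of the value rows, weighted by how strongly its query matches each key, which is the intuition interviewers usually probe for.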
What does a data scientist in web3 do?
A data scientist in web3 focuses on data generated by the technologies and applications that make up the broader web3 ecosystem.
This can include data from decentralized applications (DApps), blockchain networks, and other distributed and decentralized systems.
In general, a data scientist in web3 applies data analysis and machine learning techniques to help organizations and individuals understand, interpret, and make decisions based on the data these systems generate.
Specific tasks might include developing predictive models, conducting research, and creating data visualizations.
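As a toy example of the "predictive models" task, the sketch below fits a linear trend to daily DApp transaction counts; the numbers are made up for illustration, not drawn from a real chain:

```python
import numpy as np

# Hypothetical daily transaction counts for a DApp over two weeks:
# an upward trend of ~50 tx/day plus some noise.
days = np.arange(14)
tx_counts = 1000 + 50 * days + np.array(
    [12, -8, 30, -15, 5, 22, -30, 18, -5, 10, -20, 25, -12, 7]
)

# Fit a degree-1 polynomial (linear trend) by least squares.
slope, intercept = np.polyfit(days, tx_counts, 1)
forecast_day_14 = slope * 14 + intercept  # one-step-ahead forecast
print(f"trend: {slope:.1f} tx/day, day-14 forecast: {forecast_day_14:.0f}")
```

Real on-chain work would pull data from node RPCs or indexers and use richer models, but the workflow, ingest, fit, forecast, is the same in miniature.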