Data Science Jobs in Web3

1,593 jobs found

Company | Location | Salary
Edge & Node | Remote | $63k - $75k
Numus | Remote | $90k - $180k
Gigster | United States | $112k - $183k
Gigster | Chicago, IL, United States | $112k - $183k
The Block | Remote | $80k - $95k
Shakepay | Montreal, Canada | $98k - $110k
TaxBit | Washington, United States | $76k - $80k
Coinmarketcap | Taipei, Taiwan | $95k - $110k
Gemini | Singapore, Singapore | $72k - $100k
SwissBorg | Remote | $72k - $80k
Luno | Cape Town, South Africa | $76k - $100k
Gemini | Remote | $120k - $168k
SFOX | Chicago, IL, United States | $76k - $183k
OKX | Singapore, Singapore | $75k - $110k
Coinbase | United States | $201k - $237k

Data Engineer

Edge & Node
$63k - $75k estimated

This job is closed

Edge & Node is a creative software development company working to build a vibrant, decentralized future. Founded by the initial team behind The Graph, Edge & Node is dedicated to the advancement of web3, a decentralized and fair internet where public data is available to all—an internet that enables its users to increase agency over their creations and their lives.

Edge & Node’s initial product is The Graph, an indexing protocol for querying networks like Ethereum and IPFS, which ensures open data is always available and easy to access. The Graph is used by thousands of protocols and dapps including Uniswap, Livepeer, Aave, Decentraland, and more. Edge & Node also launched Everest, a decentralized registry with the mission to catalyze the shift to web3, facilitating community-driven curation of projects providing ongoing utility to the crypto space.
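
To make the querying side of The Graph concrete, here is a minimal sketch of sending a GraphQL query to a subgraph endpoint over plain HTTP. The endpoint URL and the entity/field names (pairs, volumeUSD) are illustrative placeholders rather than a specific subgraph's schema; the sketch assumes the requests library is available.

```python
# Minimal sketch: querying a subgraph exposed by The Graph over GraphQL.
# The endpoint URL and field names below are illustrative placeholders;
# substitute the subgraph and schema you actually care about.
import requests

SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/example/example-subgraph"  # hypothetical

QUERY = """
{
  pairs(first: 5, orderBy: volumeUSD, orderDirection: desc) {
    id
    volumeUSD
  }
}
"""

def fetch_top_pairs():
    # The Graph serves GraphQL over a plain HTTP POST with a JSON body.
    resp = requests.post(SUBGRAPH_URL, json={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["pairs"]

if __name__ == "__main__":
    for pair in fetch_top_pairs():
        print(pair["id"], pair["volumeUSD"])
```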

The Engineering Operations & Customer Success team works closely with all other Engineering teams across Edge & Node to ensure the services we operate are reliable, performant, secure, and predictable. We focus on a mix of software development, operational automation, cyber security, and collaboration with other teams to help take our service delivery to the next level.

We are looking for an early-career Data Engineer to focus on developing and maintaining data science pipelines. Ideally, the candidate will have experience with the tools currently used by the team, which include, but are not limited to, Redpanda, Materialize, and GCP. In this role, you will monitor and maintain the reliability of the Redpanda cluster, the streaming database, DBT jobs, the QoS oracle, and other data engineering systems. You will be expected to learn Materialize and help migrate BigQuery models to it in order to reduce costs. In addition, you will help establish and maintain good standards around documentation and internal educational tools, and respond to data engineering/devops requests in our incident management process.
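
Because Redpanda speaks the Kafka wire protocol, a standard Kafka client is enough to spot-check that a streaming topic is flowing. The sketch below is a minimal illustration using the kafka-python library; the broker address and topic name are hypothetical placeholders, not values from this posting.

```python
# Minimal sketch: tailing a Redpanda topic with a standard Kafka client.
# Redpanda is Kafka-API compatible, so kafka-python works unchanged.
# The broker address and topic name are hypothetical placeholders.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "qos-metrics",                       # hypothetical topic name
    bootstrap_servers="localhost:9092",  # replace with the Redpanda brokers
    auto_offset_reset="latest",
    value_deserializer=lambda v: v.decode("utf-8"),
    consumer_timeout_ms=10_000,          # stop polling after 10s of silence
)

# Print a handful of recent messages as a quick health check.
for i, message in enumerate(consumer):
    print(message.partition, message.offset, message.value)
    if i >= 9:
        break

consumer.close()
```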

What You’ll Be Doing

  • Learning our infrastructure and data engineering toolset

  • Partnering closely with our Data Science team to perform various data warehouse jobs and periodic Redpanda/streaming database devops tasks

  • Managing historical data models in BigQuery/DBT (see the sketch after this list)

  • Developing pipelines and performing devops tasks to support Superset dashboards
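
To make the BigQuery/DBT item above concrete, here is a minimal sketch of refreshing a small historical model with the google-cloud-bigquery Python client. The project, dataset, and table names are hypothetical, and in practice this logic would usually be expressed as a DBT model rather than hand-written SQL in a script.

```python
# Minimal sketch: refreshing a small historical model in BigQuery.
# Project, dataset, and table names are hypothetical placeholders;
# a real deployment would normally express this as a DBT model instead.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project id

REFRESH_SQL = """
CREATE OR REPLACE TABLE analytics.daily_query_volume AS
SELECT
  DATE(event_timestamp) AS day,
  COUNT(*)              AS query_count
FROM analytics.raw_query_events
GROUP BY day
"""

def refresh_daily_query_volume():
    # client.query() submits the job; result() blocks until it finishes.
    job = client.query(REFRESH_SQL)
    job.result()
    print(f"Refresh complete, job id: {job.job_id}")

if __name__ == "__main__":
    refresh_daily_query_volume()
```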

What We Expect

  • Experience with one or more of the following: BigQuery, ETL automation/workflow tools (DBT), BI/dashboarding tools (Apache Superset), streaming data platforms (Apache Kafka, Redpanda, or Confluent), or other data engineering and data warehouse toolsets/environments

  • Some experience or knowledge of container orchestration tools such as Kubernetes and Kustomize preferred

  • Some experience or knowledge of monitoring and alerting (Grafana dashboards) preferred

  • Some experience or knowledge of SQL, including the ability to create and manage tables within a SQL database

  • Proficiency in one or more programming languages, such as Python, R, or Rust

  • Must be able to serve on-call shifts and support devops needs

  • Ability to create documentation and communicate with a variety of audiences

  • Clear communication skills (written and verbal) to document processes and architectures

  • Ability to work well within a multinational team environment

  • Preference for candidates physically located in the Americas; however, the team is open to candidates in European time zones or other locations


What does a data scientist in web3 do?

A data scientist in web3 is a data scientist who focuses on data related to the development of web-based technologies and applications within the larger web3 ecosystem.

This can include working with data from decentralized applications (dapps), blockchain networks, and other distributed and decentralized systems.

In general, a data scientist in web3 is responsible for using data analysis and machine learning techniques to help organizations and individuals understand, interpret, and make decisions based on the data these systems generate.

Some specific tasks that a data scientist in web3 might be involved in include developing predictive models, conducting research, and creating data visualizations.
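
As a small, concrete example of that kind of work, the sketch below pulls a few recent Ethereum blocks with web3.py and computes average gas utilization, the sort of summary statistic that might feed a dashboard or a model. The RPC endpoint URL is a placeholder you would need to supply.

```python
# Minimal sketch: summarizing recent Ethereum block data with web3.py.
# The RPC endpoint URL is a placeholder; point it at a node you have access to.
from web3 import Web3

RPC_URL = "https://example-rpc-endpoint.invalid"  # hypothetical endpoint
w3 = Web3(Web3.HTTPProvider(RPC_URL))

def average_gas_utilization(n_blocks: int = 20) -> float:
    """Average gasUsed / gasLimit over the most recent n_blocks."""
    latest = w3.eth.block_number
    ratios = []
    for number in range(latest - n_blocks + 1, latest + 1):
        block = w3.eth.get_block(number)
        ratios.append(block["gasUsed"] / block["gasLimit"])
    return sum(ratios) / len(ratios)

if __name__ == "__main__":
    print(f"Average gas utilization: {average_gas_utilization():.1%}")
```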