Data Science Jobs in Web3

1,662 jobs found

Company          Location                          Salary
Pintu            Setiabudi, Indonesia              $90k - $100k
Swan             Remote                            $90k - $105k
Easygo Gaming    Melbourne, Australia              $75k - $84k
Menyala          Singapore, Singapore              $81k - $84k
TRM Labs         Remote                            $157k - $192k
Logos            Prague, Czech Republic            $36k - $90k
Laguna Games     San Francisco, CA, United States  $85k - $150k
Binance          Asia                              -
Bitcoin Depot    Atlanta, GA, United States        $81k - $84k
Openmesh         Sydney, Australia                 $75k - $100k
Overmind         Athens, Greece                    $75k - $84k
Nethermind       Argentina                         $84k - $115k
BitGo            Bangalore, India                  $95k - $105k
SFOX             Sao Paulo, Brazil                 $76k - $100k

Pintu
Setiabudi, Indonesia | $90k - $100k (estimated)

At PINTU, we are building the #1 crypto investment platform focused on new investors in Indonesia and Southeast Asia. We know that 99% of new investors are underserved because existing solutions cater to the 1% who are pros and early adopters, so we built an app that helps them learn about, invest in, and sell cryptocurrencies in one click.

We’re looking for a Data Engineer to join our Engineering team to maintain PINTU’s data pipelines and their observability. This role is the subject-matter expert for PINTU’s data pipeline management.

What You’ll Be Doing
You will manage PINTU’s data pipelines. Using your technical knowledge and passion for data management, you will team up with our data team to build a seamless data pipeline that speeds up company-wide, data-driven decision-making.

In this role, you will: 

  1. Design and build infrastructure that allows big data to be accessed and analyzed
  2. Design, develop, and maintain data pipelines (external data source ingestion jobs, ETL/ELT jobs, etc.)
  3. Continuously seek ways to make existing data processing more cost- and time-efficient
  4. Ensure good data governance and quality by building monitoring systems that track data quality in the data warehouse
  5. Liaise with coworkers and relevant stakeholders to clarify the requirements for each task
  6. Keep up to date with blockchain standards and technological advancements that will improve the quality of your output
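As a rough illustration of items 2 and 4 above, here is a minimal sketch of an ETL job with a row-count quality check. It is a toy example, not PINTU's actual stack: it uses SQLite as a stand-in for a warehouse like BigQuery or Redshift, and the table name and cleansing rule are invented for illustration.

```python
import sqlite3

def run_etl(conn: sqlite3.Connection, raw_rows: list) -> int:
    """Load raw (address, amount) rows into a toy 'trades' table after cleansing."""
    conn.execute("CREATE TABLE IF NOT EXISTS trades (address TEXT, amount_usd REAL)")
    # Transform step: drop rows with non-positive amounts (a made-up cleansing rule).
    cleaned = [(addr, amt) for addr, amt in raw_rows if amt > 0]
    conn.executemany("INSERT INTO trades VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

def quality_check(conn: sqlite3.Connection, expected_min_rows: int) -> bool:
    """Monitoring step: flag the load as failed if it fell short of expectations."""
    (count,) = conn.execute("SELECT COUNT(*) FROM trades").fetchone()
    return count >= expected_min_rows

conn = sqlite3.connect(":memory:")
loaded = run_etl(conn, [("0xabc", 120.5), ("0xdef", -3.0), ("0x123", 40.0)])
print(loaded, quality_check(conn, expected_min_rows=2))  # 2 True
```

In a production pipeline, a failed quality check would typically page an on-call engineer or block downstream jobs rather than just return a boolean.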

Requirements:

  • Fluent in Python and advanced SQL
  • At least 2 years of relevant experience as a data engineer
  • Comfortable working in data warehouse environments (e.g., Google BigQuery, AWS Redshift, Snowflake)
  • Familiar with data transformation or processing frameworks (e.g., dbt, Spark, Hive)
  • Familiar with data processing technologies (e.g., Google Cloud Functions, Google Dataflow, Google Dataproc)
  • Comfortable working with a data orchestration tool (e.g., Airflow, Prefect, Dagster)
  • Familiar with data storage (e.g., Google Cloud Storage, AWS S3)
  • Understands cloud data warehousing concepts, with experience in data modeling and in measuring and improving data quality
  • Preferably understands basic containerization and microservice concepts (e.g., Docker, Kubernetes)
  • Able to build and maintain good relationships with stakeholders
  • Able to translate business requirements into data warehouse modeling specifications
  • Able to demonstrate creative problem-solving skills
  • A team player who loves to collaborate with others and can work independently when needed
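The orchestration tools mentioned in the requirements (Airflow, Prefect, Dagster) all model a pipeline as a dependency graph of tasks and run them in order. As a library-free sketch of that core idea (the task names and bodies here are hypothetical placeholders, not a real DAG definition), a minimal topological task runner might look like:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks; in Airflow these would be operators in a DAG.
def extract():   return "raw"
def transform(): return "clean"
def load():      return "loaded"

TASKS = {"extract": extract, "transform": transform, "load": load}
# Each entry reads "task: set of upstream dependencies".
DEPS = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}

def run_pipeline() -> list:
    """Run tasks in dependency order, like a bare-bones orchestrator."""
    order = list(TopologicalSorter(DEPS).static_order())
    for name in order:
        TASKS[name]()  # a real orchestrator adds retries, scheduling, and alerting
    return order

print(run_pipeline())  # ['extract', 'transform', 'load']
```

What Airflow and its peers add on top of this ordering is scheduling, retries, backfills, and observability, which is why the listing treats them as a distinct skill from plain Python.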

What does a data scientist in web3 do?

A data scientist in web3 is a type of data scientist who focuses on working with data related to the development of web-based technologies and applications that are part of the larger web3 ecosystem.

This can include working with data from decentralized applications (DApps), blockchain networks, and other types of distributed and decentralized systems.

In general, a data scientist in web3 is responsible for using data analysis and machine learning techniques to help organizations and individuals understand, interpret, and make decisions based on the data generated by these systems.

Some specific tasks that a data scientist in web3 might be involved in include developing predictive models, conducting research, and creating data visualizations.
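To make the analysis side of that concrete, here is a toy sketch of summarizing on-chain transfer activity. The transfer records are made up for illustration; in practice they would come from a node RPC endpoint or a blockchain indexer.

```python
from collections import Counter

# Hypothetical decoded transfer events (invented data, not from a real chain).
transfers = [
    {"from": "0xaaa", "to": "0xbbb", "value_eth": 1.5},
    {"from": "0xaaa", "to": "0xccc", "value_eth": 0.7},
    {"from": "0xbbb", "to": "0xaaa", "value_eth": 2.0},
]

def activity_summary(events):
    """Count outgoing transfers per address and total volume moved."""
    senders = Counter(e["from"] for e in events)
    volume = sum(e["value_eth"] for e in events)
    return senders, round(volume, 6)

senders, volume = activity_summary(transfers)
print(senders.most_common(1), volume)  # [('0xaaa', 2)] 4.2
```

Simple aggregations like this are often the first step before the predictive modeling and visualization work the paragraph above describes.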