| Job Position | Company | Posted | Location | Salary | Tags |
|---|---|---|---|---|---|
| | Binance | | Hong Kong, Hong Kong | | |
| | Bcbgroup | | Remote | $62k - $64k | |
| | Bitgo | | Remote | $126k - $144k | |
| | Bitgo | | Remote | $126k - $144k | |
| | Bitgo | | Remote | $95k - $111k | |
| | Binance | | Taipei, Taiwan | | |
| | Binance | | Hong Kong, Hong Kong | | |
| | Gsrmarkets | | Remote | $80k - $95k | |
| | Binance | | Brisbane, Australia | | |
| | Binance | | Taipei, Taiwan | | |
| | Bitpanda | | Remote | $105k - $150k | |
| | Token Metrics Inc. | | London, United Kingdom | $28k - $38k | |
| | Token Metrics Inc. | | London, United Kingdom | $28k - $38k | |
| | CertiK | | New York, NY, United States | $80k - $93k | |
| | Nansen | | Remote | $98k - $150k | |
Senior Data Engineer - AI Data Service
Responsibilities
- Responsible for designing, building, and maintaining scalable data pipelines to support the Square team’s data and feature engineering needs
- Lead feature data preparation, transformation, validation, and monitoring for machine learning and recommendation/search systems
- Collaborate closely with algorithm, product, and business teams to translate business requirements into reliable data and feature solutions
- Drive the development of core data infrastructure and feature platforms with data-driven strategies to maximize business impact
- Ensure data quality, stability, and performance across offline and online feature pipelines
- Identify data gaps and optimization opportunities, and define success metrics together with Product and Business stakeholders
Requirements
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Software Engineering, or related fields, with 5+ years of relevant industry experience
- Strong hands-on experience in data engineering, including large-scale data processing, ETL/ELT pipelines, and feature engineering
- Expert-level proficiency in Java and Scala
- Solid experience with big data technologies (e.g., Spark, Hive, Flink, or similar distributed systems); a minimal Spark sketch follows this list
- Familiarity with the machine learning feature lifecycle, including offline training features and online serving features
- Experience working with data warehouses and feature stores is a strong plus
- Bilingual proficiency in English and Chinese is required to coordinate with overseas partners and stakeholders
- Strong ownership, communication skills, and ability to work in a fast-paced, cross-functional environment
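To make the pipeline and feature-engineering requirements above more concrete, here is a minimal sketch of an offline feature job in Spark (Scala), the stack named in the posting. The input path, event schema (user_id, item_id, event_type), feature names, and output location are assumptions made for illustration only and are not taken from the posting.

```scala
// Illustrative offline feature pipeline sketch: aggregates raw interaction
// events into per-user features for downstream training jobs.
// Paths and column names are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object UserFeaturePipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("user-feature-pipeline")
      .getOrCreate()

    // Raw interaction events for one date partition (assumed schema:
    // user_id, item_id, event_type, event_ts).
    val events = spark.read.parquet("s3://example-bucket/events/dt=2024-01-01/")

    // Aggregate per-user engagement features.
    val userFeatures = events
      .groupBy(col("user_id"))
      .agg(
        count(lit(1)).as("event_count"),
        countDistinct(col("item_id")).as("distinct_items"),
        sum(when(col("event_type") === "click", 1).otherwise(0)).as("click_count")
      )
      .withColumn("click_rate", col("click_count") / col("event_count"))

    // Basic validation before publishing: fail fast if any row has a null key.
    require(userFeatures.filter(col("user_id").isNull).isEmpty, "null user_id in features")

    // Write the offline feature table consumed by training jobs.
    userFeatures.write.mode("overwrite").parquet("s3://example-bucket/features/user/dt=2024-01-01/")

    spark.stop()
  }
}
```

In practice, a job like this would typically run on a schedule per date partition and publish its output into a feature store so the same definitions serve both offline training and online serving.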
What does a data scientist in web3 do?
A data scientist in web3 focuses on data generated by the technologies and applications that make up the broader web3 ecosystem, including decentralized applications (dApps), blockchain networks, and other distributed or decentralized systems.
In general, a data scientist in web3 is responsible for applying data analysis and machine learning techniques to help organizations and individuals understand, interpret, and make decisions based on the data these systems generate.
Typical tasks include developing predictive models, conducting research, and creating data visualizations.
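As one hedged illustration of that kind of analysis, the sketch below computes daily active addresses from a hypothetical table of decoded blockchain transactions. The data source, path, and column names (block_time, from_address) are assumptions for the example, not a reference to any particular dataset or API.

```scala
// Illustrative on-chain metric sketch: daily active addresses from a
// hypothetical table of decoded transactions. Paths and columns are assumed.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyActiveAddresses {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-active-addresses")
      .getOrCreate()

    // Decoded transactions (assumed schema: block_time, from_address, ...).
    val txs = spark.read.parquet("s3://example-bucket/decoded_transactions/")

    // Count distinct sending addresses per calendar day.
    val dau = txs
      .withColumn("day", to_date(col("block_time")))
      .groupBy(col("day"))
      .agg(countDistinct(col("from_address")).as("active_addresses"))
      .orderBy(col("day"))

    dau.show(30, truncate = false)

    spark.stop()
  }
}
```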