Data Science Jobs in Web3

1,276 jobs found

Company | Location | Salary
Zinnia | Remote | $88k - $97k
Chainalysis | Canada | $76k - $87k
Binance | Taipei, Taiwan |
Zinnia | Remote | $80k - $100k
DV Trading | Chicago, IL, United States | $100k - $150k
Integra | Remote | $21k - $64k
Integra | Remote | $79k - $84k
Magic Eden | Shanghai, China | $112k - $150k
Alchemy | Remote | $91k - $156k
Anti Capital | New York, NY, United States | $100k - $200k
Fabric of Truth, Inc | Boston, MA, United States | $84k - $164k
Fabric of Truth, Inc | Belgium | $84k - $164k
Binance | Taipei, Taiwan |
Rampnetwork | Remote | $79k - $84k
Nethermind | London, United Kingdom | $27k - $63k

Zinnia
$88k - $97k estimated
Remote

WHO WE ARE:

Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products, enabling more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value. And that we do. Zinnia has over $180 billion in assets under administration and serves 100+ carrier clients, 2,500 distributors and partners, and over 2 million policyholders.

WHO YOU ARE:

As a seasoned Data Engineer, you bring extensive expertise in optimizing data workflows with database tools such as Oracle, BigQuery, and SQL Server. You have a deep understanding of ELT/ETL processes and data integration, a strong command of Python for data manipulation and automation, and advanced expertise in platforms such as Google BigQuery, DBT, and Apache Airflow. You will be responsible for designing and maintaining scalable ETL pipelines, optimizing complex data systems, and ensuring smooth data flow across platforms. As a Senior Data Engineer, you will also work collaboratively in a team and contribute to building the data infrastructure that drives business insights.

WHAT YOU’LL DO:

- Design, develop, and optimize complex ETL pipelines that integrate large data sets from various sources.
- Build and maintain high-performance data models using Google BigQuery and DBT for data transformation.
- Develop Python scripts for data ingestion, transformation, and automation.
- Implement and manage data workflows using Apache Airflow for scheduling and orchestration (a minimal sketch follows this list).
- Collaborate with data scientists, analysts, and other stakeholders to ensure data availability, reliability, and performance.
- Troubleshoot and optimize data systems, identifying and resolving issues proactively.
- Work on cloud-based platforms, particularly AWS, to leverage scalability and storage options for data pipelines.
- Ensure data integrity, consistency, and security across systems.
- Take ownership of end-to-end data engineering tasks while mentoring junior team members.
- Continuously improve processes and technologies for more efficient data processing and delivery.
- Act as a key contributor to developing and supporting complex data architectures.
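To make the Airflow and BigQuery responsibilities above concrete, here is a minimal sketch of a daily DAG with one BigQuery load step. The DAG id, dataset, and table names are hypothetical, and it assumes Airflow 2.x plus the google-cloud-bigquery client with default GCP credentials configured; treat it as an illustration of the pattern, not the team’s actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_daily_policies() -> None:
    """Rebuild a (hypothetical) daily aggregate table from a raw source table."""
    client = bigquery.Client()  # uses default GCP credentials
    sql = """
        CREATE OR REPLACE TABLE analytics.daily_policy_counts AS
        SELECT DATE(created_at) AS day, COUNT(*) AS policies
        FROM raw.policies
        GROUP BY day
    """
    client.query(sql).result()  # block until the BigQuery job finishes


with DAG(
    dag_id="daily_policy_etl",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_daily_policies",
        python_callable=load_daily_policies,
    )
```

In a real pipeline, a job like this would typically fan out into separate extract, transform (e.g., DBT), and load tasks, with retries and alerting configured on the DAG.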

WHAT YOU’LL NEED:

- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 6+ years of hands-on experience in Data Engineering or related fields, with a strong background in building and optimizing data pipelines.
- Strong proficiency in Google BigQuery, including designing and optimizing queries (see the sketch after this list).
- Advanced knowledge of DBT for data transformation and model management.
- Proficiency in Python for data engineering tasks, including scripting, data manipulation, and automation.
- Solid experience with Apache Airflow for workflow orchestration and task automation.
- Extensive experience in building and maintaining ETL pipelines.
- Familiarity with cloud platforms, particularly AWS (Amazon Web Services), including tools like S3, Lambda, Redshift, or Glue.
- Java knowledge is a plus.
- Excellent problem-solving and troubleshooting abilities.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Self-motivated, detail-oriented, and able to work with minimal supervision.
- Ability to manage multiple priorities and deadlines in a fast-paced environment.
- Experience with other cloud platforms (e.g., GCP, Azure) is a plus.
- Knowledge of data warehousing best practices and architecture.
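As a small illustration of the BigQuery proficiency called for above, the sketch below runs a parameterized query with the google-cloud-bigquery Python client. The project, dataset, and table names are hypothetical, and default GCP credentials are assumed; query parameters avoid string interpolation and keep queries safe and reusable.

```python
from datetime import datetime

from google.cloud import bigquery

client = bigquery.Client()  # assumes default GCP credentials

# Hypothetical table: count policies created on or after a given date.
sql = """
    SELECT COUNT(*) AS n
    FROM `my_project.raw.policies`
    WHERE created_at >= @since
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("since", "TIMESTAMP", datetime(2024, 1, 1)),
    ]
)
for row in client.query(sql, job_config=job_config).result():
    print(row.n)
```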

WHAT’S IN IT FOR YOU?

At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.



What does a data scientist in web3 do?

A data scientist in web3 is a data scientist who focuses on data from the web-based technologies and applications that make up the larger web3 ecosystem.

This can include working with data from decentralized applications (DApps), blockchain networks, and other distributed and decentralized systems.

In general, a data scientist in web3 uses data analysis and machine learning techniques to help organizations and individuals understand, interpret, and make decisions based on the data these systems generate.

Some specific tasks that a data scientist in web3 might be involved in include developing predictive models, conducting research, and creating data visualizations.
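As a simplified example of the raw material such work starts from, the sketch below uses the web3.py library to pull transaction counts for recent blocks, the kind of series that might feed an activity dashboard or a feature in a predictive model. The RPC endpoint URL is a placeholder; substitute a real JSON-RPC provider.

```python
from web3 import Web3

# Placeholder endpoint: replace with a real JSON-RPC provider URL.
w3 = Web3(Web3.HTTPProvider("https://example-rpc-endpoint"))

latest = w3.eth.get_block("latest")

# Walk back over the last 10 blocks, recording timestamp and transaction
# count, the raw material for simple on-chain activity metrics.
series = []
for number in range(latest.number - 9, latest.number + 1):
    block = w3.eth.get_block(number)
    series.append((block.number, block.timestamp, len(block.transactions)))

for number, ts, tx_count in series:
    print(f"block {number}: {tx_count} transactions at unix time {ts}")
```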