Engineer Jobs in Web3

16,582 jobs found

Company             Location                        Salary
FalconX             Remote                          $103k - $165k
OKX                 Singapore, Singapore            $103k - $117k
OKX                 Singapore, Singapore            $90k - $180k
OKX                 Singapore, Singapore            $90k - $180k
Syndica             Houston, TX, United States      (not listed)
Goldman Sachs       New York, NY, United States     $115k - $180k
Find Satoshi Lab    Remote                          $36k - $54k
ChainSafe Systems   New York, NY, United States     $84k - $84k
Hyperbolic Labs     Irvine, CA, United States       $81k - $150k
Coinbase            Remote                          $185k
Coinbase            Remote                          $211k - $249k
BitGo               Palo Alto, CA, United States    $185k - $235k
BitGo               Toronto, Canada                 $80k - $105k
Nomos               Warsaw, Poland                  $90k - $148k
Gauntlet            Remote                          $160k - $180k

Senior Software Engineer, Data

FalconX
$103k - $165k estimated

This job is closed

What you’ll be working on:

  • Provide technical and thought leadership for Data Engineering and Business Intelligence
  • Create, implement, and operate the strategy for robust and scalable data pipelines for business intelligence and machine learning (see the sketch after this list)
  • Develop and maintain the core data framework and key infrastructure
  • Data Warehouse design and data modeling for efficient and cost-effective reporting
  • Define and implement Data Governance processes related to data discovery, lineage, access control, and quality assurance
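
For illustration only, here is a minimal sketch of the kind of batch pipeline this bullet describes: a PySpark job that reads raw events, cleans and aggregates them, and writes a partitioned table for reporting. The S3 paths, column names, and trade schema are hypothetical placeholders, not FalconX's actual stack.

```python
# Hypothetical batch ETL sketch: raw trade events -> daily reporting table.
# Paths, columns, and schema are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_trades_etl").getOrCreate()

# Extract: read raw event data from object storage.
raw = spark.read.parquet("s3://example-bucket/raw/trades/")

# Transform: keep filled trades, deduplicate, aggregate to a daily grain.
daily = (
    raw.filter(F.col("status") == "filled")
       .dropDuplicates(["trade_id"])
       .withColumn("trade_date", F.to_date("executed_at"))
       .groupBy("trade_date", "symbol")
       .agg(
           F.count("*").alias("trade_count"),
           F.sum("notional_usd").alias("notional_usd"),
       )
)

# Load: write a partitioned table that BI dashboards and ML features can read.
(daily.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3://example-bucket/marts/daily_trades/"))

spark.stop()
```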

Skills you'll need:

  • Degree in Computer Science, a related field, or equivalent professional experience
  • 3+ years of strong experience with data transformation & ETL on large data sets using open technologies like Spark, SQL, and Python
  • 3+ years of complex SQL with strong knowledge of SQL optimization and an understanding of logical & physical execution plans (see the SparkSQL sketch after this list)
  • At least 1 year working in an AWS environment, with familiarity with modern technologies such as AWS cloud services, MySQL, Redis caching, and messaging tools like Kafka and SQS
  • Experience with advanced Data Lake and Data Warehouse concepts and data modeling (e.g. relational, dimensional, internet-scale logs)
  • Knowledge of Python, Spark (Batch/Streaming), SparkSQL, and PySpark
  • Proficiency in at least one of the following object-oriented programming languages: Python, Java, or C++
  • Effective craftsmanship in building, testing, and optimizing ETL/feature/metric pipelines
  • Experience with business requirements definition and management, structured analysis, process design, and use case documentation
  • A data-oriented mindset
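
As a small illustration of the SQL and SparkSQL skills above, the sketch below registers a DataFrame as a temporary view, runs an aggregation in SparkSQL, and prints the physical execution plan, which is the usual starting point for query tuning. The dataset and column names are hypothetical and carried over from the previous sketch.

```python
# Hypothetical SparkSQL sketch: query a reporting table and inspect its plan.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("plan_inspection").getOrCreate()

# Load the (hypothetical) daily reporting table and expose it to SQL.
trades = spark.read.parquet("s3://example-bucket/marts/daily_trades/")
trades.createOrReplaceTempView("daily_trades")

top_symbols = spark.sql("""
    SELECT symbol, SUM(notional_usd) AS notional_usd
    FROM daily_trades
    WHERE trade_date >= DATE '2024-01-01'
    GROUP BY symbol
    ORDER BY notional_usd DESC
    LIMIT 10
""")

# 'formatted' mode prints an outline of the physical plan plus per-node
# details (partition pruning on trade_date, chosen aggregation strategy);
# mode="extended" would also show the logical plans.
top_symbols.explain(mode="formatted")
```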