FalconX Jobs

There are 296 Web3 Jobs at FalconX

Company   Location                        Salary
FalconX   San Mateo, CA, United States    $106k - $183k
FalconX   San Mateo, CA, United States    $77k - $102k
FalconX   Remote                          $85k - $152k
FalconX   Remote
FalconX   Remote                          $36k - $75k
FalconX   Remote                          $40k - $90k
FalconX   Bangalore, India                $85k - $92k
FalconX   San Mateo, Portugal             $91k - $100k
FalconX   San Mateo, Portugal             $84k - $96k
FalconX   Bangalore, India                $90k - $105k
FalconX   San Mateo, Portugal             $58k - $60k
FalconX   Bangalore, India                $90k - $100k
FalconX   San Mateo, Portugal
FalconX   San Mateo, Portugal             $98k - $112k
FalconX   Bangalore, India

Senior Software Engineer

FalconX
$106k - $183k estimated

This job is closed

FalconX’s Data Infra team builds and operates systems that centralize internal and third-party data, make it easy for engineering, data science, business intelligence, accounting, and compliance teams to transform and access that data for analytics and machine learning, and power end-user experiences. As a Data Engineer on the team, you will contribute to scalable batch and streaming ETL pipelines, data warehouse design and data modeling, data governance initiatives, and the tools and applications that make that data available to other teams and systems.
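
For illustration only, here is a minimal sketch of the kind of batch ETL job described above, written in PySpark. The bucket paths, column names, and schema are hypothetical examples, not FalconX's actual data model.

    # Minimal batch ETL sketch: extract raw JSON events, transform them,
    # and load partitioned Parquet into a curated warehouse layer.
    # All paths and column names below are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("trades-batch-etl").getOrCreate()

    # Extract: read raw JSON events landed by upstream services.
    raw = spark.read.json("s3://example-raw-bucket/trades/2024-01-01/")

    # Transform: normalize types and derive partition and metric columns.
    curated = (
        raw.withColumn("trade_ts", F.to_timestamp("trade_ts"))
           .withColumn("trade_date", F.to_date("trade_ts"))
           .withColumn("notional_usd", F.col("quantity") * F.col("price_usd"))
    )

    # Load: write partitioned Parquet into the warehouse's curated layer.
    (curated.write
            .mode("overwrite")
            .partitionBy("trade_date")
            .parquet("s3://example-curated-bucket/trades/"))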

What you’ll be working on:

  • Provide technical and thought leadership for Data Engineering and Business Intelligence
  • Create, implement and operate the strategy for robust and scalable data pipelines for business intelligence and machine learning
  • Develop and maintain the core data framework and key infrastructure
  • Data warehouse design and data modeling for efficient and cost-effective reporting
  • Define and implement Data Governance processes related to data discovery, lineage, access control and quality assurance

Skills you'll need:

  • Degree in Computer Science, a related field or equivalent professional experience
  • 3+ years of strong experience with data transformation & ETL on large data sets using open technologies like Spark, SQL and Python
  • 3+ years of experience writing complex SQL, with strong knowledge of SQL optimization and an understanding of logical & physical execution plans
  • At least 1 year of experience working in an AWS environment, and familiarity with technologies such as AWS cloud services, MySQL, Redis caching, and messaging tools like Kafka/SQS
  • Experience with advanced Data Lake and Data Warehouse concepts and data modeling (e.g. relational, dimensional, internet-scale logs)
  • Knowledge of Python, Spark (Batch/Streaming), SparkSQL and PySpark (see the streaming sketch after this list)
  • Proficiency in at least one object-oriented programming language: Python, Java, or C++
  • Effective craftsmanship in building, testing, and optimizing ETL/feature/metric pipelines
  • Experience with business requirements definition and management, structured analysis, process design, and use case documentation
  • A data-oriented mindset
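
As a rough illustration of the Spark Structured Streaming and Kafka skills listed above, the sketch below reads a hypothetical "fills" topic and appends parsed records to a Parquet sink. The broker address, topic, schema, and paths are assumptions, and the Kafka source additionally requires the spark-sql-kafka connector package on the classpath.

    # Minimal Structured Streaming sketch: consume JSON messages from Kafka,
    # parse them with an explicit schema, and append them to a Parquet sink.
    # Broker, topic, schema, and paths are hypothetical.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("fills-streaming-etl").getOrCreate()

    schema = StructType([
        StructField("order_id", StringType()),
        StructField("symbol", StringType()),
        StructField("quantity", DoubleType()),
        StructField("price_usd", DoubleType()),
    ])

    # Read micro-batches from Kafka and parse the JSON payload.
    fills = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "fills")
             .load()
             .select(F.from_json(F.col("value").cast("string"), schema).alias("fill"))
             .select("fill.*")
    )

    # Checkpointing lets the file sink recover and avoid duplicate output on restart.
    query = (
        fills.writeStream
             .format("parquet")
             .option("path", "s3://example-curated-bucket/fills/")
             .option("checkpointLocation", "s3://example-checkpoints/fills/")
             .start()
    )
    query.awaitTermination()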

Nice to haves:

  • Experience with AWS, especially RDS, MSK, EMR, S3, Glue, and Kinesis
  • Prior experience with Databricks, Snowflake, Airflow and Delta Lake (a minimal Airflow scheduling sketch follows this list)
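
As noted in the last bullet, here is a minimal sketch of how the batch job above might be scheduled with Airflow. It assumes Airflow 2.4+ and a hypothetical spark-submit entry point, and is purely illustrative rather than a description of FalconX's stack.

    # Minimal Airflow DAG sketch: run the (hypothetical) PySpark batch job daily,
    # passing the logical date so each run processes its own partition.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="trades_batch_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="run_trades_etl",
            bash_command="spark-submit /opt/jobs/trades_batch_etl.py --run-date {{ ds }}",
        )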

Base pay for this role is expected to be between $155,000 and $245,000 USD. This expected base pay range is based on information at the time this post was generated. This role will also be eligible for other forms of compensation such as a performance-linked bonus, equity, and a competitive benefits package. Actual compensation for a successful candidate will be determined based on a number of factors such as skillset, experience, and qualifications.