Senior Jobs at FalconX
There are 106 Web3 Jobs at FalconX
| Company | Location | Salary |
|---|---|---|
| FalconX | Remote | $103k - $165k |
| FalconX | Remote | $90k - $106k |
| FalconX | New York, NY, United States | $106k - $165k |
| FalconX | Bangalore, India | $84k - $112k |
| FalconX | Remote | $106k - $165k |
| FalconX | San Mateo, CA, United States | $87k - $92k |
| FalconX | San Mateo, CA, United States | $106k - $165k |
| FalconX | Remote | $103k - $117k |
| FalconX | San Mateo, CA, United States | $116k - $189k |
| FalconX | Bangalore, India | $83k - $110k |
| FalconX | San Francisco, CA, United States | $106k - $165k |
| FalconX | Remote | $105k - $117k |
| FalconX | San Mateo, CA, United States | $85k - $112k |
| FalconX | Remote | $74k - $110k |
| FalconX | Hong Kong, Hong Kong | $72k - $110k |
FalconX
$103k - $165k estimated
This job is closed
What you’ll be working on:
- Provide technical and thought leadership for Data Engineering and Business Intelligence
- Create, implement, and operate the strategy for robust, scalable data pipelines for business intelligence and machine learning
- Develop and maintain the core data framework and key infrastructure
- Design data warehouses and data models for efficient, cost-effective reporting
- Define and implement Data Governance processes for data discovery, lineage, access control, and quality assurance
Skills you'll need:
- Degree in Computer Science or a related field, or equivalent professional experience
- 3+ years of strong experience with data transformation and ETL on large data sets using open technologies like Spark, SQL, and Python
- 3+ years of complex SQL, with strong knowledge of SQL optimization and an understanding of logical and physical execution plans
- 1+ years working in an AWS environment, with familiarity with modern technologies such as AWS cloud services, MySQL databases, Redis caching, and messaging tools like Kafka/SQS
- Experience with advanced Data Lake and Data Warehouse concepts and with data modeling (e.g., relational, dimensional, internet-scale logs)
- Knowledge of Python, Spark (batch/streaming), Spark SQL, and PySpark
- Proficiency in at least one object-oriented programming language: Python, Java, or C++
- Effective craftsmanship in building, testing, and optimizing ETL/feature/metric pipelines
- Experience with business requirements definition and management, structured analysis, process design, and use-case documentation
- A data-oriented mindset