Data Science Jobs in Web3

1,494 jobs found

Company | Location | Salary

Ideas2IT Technologies | Chennai, India | $30k - $70k
Binance | Remote | (not listed)
Nuri | Berlin, Germany | $91k - $156k
Parity Technologies | Berlin, Germany | $91k - $156k
Parity Technologies | Berlin, Germany | $91k - $156k
SwissBorg | Lisbon, Portugal | $40k - $62k
Rarible | Remote | $63k - $75k
CANDY | New York, NY, United States | $45k - $75k
Autograph | Santa Monica, CA, United States | $16k - $54k
MetaMask | United States | $40k - $92k
MetaMask | United States | $54k - $72k
MetaMask | San Francisco, CA, United States | $24k - $67k
Merkle Science | Bengaluru, India | $63k - $100k
Nuri | Berlin, Germany | $91k - $156k

Ideas2IT Technologies
Chennai, TN, India
$30k - $70k (estimated)

Would you like to expand your skillset beyond the Hadoop ecosystem? Would you like to build data pipelines using tools like AWS Glue, EMR, and Databricks? How about working on distributed data warehouses like Redshift and Snowflake?

How about exposure to highly regulated domains like healthcare? And working on projects for Silicon Valley startups and global enterprises like Facebook and Siemens?

About Ideas2IT

Ideas2IT is a high-end product firm. Started by an ex-Googler, we count Facebook, Siemens, Motorola, eBay, Microsoft, and Zynga among our clients. We solve some very interesting problems in the USA startup ecosystem and have created great products in the process. When we build, we build great!

We invest in bleeding-edge technologies like AI, blockchain, IoT, and complex cloud architectures to build better products for our clients and end users.

We have clocked phenomenal growth in the last ten years and are marching towards lofty goals. Ideas2IT has successfully rolled out multiple products, like Pipecandy (raised $1.1M in seed funding) and element5 (which recently closed an oversubscribed funding round).

What’s more, we now have an ESOP programme. When you join us, you could become a proud co-owner of one of our upcoming product companies.

About the role

The Data Engineer (Spark) role entails building complex data pipelines with modern cloud and big data technologies to feed data to AI and analytical applications.

What’s in it for you?

  • Get to work on challenging projects like:
    • A modern Healthcare AI platform that revolutionizes cancer care

    • A real-time stream processing system that keeps transactional and analytical databases in sync at scale

    • A data platform that enables AI use cases like advanced semantic text retrieval from product documentation for a German Manufacturing major

  • Work on Cloud Data Warehouses, instead of the traditional on-prem warehouses

  • Learn cutting-edge data pipeline and Big Data technologies as you work with leading Silicon Valley startups

  • Opportunity to complete certifications in highly in-demand platforms like AWS and Snowflake, at our cost

  • Experience a culture that values capability over experience and continuous learning as a core tenet

  • Bring your ideas to the table and make a significant difference to the success of the product instead of being a small cog in a big wheel.

What you will be doing here

  • Implement scalable solutions for ever-increasing data volumes, using big data and cloud technologies like PySpark and Kafka

  • Implement real-time data ingestion and processing solutions

  • Work closely with data scientists on data extraction and analysis for machine learning

  • Work closely with the development teams to develop and maintain scalable data platform architecture and pipelines across our working environments

  • Utilize the latest technologies to ensure ease of data integrations and reusability across various teams
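
The real-time processing described above is usually built as a windowed aggregation over an event stream, with Kafka as the transport and PySpark as the engine. As a rough, dependency-free sketch of the core idea (the event records and field names here are invented for illustration), a tumbling-window event count might look like:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical events; in production these would arrive from a Kafka topic
# and be aggregated by a PySpark Structured Streaming job.
events = [
    {"ts": "2024-01-01T10:00:05+00:00", "user": "a"},
    {"ts": "2024-01-01T10:00:40+00:00", "user": "b"},
    {"ts": "2024-01-01T10:01:10+00:00", "user": "a"},
]

def tumbling_window_counts(events, window_seconds=60):
    """Count events per fixed-size (tumbling) time window."""
    counts = defaultdict(int)
    for e in events:
        # Bucket each event's timestamp into the window it falls in.
        epoch = datetime.fromisoformat(e["ts"]).timestamp()
        window_start = int(epoch // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

print(tumbling_window_counts(events))
# Two events land in the 10:00 window, one in the 10:01 window.
```

In PySpark the same logic would be a `groupBy` over a time window on a streaming DataFrame; the point of the sketch is only the windowed-aggregation shape of the computation.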


Here’s what you’ll bring

  • 4+ years of experience in Data Engineering

  • Good experience with Data Engineering tools such as Spark, EMR, AWS Glue, and Kafka

  • Experience building large-scale data pipelines and data-centric applications using distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.)

  • Working knowledge of Data warehousing, Data modeling, Governance, and Data Architecture

  • Expertise in one or more high-level languages (Python/Java/Scala)

  • Ability to handle large-scale structured and unstructured data from internal and third-party sources

  • Ability to collaborate with analytics and business teams to improve data models that feed business intelligence tools, increase data accessibility, and foster data-driven decision-making across the organization

Good to have

  • Experience in any one of the cloud environments - AWS, GCP, Azure

  • Experience with the Hadoop ecosystem (Hive, Pig, Flume)

  • Exposure to tools like ElasticSearch, LogStash, and DeltaLake



What does a data scientist in web3 do?

A data scientist in web3 is a data scientist who focuses on data from the web-based technologies and applications that make up the larger web3 ecosystem.

This can include data from decentralized applications (DApps), blockchain networks, and other distributed and decentralized systems.

In general, a data scientist in web3 uses data analysis and machine learning techniques to help organizations and individuals understand, interpret, and make decisions based on the data these systems generate.

Some specific tasks that a data scientist in web3 might be involved in include developing predictive models, conducting research, and creating data visualizations.
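
For a concrete flavor of such work, here is a small, entirely hypothetical sketch (the transaction records, field names, and values are invented for illustration) that aggregates raw DApp transfer records into per-day volume, the kind of feature-building step that typically precedes modeling or visualization:

```python
from collections import defaultdict

# Hypothetical on-chain transfer records; real data would come from a
# blockchain node or an indexing service.
transactions = [
    {"date": "2024-05-01", "sender": "0xabc", "value_eth": 1.5},
    {"date": "2024-05-01", "sender": "0xdef", "value_eth": 0.5},
    {"date": "2024-05-02", "sender": "0xabc", "value_eth": 2.0},
]

def daily_volume(txs):
    """Sum transferred value per calendar day."""
    totals = defaultdict(float)
    for tx in txs:
        totals[tx["date"]] += tx["value_eth"]
    return dict(totals)

print(daily_volume(transactions))
# -> {'2024-05-01': 2.0, '2024-05-02': 2.0}
```

The resulting daily series could then feed a predictive model or a dashboard, two of the task types mentioned above.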