Ideas2IT Technologies is hiring a
Web3 Data Engineer - Spark

Compensation: $30k - $70k *

Location: Chennai, TN, India

Would you like to expand your skill set beyond the Hadoop ecosystem? Would you like to build data pipelines using tools like AWS Glue, EMR, and Databricks? How about working on distributed data warehouses like Redshift and Snowflake?

How about exposure to highly regulated domains like healthcare, and working on projects for Silicon Valley startups and global enterprises like Facebook and Siemens?

About Ideas2IT

Ideas2IT is a high-end product firm. Started by an ex-Googler, we count Facebook, Siemens, Motorola, eBay, Microsoft, and Zynga among our clients. We solve some very interesting problems in the USA startup ecosystem and have created great products in the process. When we build, we build great products!

We invest in bleeding-edge technologies like AI, Blockchain, IoT, and complex cloud architectures to build better products for our clients and end users.

We have clocked phenomenal growth in the last ten years and are marching towards lofty goals. Ideas2IT has successfully rolled out multiple products, like Pipecandy (which raised $1.1M in seed funding) and element5 (which closed an oversubscribed funding round a few months ago).

What’s more, we now have an ESOP programme. When you join us, you could become a proud co-owner of one of our upcoming product companies.

About the role

The Data Engineer (Spark) role entails building complex data pipelines that leverage modern cloud and big data technologies to feed data to AI and analytical applications.

What’s in it for you?

  • Get to work on challenging projects like:
    • A modern Healthcare AI platform that revolutionizes cancer care

    • A real-time stream-processing system that syncs transactional and analytical databases at scale

    • A data platform for a German manufacturing major that enables AI use cases like advanced semantic text retrieval from product documentation

  • Work on cloud data warehouses instead of traditional on-premises warehouses

  • Learn cutting-edge data pipeline and Big Data technologies as you work with leading Silicon Valley startups

  • Opportunity to complete certifications in highly in-demand platforms like AWS and Snowflake, at our cost

  • Experience a culture that values capability over experience and continuous learning as a core tenet

  • Bring your ideas to the table and make a significant difference to the success of the product instead of being a small cog in a big wheel.

What you will be doing here

  • Implement scalable solutions for ever-increasing data volumes, using big data and cloud technologies like PySpark, Kafka, etc.

  • Implement real-time data ingestion and processing solutions

  • Work closely with data scientists on data extraction and analysis for machine learning

  • Work closely with the development teams to develop and maintain scalable data platform architecture and pipelines across our working environments

  • Utilize the latest technologies to ensure ease of data integrations and reusability across various teams


Here’s what you’ll bring

  • 4+ years of experience in Data Engineering

  • Good experience with Data Engineering tools such as Spark, EMR, AWS Glue, and Kafka

  • Experience in building large-scale data pipelines and data-centric applications using distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.)

  • Working knowledge of data warehousing, data modeling, governance, and data architecture

  • Expertise in one or more high-level languages (Python/Java/Scala)

  • Ability to handle large-scale structured and unstructured data from internal and third-party sources

  • Ability to collaborate with analytics and business teams to improve data models that feed business intelligence tools, increase data accessibility, and foster data-driven decision-making across the organization

Good to have(s)

  • Experience in any one of the cloud environments: AWS, GCP, or Azure

  • Experience with Hadoop ecosystem tools (Hive, Pig, Flume)

  • Exposure to tools like Elasticsearch, Logstash, and Delta Lake





