Ripple Jobs
There are 1,016 Web3 Jobs at Ripple
Ripple is a global payment network and a real-time gross settlement system that uses the cryptocurrency XRP to facilitate fast and cheap cross-border payments.
Careers at Ripple by location
Company | Location | Salary
---|---|---
Ripple | San Francisco, CA, United States | $27k - $54k
Ripple | San Francisco, CA, United States | $72k - $75k
Ripple | New York, NY, United States | $72k - $75k
Ripple | San Francisco, CA, United States | $45k - $90k
Ripple | San Francisco, CA, United States | $45k - $75k
Ripple | San Francisco, CA, United States | $32k - $72k
Ripple | San Francisco, CA, United States | $54k - $60k
Ripple | San Francisco, CA, United States | $59k - $90k
Ripple | San Francisco, CA, United States | $26k - $39k
Ripple | San Francisco, CA, United States | $28k - $75k
Ripple | San Francisco, CA, United States | $63k - $87k
Ripple | San Francisco, CA, United States | $45k - $75k
Ripple | New York, NY, United States | $45k - $75k
Ripple | San Francisco, CA, United States | $42k - $54k
Ripple | San Francisco, CA, United States | $54k - $90k
This job is closed
Ripple’s Enterprise Data Management & Analytics team is building scalable data infrastructure that lets the company grow smoothly and safely. As a DevOps Engineer on our data platform team, you will be responsible for the setup, deployment, maintenance, and continuous monitoring of data-intensive applications. Your work will directly support the operation of production software while also improving the developer experience for data engineers and data scientists. You will bring a software engineering mindset that strengthens our culture of ownership, reliability, trust, and observability across the ever-increasing scope of our Data Platform.
WHAT YOU'LL DO:
- As a DevOps Engineer, you will maintain and develop services that support our data-driven analytics framework
- Architect, deploy, and maintain Ripple’s multi-region, multi-provider service platforms, with an emphasis on security and resiliency
- Design and develop tools for automation, monitoring, and instrumentation to reduce operational friction and increase engineering efficiency
- Create solutions for the unique technical challenges faced by Ripple’s data infrastructure, data engineering, and ML teams, including secret management, geographic failover, data replication, availability and platform resiliency, streaming technologies, and API services
- Create and automate new and existing platform and application lifecycle services, leveraging data to converge on declared states with minimal human interaction (see the sketch after this list)
- Collaborate with Data engineering to ensure code is production-ready
- Work closely with developers, data scientists, and first-level support teams
- Provide occasional after-hours on-call support, working with first-level support teams to handle urgent, critical issues
- Help lead the adoption of DevOps-first principles within the organization
- Research promising new tools and technologies, and push the team to experiment and evolve
- Participate in a robust and fair on-call framework
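For illustration only, the “converge on declared states” responsibility above describes a declarative reconciliation pattern. The minimal Python sketch below shows the idea; the state shape, the `get_observed_state` and `apply_change` helpers, and the polling interval are hypothetical placeholders, not Ripple’s implementation.

```python
# Minimal sketch of a declarative reconcile loop (illustrative only).
# The desired state, the helpers, and the 30-second interval are hypothetical.
import time

DESIRED_STATE = {"replicas": 3}  # the declared state the platform should converge to


def get_observed_state():
    # Placeholder: a real implementation would query the platform API
    # (for example, a Kubernetes deployment's current replica count).
    return {"replicas": 2}


def apply_change(replica_delta):
    # Placeholder: a real implementation would call the platform API
    # to add or remove capacity.
    print(f"scaling by {replica_delta} replicas")


def reconcile():
    observed = get_observed_state()
    delta = DESIRED_STATE["replicas"] - observed["replicas"]
    if delta != 0:
        apply_change(delta)


if __name__ == "__main__":
    while True:
        reconcile()      # converge toward the declared state
        time.sleep(30)   # periodic loop, no human interaction required
```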
WHAT WE'RE LOOKING FOR:
- 8+ years of software development & operations experience
- 5+ years of DevOps experience in multi-tenant, highly scalable, highly available environments on GCP and AWS
- 3+ years of experience with Kubernetes and infrastructure provisioning tools such as Terraform and CloudFormation
- 3+ years of experience with AWS, Docker containers, and container orchestration (Kubernetes, EKS, etc.)
- Solid development background with Go, Python, Java, or C++
- Experience developing APIs and SDKs
- Experience with data-relevant AWS services such as RDS, S3, EMR, Kinesis, DynamoDB, and Lambda, or their GCP/Azure equivalents
- Experience building deployment pipelines leveraging common CI/CD tools
- Experience with real-time telemetry and tracing tools such as Jaeger and Prometheus, and with the ELK Stack (Elasticsearch, Logstash, Kibana, Beats)
- Experience tuning and scaling Apache Kafka producers/consumers and Spark Structured Streaming applications (see the sketch after this list)
- Experience setting up LDAP, RBAC, and a service mesh / API gateway for data services
- Security awareness, with an emphasis on designing to security best practices and for IT security, GDPR, and SOC 2 compliance
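As a concrete illustration of the Kafka tuning experience called out above, the sketch below shows common producer and consumer tuning knobs using the confluent-kafka Python client. The broker address, topic, group id, and every value are illustrative placeholders, not recommendations for any particular workload.

```python
# Minimal sketch of Kafka producer/consumer tuning with confluent-kafka.
# All settings and values are illustrative placeholders.
from confluent_kafka import Producer, Consumer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "acks": "all",                          # durability vs. latency trade-off
    "linger.ms": 20,                        # wait briefly to batch small messages
    "compression.type": "lz4",              # reduce network and storage usage
    "batch.size": 131072,                   # larger batches for higher throughput
})

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "example-analytics-group",  # placeholder consumer group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,            # commit only after successful processing
    "max.poll.interval.ms": 300000,         # allow slow per-batch processing
    "fetch.min.bytes": 1048576,             # fewer, larger fetches at some latency cost
})

consumer.subscribe(["example-events"])      # placeholder topic
```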
WHAT WE OFFER:
- The chance to work in a fast-paced start-up environment with experienced industry leaders
- A learning environment where you can dive deep into the latest technologies and make an impact
- Competitive salary and equity
- 100% paid medical and dental insurance and 95% paid vision insurance for employees, starting on your first day
- 401k (with match), commuter benefits
- Industry-leading parental leave policies
- Generous wellness reimbursement and weekly onsite programs
- Flexible vacation policy - work with your manager to take time off when you need it
- Employee giving match
- Modern office in San Francisco’s Financial District
- Fully-stocked kitchen with organic snacks, beverages, and coffee drinks
- Weekly “ask me anything” company meeting
- Team outings to sports games, happy hours, game nights and more!