This is an exclusive opportunity to work as a Data Engineer at one of the fastest-growing e-commerce start-ups, based in California. We are expanding our technical team with a Data Engineer as we invest further in data-driven technologies.
- Design, develop, and implement large-scale distributed systems that process large volumes of data, focusing on scalability, latency, and fault tolerance in every system built.
- Design, develop, and operate web analytics solutions that process millions of daily clickstream events.
- Create systems that orchestrate and execute complex workflows in big-data pipelines.
- Evaluate and fine-tune systems for speed, robustness, and cost efficiency.
- Create datasets, tools, and services supporting big data, search, and machine learning operations.
- Own the full life cycle of business solutions, from requirements definition to production launch.
- Troubleshoot business and production issues.
- Own multiple systems across the search big data platform, and work with engineers, program managers, and engineering leaders to identify opportunities for business impact.
- Participate in setting the vision and objectives for the team in alignment with business and market needs.
You’ll be responsible for the design, development, and operation of large-scale data systems within the DigigGamma ecosystem. You will focus on real-time indexing pipelines, streaming analytics, distributed machine learning infrastructure, and other tasks as part of the Search BigData team. You’ll interact with engineers, product managers, and architects to provide scalable, robust technical solutions.
- Bachelor’s degree in Computer Science or related technical field.
- 4+ years of object-oriented programming experience in Java or Scala.
- 3+ years of experience building large-scale data pipelines using big data technologies (e.g., Spark, Kafka, Cassandra, Hadoop, Hive, Presto, Airflow).
- 3+ years of experience in systems design, algorithms, and distributed systems.
- 3+ years of experience with scripting languages (e.g., Python) and SQL.
You’ll need to have:
- Large-scale distributed systems experience, including scalability and fault tolerance.
- Exposure to infrastructure management technologies (Docker, Kubernetes).
- Exposure to cloud infrastructure, such as OpenStack, Azure, GCP, or AWS.
- A continuous drive to explore, improve, automate, and optimize systems and tools.
- Strong computer science fundamentals in data structures and algorithms.
- Exposure to information retrieval, statistics, and machine learning.
- Excellent oral and written communication skills.
What we offer:
- Remote work once a week
- Flexible schedule
- Positive spirit within a top-of-the-line international team
- Share the glory of disrupting the fintech landscape
- Challenging projects and freedom to experiment
- Continuous learning: free English classes, IT certifications, technical events, and a budget for books or online training sites
- Standard perks: free coffee, tea, and fruit; no dress code
- Team-building activities and regular parties