High-performance systems - Experience building high-performance distributed systems that scale to 100,000s of QPS.
Core Infrastructure - Experience with developing and running large-scale distributed storage systems, service-oriented architectures, and reliable monitoring and deployment infrastructure.
Data Processing - Experience building and maintaining complex, large-scale and/or real-time data processing pipelines using Kafka, Hadoop, Hive, Storm, and ZooKeeper.
Geospatial - Familiarity with geospatial datasets and services, such as maps, local search, points of interest, business-listings data, mobile-device locations, and GPS traces.
Expertise in one or more object-oriented programming languages (e.g., Python, Go, Java, C++) and eagerness to learn more.
Experience developing complex software systems that scale to millions of users, with production-quality deployment, monitoring, and reliability.
Experience with large-scale distributed storage and database systems (SQL or NoSQL, e.g., MySQL, Cassandra).
Architecture chops. You should have informed opinions on how to structure software systems and solid knowledge of the principles of fault tolerance, reliability, and durability.
Experience designing and deploying high-performance production services with robust monitoring and logging practices.
Ability to make clear-eyed tradeoffs among correctness, robustness, performance, space, and time.
Ability to build and interact with very large data processing pipelines, distributed data stores, and distributed file systems.
Strong programming and algorithmic skills (we mainly use Java & Python).