Responsibilities:
Build and enhance a high-performance, high-volume, real-time stream-processing pipeline
Implement services and APIs used by internal and external customers
Work with data scientists to implement production-grade services (e.g., we built image-recognition and text classifiers used on our classifieds sites)
Research and build prototypes that apply current and emerging internet technologies to produce insights about user behavior
Qualifications:
MSc in Computer Science or a related field
4 years of experience in at least one statically typed language; we use D, C, and Go
Experience in at least one scripting language; we use Python, Ruby, and Lua
Solid understanding of networking and internet plumbing: TCP, TLS, HTTP
Good understanding of data structures
Strong grasp of database structure and design, query languages (e.g., SQL), mathematical fundamentals, large data sets, distributed systems, and statistical concepts
Interest in working with teammates to explore the question “What else can our data tell us?”
Creativity and curiosity to go beyond current tools and deliver the best solution to the problem at hand
Comfort digging into other people’s source code in search of the root cause of a problem
Bonus:
Good understanding of Linux or any BSD, as well as familiarity with systems-debugging tools such as strace, atop, iotop, netstat, lsof, iptables, valgrind, and gdb
Experience with ETL-style data processing
Experience with machine-learning or data-science projects
Experience with a “Big Data” framework (e.g., Spark)
Understanding of how HTTP/2 and QUIC work
Experience with D, OCaml or Rust