Sep 25, 2018

Interviewing for Demand Engineering at Netflix

This doc captures my current point-in-time approach to hiring for the Demand Engineering team at Netflix. Every team at Netflix does it a bit differently because we value localized decision making. As hiring manager, I am the Informed Captain for how we hire and whom we choose to hire for the team.

What we do is quite niche. Frequently, people ask: "How the heck do you hire for that?" This post describes some of our thinking and the process we have developed for technical evaluation. I hope to encourage a conversation around interviewing, and if it leads to a few people applying, well, that'd be great too. ;)

What we're looking for.

In terms of technical skill, we're looking for people who can write code, have an interest in reliability and efficiency, and have experience with distributed systems. The last one is negotiable if they have deep experience in some other complex niche like DSP, kernel hacking, etc. We are on an on-call rotation, so you have to be cool with that, too. Beyond that, we're pretty open.

What we're not looking for.

While the current team is made of amazing individuals I would fight to keep, we explicitly do not want to hire more people who are just like any of the current members. Each viewpoint sheds some light on the space of possibilities, and by adding different viewpoints we can attain a fuller picture. Without a variety of ideals, teams tend to focus pathologically on what they optimize for. Diversity of thought helps us make better decisions.

Repeatability

In tension with this desire for variety is the desire for repeatability. Repeatability in evaluation helps counteract bias that might lead to either false positives or false negatives. Repeatability also helps streamline the process, which is economical and reduces stress on the team. Finally, repeatability helps us to be deliberate in changing our hiring process if it isn't giving us the results we want.

Depersonalized Technical Evaluation

The only programming part of the interview is a homework assignment, given between the non-technical phone screens and the on-site.

We have a homework problem because a work sample is the best predictor of job performance. We don't write code on whiteboards, and we don't expect candidates to. We don't use any special take-home programming software. We let the candidate do it at home with full access to the internet so they are in the coding environment they're comfortable with. The whole point of this is to have an experience that mirrors "real-life" programming.

We consider the homework a mutual investment. It takes between 2 and 4 hours total for two independent reviewers to evaluate the submission, and we suggest that the candidate spend about the same amount of time authoring their submission. "If you pass the homework, we bring you onsite."

Two calibrated reviewers independently score the code against a rubric. Due to our rubric and calibration process, we have high inter-rater reliability, which is important for establishing fair and consistent scoring. To ensure that we have the right background to correctly score the homework, we ask the candidate to use a mainstream programming language.
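
(Aside: the post doesn't describe any tooling behind that calibration, but as a purely illustrative sketch, agreement between two reviewers' rubric scores can be quantified with something like Cohen's kappa. The reviewer scores below are made up; this isn't our actual process.)

    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """Cohen's kappa: agreement between two raters, corrected for chance."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[s] * freq_b[s] for s in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical per-area rubric scores from two independent reviewers
    reviewer_1 = [3, 2, 3, 1, 2, 3, 0, 2]
    reviewer_2 = [3, 2, 2, 1, 2, 3, 0, 2]
    print(f"kappa = {cohen_kappa(reviewer_1, reviewer_2):.2f}")  # ~0.82: strong agreement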

We expect you to learn what you need to in order to get the job done well, so the problem is easily googled. We describe it in a couple of sentences with the main "gotcha" pointed out. We aren't testing for the ability to solve previously-open problems on the fly. We are seeing if you can look up a solution, pick the one you think makes the most sense, and then code it up. The solution strategy is a factor in, but not a determinant of, success.

Part of working together is establishing expectations. So, we provide evaluation criteria: "In addition to correctness, we're looking for a sense of craft and elegance in the implementation. You’re welcome to use any mainstream programming language. Give us enough documentation so we know how to run the program." We further provide candidates with the areas of evaluation and their respective point values.

On-Site Technical Evaluation

After programming in the small, we want to explore coding in the large, so we devote a 45-minute session during the on-site interview to an architectural problem: how would you design a system to implement this feature of Netflix? All of the interviewers in rotation for this segment ask the same question and have been cross-calibrated.

The question unfolds in complexity and detail, and the evaluation centers on how the candidate approaches the problem, the kinds of questions they ask, the kinds of solutions they propose, and how they respond to refinement of the problem. If the candidate falters, we provide scaffolding where needed to allow the candidate to progress, so a single "hang-up" doesn't derail the whole interview.

The other technical evaluation session in the on-site interview revolves around the intersection of technology and The Culture Memo. From our pre-interview email: "Since our work involves being deep domain experts who serve a wide variety of service owners, we often find ourselves in a position to teach them about how our system works and why it works the way it does. Please prepare a brief ~10 minute talk on a technical topic to present in the second technical interview. You don't need to have slides or materials (though feel free if they'd help you); just be prepared to teach us something technically interesting. Also come prepared to discuss some of your technical missteps (don't hold back here -- I have brought down Netflix more than once...) as well as your greatest technical achievements."

Telling people what we will chat about in each on-site session helps them prepare and feel more at ease. Since we ask some historical questions, it also gives them time to come up with examples. We don't try to trip people up, quiz them on language arcana, or expect any kind of algorithmic cramming. We try to set people up for success in the evaluation, just like we do day-to-day.

Conclusion

Through this collection of practices, we are trying to build the best team possible. Our process isn't perfect -- we've likely had false negatives -- but we are continuing to learn and improve as time goes on. What do you think? Also, if you or anyone you know finds this kind of approach to people and hiring interesting, we still have open roles.

Acknowledgement

This process for evaluation has evolved over time and is the product of many great colleagues here at Netflix. One person stands out as having taken us to the next level: I would particularly like to give credit to Lorin, who took our initial conversations around creating a rubric and made it a reality. Thanks Lorin, you make us better!
