One observation of mine is that interviewing is often more about finding the most feasible and consistent process than the most effective one. Whiteboard interviews make for consistent evaluation, and it's easy to train new interviewers on a fixed set of questions. Scoring based on how far the candidate got, and how optimized their approach was, is relatively unambiguous. So the format is logistically easy and less subject to bias - that's why it's popular even though it bears little resemblance to day-to-day work.
A more effective interview would involve tasks that more closely resemble day to day work. However, the examples of this that I've seen so far make for much more ambiguous evaluation or are much more difficult to schedule:
One example interview process I've seen is having candidates submit PRs to a codebase to fix bugs or implement features, and then review a PR submitted by the interviewer. This more closely resembles day-to-day work, but has the disadvantage of spreading the interview process across multiple days. Additionally, you might tell the candidate not to spend more than 2 hours implementing the PR, but there's no way to know whether they actually spent 10+ hours on it.
Another alternative format is a 2-hour interview that starts with the interviewer asking, "How would you build a text editor?" There's no single right response. If the candidate brings up ropes and gap buffers, the interview might go in a direction focused on data structures and systems. Some candidates ask whether it's a WYSIWYG editor like Word, or a plain-text ASCII/Unicode editor like Vim; in that case the interview might test the candidate's ability to think through an interface that decouples the UI from the underlying data structures. The candidate might start off with a simple array, work through why it becomes infeasible at larger document sizes, and think through ways to mitigate that. This interview was flexible enough to test technical knowledge and reasoning for candidates at any level, and could go in a variety of directions. On the other hand, that flexibility makes consistent evaluation difficult, and training people up on this interview is similarly difficult. It was used at a small company with maybe 30-40 engineers, where each team had essentially total latitude over its own hiring. That's not how many larger companies' interviews work, which tend to emphasize consistent evaluation and multiple interviewers.
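For concreteness, here's a rough sketch (in Python, with names of my own choosing) of the gap-buffer idea a candidate might land on after hitting the limits of a plain array: keep a movable gap at the cursor, so inserting or deleting there is cheap, instead of shifting every trailing character on each edit.

```python
class GapBuffer:
    """Minimal gap buffer sketch. The buffer holds text plus an empty
    "gap" at the cursor; edits at the cursor just shrink or fill the gap,
    avoiding the O(n) shifting a plain array needs for every insert."""

    def __init__(self, text="", gap_size=16):
        self.buf = list(text) + [None] * gap_size
        self.gap_start = len(text)      # gap occupies [gap_start, gap_end)
        self.gap_end = len(self.buf)

    def _move_gap(self, pos):
        # Slide the gap so it starts at `pos`, copying characters
        # across it one at a time.
        while self.gap_start > pos:     # move gap left
            self.gap_start -= 1
            self.gap_end -= 1
            self.buf[self.gap_end] = self.buf[self.gap_start]
        while self.gap_start < pos:     # move gap right
            self.buf[self.gap_start] = self.buf[self.gap_end]
            self.gap_start += 1
            self.gap_end += 1

    def insert(self, pos, ch):
        if self.gap_start == self.gap_end:      # gap exhausted: grow it
            grow = max(16, len(self.buf))
            self.buf[self.gap_end:self.gap_end] = [None] * grow
            self.gap_end += grow
        self._move_gap(pos)
        self.buf[self.gap_start] = ch
        self.gap_start += 1

    def delete(self, pos):
        # Delete the character just before `pos` (i.e. backspace).
        self._move_gap(pos)
        if self.gap_start > 0:
            self.gap_start -= 1

    def text(self):
        return "".join(self.buf[:self.gap_start] + self.buf[self.gap_end:])
```

Consecutive edits at the same cursor position touch only the gap boundaries; the O(n) cost only shows up when the cursor jumps far away, which matches how people actually type.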
The one interview format that does closely resemble real-world tasks, and that I suggest more companies adopt, is the debugging interview. It requires that the candidate bring a laptop and that the company build a couple of repository templates, but once that's done it's a very easy interview to conduct: observe the candidate debug, note how many bugs they fix, and check whether the fixes do bad things like breach layers of abstraction.