
The problem is that "leetcode" is definitely a thing that exists in "FAANG" interviews.

The biggest stumbling block with leetcode is that you shouldn't be programming like that in real life.

"make a linked list" no, just use a library like everyone else.

"Implement addition in python but with string inputs", "no you can't use the built-in x". All of that "clever" shit should be filtered out at PR/MR/diff review time.

All of this "clever shit" is exceptionally bad programming. We don't live in the 80s anymore. We don't need to make our own sort algorithms. Just. Use. A. Library.



I'm not going to defend "leetcode grinding" and its pathologies, but TBF this is problematic:

> "make a linked list" no, just use a library like everyone else.

> "Implement addition in python but with string inputs", "no you can't use the built-in x". All of that "clever" shit should be filtered out at PR/MR/diff review time.

Sure, those statements (just use an existing library) are what you'll do in practice especially as a beginner. But there is value in asking these kinds of questions.

One is "do you know what's going on under the hood?" On another post I just saw a comment in which someone advocated counting the number of occurrences of something in a list by filtering it and then getting the length of the result. This is almost certainly a terrible solution, as it allocates memory (which has to be freed) for something that can be done in a single quick pass. By asking you such a question the interviewer can learn if you have some idea of the tradeoffs and why something might be a good or bad idea.
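To make the contrast concrete, here's a quick sketch in Python (function names are just illustrative; in Python specifically, `items.count(target)` already does the single-pass version for you):

```python
def count_by_filtering(items, target):
    # Builds a whole intermediate list just to take its length.
    return len([x for x in items if x == target])

def count_in_one_pass(items, target):
    # Single pass, no intermediate list to allocate and free.
    total = 0
    for x in items:
        if x == target:
            total += 1
    return total
```

Both return the same answer; the difference is the throwaway allocation, which is exactly the tradeoff an interviewer can probe.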

These are the kinds of decisions that can have order-of-magnitude impacts on runtime, which can have a huge impact on the company's costs.

Likewise, doing arithmetic with strings as inputs would expose interesting but not super complicated questions about how arithmetic works, how you parse a numeric value out of its written representation (I detest the word "conversion" for this), and what the interesting bounds are. If the job is really so high level that you don't have to think that there's an actual machine involved, then yes, the questions aren't useful. But how many jobs are really so abstract?
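The string-arithmetic exercise really does surface all of that (digit parsing, carries, inputs of different lengths). One possible sketch in Python, assuming non-negative decimal strings and deliberately avoiding int()/str():

```python
def add_decimal_strings(a: str, b: str) -> str:
    """Add two non-negative decimal strings digit by digit."""
    i, j, carry = len(a) - 1, len(b) - 1, 0
    digits = []
    # Walk both strings from the least significant digit, carrying as we go.
    while i >= 0 or j >= 0 or carry:
        da = ord(a[i]) - ord('0') if i >= 0 else 0
        db = ord(b[j]) - ord('0') if j >= 0 else 0
        carry, d = divmod(da + db + carry, 10)
        digits.append(chr(d + ord('0')))
        i -= 1
        j -= 1
    return ''.join(reversed(digits))
```

Even this tiny version forces the candidate to think about character codes, carry propagation, and what happens when the result is longer than either input.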

> We don't live in the 80s anymore. We don't need to make our own sort algorithms.

No, but sometimes you have to make a choice based on the data to be operated on.


> One is "do you know what's going on under the hood?" On another post I just saw a comment in which someone advocated counting the number of occurrences of something in a list by filtering it and then getting the length of the result. This is almost certainly a terrible solution, as it allocates memory (which has to be freed) for something that can be done in a single quick pass. By asking you such a question the interviewer can learn if you have some idea of the tradeoffs and why something might be a good or bad idea.

> These are the kinds of decisions that can have order-of-magnitude impacts on runtime, which can have a huge impact on the company's costs.

Then the questions should be more like: what data structure, library, method, or steps would you use to solve this problem, and why? What are the tradeoffs of your choices?

All of this can be answered in fairly high-level pseudocode too, and still show that they understand the concepts.


> Sure, those statements (just use an existing library) are what you'll do in practice especially as a beginner

The more code you write, the more you have to maintain. Sure, of course there are times where you need to re-implement something from scratch. But those times are rare (or should be). Making something from scratch without strong justification is a strong signal, just not a positive one.

Now, as you point out, "when would you write x from scratch" is a great question to ask someone. I am very wary of people who are willing to overspend innovation tokens. But that's a design/systems/culture question, not a coding question.

The signal I want from a coding test is the following:

o Can they demonstrate an understanding of the programming language?

o Do they create readable code?

o Do they ask questions?

o Do they follow the style of the program they are modifying?

o Do they say "I don't know"?

o Do they push back on weird or silly requirements?

All of those things make working with someone much easier. None of those questions require "implement an algorithm that only exists in Computer Science courses". They can all be answered by something like:

"here is a program that fetches a json document, but it breaks often, please make it more reliable"

That is far more realistic, and closer to what they'll be doing.
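The "make it more reliable" task usually boils down to timeouts, retries, and backoff. A minimal sketch of the retry part in Python (the fetcher is injected as a callable so the logic stays testable; names are made up for illustration):

```python
import time

def fetch_with_retry(fetch, attempts=3, backoff=0.1):
    """Call fetch() up to `attempts` times, backing off between failures."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as e:  # real code should catch the narrow network errors
            last_error = e
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise last_error
```

A candidate who also asks "which failures are actually retryable?" or "should there be a total deadline?" is giving exactly the signal listed above.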

The dirty little secret about most coding jobs is that actually, the important thing is getting something functional and stable for the business, so it can make money. Then after that it's responding to feature/bug requests. As a programmer, your job is to convert business logic into computer logic. 99% of the time, this doesn't mean making artisan libraries that improve some tiny metric by 20x.


> o Do they ask questions?

I find I get 90% of the insight from simple questions like "write a function to shuffle a deck of cards. Don't worry about simple typos like forgetting a semicolon."

You learn a lot from how they set things up (make a suit class, and a vector of card classes? Or just use the integers 1-52?) and talking about that. Do they think about the problem (and, as you say, ask a couple of "requirement" questions) before diving in?

One of the best hires I remember was someone who got this problem wrong (went to a lot of work to leave the deck unshuffled). We gently asked if it did what was required. He smacked himself on the forehead and said, "I'm a fucking idiot" (in front of us in his interview!) and quickly corrected the bug. Great guy.


I’m not sure that’s a great interview question; the right answer is to google the appropriate algorithm (e.g. Fisher–Yates), make sure you understand the implementation errors that will lead to bias, and implement it over any arbitrary sequence.
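For reference, Fisher–Yates is only a few lines (it's what Python's own `random.shuffle` implements), and the classic bias bug is drawing the swap index from the whole range instead of the not-yet-shuffled prefix:

```python
import random

def fisher_yates_shuffle(seq):
    """In-place unbiased shuffle of a mutable sequence."""
    for i in range(len(seq) - 1, 0, -1):
        # j must be drawn from [0, i]; drawing from [0, len(seq) - 1]
        # every time is the classic biased variant.
        j = random.randint(0, i)
        seq[i], seq[j] = seq[j], seq[i]
    return seq
```

So the question is less "can you invent this?" and more "do you know it exists, and can you avoid the bias trap?"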


If you ask me a question that boils down to “write a card shuffling algorithm”, I’m going to do less well because I’m being taxed by the thought of “Why is this person asking me to shuffle a deck of virtual cards?” During the interview, I’d probably just google “best algorithm for shuffling deck of cards in <language>”, and I’d tell you I was looking it up. If you tasked me with shuffling cards on the job, that’s what I’d do.


You could just ask, and we'd answer "it's just a simple problem so both of us can see what it's like to work together." You'd be surprised how many people show up who don't actually know how to write a for loop.

And if you didn't like the experience you wouldn't want to work for us so the interview would be a success too.


> don't actually know how to write a for loop

My god, yes. We had a two-part problem that we used for the "work skills" part of the interview. Part 1 boiled down to writing a two-level nested for loop that was simpler than fizzbuzz. Part 2 was using that loop to solve the rest of the problem, which was far more complex. I lost track of how many candidates spent 40 of the 50 minutes just writing that loop.


You don't actually believe that "work together" crap, do you? You realize you're dangling a job over their head, right? There is a power imbalance. You'll never see "work together" in an interview because you're not colleagues on equal footing.


Often candidates already have a job and are looking to see if they want a change. There's parity in such situations.


Come up with better problems, please.


What would you recommend?


> The more code you write, the more you have to maintain. Sure, of course there are times where you need to re-implement something from scratch. But those times are rare (or should be). Making something from scratch without strong justification is a strong signal, just not a positive one.

It really depends on what you're working on. A package to manage a dentist's office should be as off the shelf as possible. I don't care about I/O and try to do as little as possible, using whatever's off the shelf, but my HFT buddies are bumming the hell out of it, customizing the kernel, and whatnot. You may use Elasticsearch to save you time and money, but the folks working on it have probably gone over it so much there's nothing but custom code left.

It's all context.


Yep I absolutely agree with this.

I’ve spent about the last year optimising text CRDTs so they’re actually usable in practice. A few years ago, 800mb of ram usage and 5 minutes processing was common to open a document. Now we’re talking 2mb of ram and 20ms. That is the difference between something not being viable and something being usable everywhere. And yes: my code involved a lot of custom data structures and algorithms. There were no libraries in cargo which did exactly what I needed.

This thread is full of people bemoaning “leetcode problems” for being irrelevant in software jobs. But I use them all the time in my work. Software engineering is far more varied than any of our individual careers.


I'm happy for you that you've found a job where the things Leetcode emphasizes have merit and value in your day-to-day work.

Everyone who's on the other side of the Leetcode debate from you wants the exact same thing - an interview process that is representative of the actual day-to-day work.

As a Web developer, inverting a binary tree is not a thing an employer has ever asked me to do.


Yes. You, me and the GP commenter seem to all be vigorously agreeing with one another. Job interviews should be based on what the role entails. If that includes data structures and algorithms, so be it. If not, leave them out!

As the GP commenter said:

> It really depends on what you're working on. ... It's all context.

I brought it up because many (most) commenters in this thread seem to be claiming that this knowledge and skill is never useful. That it's ridiculous to ask about this stuff in any job interview, except as a hazing ritual. This is also absurd.

> As a Web developer, inverting a binary tree is not a thing an employer has ever asked me to do.

I've never had an employer name any data structures explicitly. It's our job as the software experts to know which tools and technologies are relevant. But with that said, I can't remember ever using custom data structures in frontend website code either. If I were hiring a web developer, there are so many things I'd rather spend time talking about in an interview. (HTML & CSS knowledge, web frameworks, design skills, raw programming skill, etc.)


> many (most) commenters in this thread seem to be claiming that this knowledge and skill is never useful.

Those commenters are making generalizations because many (most) jobs in the industry do not find that knowledge and skill useful. Certainly specialized edge positions exist.


I believe Google at least at that time was trying to hire engineers who could theoretically be put to work on any team, potentially being asked to create TensorFlow or MapReduce or contributing to the Linux kernel in service of Android, not people who have pigeonholed themselves into only being considered web developers.

If pure CRUD companies only looking for JavaScript specialists are also using leetcode screening, they're missing the point.


Here's the difference: I bet you had time to think through the novel problems and didn't have someone hovering over your shoulder, demanding you come up with solutions within 20 - 30 minutes -or else-.


> Sure, those statements (just use an existing library) are what you'll do in practice especially as a beginner.

I think it's really not about "beginner" vs. "expert", but more about the specificity of your role. If you're tasked with making general cloud services, it's probably fine. As your role gets closer to core systems/algorithms engineering, obviously that changes.

> But there is value in asking these kinds of questions.

There is value in knowing, but the dynamics of an interview make things like this harder to ask and harder to answer.

> One is "do you know what's going on under the hood?" On another post I just saw a comment in which someone advocated counting the number of occurrences of something in a list by filtering it and then getting the length of the result. This is almost certainly a terrible solution as it allocates memory (which has to be freed) in what can be done in a single quick pass.

In the real world the validity of this solution depends on its input and use. Also in the real world, especially if you're not on one of the above-mentioned specialist teams, it's usually more important to be readable and maintainable instead of just purely performance oriented. So a single pass solution that's harder to grasp at a glance becomes less desirable than multiple passes as long as your performance requirements can afford it.

> By asking you such a question the interviewer can learn if you have some idea of the tradeoffs and why something might be a good or bad idea.

Totally, but the issue becomes "Can you figure out an algorithm on a whiteboard?" instead of the (as you've already agreed) correct path of "Can you work through the tradeoffs of one implementation vs. another?" I think this question could pretty easily be presented in a way that isn't a quiz and also allows the programmer to demonstrate their problem-solving ability without seeming adversarial.


> One is "do you know what's going on under the hood?"

Why should I need to? Let's take Microsoft's .NET sort implementation.

It will intelligently determine which is the optimized sort given the conditions it's facing.

Heapsort? Mergesort? Quicksort? Radix sort?

And highly optimized versions of each.

This is what data scientists do. We don't need programmers standing at whiteboards writing bubble sort and being told it's wrong. In the real world, you call .Sort() and get on with real work.

Sure, you can hand-roll your own sort for hot-path cases but that's likely been taken into account anyway!


Uh oh. You realize you just stated you actually know sorting algorithms and even know enough to be able to decide to roll your own if needed?


The problem, though, is how do you evaluate for real work without a portfolio of experience? Real work projects take weeks - perhaps months - to design, build, review and release. How do you test for that, really?

I agree leetcode style interviews are artificial, but I think they persist because few people have identified and popularised effective alternatives. At least with leetcode you've shown people in front of you have been prepared to go learn arcane stuff and apply it. It's not good, but it's better than nothing.

I was once asked to do the Gilded Rose kata[1] for one ~200 headcount Series D "startup", and it not only resembled real work - it's a refactoring problem - but it also showed their engineering culture. When I joined, I found smart people who weren't leetcode robots, but thoughtful engineers trying to solve tricky engineering problems. I "stole" Gilded Rose to use in other companies when interviewing until I joined my current employer (a FAANG, heavily prescribed process), with great success. I would like to see more katas that are as good at testing real world skills.

Also, something I've only ever been interviewed for twice in 25+ years, which I think is underplayed: pair programming and handling code review feedback. Do this more, please. If you hire people not knowing how they're going to respond to a principal engineer telling them "we need you to think about 4 other things that weren't in the original scope given to you by the PM, can you please go deal with them", why are you surprised when toys are subsequently thrown out of metaphorical prams?

[1] https://github.com/emilybache/GildedRose-Refactoring-Kata


It seems fairly straightforward to just ask verbal questions?

For example, if the position is mostly coding C then just get them to explain how some X interacts with some Y and how that interacts with some Z. Maybe with a copy of K&R, and get them to point to page so and so.

Do that a few times, maybe use a whiteboard, and I think any experienced C developer will be able to tell who's faking it and who really understands within 2 or 3 iterations.

Even if there are no experienced folks in the company, then it will take longer but, after an hour or two it'd be pretty difficult for anyone to fake a solid understanding of hundreds of pages of K&R.


Not the point you’re making but in a C shop, if I interview and somebody pulls out K&R in 2023, they’re not getting hired.

I get the wider point you’re making, and yes, a chat can help. Elsewhere in a reply to this I talk about how I personally like to do that, but the problem is, a significant number of people can talk about coding but can’t actually code.

I once interviewed somebody who had passed 3 previous calls. Asked them to implement fizzbuzz. Couldn’t.

That is a real problem. It’s not myth.

As such, I’m not hiring somebody without seeing them write code, the same way I wouldn’t hire a designer without seeing their actual personal portfolio.
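For anyone unfamiliar with fizzbuzz as a screen: the entire exercise is about this much code, which is why failing it is such a stark signal. One Python sketch of it:

```python
def fizzbuzz(n):
    """Return the fizzbuzz sequence for 1..n as a list of strings.

    Multiples of 3 -> "Fizz", of 5 -> "Buzz", of both -> "FizzBuzz",
    everything else -> the number itself.
    """
    out = []
    for i in range(1, n + 1):
        word = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(word or str(i))
    return out
```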


> a significant number of people can talk about coding but can’t actually code.

We removed pseudocode from our pre-screening due to this. Some people are absolutely great with English and pseudocode, but can't write a for loop in their "preferred" language. I'm sure they could be great at programming...eventually.


Asking people face-to-face is a lot different than asking over a phone; it's easily 1/10th the info, or less, so of course the percentage of fakers is much higher.


A Zed Shaw's Learn C the Hard Way fan, I see.


> It seems fairly straightforward to just ask verbal questions?

This is basically my interviewing POV. If you've done the work, you can talk about the work intelligently if I ask some questions about it. If I ask you how you would think about solving problem XYZ, you can probably verbally think through how you would solve the problem, what a solution might take.


I write a lot of C and gave up on reading K&R as it is just too outdated.


This seems like a better way to evaluate real-world coding skills. Did you use it in a standard 45-60 min interview or as a take home problem? It seems like it'd take a decent amount of time to read the requirements and just introduce the task.


Take home and “please time box it to maybe an hour maximum, you’re probably doing a load of interviews and tech tests at the moment, so you don’t need to finish: I just want to see your approach”.

In-person time I used to like to ask a question I got from here: “Tell me about your favourite coding work or side project”. We’d then dig into choices, trade offs, obstacles and how they got around them.

My favourite answer to this was “an OpenGL implementation for the X Window protocol which I wrote in Common Lisp”. Just getting through the first layer of “Why [x]?”, took 15 minutes and we spent an hour talking about all the facets. Hired him, was a superb colleague.


"Please timebox to an hour" is a bit dodgy though. If one candidate really does that, but the rest give it an extra 2 or 3 hours, that 1 hour candidate will probably be at a disadvantage.


I often ask them to use version control and send me a zip of the git repo, so I can look at commit histories. Yes, they can fake that, sure, but I can normally tell a lot about a dev from it. If I see 4-5 commits over an hour, perhaps with some wrong turns, that pushes the candidate up in my estimation more than 1-2 commits that don't tell a story but do show a complete solution - which is most likely what you'd get if the history was edited a bit.


When I'm actually writing code and hashing out a solution over the course of a single hour in my job, I'm not going to commit the wrong turns.

What you're really asking for is a candidate to spend extra time inventing a convincing git history.

I really think the only way this actually works is if the problem is on a page which you can only view after you start a timer or something.

Or do it on site and watch them.

This is all high pressure but if you really want to time box, there can't be a way around it.


Thank you for mentioning this. I always find myself going back and forth with employers who allow a take-home exercise but ask that it be time-boxed nonetheless; it's difficult to unpack what kind of time pressure they're trying to communicate about the role, and whether taking longer than their expected time box looks bad just because I needed more time to understand the problem.


I don't think most employers are out there looking at the timestamps on your commits to see if you really completed it within the timebox or not -- I certainly wouldn't.

I don't think that candidates who spend less time or turn in incomplete take homes are necessarily at a disadvantage. Sooo much can be discerned from a take home even if it's not finished. I can evaluate a candidate's familiarity with modern syntax, how they organize functions, how readable their code is, whether or not they used ChatGPT/CoPilot or copy/pasted from an online tutorial (surprisingly easy to discern when you're evaluating many submissions), and so on.

All of that tells me a lot about how well someone functions as an engineer, as well as what level they're operating at, and it doesn't require the completion of the take home problem.


Thanks for sharing your take. I wish I came across interviewers like yourself in the last 18 months of job searching.


In a current hiring loop my company is doing for a senior, we have a take home divided into a series of steps, with explicit instructions that say that completion of the entire exercise is not necessary to advance (and we mean it).

The take home serves two purposes: (1) should the candidate progress to the in person interviews? (2) if the candidate stumbles in the in person coding, can I deduce from the take home that they do actually know what they're doing, and the stumbling was a side effect of interview anxiety?

I can tell a lot even from an incomplete take home, about whether someone is likely to do well on the in person interviews, but I mostly find it extremely valuable for (2), as a tiebreaker.

It hasn't happened yet in this cycle, but I can imagine a situation where someone turned in a fantastic take home and then brain farted during the in person coding, and, thanks to the take home, got hired.


Refactoring is the best kind of test! It has a slight bias toward particular programming languages, but that bias can be mitigated by having refactorable libraries in many languages.


I'm fairly certain I would not perform well with this, unless the library was trivial.

Refactoring is a task that's mostly about reading, understanding context, and a bit of rumination.

I think having them design the library, or a fraction of it, from scratch would give them more momentum.


> I think having them design the library, or a fraction of it, from scratch would give them more momentum.

I have done this as a take home test and I am ok with this as long as it doesn't take more than 5 hours.


I would never allow this. I need to know that they generated the answer. In several video interviews, I've had people copy-paste the first Google hit, and one used a co-programmer, listening and writing the answers in another window. The second was good at coding, but couldn't explain a single line of it. I think it's only safe to assume that anyone with a take-home will cheat. But it's also very hard to fire people in our group.


>I think they persist because few people have identified and popularised effective alternatives

Astrology persists, but that's not a testament to its efficacy.


Thanks for sharing, I've not come across this kata before. Seems like it'll be a fun one.


What’s funny is that in most positions the hard work has nothing to do with any of that. It’s communication, empathy, navigating people-problems and organizational dysfunction, and working with/around awful systems that you can’t change, while limiting their blast-radius to protect the business (and your own sanity).

You can always just google how to invert a binary tree or whatever and get a solid answer in no time. Which means it’s easy shit. Unless you’re one of a few devs doing PhD level work in the industry, that CS-heavy “mathy” stuff’s not the hardest thing you’re dealing with, and being good at that indicates almost nothing about your ability to deal with the hard problems you will encounter on the job.

(or else you’re greenfielding at a startup and are doing everything on easy-mode anyway, aside from problems you create for yourself on purpose or by accident)


In fairness, the ratio of "just write code" to everything you listed above changes quite drastically as your seniority increases.

To be sure, there will always be some senior/staff/principal engineers who sit in a room by themselves and code all day without talking to anyone, but, typically, the more senior you get in an organization the more your job consists of things other than writing code. When you're a junior or a mid, however, writing code makes up the vast majority of your day.


I was thinking about “just writing code” with the communication and empathy parts, especially, in fact, though you’re right that they also apply (differently, though I’m not sure if it’s more) to more-senior work.


Communication, empathy and navigating people is easy: there's a deluge of people capable of doing that. Writing code is hard. Not many people can code at all; even fewer are any good at it. This is why you can easily be unemployed after a BA in Communications but have little problem getting a job straight out of high school if you can code.


In my experience, it's relatively rare among software engineers. So, even if it's over-represented in the general population (which I'm skeptical of), having actual skill at that makes you leaps and bounds more effective than others, once you get up to senior+ level.


I agree good people skills along with technical skills is a huge bonus. But it's still a less common combination among overall candidate pool than good people skills and lacking technical skills.


Sure, you can pretend developers don't read and write code. Just like people who complain about leetcode pretend the questions are hard and that performance is uncorrelated somehow.


> Sure, you can pretend developers don't read and write code.

Weird. Why would someone pretend that?


All of this "clever shit" is exceptionally bad programming.

In a company like Google (or Google 15 years ago, really) there are problems with many possible solutions, but some solutions are better than others. The aim of leetcode recruitment is to filter for people who can not only solve hard computer science problems, but also recognize what problem they're solving and find good solutions rather than just solutions.

"I can implement this with a library" is only a good solution if the library is solving the problem in the right way. If the library solves the problem in an inefficient-but-working way, that is not good programming. At the end of the spectrum where Google exists, you need developers who know the difference.

The problem with leetcode is that it doesn't actually filter for people who can understand problems in a general sense. It filters for people who know leetcode problems.


> If the library solves the problem in an inefficient-but-working way, that is not good programming.

Yep. And even then, you need to know what you’re looking for. Years ago I wanted a library which implemented occlusion on vector shapes for plotter art. I had no idea what terms to search for because I don’t have a strong enough background in graphics. Turns out there are algorithms which help - but it was years before I found a good enough library.

And if you know what to search for, there are so many terrible implementations out there. Look for priority queues on npm. Half the libraries implement custom binary trees or something, despite the fact that using a heap (an array) is way faster and simpler. If you don’t know why a heap is better, how could you possibly evaluate whether any given library is any good?
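The heap point is easy to demonstrate: Python's stdlib priority queue, `heapq`, is literally just sift operations over a plain list, no tree objects anywhere:

```python
import heapq

# A heap-backed priority queue is just a flat Python list.
pq = []
for priority, task in [(3, "low"), (1, "urgent"), (2, "normal")]:
    heapq.heappush(pq, (priority, task))  # sift-up keeps the heap invariant

# Pop items in priority order; heappop sifts down, again on the flat array.
order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
```

No pointer-chasing, great cache behavior, and far less code than a hand-rolled binary tree - which is exactly the judgment call you can't make without knowing why the heap wins.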


You do know we hire people who have to _write_ these libraries, and they can't just defer to something else without consideration of time or space constraints. None of the questions are designed to be brainteasers or 'gotchas' but to get you to start discussing the problem and show your depth of knowledge/expertise.

If I ask you a question along the lines of "write me a function to tell if two number ranges intersect" and your solution is to grab a library instead of writing a simple predicate...then perhaps the role is not a good fit.
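For scale, the "simple predicate" presumably meant here is something like the following (a sketch assuming closed/inclusive ranges - whether endpoints count is exactly the clarifying question a good candidate asks):

```python
def ranges_intersect(a_start, a_end, b_start, b_end):
    # Two closed ranges overlap iff each one starts before the other ends.
    return a_start <= b_end and b_start <= a_end
```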

"Use a library for everything" is how we ended up with left-pad on npm.


> "Use a library for everything" is how we ended up with left-pad on npm.

Bollocks - that's because JavaScript doesn't have a standard lib.

> You do know we hire people who have to _write_ these libraries

I know, because I'm there. I'm working in VR/AR. We have a number of people who are world experts at what they are doing. Do I go off and re-make a SLAM library because I don't like the layout? No, because I have to ship something usable in a normal time frame. I can't just drop 3 months replicating someone else's work because I think the layout is a bit shit, or I can't be bothered to read the docs (I mean obviously there are no docs, but that's a different problem).

But, and I cannot stress this enough, having more than one person making similar or related libraries is a disaster. Nothing gets shipped and everything is devoted to "improving" the competing libraries.


Adding 1 to an ASCII decimal string is a great icebreaker and I always use it. It is not leetcode. It is a filter for people who have actually encountered computers before. Even though our industry is maturing, there are still a huge number of candidates who present themselves without ever once having written a program. I find the question has massive signal. Either the candidate aces it in 15 seconds, or they immediately ask a bunch of on-point clarifying questions, or they are stumped.
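One possible shape of an answer, sketched in Python rather than C so the buffer-management debate below doesn't apply - the interesting parts are the carry and the string growing by one character on all-nines input:

```python
def increment_decimal_string(s: str) -> str:
    """Add 1 to a string of ASCII digits, e.g. "999" -> "1000"."""
    digits = list(s)
    i = len(digits) - 1
    while i >= 0:
        if digits[i] == '9':
            digits[i] = '0'   # carry into the next position to the left
            i -= 1
        else:
            digits[i] = chr(ord(digits[i]) + 1)
            return ''.join(digits)
    # Carried off the front: the result is one character longer.
    return '1' + ''.join(digits)
```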


I have written countless programs but would not be able to answer that without first turning to Google. You might have screened out plenty of valid applicants with this.


A person who isn't well-versed in how the computer represents data internally is not a "valid applicant" for the positions I interview to fill.


You don't know how to transform the string "12345" to "12346" in a mainstream PL?

I don't think "valid applicant" and "doesn't really grok strings/types" are compatible, outside of intern/maybe junior positions. The filter works.


If someone really can't do that, then either they don't know how to count (very unlikely), they didn't understand the question, or they are really bad at translating what they would do on paper into a program. All disqualifying.

Another possibility is nerves. Early in my career I was very nervous in interviews and I have sympathy. I have fucked up the most basic search algorithms completely and spewed gibberish when asked to explain the complexity of my solution.

I have sympathy for that, but the only way to get through interview nerves is exposure therapy. At least in my experience.


Eh…

You’re given `99999`. Increment it. A naive approach would turn this into `9999:`. Making this work is a bit complicated and involves memory reallocation if you’re in a C-like language and you need to increase the length of the string in order to increment it. You’ll probably want a system where the caller passes a buffer of sufficient size to use for rewriting the string, in case you overflow. Make sure to have a way for the caller to pass the buffer length. You don’t want to allocate because that’s not the way it should be done in C, callers should generally allocate. You could use atoi, increment, and itoa to go back again, but that’s probably “cheating” from the perspective of a leetcode advocate, they likely want you to do this without the stdlib conversion functions.

There’s a bazillion reasons why this sucks as an interview question. There’s no way a correct answer is just something you’ll “get in 15 seconds”.


Ironically, you just answered the "difficult" interview question in a few lines of casual commenting.

If you're not allowed to use atoi and itoa, it's still not all that difficult to do in C. Incrementing ASCII characters and handling carries is really trivial.
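As a rough sketch of that carry loop (Python for brevity; the function name and structure are my own, and a C version would additionally need the buffer management discussed upthread):

```python
def increment_decimal_string(s):
    """Increment a non-negative ASCII decimal string without int()/str().

    Walk right-to-left, turning trailing 9s into 0s until some digit
    can absorb the carry; an all-9s input has to grow by one character.
    """
    digits = list(s)
    i = len(digits) - 1
    while i >= 0 and digits[i] == "9":
        digits[i] = "0"           # 9 + 1 carries; this digit wraps to 0
        i -= 1
    if i < 0:
        digits.insert(0, "1")     # "999" -> "1000": the string must grow
    else:
        digits[i] = chr(ord(digits[i]) + 1)  # bump the first non-9 digit
    return "".join(digits)
```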

I'd be kind of worried if I was hiring a C developer who didn't know the basic things you described above. I'm not sure why you think those are esoteric concepts that we shouldn't expect developers to know.


Your last 2 sentences highlight the crux of it, I think - in the first sentence, you say "C developer" and in the second just "developers".

This actually does seem like a great icebreaker-type question for a C developer. I'm less convinced every developer should be able to go into that level of detail if they are doing Javascript or Python or whatever. Ideally they'd be able to at least reason through some of this and demonstrate some understanding of some of this, but I'm not sure it is a great icebreaker question for all developer roles.


Javascript doesn't even have ASCII - every string is UTF-16.


I didn’t answer the interview question. The interview question would be to actually implement it. Something that was stated should be trivially done in 15 seconds. It would take me longer than that to explain why it’s not a trivial problem.


Whether this should be done in-place or not is a completely valid direction in which to take the discussion and would be a "pass" from me. Also a "pass" would be `return itoa(atoi(input)+1)` with relevant caveats.

An O(1) in-place solution with a sane API exists in C++, and the form it takes is also a handy indicator of whether a candidate who proposes solving this in C++ is familiar with more recent standards or not.


I don’t see how you can increment `9999` in O(1) if the string length is considered the size of the input. You have to rewrite the string with zeroes, which is O(n) of the size of the string.


For that case, which is the worst case, yes. And the amortized average complexity is O(1) because that case is unlikely.

Are you starting to see why this is a good icebreaker? It has all kinds of discussion potential.


No, not when the goal is to actually implement it. You implied in your comment that if I wasn’t someone who could implement this in 15 seconds, I’m not a good candidate in your mind. It would take me longer to explain why it’s not trivial.


You could return the 0s with a carry flag.


It's still O(N) even if you don't need to copy the input, because you nonetheless visited every character in the string.


Oh, I skimmed over the O(1) part.

That baffled me for a moment, then I realized you were talking about the general case of a uniform random input where a carry for the 1s digit has probability 0.1, a carry for the 10s digit 0.01, etc., and all 9s is the pathological worst case, right? That's what I get for coming into the discussion sideways.
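The "unlikely" part can be made precise: for a uniformly random input, a carry reaches the k-th digit from the right with probability 10^-k, so the expected number of digits rewritten per increment is a geometric series that converges to 10/9:

```python
# Expected digits rewritten when incrementing a uniformly random decimal
# number: the ones digit always changes (k=0); a carry reaches digit k
# with probability 10**-k. The series 1 + 0.1 + 0.01 + ... sums to 10/9,
# so the amortized cost per increment is O(1) even though the worst case
# (all 9s) touches every digit.
expected = sum(10.0 ** -k for k in range(50))
print(expected)  # ~1.111..., i.e. 10/9
```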


I think a lot depends on the context, the job, and the languages involved.

Of course there is a trivial solution in pretty much any language, for example something like str(Decimal(x) + 1) in python. That's dead-simple and anyone should be able to do at least that.

The later addendum that they want someone to know how data is stored by the computer seems to imply that they want an answer that doesn't use these conversions, which gets a lot trickier depending on the context. I think you'd still expect someone who does C/C++ work to get there pretty quickly, but it is more than 15 seconds - you have to think about some corner cases and how you handle the string having to be resized in some situations.

But in some languages, it isn't obvious to me how you'd do it (especially since strings are immutable in many languages) and I think it is maybe a bit unfair to ask someone who does Java or Python or whatever to do it off the top of their head.


The filter works perhaps for whatever niche you might be working in. I can't imagine asking this to one of my candidates and then assessing their entire professional experience off of something like this, though.


What domain do you program in where handling strings/bytes is not a weekly if not daily task?


The point is not whether an applicant can or cannot do it; the point is that the question is deliberately rephrased to obscure a candidate's understanding. If you asked your average candidate to do that in layman's terms, or gave them 5 seconds to google, this would be a trivial task and you would have a different outcome. All this selects for is the candidate to have a working definition of an ascii decimal string in their head, it represents no ability to problem solve whatsoever.


> All this selects for is the candidate to have a working definition of an ascii decimal string in their head, it represents no ability to problem solve whatsoever.

Problem solving isn't orthogonal to having genuine expertise. Loads of awful, brittle, hard-to-read, but working, code gets written every day by people who are good at finding a solution but haven't RTFM, so to speak.


There is a difference between "handle this string to do a normal thing, but make it secure" and "do something stupid with this string, which goes against all good practice, oh and you can't use the obvious tools because reasons".

for example, if you are using strings to do addition, why the fuck aren't you allowed to use atoi? also, why are you using strings to hold numbers?

everything to do with that question is something that you'd insta-reject in a diff/pr/mr.

Surely converting from string to int, validating it, catching errors and generally making it nice, is a much better test?

It's like going to an interview for a copywriter (someone who writes text) at a safety sign company, and they ask you to write a riddle in English, but you can't use any words derived from Latin. Sure, it shows an impressive command of both etymology and English, but it's the total opposite of making clear, easy-to-understand text.


What programs have you written?


The only justification is that it's a filtering mechanism. You likely won't use any of this, but it's a proxy for certain characteristics you might find desirable in a candidate - some baseline intelligence and motivation.

That you have the capacity to sit down, rote-memorise a bunch of different techniques, and practice completing these kinds of tests indicates the likelihood of success in the absence of other hard evidence of likely future performance. In a way it serves the same function as university entrance exams.

That these companies continue to do this for experienced candidates and not just recent graduates without a track record in employment is more perplexing.


> That these companies continue to do this for experienced candidates and not just recent graduates without a track record in employment is more perplexing.

This assumes that they don't get a similar number of people applying to the experienced positions who are lacking skills.

For example, I know some Java devs who could put down 5+ years of professional experience but are still very junior when it comes to problem solving skills. They are capable of following precise instructions and there's enough work of that nature to do - but an open ended "there's a bug that has a null pointer exception when running in test but not dev or production" is something that they're not capable of doing.

If they were to apply to a position that said senior java developer based on their years of experience, and there was no code test as part of it, they might be able to talk their way through parts of it and there is a risk that they could get hired.

For a senior java developer position at Google, would it be surprising if they got 500 applicants with 5+ years of experience?

How would you meaningfully filter that down to a set of candidates who are likely to be able to fill the role?


I think a fair number of Leetcode 'medium' problems are reasonable as augmented versions of FizzBuzz. In most cases, someone who can code should at least be able to write a brute force solution to one of these problems and explain why their brute force solution is slow. Where I think Leetcode-style questions become problematic is when they're used as test of whether someone can invent and implement a clever algorithm under time pressure. Unless you're hiring someone to do specifically that, then I think this style of Leetcode interviewing is of limited value.

Also, some Leetcode problems just have unnecessarily unfriendly descriptions. For example this one, which is quite simple to implement, but which has an unnecessarily obscure problem description: https://leetcode.com/problems/count-and-say/
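For reference, once you decode that description, the whole thing is a few lines. A Python sketch (my own structure; the sequence reads each term aloud, so "1211" becomes "one 1, one 2, two 1s" = "111221"):

```python
from itertools import groupby

def count_and_say(n):
    """Return the n-th term of the count-and-say sequence.

    Each term describes the previous term's runs of identical digits:
    "1" -> "11" -> "21" -> "1211" -> "111221" -> ...
    """
    term = "1"
    for _ in range(n - 1):
        # For each run of identical digits, emit "<run length><digit>".
        term = "".join(str(len(list(run))) + digit
                       for digit, run in groupby(term))
    return term
```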


some companies/teams use leetcode-hard problems specifically to select for ACM ICPC-level talent to solve their particular problems: writing very tight compute kernels in constrained environments with unbounded input


My understanding is that they use it to filter candidates a bit. FAANG positions get loads of applicants, so they have to find efficient ways of thinning the herd a bit. The correlation between people who are willing to grind leetcode and who turn out to be good engineers is high enough that it's worth it for them to keep as a tool to sort the massive pile of applicants they get.

Of course, hiring someone purely on how good they are at leetcode would be... dumb.


> The correlation between people who are willing to grind leetcode and who turn out to be good engineers is high enough that it's worth it for them to keep as a tool to sort the massive pile of applicants they get.

One "subtlety" this misses is that leetcode-style trivia tests only work for those who are willing _and able_ to grind leetcode etc.

There are many who have the aptitude and experience but have e.g. children, or interests outside of programming which means they do not have the required free time to spend rehearsing for this kind of interview.


> One "subtlety" this misses is that leetcode-style trivia tests only work for those who are willing _and able_ to grind leetcode etc.

This is simply not true. The smartest people I graduated my CS program with can pass leetcode style interviews without practicing. They live and breathe this stuff.

Are these problems overemphasised? Absolutely. But they don’t expect you to grind. They exist so they can find my smart classmates to make sure they get offered a job.


The cynical part of me thinks that might be a feature, not a bug.


It is a feature.

FAANG want dedicated engineers who don't have the distraction of family and/or sick relatives.

They don't want people who need to finish _promptly_ each day to put their kids to bed.

They want their pound of flesh in exchange for a high salary and ability to add "Worked at FAANG" on their CV.

If you have the time and dedication to grind leetcode for months then you've passed their requirement.


> FAANG want dedicated engineers who don't have the distraction of family and/or sick relatives.

This is hilariously out of touch.

Most of the FAANG people I know have kids.

Nearly all of their managers have families.

The one person I know who has two special needs kids specifically sought out a FAANG job for the stability, compensation, and work life balance it afforded.


Yeah, this sounds more like a late-00s startup mentality than anything else. Maybe the FAANGs were this once upon a time, but nowadays they are firmly in "stable grownup job" territory.


This certainly does not match my FAANG experience.


> One "subtlety" this misses is that leetcode-style trivia tests only work for those who are willing _and able_ to grind leetcode etc.

As a parent of young children, I think this is greatly exaggerated.

If you're a skilled developer with several years of experience then you shouldn't have to "grind" leetcode for tens of hours per week for months on end. It's trivial enough to do a few problems per week on a break or during some down time.

I think too many people refuse to even start because the difficulty has been exaggerated to the extreme. When I was mentoring college grads some of them would grind leetcode for months and months and even delay their interviews, then walk away dumbfounded when their interviewers didn't ask them anything resembling a Leetcode Hard. Many of them got questions that were basically Leetcode Easy questions.

They had all been convinced by the internet that they needed to grind Leetcode until they were miserable, but it's not really true unless maybe you're starting with almost zero knowledge of algorithms and data structures.


this is a feature, not a bug. Companies absolutely want people with minimal outside-of-job responsibilities, so that they can work them during the day, and pagerduty them during the night when stuff breaks.

or overwork them with a boatload of feature requests, bug fixes, and ops workload - this is the reason why FAANG offices are so fancy and have free food, laundry, massages, and a bunch of other stuff.

These fancy offices are for engineers to pull all nighters, not for tiktokers to show off in social media.

if you are 9-5 with 5 kids - you are not gonna make it in venture capital funded Silicon Valley world. 9-5 is for regular "enterprise" engineers at Initech or Dunder Mifflin Paper Company in Lubbock TX


I did it for 10 years with 4 kids. Multiple startups. Very few work off hours and none did in my experience.


Did you found a startup with 4 kids, or were you an early hire, or just one of several worker bees?

Serious question, not rhetorical, in case it doesn't come across.


> "make a linked list" no, just use a library like everyone else.

If you cannot sketch out a simple linked list on a whiteboard, you’re not an engineer and have no business being hired as one at a place like Apple, full stop.

I was asked to sketch out a hash table on the whiteboard. It was trivial, because a trivial hash table is trivial.
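"Trivial" here means something on the order of fifteen lines. A Python sketch of the whiteboard version (class and method names are my own; fixed bucket count, separate chaining, no resizing):

```python
class HashTable:
    """Whiteboard-trivial hash table: hash the key to pick a bucket,
    then linearly scan that bucket's (key, value) pairs. No resizing."""

    def __init__(self, n_buckets=16):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # Modulo maps the hash into the fixed range of bucket indices.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```

The obvious follow-up discussion (load factor, resizing, collision behavior) is where the interview signal actually comes from.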


> you’re not an engineer and have no business being hired as one at a place like Apple

well, unless you have an engineering degree, you're not actually an engineer, but we'll gloss over that. (Big hint: CS isn't an engineering discipline, otherwise testing, requirements gathering, and cost analysis would be a big part of the course.)

I was once asked to implement a distributed fault tolerant hashtable on a whiteboard. I'd still never make one from scratch unless I really really needed to. (and I've worked on distributed datastores....)

but that wasn't part of the coding test, that was part of the design test.

Which is my point: re-implementing things for the hell of it is an antipattern. Rote learning of toy implementations of some everyday function doesn't give you a good signal on whether the candidate understands why you should use x over y.

and again, my point is this: If I catch someone re-implementing something in a code review, they need to have a really good fucking reason to do it, otherwise it'll be rejected.


They're not asking you to re-implement it on a whiteboard so they can open a PR and shove it in their new app. They're asking you to re-implement it to demonstrate that you understand it, and that given an understanding of something, that you have the skills to implement it.

If you can't hammer out a technical interview using off-the-top-of-your-head knowledge, you're in the group they're trying to weed out.


>unless you have an engineering degree

This is not true. There are dozens of types of engineers and that word doesn’t mean civil engineer. There are audio engineers who work on music and film ffs, get over yourself.


> well, unless you have an engineering degree, you're not actually an engineer

That's not a requirement in the US.


Nor is it a requirement for a majority of engineering types regardless of country. Civil engineers think they have a monopoly on the word; this should always be pushed back upon.


You realize that somebody wrote that lib, and maybe they want people with the skills to build things instead of just reusing them?

People always make weird arguments about leetcode, but what's the point?

Just put effort into it or not, no excuses needed


My team is doing things for which libraries do not exist, things that are very much leetcode-esque, deeply into theoretical CS, lots of applied science. Some other teams here are doing data plumbing others do UI stuff.

Our team can get to be very picky about people because ... well, we need to be.


If you can't explain in detail how a linked list works, that's an enormous red flag that you've probably never studied data structures. It's a FizzBuzz-level question.

I'm sure some companies have lousy interview practices, but algorithms and data structures are super important if you want to hire someone who's actually competent, who can think for themselves.
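For scale, the FizzBuzz-level version really is this small. A Python sketch (names are my own; a singly linked list with the O(1) prepend that is its whole reason to exist):

```python
class Node:
    """One cell of a singly linked list: a value and a next pointer."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push_front(head, value):
    """Prepend a value: O(1), independent of list length."""
    return Node(value, head)

def to_list(head):
    """Walk the chain into a Python list for inspection; O(n)."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out
```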


It also doesn't resemble "real" work with regards to gathering requirements, clarifying ambiguity, weighing up trade-offs, making sure code is clear, etc. It tends to be a rote regurgitation exercise.


LeetCode has no correlation with great software development. Great development is more about paying attention to the end users, having an eye for usability and design, great communication, and the ability to execute. The current LeetCode churning explains why Google hasn't produced anything worth using since Brin & Page checked out, and why so much of software today is absolute garbage - but it is what it is, I suppose. I suppose it's better to screen for IQ and desperation than, you know... people who love developing great software. Let them keep doing what they're doing while treating their own people like disposable garbage during hard times (like Google & Meta just did), and we'll see what they churn out over the next decade or so (I'm not expecting anything from either of these companies).


> LeetCode has no correlation with great software development. Great development is more about paying attention to the end users

You can’t make blanket statements about software like that. Our field is way too varied.

For example, if you’re doing high frequency trading, then performance absolutely 100% matters. And knowing your data structures and algorithms backwards is part of that. On the other hand, if you’re building the website at a large company, the hard part of your job may be interfacing with the rest of the business. So networking and navigating corporate politics will be an incredibly important part of your job. And if you’re on a small team making a product for consumers, then your work will succeed or fail based on usability.

We could brainstorm dozens of other skills which might be important. Which of these skills matter the most depends entirely on the company and the role.


I don't disagree with you, and I agree with your take on understanding data structures and algorithms... it is important, but basing 80 to 90 percent of your recruitment process on this and on playing games with quizzes and LeetCode is idiotic. There are lots of other factors that play a vitally important role in creating great software, and most big companies do NOTHING to pay attention to these factors, albeit I fully admit this is hard to test for. That's why I offer every company I'm recruited into to do a paid (or free, albeit open source) work sample test if they want one. Most don't take me up on this offer, and I think it's idiotic, but it is what it is.


This isn't really fair. Android, the Go programming language, TensorFlow, MapReduce, Bigtable, Kubernetes, V8, and Chrome are all very good engineering: software that stands the test of time and is likely to remain in use for decades. It's not garbage. They're great at libraries, plumbing, and infrastructure-layer tooling that enables Internet scale.

What they aren't good at is consumer-facing web services with graphical frontends, at least since Gmail or so. Even there, businesses seem to like the G Suite.


Google acquired the early Android team, and they were vital in making it successful: https://www.androidauthority.com/google-android-acquisition-...

Chrome: hmm - was Chrome developed after Brin & Page left? I never realized this but OK you got me on that one if that indeed is the case.

Go: I liked it initially and I love the performance, but I don't necessarily love the language syntax and design.

The rest: mostly tools which make scaling Google's infrastructure easier. I don't count these as outstanding accomplishments but I suppose you may have had me here too (albeit the scope I was referring to mostly referred to regular ppl not infra teams but meh I suppose I'll give you this one as well).


> gathering requirements, clarifying ambiguity, weighing up trade-offs, making sure code is clear

if you don't do that in a leetcode interview - you will not make it, or you will be graded as a junior/entry-level engineer.

No senior engineer will pass without doing what you described during the leetcode interview.


> It tends to be a rote regurgitation exercise.

where there are much greater selective pressures in India and China to excel at this than there are in the US, raising this irrelevant bar to absurd territory if you value your time


A lot of places will also do a design interview to cover this aspect of the job.


Which raises the question of why do the other kind of interview at all. I mean the leetcode nonsense whose end state is often a theatrical performance: “I’m pretending to invent this clever algorithm on the fly although I’ve practiced it for weeks”.

Is there any evidence that candidates who do well on design interviews but fumble on leetcode would be any worse at the actual job?


Leetcode is simply testing whether you studied computer science, took a data structures & algorithms course, read the CLRS book on algorithms, and practiced it. Nothing else. This is really freshman-level CS work. Very simple.

people who cannot leetcode - a simple data structures & algorithms interview - cannot understand the runtime nuances of what they write.

This is how you end up with N+1 algorithms and exponential runtime.

For example, look at a lot of modern front-end JavaScript - leetcode is usually more relaxed for front end, and you end up with ungodly gobs of unnecessary loops inside loops wrapped in loops that traverse the DOM for no good reason and freeze the browser.

it is the JavaScript people who import node libraries that contain 3 lines of code instead of writing them properly
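The loops-inside-loops pathology is easy to illustrate with a contrived example (Python, names my own): counting duplicates with a nested scan versus a single pass over a set.

```python
# O(n^2): the membership test inside the loop is itself a linear scan,
# so for each element we rescan everything before it.
def count_dupes_quadratic(xs):
    return sum(1 for i, x in enumerate(xs) if x in xs[:i])

# O(n): one pass, set membership is amortized constant time.
def count_dupes_linear(xs):
    seen, dupes = set(), 0
    for x in xs:
        if x in seen:
            dupes += 1
        seen.add(x)
    return dupes
```

Both return the same answer; only the second stays responsive when the input grows, which is exactly the distinction a data structures question is probing for.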


Maybe. My experience is that a lot of the unprimed recall gets weak very quickly without practice. There's irony in testing for skills that a fresh graduate may do better at than someone with substantially more experience.

Primed recall can remain strong for many years, but leetcode often penalizes googling around to refresh your memory, even though that is what almost everyone should do before taking any further steps toward an implementation. Depending, it's often also the correct thing to do before choosing a library, the parameters to a function, code base organization, and lots of other details.

Of course, domain experts may be very strong in their specialty, but that's a special case with narrower scope.

I can hypothesize that testing for a "fast study" may have more predictive power. I have seen some interviews that are designed for this, leetcode adjacent but less antagonistic.

But anyway, I am spitballing, to be real with you.


You don't need to be good at leetcode to make a performant and well written frontend. You don't even need to know data structures, algorithms, or Big O notation.


what do you mean by N+1 algorithms?



IME it's a test where it is easy to grade many people in a way that makes it really unlikely that two people end up with the same grade. That is what leetcode achieves, and that is really the majority of the reason it is used. It is just very convenient for hiring. In India, we have companies coming to hire students from colleges, and as the first filter they give you three leetcode questions, average the number of test cases you passed in each, and rank you based on that. I spoke to one of the companies; they said that they had _very_ few people with the same score, and it was easy to choose within those small sets by reading their resume/whatever.


Just use a library, but those who can implement the library are better than the ones who can't. And that's what leetcode aims to distinguish.

You thought the point of leetcode was to simulate live work conditions? lol


What is it for? I don't think anyone here understands its purpose anymore.


The reason they do this is that there is a lot of debate over having some objective way to collect information about a candidate. So they have multiple rounds of standardized technical interview questions from which they can compare interview feedback. The rounds are run by different interviewers so they can control for the bias of any one interviewer. Then they are able to compare this standardized data across multiple candidates.

Imo this process is flawed though because just one or two rounds of technical interviewing gives you enough information about whether the candidate can code. After that you need to understand how the candidate thinks since most of the job is spent doing things that aren’t coding. These are better probed by design questions, asking the candidate to critique some code, asking the candidate to explain a project from their resume and then propose alternatives and trade offs.

Too often you get people who pass this interview process that can code at a basic level but hinder your team by giving poor feedback, having a fixed mindset, being a bad communicator, not being able to unblock themselves etc.

It’s fine if you are just looking to grow a new grad though, although former interns are better.


Not a fan of LeetCode but arguments like this almost sell me on it.


Hiring should be a team-level decision. Product roadmap should be an exec-level decision. Google has it completely backwards. That's how you end up with hordes of people gaming the hiring process by spending months on leetcode. And 3 chat apps from Google competing against each other.


Teams at Google are very fluid, there are many of them, and they change regularly. So it makes a lot of sense to be sure a candidate could be successful in a wide variety of teams.


Google's product failure is a failure of executives and product managers, not engineers.


Agreed. At our company the technical interview involves walking a dev through a working app that uses the same stack we develop with. There are bugs to solve, potential optimizations that exist and the dev explains and walks through the client/server/database portions (leaning into one or the other as needed). No tricky questions or puzzles... just actual code (a simplified subset but still real, working code).



