I'm a tedious broken record about this (among many other things), but if you haven't read this Richard Cook piece, I strongly recommend you stop reading this postmortem and go read Cook's piece first. It won't take you long. It's the single best piece of writing about this topic I have ever read, and I think the piece of technical writing that has done the most to change my thinking: https://how.complexsystems.fail/
You can literally check off the things from Cook's piece that apply directly here. Also: when I wrote this comment, most of the thread was about root-causing the DNS thing that happened, which I don't think is the big story behind this outage. (Cook rejects the whole idea of a "root cause", and I'm pretty sure he's dead on right about why.)
The best interview process I've ever been a part of involved pair programming with the person for a couple of hours, after a tech screening and a phone call with a member of the team. You never failed to know within a few minutes whether the person could do the job and be a good coworker. This process worked so well that it created the best, most productive team I've worked on in 20+ years in the industry, despite that company's other dysfunctions.
The problem with it is the same curse that has rotted so much of software culture—the need for a scalable process with high throughput. "We need to run through hundreds of candidates per position, not a half dozen, are you crazy? It doesn't matter if the net result is better, it's the metrics along the way that matter!"
I love this type of article, where we can reconstruct what happened so long ago just based on careful observations.
Some other instances I've come across:
* The K-Pg extinction event that wiped out the dinosaurs had the impact it did because the asteroid happened to strike a shallow-water region. This kicked up a lot of sulfur (in gypsum) that further affected the global climate: https://en.wikipedia.org/wiki/Chicxulub_crater#Effects
* Earth likely had rings ~466M years ago. We deduced this by looking at impact craters from that time period, and seeing that they all lie near the equator (accounting for continental drift): https://www.sciencedirect.com/science/article/pii/S0012821X2...
* Earth's rotation period was probably frozen at 21h ~600M years ago, likely due to interaction between lunar and solar tides. This resonance could have been broken by ice ages (!!!). Amazing to think that global climate affects Earth's rotation: https://en.wikipedia.org/wiki/Earth%27s_rotation#Resonant_st...
Pick one of the six users. Split the other five users into friends and non-friends of the chosen user. There will be either at least 3 friends or at least 3 non-friends of the chosen user. If you can pick 3 friends of the chosen user, then if any two of them are friends, they plus the original user form a trio of mutual friends; if no two of the 3 are friends, then you have a trio in which none are friends. Similarly, if you can pick 3 non-friends of the original user, then either two of them are non-friends, giving a trio of mutual non-friends, or all three are friends, forming a trio of mutual friends.
It's easy to prove but I don't think I'd quite call it "bleedin' obvious". Ramsey's theorem [1], of course, is more general than that and isn't at all obvious (although it's still not very hard to prove).
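The case argued above (R(3,3) ≤ 6) is also small enough to verify exhaustively: there are only 2^15 red/blue colorings of the 15 edges of K6, and every one of them must contain a monochromatic triangle. A brute-force sketch:

```python
from itertools import combinations, product

# Every red/blue (friend/non-friend) coloring of K6's 15 edges
# must contain a monochromatic triangle: three mutual friends
# or three mutual non-friends.
edges = list(combinations(range(6), 2))      # 15 edges of K6
triangles = list(combinations(range(6), 3))  # 20 triangles

def has_mono_triangle(coloring):
    color = dict(zip(edges, coloring))
    return any(
        color[(a, b)] == color[(a, c)] == color[(b, c)]
        for a, b, c in triangles
    )

# Check all 2**15 = 32768 colorings.
print(all(has_mono_triangle(c) for c in product((0, 1), repeat=15)))  # True
```

The same check on K5 would print False (color the edges of a pentagon red and its diagonals blue), which is why six is the magic number.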
It was a mess from the get-go since I was applying for a data scientist position. At the time I didn't have the slightest idea what working as a data scientist entailed, nor that despite 7 years unsuccessfully pursuing a PhD in bioinformatics I wasn't sufficiently statistically oriented to excel as one. I just knew that I was good at math and had taught myself enough Perl and Ruby to be dangerous. I should have been applying for software/data engineer roles but I didn't know what that work was like at the time. The only engineer I knew had lived across the country from me and seemed more like a god than a regular human.
So after doing not-terribly in Kaggle's machine learning competition (which I did through some unholy Perl scripting and this bizarre network propagation model which had nothing to do with any normal machine learning techniques) I viewed Kaggle as a dream job. I think I did some phone interviews, I let them know that I was already local (crashing on couches and staying in transient hotels), and they scheduled me for some in-person interviews.
Waking up with flea bites from the dogs staying in my transient hotel next door to The Armory, I walked outside to find my motorcycle placed upright on its stand but somewhat demolished and a large pile of its broken parts piled politely on the seat. No note. I had about 20 minutes to make it to my interview. This would have been sufficient lane-splitting with a functional motorcycle, yet I hadn't left myself enough time to take public transportation instead of my busted ride. I brushed the jetsam off and tried to ride to the interview.
I had no left or right footpegs; my clutch lever was bent; my front brake lever had snapped off and was barely two inches long; my handlebars had been wrenched into a 135° bend forward and upward on the righthand side. As I rode this shambling contraption to my interview with my left foot anchored on top of the shift lever and my right foot reached back on the passenger peg, I must have resembled a Street Fighter character diving forward in some sort of headlong right hook. I couldn't shift out of 1st and I couldn't really brake.
I arrived at the interview miraculously on-time but absolutely drenched in sweat from the harrowing journey through hostile traffic. So we'll start with that good first impression. My interviewers seemed to all have PhDs and significant post-doc experience and more than half of my interviews consisted of questions about why I dropped out of my PhD program. I mean, repeated hammering about it. I didn't really want to talk about it in great detail because I was emotional about it, mainly because my research supervisor died unexpectedly at the age of 42 and the whole situation still makes me sad to this day. So instead of actually asking questions about the company, or the role, or my skills, about half the time was spent being grilled about this sad story of my graduate career. So I was in a wonderful mood at that point and was happy to finally move on to the technical assessment because I was worried about tearing up in front of the interviewer.
THE TECHNICAL ASSESSMENT: a bunch of softball questions that I don't remember followed by a standard algorithms question. Come to think of it I don't think I got any statistics/ML questions because we got stuck on the algorithms question. The problem was that I never took CS courses, nor had I studied the CLRS book at the time, so I didn't know the "expected" way of doing some problems. I tend to get too creative.
The problem: given a list of n words (as strings of length k), return all sets of anagrams. The somewhat clever O(n·k log k) solution is to sort the letters of each string and then look at all the sorted strings that have multiple words mapping to them. The cleverer solution is to build a dictionary for each word, using the letters as keys and the number of times each appears as the value. This is linear in the word length to construct, though it takes a memory penalty that is usually nugatory.
But oh no, I didn't think of either of these bog-standard solutions off the bat. Not having ever practiced this problem I immediately got creative, having too much fun with it. My initial thought was to create a dictionary where the keys were integers and the values were arrays of words. The integer keys were produced by reading the characters of the word in sequence, mapping the alphabet to the first 26 prime numbers, and taking the product of all the primes represented by the characters in the word. By the fundamental theorem of arithmetic (a.k.a. unique factorization of primes), any anagram would create the same integer key, and all words mapping to that key would get pushed into the value array. Voila, and a very compact data structure to boot!
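For the curious, here's a sketch of both the prime-product keying described above and the "expected" sorted-string keying (in Python, where arbitrary-precision integers make the overflow objection moot; names and the word list are mine):

```python
from collections import defaultdict

# First 26 primes, one per lowercase letter. By unique factorization,
# two words multiply out to the same product iff they are anagrams.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
          53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]

def anagram_sets_primes(words):
    groups = defaultdict(list)
    for w in words:
        key = 1
        for ch in w:
            key *= PRIMES[ord(ch) - ord('a')]
        groups[key].append(w)
    return [g for g in groups.values() if len(g) > 1]

# The "expected" O(n * k log k) answer: key each word by its sorted letters.
def anagram_sets_sorted(words):
    groups = defaultdict(list)
    for w in words:
        groups[''.join(sorted(w))].append(w)
    return [g for g in groups.values() if len(g) > 1]
```

Both return `[['listen', 'silent', 'enlist']]` for `["listen", "silent", "enlist", "google"]`; they differ only in how the grouping key is built.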
This was really, really bad. Basing the correctness of your interview solution on Euclid's Elements is never going to get you any traction with software engineers. As we all know, engineers are allowed to interview candidates to make themselves feel smarter and more fortunate for already having a job. Getting into an argument about prime number factorization with a candidate isn't going to support either of those goals.
So I was gloating a bit, feeling good for finding such a nice clean solution after that bit of emotional wrenching, figuring I had explained it well with a little math flair and eager to move on to the next problem.
"Um, are you sure that works? Will the numbers always work out like that?"
'Yes, that's why I chose the first 26 prime numbers, it wouldn't work with composite numbers.'
". . . Could you do it a different way? Could you do it another way that isn't this solution?"
'Could I do it slower than this? Sure, but why?'
It seemed like they had a list of possible answers on their interview script and I had gone way off of it. Now, the funny thing was I couldn't think of the sorting solution or the comparing dictionary solution, and I had a correct and well-explained answer that was superior in runtime and memory requirements (and before you object with arbitrary length words and integer overflow I'm just going to stop you with replacing the product of the primes with the sum of their logs and . . . I can explain but this margin is too small to contain). I could only think of really dumb things, like calculating every possible permutation of every single word and comparing them exactly. In fact I was kind of taking a perverse delight in trying to figure out the most inefficient way I could answer their question, seeing how I was very certain there was no better answer than what I had already come up with first. They kept asking me, "is there another way you could do this? Faster than comparing all permutations?" and I kept returning to my prime number solution and they kept saying "but is there ANOTHER WAY you could do this?" Round and round this went and I never got the solution they were looking for, whatever it was.
I don't even remember the rest of the interview process as I had the distinct feeling that it ended prematurely. I didn't catch that at the time since this was to be the first of my in-person interviews in the tech world, but looking back I'm sure they hadn't originally planned on hustling me out the door after a half-hearted tour of the loft workspace.
There are of course two sides to every story, and I'm sure there's someone from Kaggle who tells a story about a leather-clad sweaty-toothed madman who kept prattling on about his dead professor trying to score sympathy points and who gesticulated wildly for an hour while screaming FUNDAMENTAL THEOREM OF ARITHMETIC in a poorly disguised Southern accent. But, regardless of their perspective I can definitely assert that was the worst interview I'd ever had. I went 'home' to buy some calamine lotion, call my insurance company, and tell my mom that the interview at the 'dream job' didn't quite pan out.
The implementation of the __cos kernel in Musl is actually quite elegant. After reducing the input to the range [-pi/4, pi/4], it just applies the best degree-14 polynomial for approximating the cosine on this interval. It turns out that this suffices for having an error that is less than the machine precision. The coefficients of this polynomial can be computed with the Remez algorithm, but even truncating the Chebyshev expansion is going to yield much better results than any of the methods proposed by the author.
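A quick sketch of the truncated-Chebyshev approach mentioned above (not Musl's actual hand-tuned minimax coefficients, just an illustration that a degree-14 polynomial on [-pi/4, pi/4] already reaches roughly machine precision):

```python
import numpy as np

# Fit cos on [-pi/4, pi/4] by interpolating at Chebyshev points of
# degree 14, then measure the worst-case error on a dense grid.
cheb = np.polynomial.chebyshev.Chebyshev.interpolate(
    np.cos, deg=14, domain=[-np.pi / 4, np.pi / 4])

xs = np.linspace(-np.pi / 4, np.pi / 4, 10001)
max_err = np.max(np.abs(cheb(xs) - np.cos(xs)))
print(max_err)  # on the order of double-precision epsilon (~1e-16)
```

The Remez algorithm would shave the error down a bit further by equalizing the error peaks, but as the comment says, even this truncation-style fit is already far beyond what ad hoc methods achieve.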
Some people might do, or might have done, things that I won't like. I have learned not to let that influence me in any way, and to be comfortably ambivalent.
I felt especially bad after I came to know that Richard Feynman slept with many of his friends' wives and tried to sleep with even more of them. He burned a lot of bridges this way. One physicist told me that Feynman hit on his wife twice, and when discouraged by the wife, he let it go like a perfect gentleman.
Despite this, Feynman remains one of my most revered icons and his picture still hangs on my wall.
I don't overlook what he did or try to find some naive justification. I know he was this way and that does not make his mind less fascinating to me.
I've mentioned this several times, but people must think it ruins your credit rating or something. Whatever. Their loss.
Go to privacy.com, and you can set up a temporary credit card with preset limits on spending. You want to cancel some service, just close the card. Now you're in control. The sticky company will figure out for itself that you're done with them, believe me.
They also notify you whenever someone uses the card, so you can't forget about your subscriptions.
They will say "yes" to whatever name & address they're given, so you don't even need to provide your real name & address when you subscribe. Obviously you could have a fake email address, too.
The privacy part? That costs $10/month. I don't pay for it, so the real transactions do appear on my bank statement, but the basic service is free. If you buy the privacy option, then they make up fake names for the vendors.