
> In essence, everyone simply looks at the winners and then redefines the 'path to success' to be 'the path they took'.

The classic example is Jim Collins' bestselling business book, Good to Great. The book claimed that certain characteristics of successful companies made them Great.

Inconveniently for Collins, after the book came out, these very same companies underperformed and went from Great to Good.

Of course, he did the statistics backwards. He started with the successful companies and looked for common traits. He should've started with the traits and evaluated how companies with and without those traits performed.
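The difference between the two directions is easy to see in a toy simulation (all numbers here are made up for illustration): give every company a trait that is common but has no effect on performance, then analyze the same data both ways.

```python
import random

random.seed(0)
N = 10_000
TOP = 100

# Toy setup (numbers invented): each company has one trait that is common
# (70% base rate) but has NO effect on performance, which is pure luck.
companies = [{"trait": random.random() < 0.7,
              "performance": random.gauss(0.0, 1.0)}
             for _ in range(N)]

# Backwards (start from the winners): most top performers share the trait,
# but only because most companies do -- it looks like a success factor.
winners = sorted(companies, key=lambda c: c["performance"], reverse=True)[:TOP]
trait_among_winners = sum(c["trait"] for c in winners) / TOP

# Forwards (start from the trait): compare outcomes with vs. without it.
with_trait = [c["performance"] for c in companies if c["trait"]]
without_trait = [c["performance"] for c in companies if not c["trait"]]
mean_diff = (sum(with_trait) / len(with_trait)
             - sum(without_trait) / len(without_trait))
# mean_diff comes out close to zero: the trait predicts nothing.
```

The backwards analysis "discovers" the trait in most of the winners even though it is causally inert; the forwards comparison correctly shows no performance difference.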



> Of course, he did the statistics backwards. He started with the successful companies and looked for common traits. He should've started with the traits and evaluated how companies with and without those traits performed.

I don't know whether this would give useful insight (and it's possible someone has already looked into it), but this way of posing the difference sounds related to the research field of "one-class machine learning". The usual setup for classification in ML is that you have examples from every class: in a positive/negative two-class setting, you need both positive and negative examples.

But what if you really have a one-class dataset, e.g. just a list of failures and their characteristics? Can you generalize anything from that single class in a predictively reliable way, and decide whether future cases presented to the classifier resemble it? There's quite a bit of work looking into that. (Unfortunately, I don't know much about it.)
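A minimal sketch of the one-class setup, using only the Python standard library. The per-feature z-score threshold here is a naive stand-in of my own (real work in this area uses methods like one-class SVMs or isolation forests), and all names and numbers are illustrative:

```python
import statistics

def fit_one_class(samples, k=3.0):
    """Fit a naive one-class model from positive examples only.

    `samples` is a list of equal-length feature vectors; `k` (an assumed
    threshold) sets how many standard deviations still count as in-class.
    """
    dims = list(zip(*samples))
    means = [statistics.fmean(d) for d in dims]
    # Guard against zero spread so the comparison below stays meaningful.
    stds = [statistics.pstdev(d) or 1e-9 for d in dims]

    def predict(x):
        # In-class only if every feature lies within k std devs of the mean.
        return all(abs(v - m) <= k * s for v, m, s in zip(x, means, stds))

    return predict

# Train on "failures" only, then ask whether a new case resembles them.
is_like_failures = fit_one_class([[1.0, 2.0], [1.2, 1.9], [0.9, 2.1]])
```

The point of the sketch is the shape of the problem: `fit_one_class` never sees a negative example, yet `is_like_failures` must still decide whether a future case belongs to the class it was shown.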



