
In practical use, you can simply search for anything in the "dog" subclass using the WordNet hierarchy... so there is no loss in accuracy unless there is confusion across the search groups! We actually support this in sklearn-theano - if you plug 'cat.n.01' and 'dog.n.01' into an OverfeatLocalizer, we return all matched points in each subgroup.
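A toy sketch of that subtree search, in case it helps: the synset names are real WordNet identifiers, but the tiny hypernym table and the `predictions` list here are made up for illustration - real code would walk NLTK's WordNet (and the actual OverfeatLocalizer API may differ).

```python
# child -> parent (hypernym) links for a handful of synsets; a stand-in
# for the full WordNet hierarchy
hypernym = {
    "blenheim_spaniel.n.01": "spaniel.n.01",
    "spaniel.n.01": "dog.n.01",
    "flat-coated_retriever.n.01": "retriever.n.01",
    "retriever.n.01": "dog.n.01",
    "tabby.n.01": "cat.n.01",
    "dog.n.01": "canine.n.02",
    "cat.n.01": "feline.n.01",
}

def is_under(synset, root):
    """Walk hypernym links upward; True if `root` is an ancestor of
    `synset` (or equal to it)."""
    while synset is not None:
        if synset == root:
            return True
        synset = hypernym.get(synset)
    return False

# A localizer might emit (x, y, predicted_synset) triples; keep only the
# points whose fine-grained label falls inside the "dog" subtree.
predictions = [(10, 20, "blenheim_spaniel.n.01"),
               (30, 40, "tabby.n.01"),
               (50, 60, "flat-coated_retriever.n.01")]
dog_points = [(x, y) for x, y, s in predictions if is_under(s, "dog.n.01")]
print(dog_points)  # -> [(10, 20), (50, 60)]
```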

In general, if a fixed architecture misclassifies "dog", it will almost certainly also misclassify "Blenheim Spaniel" and "Flat-coated Retriever" - those two classes are subsets of the first. The "eats shoots and leaves" sentence is analogous to a "zoomed in" picture of fur - we don't know what it is, but we are pretty sure what it isn't! That is still useful, and would already get most of the way there for large numbers of fur colors/patterns.
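To make the "confusion inside the subtree is free" point concrete, here is a small invented example - the labels and the fine-to-coarse mapping are mine, standing in for a WordNet hypernym lookup: swapping two dog breeds tanks fine-grained accuracy but leaves coarse accuracy untouched.

```python
# Map each fine-grained class to its coarse ancestor (hypothetical mapping,
# playing the role of a WordNet hypernym lookup)
fine_to_coarse = {
    "blenheim_spaniel": "dog",
    "flat-coated_retriever": "dog",
    "tabby": "cat",
}

y_true = ["blenheim_spaniel", "flat-coated_retriever", "tabby"]
y_pred = ["flat-coated_retriever", "blenheim_spaniel", "tabby"]  # breeds swapped

# Fine-grained accuracy: only the cat is right
fine_acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Coarse accuracy: both swapped breeds still map to "dog"
coarse_acc = sum(fine_to_coarse[t] == fine_to_coarse[p]
                 for t, p in zip(y_true, y_pred)) / len(y_true)

print(fine_acc, coarse_acc)  # -> 0.3333333333333333 1.0
```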

I think the concerns you have matter more at training time, but I have not seen a scenario where it made much difference in practice. In general, good intuition about these nets is really hard to come by, but your initial thought about "dog space" ties in nicely to a post by Christopher Olah (http://christopherolah.wordpress.com/2014/04/09/neural-netwo...) - maybe you will find it interesting?

And yes, it becomes really fascinating to extend your last thought to "optical illusions" and other tricks of the mind - even our own processing has paths that are easily deceived and sometimes flat-out wrong... so it is no surprise when something far less powerful also has trouble :)


