Because MongoDB mmap(2)s its backing stores into its process address space. It's a naive approach to persistence: very fast and simple, but if you overcommit (i.e. you store more in the database than you have memory available), page-thrashing results.
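To make the mmap idea concrete, here's a minimal sketch (hypothetical file name, not MongoDB's actual storage format): the file on disk is simply mapped into the process, plain memory writes become the persistence mechanism, and the kernel's page cache decides which pages stay resident.

```python
import mmap

# Hypothetical backing store; MongoDB's real files are larger and structured.
path = "store.db"
size = 4096

with open(path, "wb") as f:
    f.truncate(size)              # allocate the backing store on disk

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), size)
    buf[0:5] = b"hello"           # an ordinary memory write...
    buf.flush()                   # ...which the kernel writes back to the file
    buf.close()

with open(path, "rb") as f:
    print(f.read(5))              # b'hello'
```

The simplicity is the point: there is no buffer-pool code at all. The cost is that eviction policy belongs entirely to the OS, which is what bites you once the working set exceeds RAM.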
MySQL's InnoDB table engine, on the other hand, uses direct I/O (in the recommended scenario) and manages the buffer pool independently of the kernel. Its buffer pool manager is specifically designed for the typical workloads MySQL is used for (http://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.ht...) - as opposed to the naive LRU that most OSes employ for their filesystem buffers.
It's hardly naive: it's optimized for an application that needs to keep its entire working set in RAM, which is why sharding is so fundamental to the design. Not all apps need that, which is once again...
For what it's worth, VoltDB uses a main-memory format as its "naive" approach to persistence too.
I would consider VoltDB's approach to persistence naive because it gives you dramatic gains (two orders of magnitude) in read/write performance on similar workloads only by defining concurrency and durability out of the equation. Durability is no longer a property of each machine but a property of the network. Concurrency is handled by executing queries fast and in series. So it's like ACID without the C or D.
It's not naive. It's a well understood and communicated design choice.
Remember that completely in-memory databases are going to be how we all store our data in the decades to come, and they are already the standard for those who care about speed.