"Measure before optimizing" has limited application. There are many things that need not be measured. Appending to a file is definitely not slower than doing memory management inside the file to allocate a new chunk, seeking to that position and then writing out the chunk. Period. (And the serialization overhead is negligible compared to the disk I/O).
It's not only about writing, though. It's writing AND reading.
Can you read the right line from a CSV faster? Can you find a row matching specific criteria faster than SQLite (see the sketch at the end of this comment)? Can you handle concurrency correctly like SQLite does? Can you make sure your CSV is always in a consistent state, like SQLite does?
The point of the article is that SQLite is still fast even with all the benefits of a database.
The idea of SQLite is that you can default to using it and move away when it hits its limits. Nobody should default to CSV; you use CSV when you have to, despite all its limitations.
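To make the lookup point concrete, here is a minimal Python sketch (the users table, the id column, and the file paths are made up for illustration): the CSV version is a full scan on every lookup, while SQLite can answer from its primary-key B-tree, and the same connection gives you transactions and crash recovery.

    import csv
    import sqlite3

    def find_in_csv(path, user_id):
        # Full scan: every lookup reads the file from the top until it matches.
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["id"] == str(user_id):
                    return row
        return None

    def find_in_sqlite(path, user_id):
        # Index lookup: SQLite walks the primary-key B-tree instead of scanning.
        con = sqlite3.connect(path)
        try:
            cur = con.execute("SELECT * FROM users WHERE id = ?", (user_id,))
            return cur.fetchone()
        finally:
            con.close()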
I'm not attempting to compare the usefulness of CSV to an RDBMS. I'm simply pointing out that it should be inherently obvious that writing to 1 file is faster than writing to N files, which is the only thing the benchmark in question is concerned with anyway.
"Duh, I always knew that" is easy to say when you don't have to provide proof.
Also, are you sure that CSV would actually be faster? Do you have a benchmark for it?
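For what it's worth, a rough sketch of such a benchmark is only a few lines of Python. The file names, the row shape, the row count, and the choice to insert in a single transaction are all my own assumptions, so treat it as a starting point rather than proof either way:

    import csv
    import os
    import sqlite3
    import time

    N = 100_000

    def bench_csv(path):
        start = time.perf_counter()
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            for i in range(N):
                writer.writerow([i, f"name-{i}", i * 2])
        return time.perf_counter() - start

    def bench_sqlite(path):
        if os.path.exists(path):
            os.remove(path)  # start from a fresh database on each run
        start = time.perf_counter()
        con = sqlite3.connect(path)
        con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT, value INTEGER)")
        with con:  # one transaction; committing per row would dominate the timing
            con.executemany(
                "INSERT INTO t VALUES (?, ?, ?)",
                ((i, f"name-{i}", i * 2) for i in range(N)),
            )
        con.close()
        return time.perf_counter() - start

    if __name__ == "__main__":
        print(f"csv append   : {bench_csv('bench.csv'):.3f}s")
        print(f"sqlite insert: {bench_sqlite('bench.db'):.3f}s")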