I would say a fraction of the features. If you don't need those features, that's fine of course. But in my experience, I end up using a lot of them when dealing with real-world customer requirements. If you're implementing search for a website where search quality just doesn't matter, lightweight is fine. For everything else, using a proper solution is probably the wiser choice.
In terms of performance, you see a lot of comments stating that X is faster than Y. I'd take such comments with a large grain of salt. Unless those lightweight alternatives actually do the same things, you can't really compare the performance. It's not that hard to make indexing fast in Elasticsearch if you disable all the features that make it slower. And of course, a tool that doesn't have those features at all will be fast by default, because it simply isn't doing anywhere close to the same work.
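To make the point concrete: Elasticsearch itself lets you trade features for raw indexing speed. A rough sketch of the kind of settings people tweak for bulk loads (the index name `my-index` is just a placeholder; these are standard index settings, but check your version's docs before relying on them):

```shell
# Speed up a bulk load by temporarily giving up durability and freshness
# guarantees -- roughly what a "lightweight" engine skips by design.
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' -d '
{
  "index": {
    "refresh_interval": "-1",
    "number_of_replicas": 0,
    "translog.durability": "async"
  }
}'
# After the load, restore refresh/replicas so search and resilience
# behave normally again.
```

Disabling refresh means new documents aren't searchable until you re-enable it, and zero replicas means no redundancy during the load, which is exactly the kind of trade-off a fair benchmark has to account for.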
Elasticsearch is actually pretty damn fast even when you do use its many features. The reason is that it relies on in-memory caches, thread pools, etc., and that a lot of very smart people have been implementing and optimizing very efficient algorithms in Lucene for the last 25 years. Elasticsearch can run with as little as 256MB of heap, but of course you won't be able to cache much data with that, and performance will suffer accordingly. Large heap sizes with Elasticsearch are mostly about using larger caches. It also relies on memory-mapped files and the OS file cache for that. That's what allows it to work with extremely large data sets and still return query responses in milliseconds.
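For reference, the heap size is set through the JVM options; the numbers below are illustrative, not recommendations. A common sketch, assuming a single-node setup:

```shell
# Option 1: override the heap via the environment (picked up at startup).
# -Xms and -Xmx should match so the heap doesn't resize at runtime.
export ES_JAVA_OPTS="-Xms4g -Xmx4g"

# Option 2: set it persistently in config/jvm.options (or a file under
# config/jvm.options.d/ on newer versions):
#   -Xms4g
#   -Xmx4g
```

Note that the heap is only part of the picture: memory left over outside the heap is what the OS uses to cache Lucene's memory-mapped files, which is why the usual advice is to leave roughly half of the machine's RAM to the operating system rather than giving it all to the JVM.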
There's no reason you couldn't do the same with a native implementation, of course. But it would be naive to assume you'd end up using a lot less memory or CPU. Using memory aggressively like that is pretty core to matching Elasticsearch's performance and feature set; not doing so would make it hard to get even close.