I've tried it on a test case for generating a simple SaaS web page (design + code).
I usually use GPT-5-mini for that task. Haiku 4.5 runs about 3x faster with roughly comparable results (I slightly prefer the GPT-5-mini output, but I may just have grown accustomed to it).
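For anyone who wants to reproduce that kind of side-by-side timing, here is a rough sketch using the official openai and anthropic Python SDKs; the model IDs and the prompt are placeholders, swap in whatever you actually use:

    import time
    from openai import OpenAI
    from anthropic import Anthropic

    # Stand-in for the real task prompt
    PROMPT = "Design and code a simple SaaS landing page."

    def time_openai(prompt: str) -> float:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        start = time.perf_counter()
        client.chat.completions.create(
            model="gpt-5-mini",  # assumed model ID, adjust as needed
            messages=[{"role": "user", "content": prompt}],
        )
        return time.perf_counter() - start

    def time_anthropic(prompt: str) -> float:
        client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        start = time.perf_counter()
        client.messages.create(
            model="claude-haiku-4-5",  # assumed model ID, adjust as needed
            max_tokens=4096,
            messages=[{"role": "user", "content": prompt}],
        )
        return time.perf_counter() - start

    if __name__ == "__main__":
        print(f"gpt-5-mini: {time_openai(PROMPT):.1f}s")
        print(f"haiku 4.5:  {time_anthropic(PROMPT):.1f}s")

Wall-clock time for a single prompt is obviously noisy; running each a few times and comparing medians gives a fairer picture.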
I don't understand why more people don't talk about how fast these models are. I see so much obsession with benchmark scores, but response speed matters a lot for day-to-day use.
I agree that the models from OpenAI and Google respond much more slowly than the models from Anthropic. That makes a lot of them impractical for me.
I don’t agree that speed by itself is a big factor. It may matter to a certain audience, but I’d rather wait for a correct output than go through too many turns with a faster model.