> Does it? I thought the brain is much more energy efficient.
It strongly depends on what you're trying to do with the AI. Consider the "G" in "AGI" as a dot-product over the quality of results across all the things (I_a) that some AI can do and the things (I_h) that a human can do.
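One way to make that dot-product framing concrete (toy numbers of my own, purely illustrative): score each task with a quality-of-result value in [0, 1] for the AI (I_a) and for a human (I_h), then take the dot product as a crude scalar for how much the capabilities overlap.

```python
# Toy sketch of "G as a dot-product" over per-task result quality.
# Task list and scores are made up for illustration.
tasks   = ["summarize email", "prove a theorem", "fold laundry"]
q_ai    = [0.9, 0.2, 0.0]   # I_a: quality of the AI's results per task
q_human = [0.8, 0.6, 1.0]   # I_h: quality of a human's results per task

# Dot product: tasks where both are good contribute the most.
overlap = sum(a * h for a, h in zip(q_ai, q_human))
print(f"G-as-dot-product: {overlap:.2f}")
```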
For stuff where the AI is competent enough to give an answer, it's often (but not always) lower-energy than a human.
As an easy example of mediocre AI with unambiguously low power draw: think of models that run on a high-end Mac laptop, in the cases where such models produce good-enough answers, fast enough that it would have been like asking a human.
More ambiguously: if OpenAI's prices even roughly track the cost of electricity, then GPT-5-nano is similar to a human in energy cost if you count our bodies' metabolic power, and beats us by a lot if you also account for humans having a ~25% duty cycle while employed and a lifetime duty cycle of 10–11%.
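Back-of-envelope for those duty-cycle figures (my own assumed numbers, in the same ballpark as the ones above): a human body idles at roughly 100 W around the clock, but only a fraction of those hours are spent working, so the effective energy cost per *working* hour is inflated by the inverse duty cycle.

```python
# Duty-cycle arithmetic with assumed round numbers.
HOURS_PER_YEAR = 365.25 * 24          # ~8766 h

work_hours_per_year = 2000            # ~40 h/week, ~50 weeks
employed_duty_cycle = work_hours_per_year / HOURS_PER_YEAR

career_years   = 40
lifetime_years = 80
lifetime_duty_cycle = (career_years * work_hours_per_year) / (lifetime_years * HOURS_PER_YEAR)

print(f"employed duty cycle: {employed_duty_cycle:.0%}")   # ~23%
print(f"lifetime duty cycle: {lifetime_duty_cycle:.0%}")   # ~11%

# The ~100 W body has to be powered even while not working, so the
# energy bill per useful hour is body_watts / duty_cycle.
body_watts = 100
effective_watts = body_watts / lifetime_duty_cycle
print(f"effective cost per working hour: ~{effective_watts:.0f} W")
```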
For stuff where the AI isn't competent enough to give an answer… well, there are theoretical reasons to think you can trade for more competence by having it "think" for longer, but it's an exponential increase in runtime for linear performance improvements, so you very quickly reach a point where the AI is far too energy-intensive to bother with.
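To see why that exponential/linear mismatch bites so fast, here's a toy model (assumed scaling law, not measured data): suppose each extra point of quality costs a doubling of runtime, i.e. score grows linearly in log2 of compute.

```python
# Toy test-time-scaling model: linear score gain per doubling of compute.
def compute_needed(target_score, base_score=50, points_per_doubling=1):
    """Relative runtime/energy cost to reach target_score from base_score,
    assuming one score point per doubling of compute (an assumption)."""
    doublings = (target_score - base_score) / points_per_doubling
    return 2 ** doublings

for target in (51, 55, 60, 70):
    print(f"score {target}: {compute_needed(target):,.0f}x the energy")
```

Ten extra points already costs ~1000x the energy; twenty points costs ~1,000,000x. Whatever the real constants are, that shape is why "just think longer" stops being worth the electricity very quickly.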