Not sure, but you can pick up a 1U server running 96 ARM cores today. ARM cores are optimized for certain types of work and have code to accelerate all sorts of things like AES, SHA1/2, video codecs, etc.
So depending on your workload, a 96 core 1U is going to save you a lot of U's and power. No idea how Windows Server and Azure services fall into this. Maybe they want to dump specialized tasks onto ARM instead of churning through their x86 infrastructure which takes longer and uses more power.
Data centers are power bound. Power is expensive and instantly turns into heat, which requires yet more power to exhaust and cool. Anything that can bring significant power savings will be taken seriously in data centers. If ARM servers can deliver these power savings, then it's a no-brainer to buy them. The cost of porting Exchange, SQL Server, SharePoint, IIS, and Windows Server to ARM is going to be a fraction of the power bills those DCs run up. Now that MS is actually paying the server bills, they're realizing that pegging your product to just x86 isn't the wisest move.
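As a rough sketch of that trade-off (every number here is an illustrative assumption, not a real Azure or DC figure):

```python
# Back-of-envelope: annual electricity savings from shaving watts per server.
# All inputs are made-up assumptions for illustration only.
SERVERS = 10_000          # assumed fleet size
WATTS_SAVED = 100         # assumed per-server savings from switching to ARM
PUE = 1.5                 # power usage effectiveness: cooling/overhead multiplier
PRICE_PER_KWH = 0.10      # assumed industrial electricity rate, USD
HOURS_PER_YEAR = 8_760

facility_kw = SERVERS * WATTS_SAVED / 1_000 * PUE    # kW saved at the meter
annual_savings = facility_kw * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"${annual_savings:,.0f} per year")  # → $1,314,000 per year
```

Even with these modest assumptions, the savings land in the seven figures per year, which is the scale against which a one-time porting cost gets weighed.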
> Not sure, but you can pick up a 1U server running 96 ARM cores today. ARM cores are optimized for certain types of work and have code to accelerate all sorts of things like AES, SHA1/2, video codecs, etc.
True, but that 96-core 1U server is going to carry a high price tag, and Intel CPUs also have instructions to accelerate AES, SHA1/2, video codecs, etc.
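For what it's worth, you can see which of these accelerated instructions a Linux box actually exposes: on x86 the flags show up as things like `aes` and `sha_ni`, while on ARMv8 they appear as `aes`, `sha1`, `sha2`, `pmull`. A rough sketch (Linux-only; returns an empty set elsewhere):

```python
def crypto_features(cpuinfo_path="/proc/cpuinfo"):
    """Return the crypto-related CPU feature flags Linux reports."""
    wanted = {"aes", "sha_ni", "sha1", "sha2", "pmull", "crc32"}
    try:
        with open(cpuinfo_path) as f:
            text = f.read().lower()
    except OSError:
        return set()  # not Linux, or /proc unavailable
    flags = set()
    for line in text.splitlines():
        # x86 lists these under "flags", ARM under "features"
        if line.startswith(("flags", "features")):
            flags.update(line.split(":", 1)[1].split())
    return flags & wanted

print(sorted(crypto_features()))
```

Same flag-based idea works for anything else you might want to dispatch on (AVX, NEON, etc.), which is how crypto libraries pick their fast paths at runtime.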
These are very cheap pieces of hardware. Price up an HP ProLiant with multiple 8-core Xeons and see how crazy that pricing gets, not to mention the power usage. Pricing is good right now for ARM. The question is: does ARM work for what you're computing? According to this article, ARM is probably best for high-throughput and high-RAM solutions, not necessarily number crunching:
Mind you, those dual Xeons it's competing against go for anywhere between $3,000 and $7,000 each. That's just the CPU street price before the HP/Dell markup and, of course, the rest of the server. Ask a VAR what a dual or quad E5-2699 v3 or current v4 goes for. You're probably looking at a $25,000 to maybe even $50,000 box here.
The server market only cares about price/watt and size. 1U is very small for such a parallel system, there's no need to write your code for Xeon Phis or anything like that, and it's low power in comparison to Intel.
Does the market care much about size? I thought there were lots of datacenters that reached their limits on power and cooling despite having floor space available for more racks.
It varies from place to place, but sometimes space is limited. I know some places need "off-site" backups that are within walking distance (not really too off-site). For companies in NYC or other densely populated areas, space is a factor.
Also, if a machine consumed half the power and was twice as powerful, but you could only fit one in your entire datacenter, I don't think many would go for it.
Having many systems is a form of fault tolerance. Being able to increase your power efficiency, processing ability, AND capacity all by switching to a single system that's priced competitively with (or cheaper than) the market standard? That's a winning combo.
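That fault-tolerance point can be made concrete with a little binomial arithmetic. The 99% per-machine availability below is an assumed figure, and the model assumes failures are independent:

```python
from math import comb

def availability(n, k, p):
    """Probability that at least k of n machines are up,
    assuming independent failures with per-machine availability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

single = availability(1, 1, 0.99)   # one big box: 0.99
cluster = availability(4, 1, 0.99)  # four small boxes, any one suffices
print(single, cluster)              # 0.99 vs 0.99999999
```

Of course real failures correlate (shared power, network, software bugs), so treat the independence assumption as the optimistic bound, but the direction of the argument holds: many cheap boxes beat one big one for uptime.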
I cannot wait for ThumbEE to make it into the "mainstream" ARM server market. Think about the huge speed boosts we could see!
I don't know why Oracle/Python/Microsoft aren't pushing this harder. Having your JIT-ed code running directly on the CPU? What could be better than that?
DARN! That's too bad! That would have been really cool. I know that there are a few in-hardware JVM systems that are used in cellphones that could be made use of. That would be cool.
> Having your JIT-ed code running directly on the CPU? What could be better than that?
Although it looks good on paper, Lisp machines, Ada machines, and mainframes have repeatedly shown that a JIT on a general-purpose CPU allows for more optimizations than baking it into the hardware.
Hence why mainframes like IBM i have their JIT in the kernel, not in the CPU.