Then you end up with too-large servers all over the place with no rhyme or reason, burning through your opex budget.
Also, what network does the VM land in? With what firewall rules? What software will it be running? Exposed to the Internet? Updated regularly? Backed up? Scanned for malware or vulnerabilities? Etc…
Do you expect every Tom, Dick, and Harry to know the answers to these questions when they “just” want a server?
This is why IT teams invariably have to insert themselves into these processes: the alternative is expensive chaos that gets the org hacked by nation-states.
The problem is that when interests aren’t forced to align — a failure of senior management — then the IT teams become an untenable overhead instead of a necessary and tolerable one.
The cloud is a technology often misapplied to solve a “people problem”, which is why it won’t ever work when misused in this way.
Not GP, but at my previous job we had something very similar. The form offered options for a handful of variables (on-prem VMware vs EC2, vCPU, RAM, disk, OS/template, administrators, etc), but once submitted, the ticket went to the cloud/architecture team for review, who could adjust the requested selections as well as configure things like networks, firewall rules, and security groups. Once approved, the automated workflow provisioned the server(s) with all of that and sent the details to the requestor.
We have a ServiceNow ticket you can fill out that spins the server up on completion. Kind of an easy way to do it.