REST APIs suck for nested resources. GraphQL is a huge breakthrough in managing them.
Ever seen an engineer do a loop and make n+1 REST calls for resources? It happens more often than you think, because they don't want to have to create a backend ticket to add related resources to a call.
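That loop looks something like this, with a stub standing in for the HTTP client (all paths and response shapes here are made up for illustration):

```python
# Sketch of the N+1 anti-pattern: one call to list posts, then one
# more call per post to fetch its author. The fetch() stub just
# records each "request" so the call count is visible.
calls = []

def fetch(path):
    """Stand-in for an HTTP GET; records every request made."""
    calls.append(path)
    if path == "/posts":
        return [{"id": i, "author_id": i % 3} for i in range(10)]
    return {"id": int(path.rsplit("/", 1)[1]), "name": "someone"}

posts = fetch("/posts")                                        # 1 call
authors = [fetch(f"/users/{p['author_id']}") for p in posts]   # N more

assert len(calls) == 1 + len(posts)   # the N+1 in action: 11 requests
```

Each of those N follow-up requests pays a full network round trip, which is exactly the cost a nested query (or a page-specific endpoint) folds into one.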
With internal REST APIs at companies I have seen so many single-page-specific endpoints. Gross.
> There have been different opinions about structuring REST calls but those aren't going to cause any real forward progress for the industry and are inconsequential when it comes to business outcomes.
You could argue almost any tech solution in a non-pure tech play is largely inconsequential as long as the end goal of the org is met, but managing REST APIs was a huge point of friction at past companies.
Either it goes through a long review process to make sure things are structured "right" (i.e. lots of opinions that nobody can sync on), or people just throw up REST endpoints willy-nilly until you have no idea what to use.
GraphQL is essentially what `black` is to Python syntax, but for web APIs. Ever seen engineers fight over spaces vs. tabs, 8 vs. 4 spaces, or whether a space goes before a colon? Those fights happened a lot, and then `black` came out and standardized it so there was nothing to fight over.
GraphQL makes things very clear and standard, but can't please everyone.
Unpopular opinion: I'm actually a fan of single-page-specific endpoints. You get much easier debugging, easier-to-audit security, easier performance optimization, and the, IMHO, pretty small price to pay is that it's "not elegant" and a bit of backend code.
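To make the idea concrete, a page-specific endpoint is just a handler that gathers exactly what one page renders, in one round trip. A minimal sketch, with entirely hypothetical data and names:

```python
# A deliberately page-specific "endpoint": one function that returns
# exactly the payload the profile page needs, nothing more.
USERS = {1: {"name": "Ada"}}
ORDERS = {1: [{"id": 7, "total": 42}]}

def profile_page(user_id):
    """The whole query surface of the profile page, in one place."""
    orders = ORDERS.get(user_id, [])
    return {
        "user": USERS[user_id],
        "recent_orders": orders[:5],
        "order_count": len(orders),
    }
```

Because the query surface is fixed, you can read this one function and know everything the page can ask for — which is what makes the auditing and optimization easy.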
Yup. Especially when your api really only has 1 (possibly 2 in the case of a mobile app) real clients. People like to pretend their apis are generic but they aren’t. There’s a good argument to stop writing generic apis for your single application.
Typically an endpoint's clients are multiple frontend developers. If the frontend is blocked on every page waiting for the backend to expose data, that massively increases the cost of features and slows down delivery.
Yeah. And a gentle nudge that Conway's Law is about lines of communication in general, not specifically the org chart.
I used to be in charge of the stored procedures that served as the API to a shared database system used by many teams. (So, not exactly the same thing, but I'd argue that the sprocs vs ORM debate isn't entirely dissimilar from the REST/RPC vs GraphQL debate.) My turnaround time for getting database API changes into staging for application teams to play with was typically less than one work day. But that happened because I had a lot of latitude to just get things done, because the company valued rapid delivery, and therefore made sure we had what we needed to deliver things rapidly, both from a technical and a business process perspective. This was in a very highly regulated industry where every change required paperwork that would be audited by regulators, too.
I've also worked at places where this sort of thing took ages. Typically the worst places for work backing up were organizations that used Scrum, because the whole "sprints" thing meant that the _minimum_ time to get something from another team was about 1.5 times the sprint duration. And even then you could only manage that if you could deliver a request that already met all of that particular team's requirements for being able to point it, and could convince their Product Owner that it was a higher priority than whatever their pet project is this quarter, and the stars align so that you perfectly time the submission of your request with respect to that team's Scrum process's frame rule.
The thing I want to point out here is that absolutely none of that bullshit was ever the API technology's fault.
My experience is that the problem is avoided in theory, but not in practice.
Making a good API on a large system with many clients is always difficult. GraphQL makes it easier in theory, but if you have average level devs working on it, they’ll make a bigger mess than if they use simple REST. The latter will still be complex, but at least it’s easier to have observability.
The modern web development practices are just insane.
The GP's idea that a frontend developer would send a ticket to somebody so they can get all the data they need... it's just crazy.
On the other extreme, we have the HTTP 1.0 developers saying something like "networks are plenty fast, we can waste a bit of it with legible protocols that are easier to make correct", while the HTTP 2.0 ones are all "we must cram information into every single bit!"
Every place you look, things are completely bananas.
> idea that a frontend developer would send a ticket to somebody so they can get all the data they need... it's just crazy.
For me, what's crazy is that there are "web" developers who can't just add the endpoint they need while working on a frontend feature, or "web" developers who can't just add an element or a page for testing the backend endpoint.
Whatever happened to full-stack developers? The "frontend" and "backend" developer split is so incredibly inefficient that it's really jarring: you take something that should take 2 hours and magically, through tickets, delegation, and waiting for results (then repeat that for debugging, who knows how many times!), make it into a 2-3 day task.
I once reproduced (black-box style!) a two-week effort by a three-man team in six hours of work simply because I had PyCharm and IDEA opened side-by-side and could write code on both sides at the same time.
If someone has a good explanation for why the full-stacks that were once the norm went almost extinct, I'd be happy to give it a read!
For context: I work on large-scale browser apps that are closer in complexity to something like Linear or Obsidian than to your standard WordPress blog with some forms. E.g., I'm currently working on a browser-based GIS tool for the financial sector.
I started my career as a full-stack developer, but went all-in on frontend because I felt I was spreading myself too thin. At one point I found that I could choose to be almost good enough at doing two different things or extremely good at one thing. I chose the latter option.
Modern browser apps are complex beasts, at least if you want to do them right. You obviously have to worry about all the technical bits --- HTML, CSS, JavaScript, your view library of choice, platform APIs like <canvas> and WebAudio, cross browser testing, bundle sizes, performance optimizations, techniques like optimistic rendering, all that good stuff.
On top of that, you also need to work closely with designers to make sure they know the features and limitations of the platform(s) they're designing for. More often than not, you end up being a sort of bridge between the backend devs, designers, and product managers.
A lot of times you end up doing design too, whether you like it or not. I've learned a lot about UI/UX design just because I often have to fill in the gaps where a designer forgot to include a certain error state, or didn't test their design on tablet screens, or didn't account for cases where a certain API might not be available.
I tried for many years to learn as much as I could about Django as well as React and friends. But it eventually got too much. I found that I wasn't able to keep up with both ecosystems, and I was producing code that wasn't very good. I could certainly build things quickly because I was familiar with all parts of the stack, but it came at the cost of code quality, security, stability, and robustness. I eventually decided to hang up my backend developer hat and focus exclusively on what goes on inside the browser (which can be a lot by itself these days!)
It's probably possible for a single individual to build a high-quality server-rendered MPA with some forms without making a mess of it. But that says more about how good Rails/Django/Laravel are than about the capabilities of any single individual. I don't think a single person could build a product like Linear end-to-end without cutting corners.
IMO the fact that being a full-stack dev is so taxing is an indication that the stack as a whole is just way too complex and overengineered. Which is rather obvious from looking at the state of affairs on the frontend side of things. Desktop GUI devs don't usually have those problems.
Thing is, even when web apps are essentially used as desktop app replacement (Electron etc), all the complexity discussed here is still there.
As far as looking OS-native, this is (unfortunately) increasingly less relevant as OSes themselves drop desktop UX consistency even internally. But that aside, Qt is still around and is a very mature framework for "near native" UI, and then there's Avalonia and others. Even Gtk is surprisingly decent on Windows these days. Although I'm not sure what this all has to do with the original subject, since web apps most certainly don't look native anywhere.
I don't think the claim is that a single developer should be able to build an entire enterprise product, but rather that a single developer should be able to implement the software side of a task end-to-end. The latter is a reasonable expectation when your software is monolithic.
> ... I don't think a single person could build a product like Linear end-to-end without cutting corners.
Cutting corners is a feature. I bet the Linear team is as pained as any, internally, at the tradeoffs they're making.
There is no way to know "what to get right" without going through it. So for 80% of the dev cycle the job is to cut corners to get to the 80/20, rinse and repeat forever.
This isn't against excellence and the dream of a beautiful product experience as your reply seems to convey.
If you absolutely need 2 people for that, they should be side by side.
But yes, that break-down of the problem is insane. People artificially split an atomic problem in two, and go create all of that extra craziness to try to solve the communication problem they made.
And then people go and push UX and UI tasks onto the frontend developer, and ops tasks onto the backend developer, since they're already there... which is insane again, because those are really incompatible tasks that can't be done with the same mindset.
And since it's standard, backend tools won't support the frontend and vice versa. So the insanity gets frozen into our tooling.
Because people have a limited appetite for complexity.
I wrote some front-end stuff back in the day but I've lost track of whatever is happening these days. jQuery to append some stuff took five minutes to learn, but learning React hooks takes a determined effort.
Likewise, adding a field to a graphql type is simple, but doing it with authorization, controlling n+1s, adding tests etc.. requires front-end folks to actually invest time in learning whatever back-end they're dealing with this time.
Everything is just a lot more complicated these days, and if you've been around for a while you may not be excited anymore by the churn, but rather fed up.
This is why the Rails community should be applauded in my book, for their dogged determination that we should keep it a “one person” framework. Yes it may not be as performant, type safe or flashy on the front end but my god it’s productive.
At my startup there are 7 devs who can all do tickets across the stack and as we grow I think it would be good if we could resist the pressure to silo and specialize
It really is. Regrettably, I've drifted away from it in large part because of client requirements for more "modern" and "maintainable" solutions (e.g. Python or Node; I'll take Python every time, thanks). Django comes very close in terms of productivity (and is better in some ways: auth, admin, etc.) but the Rails CLI, generators and community (not sure if this is still relevant) give it the edge.
The "recent" (last 10+ years) movement towards "lightweight" libraries (instead of frameworks) that require you to either reinvent the wheel, copy-and-paste or use some random "getting-started" template every time you start a new project is disheartening. As others have said above, I think it's partially resume-driven-development and people wanting to tinker (both of which I do appreciate).
Something which continues to surprise me is that there hasn't really been a "modern" successor to Rails. Granted, I haven't kept pace with developments in the Node/TypeScript world but, last time I looked, Sails was on the right track but wasn't very actively developed and I was shot down (in favor of Express spaghetti) when I suggested it. There's also a smattering of Rust web frameworks but none that quite fit the bill and come with all of the batteries and opinions included. I keep saying I'm going to do a Summer of Code project which attempts this but ... life.
There are some Java frameworks that are kinda similar? Take a look at Micronaut for example. It has data access interfaces that use ActiveRecord's naming approach, controllers, view renderers, and a whole lot more on top.
Thanks. I'll take a look. Didn't mean to overlook Java, I'm just less familiar with that environment. I've dabbled with some JVM languages and could also see Kotlin being a very nice option for such a framework. Though, to be fair, I know Java has come a long way in terms of expressivity, syntactic sugar, etc. (better performance is a given).
There's no modern successor to Rails because Rails itself is still modern and very much up to date. The recent introduction of Stimulus allows implementing a SPA completely in Rails (if they wanted) with the same simplicity present everywhere else in the framework.
Many organizations would rather pay 3 people $120,000 each instead of paying 1 person $300,000 to do the same work, for a variety of reasons. Some good, some bad.
Microservice architectures and the associated explosion in complexity on both ends are to blame. When it takes twice the time to build something, it is natural to hire twice as many developers. Increased complexity drives specialization.
I think it's elegant. I loved MVVM in the postback world, and with SPAs, I see view-specific endpoints as the view-model continuing to reside on the server while the view was split off to the client-side.
Sounds good in theory. In practice, what I've seen is people don't just do single-page endpoints with shared services; they follow this all the way down into the services, and so you end up with lots of duplicate-looking code. This isn't a problem until the application gets big enough, and now you have business logic that should be the same spread out across an app. This leads to weird bugs where something that should work the same way doesn't, depending on which page you're accessing things from. Of course, I don't have a solution; solving things like this is what makes software engineering hard.
> In practice, what I've seen is people don't just do single page endpoints with shared services, they follow this all the way down into the services and so you end up with lots of duplicate looking code
There is no easy or simple or final solution. The solution is to do the hard work of software engineering and catch this stuff at code review time or to plan and do refactoring later when you realize something needs normalization. I wish developers would just accept that software engineering is hard and do the hard work necessary.
There's that whole "the best software engineer is lazy because he will automate stuff", but that does not give anyone a license to automate stuff in an unmaintainable way. Automating stuff is hard and there is no easy way out.
The backend-for-frontend pattern. Something most apps where frontend and backend are in the same organization (full-stack or otherwise) should have. It does wonders for maintainability and performance.
Even though we use GQL here, we still have a B4F, so it's browser -> B4F -> GQL -> Database.
The tooling we use makes it trivial, so it doesn't add any development overhead (it often reduces it; no CORS bullshit, for one), and our app goes ZOOOOOOM.
Came here for that. People in this thread basically reinventing GraphQL lol.
"We need a http API endpoint that gives you all the data for one page. And also be able to reuse parts of it for other pages." Yeah bro, this is GraphQL.
Agreed. Specific endpoints. I was on a project recently where a 40+ year old dev was going crazy over "pure REST." It's embarrassing. Sure, have your pure REST, but FFS create specific endpoints if they make sense.
You are literally Transferring the REpresented State of a given page/view/screen.
Screen-specific REST endpoints will make their way as a default into a JS-based framework in 2025 and people will pretend like this is some breakthrough advancement.
> With internal REST for companies I have seen so many single page specific endpoints. Gross.
Hardly gross. It is what it is and it’s universal across the domain. I bet Windows has internal APIs or even external ones that were created just for one page/widget/dialog of one app. It’s the nature of things at times.
I suspect that people don't think about it too hard and presume that they should have only one model - that is that their REST model should map directly to their DB model which should map directly to their internal domain model (the idea that there might be two-or-more-domains and not-all-of-them-are-public hasn't even yet crossed their minds).
When you realize that sometimes your REST model may be a view-model that maps to a different storage model optimized for online-transaction-processing, neither of which map to your internal domain model directly (there are adapters for that) _then_ you get somewhere ... but of course you then have to fight off the `transformToRestModel(transformToDomainModel(retrieveDbModel(theId)))` when the three models happily coincide for a time.
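A rough sketch of that three-model split (storage model, internal domain model, view/REST model, with small adapters between them — all names here are illustrative, not from any particular codebase):

```python
# Three models for the "same" entity, each shaped for its own layer,
# plus the adapters the comment mentions.
from dataclasses import dataclass

@dataclass
class DbUser:           # storage model: what the table actually holds
    id: int
    full_name: str
    pw_hash: str        # never leaves the storage layer

@dataclass
class User:             # domain model: what the business logic uses
    id: int
    full_name: str

def to_domain(row: DbUser) -> User:
    return User(id=row.id, full_name=row.full_name)

def to_rest(user: User) -> dict:   # view model: only public fields
    return {"id": user.id, "name": user.full_name}

# The tempting shortcut when all three happily coincide for a time:
#   to_rest(to_domain(retrieve_db_model(the_id)))
```

The point of keeping the adapters even when they look redundant is that the three shapes are free to drift apart later without a public API break.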
A REST endpoint can be analogous to a specific method in code. just as much as a GraphQL field.
What people are excited about is that the frontend can request all the data it needs at once and the backend can efficiently serve it. Something not possible with REST without reimplementing something similar to GraphQL.
It's definitely possible, just not without tight coordination between frontend and backend. When your development teams are broken into silos with strong separation, this isn't feasible and GraphQL starts to make some sense. Otherwise, you just create an ad hoc endpoint that serves the exact data you need and call it a day.
An engineer had to spend time making that specific API for that page, instead of the frontend consumer using what was already defined and getting all the resources with one call, with zero backend engineers needed for that new page.
On the other hand, it may take that engineer less time to create a page-specific endpoint than it would be to create something more generic that might serve other purposes (that may never come to pass), which may also involve talking to other teams, checking what future plans are, etc.
And that's assuming it's a new endpoint; if there's an existing endpoint that does almost what's necessary, they may need to check in with that team about what modifications to the endpoint would be acceptable, etc.
Single-page endpoints aren't great, but oftentimes they're acceptable because they end up being a half-day task instead of a week-long slog.
The entire point of graphql servers is that they're basically ORMs (or use an underlying ORM) that turn complex nesting into a single query's worth of joins. It won't beat hand-crafted sql from an expert, but if that's your preferred approach, debates on the relative merits of different query frameworks are all academic to you anyway.
I've never seen this kind of graphql server implementation that can automatically boil down a complex nested query to sensible SQL. It sounds like the best of both worlds. Do you have links?
Symfony API Platform boils to ORM-generated SQL via Doctrine, which is verbose, but not overly clever. So the only link I could give you there would be to the Doctrine query builder docs (I won't subject you to API Platform's docs). I imagine a more sophisticated graphql dialect like what Prisma supports can generate some pretty gnarly sql, but that's ORMs for you. But exposing your data model directly through graphql is also not recommended, same with REST APIs (I can't claim the high ground, my biggest graphql project exposes models directly. The tradeoff was worth it in this case). So in the end you're writing custom controllers/resolvers anyway, and field resolvers are where graphql really starts to shine.
Typically libraries use a DataLoader + ORM to implement this, which gets you pretty far; beyond that, some libs like Strawberry will automatically optimize your queries for you if you return Django ORM QuerySets.
Any query you were going to build and serve with Rest can be made with these two methods or even a raw dataloader and manual SQL string.
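The DataLoader idea itself is small enough to sketch by hand. This is a toy synchronous version (real libraries are async and scoped per-request), just to show how batching turns N lookups into one query:

```python
# Minimal DataLoader sketch: collect keys during a resolution pass,
# then fetch them all in one batched call instead of one per key.
class DataLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn   # takes a list of keys, returns values
        self.cache = {}

    def load_many(self, keys):
        # Dedupe and skip anything already cached, then batch-fetch.
        missing = list(dict.fromkeys(k for k in keys if k not in self.cache))
        if missing:
            for k, v in zip(missing, self.batch_fn(missing)):
                self.cache[k] = v
        return [self.cache[k] for k in keys]

batch_calls = []
def fetch_users(ids):
    batch_calls.append(ids)        # pretend: SELECT ... WHERE id IN (...)
    return [{"id": i} for i in ids]

loader = DataLoader(fetch_users)
users = loader.load_many([1, 2, 3, 2, 1])
assert len(batch_calls) == 1       # one batched query, not five
```

Wired into a GraphQL resolver, this is what lets one query per "level" replace one query per object.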
The inventors of GraphQL did not intend it to be mapped directly to a database.
"That would be one way to implement the system in a DB-centric way. However we believe that intermediate application code is pretty critical to any GraphQL implementation." [1]
"GraphQL is a client-server dance that needs server-side capabilities, not just CRUD semantics." [2]
Specifically, GQL implementations always map to a `resolve(source, args, context, info)` function (the names can be anything, the types are what matter). In that sense, you also get a standard server interface similar to wsgi/psgi/rack, but much more fine-grained (there can be a resolver for every field in the query).
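A hand-rolled illustration of that convention — this is a toy dispatcher, not graphql-core, but the `(source, args, context, info)` shape of each resolver is the same:

```python
# Every field gets its own resolve(source, args, context, info)-style
# function; a tiny executor dispatches to them by field name.
def resolve_name(source, args, context, info):
    name = source["name"]
    return name.upper() if args.get("upper") else name

def resolve_friends(source, args, context, info):
    db = context["db"]             # per-request context, e.g. a DB handle
    return [db[i] for i in source["friend_ids"]]

RESOLVERS = {"name": resolve_name, "friends": resolve_friends}

def execute(field, source, args, context):
    info = {"field": field}        # real libraries pass much richer info
    return RESOLVERS[field](source, args, context, info)
```

That per-field granularity is the fine-grained server interface the comment is describing: swap any one resolver without touching the rest.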
What? GraphQL is purpose built to solve that in 1 Query. Not doing it in 1 query is on you not the protocol.
In practice with REST the frontend engineer didn't want to wait and tried to use the existing REST endpoints, did N+1 API HTTPS calls and then joined them client side in javascript.
> What? GraphQL is purpose built to solve that in 1 Query. Not doing it in 1 query is on you not the protocol.
1 graphql query maybe. But that translated to a dozen SQL queries.
> In practice with REST the frontend engineer didn't want to wait and tried to use the existing REST endpoints, did N+1 API HTTPS calls and then joined them client side in javascript.
The point you're missing is that for 1 graphql query the API did N+1 SQL queries, and then also joined them in JavaScript.
In the REST case the front end can switch to the efficient custom endpoint when it is implemented. In the graphql case it will never get any faster because the API has to stay generic.
Is the flexibility worth the tradeoffs? Maybe in a company where you are adding new pages all the time with deeply nested relational data needs. But I would argue this is more rare than not. And I often find that frontend engineers aren’t as familiar with database queries and the load they are putting on the system with some of the graphql queries they are making. Flexibility for frontend has its own tradeoffs and I totally understand why a frontend engineer doesn’t want to have to wait for an endpoint to be finished. But this article outlines some of the issues you encounter later as you scale your team and system.
We use a schema first design where I am at and if a frontend person needs a new endpoint because the resource-only endpoints aren’t enough then they submit a pull request to the schema repo with a design for their endpoint they need. It gets approved and boilerplate is auto generated. Yes you have to wait longer, but 90% of the time (for our software) the resource endpoints work great.
I'm not sure where this narrative comes from that GraphQL immediately means the frontend team will have no idea what they are doing and will induce load on the system.
95% of my nested GraphQL fields are based on foreign key indexes, so it's almost no additional load on the system (only 1 query per "level" of GraphQL) to query the nested objects.
I restrict GraphQL to 5 levels, and now I have 5 queries per API call instead of 5 queries over 5 API calls.
The backend team exposes the fields that they know are efficient to query. They can restrict the depth of the GraphQL, number of fields, etc.
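Depth restriction in particular is simple to sketch. Here selections are modeled as nested dicts ({field: sub-selection or None}) purely for illustration; a real implementation walks the parsed query AST instead:

```python
# Reject a query whose selection nesting exceeds a cap, before
# resolving anything.
def depth(selection):
    if not selection:
        return 0
    return 1 + max(depth(sub) for sub in selection.values())

def check_depth(selection, limit=5):
    d = depth(selection)
    if d > limit:
        raise ValueError(f"query depth {d} exceeds limit {limit}")
    return d

# e.g. { user { posts { comments { author } } } } -> depth 4, allowed
query = {"user": {"posts": {"comments": {"author": None}}}}
assert check_depth(query) == 4
```

Combined with restricting which fields are exposed at all, this bounds the worst case a client can put on the backend.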
Every page is special. Using a generalized API is a waste of resources. Generalized APIs are hard to use and develop. It takes a genius to create a good generalized API (say, SQL), and even those generalized APIs are not usable in a web context (because of authorization and DDoS issues).
It's better to think about optimizing the creation of specialized APIs.
This is like, an hour per endpoint. For maybe 30 endpoints (high figure for many apps) throughout the lifespan of your application. But let's say 3 hours: you're talking about 90 hours in total, over a period of 2 or more years. It's really not that much.
And GraphQL isn't free either; you need to actually implement that. It pervades your entire stack, too – it's not something you can just "put on top of it".
I think that in many cases, perhaps even most, GraphQL is a "spend 8 hours to automate a half hour task" kind of affair. Are there cases where that's not the case? Probably. Maybe your specific use case is one of them. But for the general case? I'm entirely unconvinced.
And no one will be spending days working on it, they will work on other things while waiting for code reviews and such.
> GraphQL libs in Python are equally as complex as the FastAPI etc.
You need to account for the fact that anything can query anything, whereas a more traditional REST-type API will have much more limited codepaths.
This starts with just basic database maintenance and indexing, which in many case will be significantly more complex, to all sorts of other problems. This article already enumerates them so I'm not going to repeat them here.
You can't just handwave all of that away with "oh there's a library you can use".
If you think all of that is worth it then that's fine. As with many things it's a trade-off, and a bit of a judgement call too. But please, don't pretend this complexity doesn't exist, and that the trade-off doesn't exist.
If an app's API requires more than a couple of nested structures then, without knowing the specifics of the domain, I would venture a guess that a poorly defined data model is the issue rather than the API technology.
For example, a lot of times people build out nice normalized table structures for online transactional apps. The UI/UX is pretty straightforward because end users typically only CRUD an object and maybe a couple of nested objects at a time. The APIs are straightforward as well, likely following single-responsibility principles, etc. Then along come requirements to build UIs for analytics and/or reporting types of things, where nearly the entire schema is needed depending on what the end user wants to do. It's the wrong data model for doing those types of things. What should be done is ETL the data from the OLTP schema into a data-warehouse-style schema where data is de-normalized, so that you can build reporting, etc.
> Ever seen an engineer do a loop and make n+1 REST calls for resources? It happens more often than you think because they don't want to have to create a backend ticket to add related resources to a call.
With REST, though, that pain is visible to both sides. Frontend engineers generally don't want to make N+1 REST calls in a tight loop; it's a performance problem that they see and that is very visible in their Dev Tools. Backend engineers with good telemetry may not know why they get the bursts of N+1 calls that they see without asking the Frontend or digging into Frontend code, but they can still see the burstiness of the calls and have some idea that something could be optimized, that something is maybe too chatty.
There are multiple ways with REST to handle things: pagination, "transclusions", hyperlinks, and more. Certainly "single page endpoints" is a way as well; no matter how gross it is from REST theory, it's still a pragmatic solution for many in practice.
REST certainly can please everyone, given pragmatic compromises, even if it isn't very clear or standard.
It doesn't. That's literally what GraphQL was designed for: To provide a resolver that sits between high latency clients and all of the various REST/RPC services in order to roll up all of the requests at datacenter speeds into one in order to overcome the latency issues.
But you still need all of the various single page REST/RPC endpoints to make use of GraphQL as it is intended. Some developers out there skip the REST/RPC part, making GraphQL their entire service, violating its principles.
If it works it works, but it does not come tradeoff free.
Why do you need a REST endpoint with GraphQL? Nearly every language has a GraphQL engine that integrates directly with the database and exposes functions as GQL directly, without REST at all.
You don't, obviously – we already talked about how many don't. But if you don't have a graph of "microservices" to query, what do you need GraphQL for? Especially if all you are doing is returning the results from a database. The database will already have its own API. Why not just use it? The native API is always going to be better than your attempt to translate.
Because exposing your database to the outside world is asinine. GQL sits between letting the frontend run semi-customizable queries and not having any customizability to select related resources.
Where is this API coming from? You have to build it. I'm saying you should make those APIs GraphQL APIs through a language framework and not REST APIs, because GQL consuming GQL is much better than GQL consuming REST.
Someone has to build it, but as long as you are using a popular DBMS the work is no doubt already done. All you need to do is stick your application-y bits in the middle – same as you have to do with GraphQL. Computers aren't exactly magic. It's all just 1s and 0s at the end of the day. You can do just about anything and make it work. But there are tradeoffs to different approaches. If GraphQL is the best choice for working with databases, why don't DBMSes use it natively?
I'm curious as to why you believe that exposing direct SQL over the database is "asinine" but GQL is fine, given that either one is very generic, and e.g. the security issues are very similar (as the article points out).
That is how REST works but is the opposite of the way GQL works.
You don't pile into existing definitions but extend the current definitions out. A team needing new data doesn't affect the consumption of other teams, which is not the case with REST, where if one team needs to change a REST endpoint to return a different shape of data, they have to verify the new shape with other teams.
With REST, the endpoints are team based if you have even a semi-competent CTO so you never have this problem, you just check who owns the controller and that's it.
With GQL though, good luck retracing who owns what field and what is affected if you change the definition of what it returns, especially when you are not even using it in the same language.
Bonus points here if you are managing something outside of your control like a mobile app.
With GQL you would only care if someone removed a field, not if someone added one. How would adding a field change what existing GQL calls return? It doesn't make sense.
Also, it's about one line to set up CI to extract all GQL queries from a TypeScript project and do static analysis to check them against the schema.
But again, you only care if someone deletes a field, and even if you have to delete it, at least the GQL schema has built-in annotations for deprecation, something REST lacks.
Actually deleting happens rather rarely compared to adding.
It's much easier to trace and debug which teams are using GQL fields than with REST. What if one team is piggybacking on another team's existing REST endpoint and you don't know? Same problem: it would require analysis of all the code calling an endpoint to determine whether a field is safe to delete. GQL makes this much simpler than REST.
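A toy version of that CI check — pull field names out of a checked-in query and flag any the schema no longer defines. A real setup would use a proper GraphQL parser rather than a regex; all names below are hypothetical:

```python
# Crude static check: which fields does a checked-in query use that
# the schema no longer serves?
import re

SCHEMA_FIELDS = {"id", "name", "email"}           # fields still served
source = "query { user { id name nickname } }"    # a checked-in query

# Strip keywords/type names, keep the leaf field names (toy heuristic).
used = set(re.findall(r"\b\w+\b", source)) - {"query", "user"}
unknown = used - SCHEMA_FIELDS
assert unknown == {"nickname"}                    # caught before deploy
```

Even this crude version fails the build the moment a deleted field is still referenced somewhere, which is the safety net the REST side lacks.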
One place I was at, we used REST with a hydration service that ran as a proxy in front of the requests. That gave us most of the benefits of GraphQL, and we only implemented it for the main resources (users, photos, and maybe one other thing). To minimize latency/faults, the hydration service ran locally on every webapp/API server. I wasn't around long enough afterward to see how it turned out as it grew (if it grew at all; the company more or less went defunct, development-wise anyway).
I also recall we had similar N+1 query problems in the REST API endpoints, irrespective of hydrating the returned resources.
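The N+1 shape is easy to state concretely. A minimal sketch, using hypothetical in-memory functions as stand-ins for REST endpoints (the point is the call count, not the transport):

```python
# Hypothetical stand-ins for REST endpoints; each call would be a
# network round trip in a real client.
call_count = 0

USERS = {1: {"id": 1, "photo_ids": [10, 11]}, 2: {"id": 2, "photo_ids": [12]}}
PHOTOS = {10: {"id": 10}, 11: {"id": 11}, 12: {"id": 12}}

def get_user(user_id):            # GET /users/{id}
    global call_count
    call_count += 1
    return USERS[user_id]

def get_photo(photo_id):          # GET /photos/{id}
    global call_count
    call_count += 1
    return PHOTOS[photo_id]

def get_photos_batch(photo_ids):  # hypothetical GET /photos?ids=...
    global call_count
    call_count += 1
    return [PHOTOS[i] for i in photo_ids]

# N+1: one call for the user, then one call per related photo.
user = get_user(1)
photos = [get_photo(pid) for pid in user["photo_ids"]]
n_plus_1_calls = call_count   # 1 + 2 = 3 calls for 2 photos

# Batched/hydrated: the same data in 2 calls regardless of photo count.
call_count = 0
user = get_user(1)
photos = get_photos_batch(user["photo_ids"])
batched_calls = call_count    # 2 calls
```

The N+1 version grows linearly with the number of related resources; the batched (or hydrated, or GraphQL-resolved) version stays constant.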
The biggest benefit of GraphQL I can see from a user perspective is that it lowers total latency, especially on mobile, by requiring fewer round trips.
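For example, what would otherwise be one REST call for the user plus more for each related collection collapses into a single round trip (schema and field names here are hypothetical):

```graphql
query UserWithPhotos {
  user(id: "1") {
    name
    photos {
      url
      takenAt
    }
  }
}
```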
I currently use GraphQL and have no problems with it specifically; I was merely sharing an experience using REST. Perhaps it adds a bit of latency/overhead due to the implementation/language used, but with larger requests that overhead becomes negligible.
> Any more levels and you have now reinvented GraphQL
Sure, but reinventing the wheel can be good, particularly when the existing wheel technology is oblong, lumpy and constructed of stone and cheese.
I’m just bitter that GraphQL endpoints return 200 on errors. If you’re returning 200 on errors, then you’re not really doing HTTP; you’re just using HTTP as a substrate for some other protocol, in which case you might as well just open a port and serve that protocol directly and not pretend to be HTTP.
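Concretely, a failed GraphQL request typically comes back as HTTP 200 with the failure tucked into the response body's `errors` array (illustrative response):

```json
{
  "data": { "user": null },
  "errors": [
    {
      "message": "User not found",
      "path": ["user"]
    }
  ]
}
```

So HTTP-level machinery (status-code monitoring, caches, retry logic) sees a success.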
> you’re just using HTTP as a substrate for some other protocol, in which case you might as well just open a port and serve that protocol directly and not pretend to be HTTP.
Browsers don't allow access to arbitrary ports with arbitrary protocols.
Often intermediate software gets in the way as well. For example, with Kubernetes it's trivial to expose an HTTP service using an Ingress, and it's not trivial to expose other protocols.
Another HTTP plus is that it's trivially secured by TLS, and every developer knows the API for that. Using TLS for your own protocol, or QUIC, is absolutely not trivial and is probably more like arcane knowledge.
The point is GQL is currently the standard for querying dynamic nested data shapes not that it is the first or was the first.
Look at OData download stats on PyPI: it had 2 downloads in the last day. graphql-core for Python? 624,201. That is not even on the same planet.
If you don't use GQL and want a system for querying nested data, you will be using a less-used protocol analogous to GQL, so you might as well use the standard.
OData is still currently a standard for querying (among other things) nested data shapes. The point was that, as a matter of fact, it predates GraphQL as a standard; whether you think it is a more or less used protocol is a matter of ecosystem perspective.
To trade anecdotes for anecdotes: on NuGet, the Microsoft.Data.OData package, marked deprecated and flagged with CVEs for critical bugs, still averages 36.3K downloads per day. (Its successors combine for about double that.) The top GraphQL library (of the same name, unadorned) still only averages 9.3K downloads per day. In .NET, if you want a system for querying nested data (that is also "REST compatible"), why would you use a less used protocol like GraphQL?
> Ever seen an engineer do a loop and make n+1 REST calls for resources? It happens more often then you think because they don't want to have to create a backend ticket to add related resources to a call.
> With internal REST for companies I have seen so many single page specific endpoints. Gross.
As someone pointed out in reply to another comment, GraphQL is "a technological solution to an organizational problem." If that problem manifests as abuse of REST endpoints, you can disguise it with GraphQL, until one day you find out your API calls are slow for more obscure, harder to debug reasons.
> With internal REST for companies I have seen so many single page specific endpoints. Gross
That's an established pattern (backend for frontend). Like all patterns there are trade-offs and considerations to make, but it's certainly not a priori "gross".
Perhaps that's the case for server-side transactions and ORMs over REST and gRPC to coordinate rather than allowing the front-end to diddle with data however it likes.
And, resources with an index option obviously should have a db index or unique index.
The challenges with GraphQL are that it makes it too easy to DoS services, leak internal data, break referential integrity, and there were a great deal of tools, infrastructure, and monitoring systems already available for REST (and gRPC to a degree).
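The DoS risk comes largely from query depth: without a depth or cost limit, a client can ask the server to walk a cycle in the graph, and each level of nesting multiplies the rows the resolvers touch (field names here are hypothetical):

```graphql
# A small request body that can fan out into an enormous amount of
# database work if the server imposes no depth/complexity limit.
query Abusive {
  users {
    friends {
      friends {
        friends {
          friends { id }
        }
      }
    }
  }
}
```

REST endpoints, by fixing the response shape server-side, bound this kind of amplification by construction.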
Company standards for REST and coding style can and should be set in the diff review pipeline. Another facet is setting standards to minimize duplication of effort or exposing a greater attack surface.