I think this paper misses the point. If you're a front-end developer and you have a robust GraphQL endpoint available to you, it's unbelievably amazing and productive.
But providing a robust graphql endpoint that is performant, scalable, secure, etc, is much more difficult than REST. GraphQL is optimizing for a different set of developers, and this paper only studied one group.
This is the strange thing about GraphQL: it shifts the work to someone else, so you'll end up with teams that benefit and teams that don't. You also need buy-in from both the client and the server.
I wish there was a client-side library that implemented GraphQL as an abstraction over REST, GraphQL, or other backend APIs. It would be nice to use some of GraphQL's query/join features across several backend services without those services having to change. Imagine being able to merge a vendor API that can provide a tracking number with FedEx's API, for example.
There’s actually nothing server specific about GraphQL. The specification makes no mention of transport layer, it’s simply a type system + query executor.
You can pretty easily (if you have experience with the GraphQL reference implementation in JavaScript) create a GraphQL layer that sits inside of the browser, with a schema created by the UI team, that executes calls to REST APIs to resolve the data.
You could think of it as an “ORM” for the browser, which seems cool, but I wouldn’t necessarily recommend this approach (though I have done it in the past) for two reasons:
1. The “graphql” library isn’t really optimized for size so it can add a bunch of overhead to your JavaScript bundles
2. One of the benefits of GraphQL is to combine multiple requests for related data into a single query to be sent from the browser. Yes, that makes the life of the backend developer harder as they try to optimize for performance, but it makes for less data/fewer requests over the wire to the client. If you stick GraphQL in the browser, you’ve now just moved your N+1 query across the internet.
If you really want to go down that road, Apollo offers a “plugin” to their GraphQL client that allows you to call multiple REST endpoints as if they were a single GraphQL endpoint (without embedding the actual “graphql” library in the browser): https://www.apollographql.com/docs/link/links/rest/
A better approach for what you're looking for would be schema stitching, which allows you to combine multiple GraphQL endpoints and treat them as one. You can even combine that with your own schema definitions to mix in whatever backend sources you want; e.g. your vendor + FedEx REST APIs: https://www.graphql-tools.com/docs/stitch-combining-schemas
Or if you don’t want to do the work yourself, check out OneGraph, which uses schema-stitching to do exactly what you describe. It’s pretty cool: https://www.onegraph.com/
I don't think #2 is that big of a deal if all requests happen async, which they should with a client-side ORM, and if you're using HTTP/2.
I think the bigger problem there is how those multiple REST calls map to your data stores. Very rarely will there be clean separation of data between endpoints, and the stateless nature of REST makes it harder to optimize each call -- meaning, there will almost certainly be redundant queries.
That said, with GraphQL, your front end dev may not realize they are executing the equivalent of many REST calls, which is another problem :) I wouldn't say it's harder to optimize if you look at the application as a whole, though.
> There’s actually nothing server specific about GraphQL.
I just grokked this very recently when I was reading up on Gatsby and the fact that it uses GraphQL. I was confused for a bit because Gatsby is an SSG, until I realized they merely use GraphQL as a generic way to query any JSON data you might have lying around, similar to how getStaticProps() is employed in Next.js.
“ it shifts the work to someone else”
I don’t think this is specific to GraphQL. I’ve seen similar patterns with (to pick a recent example) shared front-end component libraries. You make something more general at the expense of increased complexity. If you have a team dedicated to that more complex proposition and enough consumers of that system, you might well benefit at an org level; but if you don’t, it’s easy to get caught up supporting that complexity at greater cost in time and effort than a less general solution would require. It’s a tricky call.
Gatsby does this, and I’ve built a POC that did the same thing to try to get my workplace to adopt it. I shelved it after a demo as I didn’t love the idea of maintaining a bespoke API solution. But I’m with you, I think it’s a great idea if there were a community around it.
Not necessarily on the client side, but it’s pretty common to have a GraphQL layer in between client and one or more REST (or other) APIs that does that.
I think the opposite is true. GraphQL puts most of the onus on the client to know the data model, define their own queries, understand how to join data etc. REST-style APIs do all of this on the server side, and provide the most interesting query results directly.
On the server side, assuming you have a simple CRUD service in front of a DB, you can probably use a generic GraphQL-to-DB query language library and call it a day. If you have to expose a REST API, you need to understand what's stored in the DB and create some queries, make sure they are performant etc.
Now, if you have a complex service with heterogeneous data sources that you want to present homogeneously, then both REST and GraphQL will be much more difficult. But even then, with GraphQL you can leave most of the hard work of figuring out how to join efficiently to the client, while with REST it's your responsibility to ensure that the requests execute in a decent amount of time.
In my own company, we use GraphQL for internal communication between a few microservices because we need the flexibility, but we expose a REST API to users, because no one wants to learn how to write queries instead of doing a simple GET on an endpoint we already expose.
> I think the opposite is true. GraphQL puts most of the onus on the client to know the data model, define their own queries, understand how to join data etc. REST-style APIs do all of this on the server side, and provide the most interesting query results directly.
This is how I view graphql (despite not having used it). It seems better practice to keep the querying done in the backend and keep frontend for display logic more than anything. Seems like graphql will encourage business logic in the frontend (my current workplace has this problem and it is not something that should be encouraged).
Well, when you say 'give me all cars and all users joined by user ID = owner id', that's business logic. With a good REST API you would just do a GET on /cars and find any user details that are needed already in each car (perhaps under a link, which may lead to the N+1 problem, but that's another discussion).
Of course, a bad REST API may expect that you do a GET /cars and GET /users and match them in your code, which is once again business logic in the front-end and bad design. There are even many DB-To-Rest libraries that encourage exactly this, unfortunately.
A good GraphQL API could also allow you to query cars and get car.Owner.Name and car.Owner.Address without you explicitly joining (the join still happens in the backend). However, I feel that many people who choose GraphQL are trying to avoid exactly this type of logic on the backend, which would explain the popularity of DB-to-GraphQL libraries.
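To make that backend-join point concrete, here's a toy sketch in Python (hypothetical data and field names, no real GraphQL library): the client selects owner fields on each car, and the resolver does the join server-side, so no join logic ever lives in the frontend.

```python
# Toy in-memory "tables" -- stand-ins for whatever the backend stores.
USERS = {1: {"id": 1, "name": "Ada", "address": "1 Main St"}}
CARS = [{"id": 10, "model": "Model T", "owner_id": 1}]

def resolve_cars(selection):
    """Return cars, embedding owner fields only when the client selected them."""
    result = []
    for car in CARS:
        item = {"id": car["id"], "model": car["model"]}
        owner_fields = selection.get("owner")
        if owner_fields:
            owner = USERS[car["owner_id"]]  # the join happens here, server-side
            item["owner"] = {f: owner[f] for f in owner_fields}
        result.append(item)
    return result

# The client only picks fields; it never joins:
cars = resolve_cars({"owner": ["name", "address"]})
```

The frontend gets `car.owner.name` back already joined, which is the shape of API the comment above is describing.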
I think if you have a small team developing an API for your own application's needs, things can probably be done as well or better in REST. But if you have an API that needs to face third parties, or a very large organization with APIs that need to be exposed to multiple frontend teams, or a product like a CMS consumed by frontend teams that are not part of your organization, then the benefits of GraphQL will often quickly outweigh REST.
This is a gut feeling about the problem space, though, not based on any studies; I'm unaware of any studies trying to determine this.
I'm in the same boat. I like integrating with GraphQL endpoints. But for endpoints I make myself for my own frontend, I prefer making exactly what I need. Doing it more generally would be a waste of time (YAGNI etc.), and since it's often on the critical path (not just data fetching), it's nice to have it clearly laid out what's happening, without needing knowledge of how the frontend happens to call it.
Agreed with this. You definitely don't want to be the one implementing it on the backend; it's no fun at all, and it can be quite tricky with all the N+1 queries you didn't see coming, and all that.
The N+1 would be there with REST too. Unless you have specific, optimized REST routes - but then you can do the same with specific, optimized GraphQL queries too.
REST endpoints typically return a specific set of data that you can optimize queries for. Whereas the GraphQL endpoint must handle arbitrary queries which will require some logic to prevent N+1 queries on the related tables or even resolve to other data sources.
REST endpoints typically return data I don't need. For example, the Twitter REST API endpoint `/1.1/users/show.json?screen_name=twitterdev` will show me the user's last tweet. Presumably this involves waiting on the tweet service even when the client may not want the tweet at all. A GraphQL client can be more explicit about which edges to select.
There are specs like JSON:API that solve this problem. I’ve never been entirely convinced that GraphQL is better than actual REST, even if it’s better than most of the APIs people call RESTful.
JSON:API provides some of the same functionality as GraphQL, like specifying which fields and nested resources you want, but at that point you’re going to have the same problems with ensuring good performance with any combination of included fields and relationships.
Yes, the advantage of a GraphQL endpoint is you can ask for a variety of things and the tradeoff is potential performance issues for unforeseen queries doing N+1s or something.
If you control the API and know all the use cases for a REST endpoint, the advantage is predictable performance characteristics and the tradeoff is flexibility.
It all depends on what you need (and maybe what tools you're using to mitigate GraphQL resolver problems).
> REST endpoints typically return a specific set of data that you can optimize queries for
You mean when someone creates a REST API, they magically know in advance how it will be used and write a bunch of specific, optimized endpoints and queries, such as "/magic-endpoint/" which returns 20 blog posts with their comments?
I don't think so. And even if that were true, the same could easily be done with GraphQL.
It's not magic, you would design it in advance so you know what you're querying by and what you're returning in the payload. Typically you'd probably separate it into two endpoints `/blogposts` and `/blogposts/:id/comments`, but there are many ways to approach the problem. These days JSON:API is pretty popular for creating standard interfaces.
GraphQL allows ad-hoc queries of unlimited complexity and recursion
> If you know in advance what will be queried then you can create an optimized query for it.
No. In general you don't know what will be queried. And that's the reason why most GraphQL implementations end up using just a small predefined set of "persisted queries" (that is, REST with extra steps).
I think you misunderstand. You can write very restricted queries that mirror REST routes 1:1. No need for persisted queries.
Persisted queries enter the game when you define a query that potentially can become very complex and deep and then want to restrict the complexity in specific ways. But that is not required if you just want to mirror a RESTful API.
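One common way to enforce that restriction is a maximum query depth. Real servers compute this on the parsed query AST; a toy brace-count over the query string (my simplification, not how any production library does it) is enough to illustrate the idea:

```python
def query_depth(query: str) -> int:
    """Approximate nesting depth of a GraphQL-ish selection set by
    counting braces (real servers walk the parsed AST instead)."""
    depth = deepest = 0
    for ch in query:
        if ch == "{":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "}":
            depth -= 1
    return deepest

MAX_DEPTH = 4  # hypothetical server-side limit

shallow = "{ article(id: 1) { title comments { text } } }"
deep = "{ article { comments { author { articles { comments { title } } } } } }"

allowed = query_depth(shallow) <= MAX_DEPTH   # mirrors a fixed REST route
rejected = query_depth(deep) > MAX_DEPTH      # would be refused by the server
```

With a depth limit (plus complexity scoring or persisted queries), the "restricted queries that mirror REST routes" setup is enforceable rather than just a convention.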
> You can write very restricted queries that mirror REST routes 1:1. No need for persisted queries.
What stops a frontend developer writing an ad-hoc query?
> Persisted queries enter the game when you define a query that potentially can become very complex and deep
That... That is exactly what I wrote.
---
Honestly, I'm baffled at GraphQL defenders. It's like they never even read the documentation to the tools they defend.
- How is GraphQL different from REST?
- Ad-hoc queries of unlimited complexity
- But you write restricted queries
- No. The whole point of GraphQL is ad-hoc queries
- Persisted queries enter the game when you define a query that potentially can become very complex and deep
- That is exactly what I'm saying
And elsewhere, you can see it in other replies, it's the same story with N+1:
- GraphQL is great!
- Except the issues on the server and trying to handle N+1 queries
- What's N+1?
- It's.... It's a prominent part of the documentation for the frigging tool you use. And the reason for the things you advocate like dataloaders and persisted queries
> What stops a frontend developer writing an ad-hoc query?
What stops them is that the "ad-hoc" query can only look like this:
articlesWithComments($id) {
  title
  contents
  comment {
    author
    text
  }
}
That's all. Now the only way to change this query is to remove fields but there is no way to make anything more complex. The only way to do more is to repeat the query. I.e. literally:
article1: articlesWithComments(1) {
  title
  contents
  comment {
    author
    text
  }
}
article2: articlesWithComments(2) {
  title
  contents
  comment {
    author
    text
  }
}
That will create two queries in the backend - that is the equivalent of just making two requests against the same REST route.
It _is_ possible to allow a user to change the query like this:
articlesWithComments($id) {
  title
  contents
  comment {
    author {
      name
      age
      articles {
        ...
      }
    }
    text
  }
}
But that is totally optional - you don't have to give your users this power.
> What stops them is that the "ad-hoc" query can only look like this:
What is "author" in that query and why can't the user do
author {
  name
  age
  articles {
    ...
  }
}
in that query?
And what you're basically saying is: let's create REST with extra steps for no particular reason. With extremely complex setups where author in one query has a different set of fields than in a different query etc.
> I suggest you to take a step back and re-read the thread. Maybe the context got lost.
I've read the thread. And no, the context wasn't lost.
The whole point of GraphQL is flexible queries. And it is harder to make an efficient resolver in GraphQL than it is in REST.
And yes, your solution (and the solution everyone ends up arriving at) is reimplementing REST in GraphQL, poorly. Precisely because it is much harder to make an efficient resolver in GraphQL.
GraphQL unlocks Frontend-acting-as-Product, or Product
At the cost of (hopefully) a smartly written decorator, a schema-based boilerplate, or a bunch of resolvers. And surely, I guess some raw performance and enforced abstraction.
If you go the whole hog and make your client leverage the full query granularity, then it also costs some FE complexity, e.g. Apollo. But you don't always need that aspect.
Is this good for your problem space? Depends. Is it great for some problem spaces, 100%
That’s largely the point of it. Graphql allows for a smaller, more focused team to work on the generic backend and lean into the pain of it while at the same time supporting larger and more divergent frontend/user facing teams.
This comment tells me exactly what I always wanted to know about GraphQL. I'm working on a project where the front-end needs complex combinations of queries for relations between objects with various properties of different data sets. Our fairly simple REST backend with graph DB provides exactly what the front-end needs, but it does mean there's a tight coupling between the front and back ends.
So every once in a while, the question comes up whether we shouldn't be using GraphQL for this, and every time we end up unsure where to start, how to implement it, or what the actual benefits would be. We control both front and back end, and REST works fine for us.
I guess if we ever want to make our back end usable by other applications, GraphQL might become more useful to us, but until then, it seems like it's mostly a lot of extra work and complexity that we don't need.
GraphQL seems most useful when you're using something that supports it out of the box.
Good points. I had to make the GraphQL vs REST (vs maybe grpc) decision a while ago, but for my application, the API consumers are either myself (I also do the front-end) or our customers, who are not likely to be very technically proficient - for which a REST API is probably the most accessible.
Just need to figure out server-side validation now.
I'll be honest. I hate GraphQL. It probably makes sense in a world where everyone uses GraphQL, but to me it felt like having to learn yet another query language to do what seems straightforward using a simple REST API.
I may also be old and cranky and you should probably get off of my lawn.
I’m 100% with you. We’ve implemented a GraphQL gateway to our REST APIs at my company, and IMO it’s been a tremendous waste of time. Tonnes of complexity, performance issues, time writing the server, monitoring problems when calls are no longer to simple endpoints, etc., for almost no tangible gain.
Also, a much more minor issue, but when everything is a POST to a single endpoint, debugging network calls in Chrome/whatever dev tools is more of a pain in the ass. It’s a lot easier to browse through GET /users/124, DELETE /messages/456, etc., and instantly see what’s happening, than to have every call be POST /graphql and read through all the giant POST bodies to figure out what’s going on.
IMO GraphQL is no better than the multitude of other RPC frameworks that everyone eventually realizes are snake pits of unnecessary complexity compared to REST. It’s just newer, so people don’t hate it as much YET.
With an API that keeps tighter control over access patterns, you've got a more predictable target for optimizing your indexing strategy. With GraphQL, you've got to worry about the possibility that some client figures out how to craft a query that slips between all your indexes and causes the database engine to resort to doing things the hard way. So, worrying about that stuff is hard, where it tends to be easier to manage with REST or gRPC because you can just force your worldview onto consumers and get on with your life.
In theory, though, a well-crafted GraphQL API can be more performant over a wider variety of use cases than a well-crafted REST API, for all the oft-cited reasons. So it does make the impossible possible. But not (necessarily) easy.
However, to be super efficient you need to give up some consistency. You simply can't have data points that join directly in the DB. Instead, you need to make separate parallel requests for those data points and let the dataloader be in charge of merging them into larger batches of requests for the DB to fulfill.
This can result in some additional latency on a request, but ultimately provides the best way to be able to scale things out.
The benefit of rest is that it's easier to make a really fast on single request endpoint. You can precisely tune your db indexes to match your queries. For graphql to be fast and not kill your DB with a malicious query, you need to introduce that wait time/batching.
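A toy sketch of that batching idea (real dataloaders, like Facebook's dataloader library, coalesce loads within an event-loop tick; this synchronous version, my simplification, just queues keys and flushes once):

```python
class ToyLoader:
    """Collects keys during a resolve pass, then issues one batched fetch."""
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # e.g. one SELECT ... WHERE id IN (...)
        self.pending = []

    def load(self, key):
        self.pending.append(key)

    def dispatch(self):
        keys = list(dict.fromkeys(self.pending))  # dedupe, preserve order
        self.pending.clear()
        return self.batch_fn(keys)

db_queries = []  # stand-in for the database: record each round trip

def fetch_users(ids):
    db_queries.append(ids)
    return {i: {"id": i, "name": f"user{i}"} for i in ids}

loader = ToyLoader(fetch_users)
for owner_id in [1, 2, 1, 3]:  # four resolvers each ask for an owner...
    loader.load(owner_id)
users = loader.dispatch()      # ...but only one DB round trip is made
```

The batching is what introduces the small wait/latency mentioned above: you hold individual lookups long enough to merge them into one query.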
Sure, but that's a great example of what I'm getting at. All that complexity may let you do some pretty cool things. But it also comes at a cost, in terms of both development effort and implementation complexity, and I certainly wouldn't call it easy.
I fear, sometimes, that our collective tendency to prefer talking about the most interesting or most capable technologies tends to bias us toward over-engineering. Slinging JSON over HTTP is, in my personal opinion, a pretty hokey hack. But it's also the option that's the easiest to implement, the most widely understood, and, more often than not, it's more than up to the task.
> Instead, you need to make separate parallel requests for those datapoints and let the dataloader be in charge of merging them into larger batches of requests for the db to fullfill.
> This can result in some additional latency on a request, but ultimately provides the best way to be able to scale things out.
I promise you, this is not the best way to scale things out.
The thing that I usually do is work tightly with the frontend and extract out all queries that I would be sending to my backend and whitelist only those.
For example, when we knew we were going to have a big spike because of a feature in the news, we checked the cost of our queries, heavily optimized, and added a cache in front of our backend just for the queries costing us the most (keyed on the query string). This let us scale from 2,000 concurrent users to half a million (the difference is only that big because we were badly unoptimized before, and because of the near-infinite limits of Cloudflare Workers).
It's definitely harder when you have multiple big teams interacting with a single central GraphQL API. My rule for that is that there needs to be a gateway that handles exactly that for every service/team/whatever. It shouldn't be custom-coded by every team, because that 100% gets mismanaged. Instead it should be a container image managed by the same team that owns the GraphQL API, configured by the consuming team via an env var or a file containing all the queries needed.
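A minimal sketch of that kind of whitelisting gateway (keying the whitelist on a hash of the query string is my assumption here; persisted-query setups also commonly use IDs generated at build time):

```python
import hashlib

ALLOWED = {}  # sha256(query) -> query text, extracted from the frontend build

def allow(query: str) -> None:
    """Register a query the frontend is known to send."""
    ALLOWED[hashlib.sha256(query.encode()).hexdigest()] = query

def handle(query: str) -> dict:
    """Reject anything the frontend build didn't register."""
    h = hashlib.sha256(query.encode()).hexdigest()
    if h not in ALLOWED:
        return {"error": "query not whitelisted"}
    return {"data": f"executed {h[:8]}"}  # stand-in for real execution

allow("{ viewer { name } }")
ok = handle("{ viewer { name } }")            # known query: executed
rejected = handle("{ viewer { secrets } }")   # ad-hoc query: refused
```

Since the allowed set is just a file of query strings, it fits the "configured via an env var or a file" deployment described above.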
I handle it with GraphQL the same way I do with the include query param on a REST endpoint; there's tooling to make parsing the query into the necessary fields easy.
No, but I realize it sounded like a typo. Perhaps, in retrospect, a cleverer wording would have been, "makes impossible things possible and easy things. . . possible."
Perhaps that's my bias showing. Lately I've mostly been working in Java, a context where the words "easy" and "Web" rarely belong together in the same sentence.
> "makes impossible things possible and easy things. . . possible."
I like that one! Yeah, I totally thought it was a typo, but I do think that GraphQL makes very complicated things easy. I might also be biased because I've been working with it for a while now.
Also, the API provider had a "GraphQL is self-descriptive, go away" attitude when asked for documentation, which made things worse.
It could be me (or my team), but we didn't find it easy at all to explore the API and find what we were looking for. We ended up using a Python tool that generated some classes from the schema, and thanks to that we managed to figure out the queries we needed for our Scala client. Not a fan.
There are definitely some cons, but I don't think the learning curve of the query language is one of them. It's simple enough that you can easily pick it up after 15 minutes of reading the docs. There is also a schema provided by every server that tells you exactly what can be queried. This is much nicer than having to refer to documentation of unknown quality just to learn what a REST API can provide.
Now actually implementing a server on the other hand is much more difficult.
Yeah, it's just that with the use case I was exposed to (payment processing), it seemed utterly pointless and overly complex compared to the REST API.
I don't want to query. I want to submit a transaction for processing dammit.
Again though, it might just be a case of me being old and cranky and having to learn yet another query language.
I introduced graphql a few months ago to basically unblock myself from having to think about and design gazillions of custom REST endpoints for our mobile client developers. Turns out, that I don't miss doing that. REST has been a huge drain intellectually on this industry ever since people got pedantic over Roy Fielding's thesis and insisted that we stop treating HTTP like yet another RPC mechanism. The amount of debates I've been involved in over such arcane detail as the virtues of using a PUT vs POST and exactly which is more 'correct' in what situation is beyond ridiculous. I appreciate a well designed REST API as much as anyone but for most projects where the frontend code is the single customer of the API, it's not very relevant. If you are shipping SDKs to third parties, it's a different matter of course.
In any case, we now have the graphql playground where you can prototype your queries with full autocomplete (based on the schema). I've done this with third party graphql APIs; it's stupidly easy and you don't need a lot of documentation generally.
We're using the Expedia implementation for Kotlin and Spring Boot. I suspect that setup might be a lot easier to deal with than Apollo and Node.js, since it has the important feature of using reflection to create the schema from code. I've not written a single line of GraphQL schema in nearly 6 months of creating dozens of GraphQL endpoints. We also use kotlinx serialization to generate cross-platform parsing code in our multiplatform client (we use it on Android and in the browser, and soon on iOS). So this offloads a lot of the hassle of dealing with schemas and parsing, both client and server side, that we used to have with REST-based APIs. Maybe not the most common path, but worth checking out if you're looking to get started with this stuff.
My process for adding a new endpoint:
1) write a function in a spring bean that implements the Mutation or Query marker interface. Spring Boot does the rest. It generates the schema at startup time and wires everything together.
2) start a dev server, prototype the new graphql query in the playground
3) paste the working query to a new function with a multi line string along with any model classes we need in our multiplatform (js, android, and soon ios native) client library and recompile that to add the new client code for the query.
4) update the dependency on our android and web projects (we use kotlin-js for our admin UI) to use it.
5) also add the new client to our integration test project so we can write some tests for the new endpoint. We have full end to end tests of our client and API. Our server uses some mocked databases and middleware when running the tests.
It's definitely not perfect; the Expedia implementation definitely has some quirks and limitations. Also, Kotlin multiplatform has been a bit of a moving target in the last few months (though a lot more usable as of Kotlin 1.4.x). But overall it's a great setup for a small team that has better things to do than crafting custom REST APIs.
In terms of performance, technically graphql is just an HTTP POST API on top of Spring Boot (for us at least). Yes, there's a bit of overhead for query processing on the server but most of your performance is otherwise exactly the same as it would otherwise be. You of course pay a price for crafting complicated queries. But that's the same price you pay for having poorly aligned UI and REST APIs where you end up making lots of REST calls because you did not design your API right (been there, done that). Graphql just allows you to iterate on that more easily. But it's not inherently slower in any way. We are currently not doing any federation but that's mostly because we have a monolith server instead of micro-services.
I'm with you. I've done big REST APIs and now have a big GraphQL API on my ongoing project, and I wouldn't do GraphQL again for anything I'm working on. The beneficial use cases for GraphQL are far narrower than presented, and the extra overhead compared to REST isn't worth it.
Same. But I'm also getting old and cranky. Every advantage typically pointed out over REST could easily be solved in REST. You can do joins in REST people, don't be afraid! I often would add query params for such common things, such as (fake example) fillChildren=true to have what is essentially a parent object populated with its child object in what would normally be separate calls.
Of course you can do that. But eventually keeping up with all the different fill parameters is going to catch up to you.
Besides, what do you call the parameter to fill the owner of the children? Fillchildrenowners? It’s nicer to work with if your API takes this into account.
At work almost every endpoint supports an “expand” parameter that will do various expansions of referenced resources in the returned records. This can:
- cause additional sql joins
- or pull individual records from a cache (if it’s a small enough dataset)
- or cause one additional DB query and save a network round trip.
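A sketch of how such an `expand` parameter might be parsed and applied (endpoint, data, and field names are all made up for illustration):

```python
from urllib.parse import parse_qs, urlparse

USERS = {1: {"id": 1, "name": "Ada"}}
CARS = [{"id": 10, "model": "Coupe", "owner_id": 1}]

def get_cars(url: str):
    """Hypothetical /cars endpoint: ?expand=owner embeds referenced records."""
    qs = parse_qs(urlparse(url).query)
    expand = set(qs.get("expand", [""])[0].split(","))
    out = []
    for car in CARS:
        rec = dict(car)
        if "owner" in expand:
            # one extra join, cache lookup, or DB query, as described above
            rec["owner"] = USERS[rec.pop("owner_id")]
        out.append(rec)
    return out

plain = get_cars("/cars")                   # bare records with owner_id refs
expanded = get_cars("/cars?expand=owner")   # owner embedded in each record
```

This keeps the access patterns enumerable on the server while still letting clients avoid a second round trip.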
There is one huge advantage that I don't believe REST can match: a GraphQL server returns a schema that tells you exactly what can be queried. When using REST endpoints you are at the mercy of an API's documentation, which is often quite poor. I know there are tools like Swagger that solve this problem to some extent, but it's not baked into the standard like it is with GraphQL.
Of course you can build these elements into your own implementation, but there is value in a higher level standard that has these items guaranteed. If a product advertises a GraphQL API, you know immediately there will be no trouble determining what the API can return and accept.
> Of course you can build these elements into your own implementation, but there is value in a higher level standard that has these items guaranteed.
That's what OpenAPI is. (You mention the Swagger tool, but claim it's not baked into “the standard” the way GraphQL is; as well as a tool, Swagger was also the name of the standard, though the current version of that standard is called OpenAPI.)
OData (which is REST) can return the schema and much more: the $metadata resource describes all entities, operations, and relations, and can also contain documentation, capabilities, etc.
I am pretty sure there's something about REST that makes it so that it's easy to discover what resources are available.
I just pulled up Fielding's paper and couldn't find anything, but wikipedia has a reference to 'Hypermedia as the engine of application state' (HATEOAS).
But then again, I am not sure anyone actually writes systems like this. Very few people actually write REST systems and instead make REST-like APIs.
That's a good point, as API docs are often very, very wrong. Still, this could just as easily be added as a semi-standardized REST extension/spec. Granted, it wouldn't have the traction up front that GraphQL gives you.
True, and this is also something OData has as part of the standard. Using a standard way to do this has big benefits, as the knowledge can be implemented in tools (BI, ETL, low-code dev), which can then support many REST endpoints.
We use JSON-RPC over WebSockets between frontend and backend, and between backend services, in a trading system: TypeScript types for the API, a runtime type-assertion combinator library for I/O boundary type checks, and backend teams co-maintaining the client libraries used to access their services. It works very fast, and it's safe and easy to maintain and track changes.
JSON-RPC is my protocol of choice as well. I feel most of these other protocols are mostly an exercise in information exchange theory which make them too idealistic, resulting in poor implementations that do not follow the standard or are extended in non-conforming ways.
In the end you simply want to interact with the client or server and procedure calls do just that. I honestly do not see the use in over complicating that.
I've used JSON-RPC a few times and my experience was excellent (the opposite of GraphQL, actually).
But that JSON-RPC endpoint had good documentation, and the client library didn't feel alien to the project the way a big query string embedded in the client code does.
I wonder why JSON-RPC is not used more often; though I guess, compared to a REST API, the client may be more complicated.
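For reference, the protocol really is tiny; a happy-path dispatcher fits in a few lines (the method table is hypothetical, and a real server also handles notifications, batches, and the spec's error codes):

```python
import json

METHODS = {"add": lambda a, b: a + b}  # hypothetical service methods

def handle(raw: str) -> str:
    """Minimal JSON-RPC 2.0 dispatch: parse, look up the method, call it,
    and echo the request id back in the response envelope."""
    req = json.loads(raw)
    result = METHODS[req["method"]](*req["params"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

resp = handle('{"jsonrpc": "2.0", "id": 1, "method": "add", "params": [2, 3]}')
```

The same envelope works over HTTP, WebSockets, or any other transport, which is part of its appeal.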
You write the moral equivalent of __attribute__((graphql)) on your code and boom, you can query it. You want mutations? __attribute__((graphql_root_mutation)). If your object is stored in TAO, everything works perfectly. You can add custom fields or even custom objects implemented in PHP that can do whatever the hell they want.
You never have to think about a database. And you barely even have to think about security if you're using pre-existing objects, the rules for which users should be allowed to see which objects are written in one centralized place and enforced everywhere in the codebase, graphql included.
Of course, it only works that well because there are multiple ~20-30 person teams maintaining that infrastructure. And GraphQL was designed with Facebook's infrastructure in mind.
Outside of Facebook, I cannot see myself using GraphQL for any reason.
This paper has a pretty naive understanding of REST. While 'REST' can mean something different depending on who you talk to, I think in the context of a paper we can expect a bit more research.
This paper takes a typical query, and just overlays it on a CRUD-style REST endpoint, but it ignores:
* REST can definitely be strongly typed. There are OpenAPI and JSON Schema. In many instances I think this will work better than GraphQL, as the data you get back is more likely to follow a fixed format.
* Mutations. In GraphQL this is an RPC-style feature, whereas with the typical 'REST as CRUD' API you write in the same format you read, which can make this a lot simpler.
* Discovery. The paper discusses that REST can benefit from an 'IDE', but if you do REST well your browser is your IDE and you serve text/html as well as JSON. Ignoring good hypermedia APIs that serve multiple formats, there's also systems that let you test APIs based on for example OpenAPI schemas.
To me this is not a comparison between REST and GraphQL, but a comparison between GraphQL and a GET request on a poorly documented, low effort HTTP endpoint. To be fair, many APIs are just that.
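To make the typing bullet concrete, here's a minimal sketch of validating a REST response against a JSON Schema with Python's jsonschema library (the schema and payload are invented for illustration):

```python
# Validate a hypothetical /articles response against a JSON Schema.
from jsonschema import ValidationError, validate

article_schema = {
    "type": "object",
    "required": ["id", "title"],
    "properties": {
        "id": {"type": "integer"},
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
}

def check(payload):
    # Returns "ok" or a description of the first schema violation.
    try:
        validate(instance=payload, schema=article_schema)
        return "ok"
    except ValidationError as err:
        return f"schema violation: {err.message}"

print(check({"id": 7, "title": "REST vs GraphQL", "tags": ["api"]}))  # ok
print(check({"id": "7", "title": "REST vs GraphQL"}))  # schema violation
```

Wire something like `check` into your response-serialization tests and a plain REST API gets much the same contract guarantee a GraphQL schema gives you.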
My experience is that almost all REST APIs are like that, give or take some documentation. Though I have read about both, I have never worked with a JSON Schema or OpenAPI API. I have worked with, conservatively, at least a hundred REST APIs, including both public and private ones.
I’m mildly optimistic about GraphQL, but without a doubt I think one of the best parts is that all GraphQL APIs have typed schemas. There is no discretion on the part of the API provider.
This is similar to a debate regarding programming languages. Some people like more restrictive languages, because they don’t trust others or themselves to reliably use good discretion in the use of powerful features or omission of optional safety checks like types.
> Some people like more restrictive languages, because they don’t trust others or themselves to reliably use good discretion in the use of powerful features
For me, it's not even that. It's that I've discovered that, even when used with discretion, the simple presence of powerful features still makes it harder to reason about how things work. Discretion around powerful features isn't a one-and-done thing; it's an ongoing process of double-checking and verifying for yourself whether that feature is being used properly every time you touch or interact with code that uses it.
A common thread in many seminal papers and essays in the history of programming languages ("Can Programming Be Liberated from the von Neumann Style?", "Go To Statement Considered Harmful", "Lambda: The Ultimate GOTO", to name a few) is that, ironically enough, when you take a bird's-eye look at things, reducing the power of the tools tends to increase the power of the programmer.
I should say, I'm not meaning that to be a statement in favor of static or dynamic typing; I try to stay out of that debate.[1] To me, the real problem with JSON is that, at least under typical usage, it tends to be weakly typed. (Which is a powerful feature; strong typing is a way of restricting what you can do.) JSON's type system is less expressive than that of basically any well-known programming language, including JavaScript itself, and, out of necessity, people tend to deal with that by overloading its types in ways that may force consumers to use out-of-band knowledge to interpret a message properly. This has a tendency to happen even in the presence of OpenAPI specs or JSON schemata.
Your response is to the typing nature of GraphQL, but this isn't the only strong reason for GraphQL. GraphQL reduces both response payload size to only what is necessary for the client, but also reduces round-trips.
Unless you are writing custom REST endpoints for every aspect of your UI, most formulations of what data a consumer needs will usually result in multiple trips to the server. So, you either scale out by having custom REST endpoints to handle each such case (which both feels odd and presumably won't scale very well), or you take advantage of GraphQL doing this natively.
The problem is you might end up writing strange endpoints just to satisfy a specific UI page.
For example, take a small blog, that holds several posts, each with an author.
In GraphQL, you could fetch all the posts, and all info related to the author in the same query (without any new, funny business).
In REST, if you wanted this in a single query, you'd need to define some endpoint like /post-and-authors-list that fetches all the posts along with complete author information for each post. Most REST implementations would instead involve a small set of CRUD calls on all posts, and then several queries to fetch each author's further information (based on the author ID in the post).
So it's not that REST can't implement anything GraphQL can; it's that there is a lot more work involved, it is harder to scale out with developers, etc.
What's the difference between "losing it" and "it doesn't remain"? I didn't say GraphQL lost the feature, I said it lost the advantage.
I also see the paper says, in bold, "it is unfair to attribute the gains observed with GraphQL only to the IDE", but elsewhere they trumpet the ability of the IDE to interpret and enforce the schema as significant. And again, we can accept that the feature is useful without conceding that GraphQL has the advantage. We could just as well have an IDE that understands OpenAPI specs and accomplish the same thing.
"By triangulating the results of RQ1 and RQ2, it is clear that the avaliability of a type system — expressed as a schema — is one of the key benefits provided by GraphQL, in terms of reducing the effort to implement queries, when compared
to REST."
The paper itself was using the word benefit in comparison to REST.
FWIW, the types possible to express in GQL are quite a lot weaker than the ones in OpenAPI, where you can, for example, say not only that a field is a string, but also that it matches a certain format.
You can create custom scalars in GraphQL, which then have shared logic for serialization on the backend and frontend. That knowledge on format does have to be sort of out-of-band but that could probably be added to the standard.
I think REST is definitely playing catch up here though. A standard gql setup generally comes out of the box with type generation, web playgrounds, etc. Of course it's possible now with REST but I really think that gql helped push the state of the art in this space.
I'd say mostly lack of knowledge about it, but sometimes people who are aware it exists don't like the fact that it allows arbitrary queries; they think that only implementing specific methods provides more security. That can be achieved by limiting the OData access in any case, so it comes back to lack of knowledge.
Another factor may be that Microsoft used it, so some sectors thought it was uncool and spent a long time building other stuff instead.
I honestly can't say. It's quite easy to use and broadly supported. Even excel can directly pull data from oData Sources with a couple of clicks. If anyone else has some insights into the lack of uptake of oData I would love to hear them though.
We get that out of the box with boring old django/sqlize.
We do happen to do more interesting things elsewhere (GPGPU, columnar streaming analytics, reactive frontend, ...), so we accept sharp edges when there's a 10X+ lift, but we ended up backing off on this layer: it was adding complexity via immature layers versus what felt like similar perceived benefits from reliable, boring stuff. It doesn't scratch all our itches, but the big ones are at a layer above/beyond GraphQL vs REST (e.g. auth).
As someone who's worked with both approaches, at all levels of the stack (both backend and frontend), I think the paper (at least, its abstract) is missing the point.
While I do "feel" faster when working with GraphQL, I think the main benefit is that it abstracts away the busywork of transferring data between frontend and backend.
Writing a backend becomes more about describing what data is available, on what terms, how it fits together, and the operations you can do on it (mutations). Gone is the busywork of writing routes, handling CRUD in a zillion slightly different ways, etc.
On the frontend, Apollo-client lets me just write my app, not having to handle data-fetching, loading & caching for the zillionth time. I need an additional field? Just add it to the query! No need to get everyone involved.
Of course, there are traps you can easily fall into with Apollo's caching, N+1 queries on the backend, etc. But in my experience, while you do have to "drop down a level" sometimes to fix these, you spend most of your time at the "higher level".
I have the opposite experience working on the backend. GraphQL is too prone to N+1s and other inefficiencies because it's in direct opposition to the way data is stored. It's much harder to make an efficient resolver than it is to make an efficient REST endpoint.
Errors are harder, because HTTP codes are not used, and monitoring is a lot more complicated. I see value in GraphQL, but I think it brings a lot of cost to the table.
In my experience, N+1 queries are roughly as easy to run into with naive GraphQL as with naive REST. When you are optimizing the queries, it's slightly easier to get rid of the N+1 behavior in GraphQL than in REST because you can see exactly what data the client is trying to get and can fetch data differently based on that instead of needing to come up with your own way for the API to describe what data to fetch.
> In my experience, N+1 queries are roughly as easy to run into with naive GraphQL as with naive REST.
We're currently dealing with this on a REST API right now. If you have highly relational data, you run N+1 risks regardless. I think people misplace blame on REST/GraphQL when they will have fundamental data issues either way.
We're exploring a few ways of solving it, but they honestly all resemble GraphQL.
Imagine you're working on a bespoke blog engine for your company, so you have a GraphQL API that lets you query for all articles or just a single article.
Now the product folks want to add comments, so you add a new comments field on your article, with a new field resolver that queries the database for comments like SELECT * FROM comments WHERE article_id = ?.
This works great when viewing a single article: the frontend makes a single query to the GraphQL API, which makes 2 database queries: one for the article details and one for the comments.
But then the product folks want to update the article search page with an expandable comment section for each article, so you can preview the comments for an article. Easy enough, all you have to do is add the comments field to the GraphQL query for the article search page. It's a GraphQL success story!
But now when the article search page loads, it runs 1 GraphQL query that retrieves 20 articles, and for each one it runs the SQL query to load comments. So instead of making 2 database queries it's 21.
Now we have performance scaling along with the number of results on the page, which is not ideal.
This problem exists in REST APIs in the exact same way though, unless you specifically optimize for this way of querying. But then you can do the same thing with GraphQL, so in the end GraphQL is not worse off, actually rather better because at least the problem exists only between backend and database, not between frontend and backend and database.
The idea is that there is a sort of contract between the backend and the frontend.
If I conscientiously create an API endpoint that allows you to fetch articles with their comments, I'm gonna craft that query to avoid the N+1 problem. So when someone uses it, perf isn't scaling linearly.
With GraphQL, you can have a frontend person add comments without anyone from the backend team modifying anything. They say "yay it works, GraphQL is amazing", and nobody realizes this is causing scaling problems because nobody on backend actually thought about it.
> With GraphQL, you can have a frontend person add comments without anyone from the backend team modifying anything
Well, they can do that with REST as well. They will just make a bunch of requests. That is what usually happens in the real world.
However, the difference is that with REST, it all just looks like isolated, independent requests. With GraphQL it becomes clear that someone wants to query all comments in just one query, so it's much easier to detect and optimize for that case.
I don't know, I still think you're much more likely to have a frontend dev get annoyed by having to kick off 20 requests (and seeing that perf impact in their own devtools) and ask backend to give them an endpoint that can get all the content in one go.
Same. This is why I was looking for concrete examples because the problem isn’t a graphql problem, it exists in any interface. Kinda a trick question, but it was good reading people’s reasoning ;)
I agree with GP as well, but I do think there are unique circumstances with GraphQL.
One difference is that if the front-end makes N+1 REST calls, it's (hopefully) obvious to the front-end developer. It's also generally easy to map the REST requests to the database queries being made.
Swap it all out for a single GraphQL query and now you have no idea how it will perform or whether it was optimized for the specific fields you are requesting.
Another difference is that REST-style solutions won't work for GraphQL. Imagine you're making a bunch of REST calls, e.g. querying for a list of articles then querying for a list of comments for each one. You can ask the backend team for a new endpoint that returns them all in one query, easy enough.
But with GraphQL schemas, the potential graph of data is too large to write custom SQL queries that efficiently fetch everything in one batch. For example:
{
  articles {
    title
    contents
    author {
      name
      articles {
        title
        contents
        comments {
          content
          author {
            ...
          }
        }
      }
    }
    comments {
      content
      author {
        name
      }
    }
  }
}
Maybe a bit contrived, but it illustrates my point. Due to the ability to traverse relationships it's much easier to find yourself in a situation where the implementation of the GraphQL resolvers is not ideal for the usage, but it theoretically will work.
The solution for this is to use something like dataloader btw. It essentially waits for all these queries (I believe it uses queueMicrotask under the hood) and batches them. Not unlike other db batching proxies.
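The batching idea can be sketched without the real dataloader library; this stripped-down synchronous version (invented for illustration; the real JS dataloader schedules the dispatch on the event loop's job queue) just collects keys and issues one query:

```python
# A minimal, synchronous take on the dataloader idea: field
# resolutions enqueue the keys they need, then one batched query runs.

class BatchLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # maps a list of keys to a list of values
        self.queue = []           # (key, slot) pairs awaiting dispatch

    def load(self, key):
        slot = {}                 # filled in when dispatch() runs
        self.queue.append((key, slot))
        return slot

    def dispatch(self):
        keys = [key for key, _ in self.queue]
        for (_, slot), value in zip(self.queue, self.batch_fn(keys)):
            slot["value"] = value
        self.queue.clear()

db_queries = 0

def load_authors(author_ids):
    # Stands in for: SELECT * FROM authors WHERE id IN (...)
    global db_queries
    db_queries += 1
    return [f"author-{i}" for i in author_ids]

loader = BatchLoader(load_authors)
slots = [loader.load(i) for i in (1, 2, 3)]  # three field resolutions
loader.dispatch()                            # a single database query
print(db_queries, [s["value"] for s in slots])
```

Three separate `load` calls from three resolvers collapse into one `WHERE id IN (...)`-style query, which is the whole trick.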
If that’s a problem, have the frontend run a separate query for the article’s comments upon expansion of the section, like the REST version would have done. You still have easier caching in the GQL version.
Use something like Postgraphile to create your graphql endpoint and it will issue a single query then massage the results into what it needs them to be.
I think the classic example is article list with author details. You load the articles (one query), then for each article you load its author (N queries). Naive use of an ORM causes this:
articles = Article.load(limit=10)
for article in articles:
    # .author implicitly runs a query
    author_name = article.author.name
This can be avoided in ORMs providing prefetching (e.g. `Article.load(eager_load='author')`).
A query that lets you fetch a list of posts, where each post has an author field. If you ask for these, a naive backend will "resolve" the author for each post individually (so N queries to the DB, fetching 1 author each time) as opposed to a single resolution for the batch of authors.
This is definitely my experience, particularly in using Hasura (via the wonderful NHost[0], a fantastic database service). I believe that Hasura knows how to avoid N+1 queries.
Hasura generates one SQL query per top-level field. It takes advantage of Postgres's JSON features to compile a deeply nested GraphQL query into a single query.
It's pretty impressive too. For any deeply-nested query, like fetching the 1-to-many children of 1-to-many children of some entity, a lot of ORMs will often have to resort to doing N+1 queries, which adds a surprising amount of latency to a request.
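Hasura's generated SQL is Postgres-specific (json_agg and friends), but the one-query shape it produces can be sketched with SQLite's JSON functions from Python's standard library (tables and data invented for illustration):

```python
# One SQL round trip for articles plus nested comments, using JSON
# aggregation (SQLite's json_group_array here; Postgres has json_agg).
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY, article_id INTEGER, content TEXT);
    INSERT INTO articles VALUES (1, 'Intro'), (2, 'Follow-up');
    INSERT INTO comments VALUES (1, 1, 'Nice'), (2, 1, 'Agreed'), (3, 2, 'Hmm');
""")

rows = db.execute("""
    SELECT a.title,
           (SELECT json_group_array(json_object('content', c.content))
            FROM comments c WHERE c.article_id = a.id)
    FROM articles a ORDER BY a.id
""").fetchall()

# Each row already carries its comments as a JSON array: no N+1.
articles = [{"title": t, "comments": json.loads(c)} for t, c in rows]
print(articles)
```

The result rows come back pre-nested, so a resolver can hand them straight to the GraphQL response without a comments query per article.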
As someone who has heavily used both REST and GraphQL, in the past and currently, I find this comparison quite similar to raw HTML versus React. When it boils down, GraphQL is just REST with types and the capability of performing multiple requests in one, much like React is a way of creating component types from HTML elements.
GraphQL takes a bit more work to get set up and running, but it's so worth it in the end if you value typing. One of the most impressive parts of GQL that REST isn't capable of matching is the vast tooling ecosystem enabled by a standardized, typed data-transfer model; the fact that anyone can write a library that can automatically hook into your model is incredibly powerful.
I find most people who have negative opinions on GraphQL simply haven't taken the dive yet to fully understand it, and they typically overthink what it actually does. In my opinion, if you're writing a progressive web/mobile app with relational data, you should hands down be using GraphQL.
Well, it's literally just a tree of functions with extra metadata to make it extra powerful. I've also used both REST and GQL heavily, and GQL essentially solves lots of problems I had with REST. I'm never writing REST again, and when I see that something I'm consuming also has a GQL API, it makes me so happy to know every single field available to me.
The only use case where it annoys me is a CMS for building an entire site. The content is recursive (A -> B -> A -> B), and the lack of recursion in GraphQL as of yet makes it manual work to extract every level that you need.
Does this actually pass as an academic study? It is presented in the format of a study (it looks like one) but to suggest GraphQL is better than REST based on the expenditure of effort for building out the first iteration of code for cherry picked scenarios is a first-year undergraduate CS101 study at best. Also, who funded this study?
I want to know the actual costs of GraphQL. What are the tradeoffs with REST? What are the limitations? We're not going to hear from managers who are damage controlling their decisions, but maybe the developers paying for GraphQL adoption wouldn't mind sharing grievances? Now that money and reputations are at stake, I think that unfortunately it will be a while before the tech community starts admitting to mistakes, just as was done with adopting ORMs.
The typical pitch for GraphQL is that it alleviates the pain of updating ORM DSL every time an endpoint needs to change to satisfy frontend requirements. But aren't updates to the GraphQL DSL simply replacing those made to the ORM? The problem hasn't been eliminated; it has been replaced by another.
If the worst part about working with a REST api with raw sql calls is that you have to make a few straightforward changes, while maintaining complete control of your sql, you're doing great.
For me this is the point that I feel isn't well argued against by GraphQL proponents. If I'm a JavaScript developer facing a similar learning curve, why couldn't I just whip up a simple stateless frontend (in Node, since it's the same language) and write the query in SQL? More to the point, the skills learnt by doing so are more transferable. You also give your data store a chance to be more performant (e.g. better query plans for a typical DB) by using its own DSL, and there's a lot less complexity (components, libraries) in your solution.
Also, from what I've seen, the approaches that may result in single, ideal query plans in your database increase the complexity of GraphQL significantly, with "GraphQL to SQL" compilers that still may not give tuned SQL, especially with large graphs. It reminds me of using ORMs, where the generated SQL often wasn't performant. Even if it does work, the complexity of your solution just increased, and all for what? To translate one query language into another? There are also times when it may pay NOT to expose too much of your internal domain structure in public APIs, which I feel GraphQL could encourage from a naive developer's perspective (direct internal-structure-to-contract mappings).
It feels, at least to me, like another abstraction layer that solves a problem that could also be solved at the org level. If there's suddenly a use case that requires a tuned query/algorithm/index (which happens a lot in my experience), then it's easy to add as a separate endpoint and regression-test against the other endpoints/use cases supported.
GraphQL specifies schema and documentation, and implementations usually come with introspection tools, which is not the case for REST. It would be more fair to compare GraphQL to something like Swagger.
That said, GraphQL libraries and tooling tend to be less fragmented and more ready to use than for equivalent REST stacks, which makes development much easier. A simple API built in Python with Graphene looks terse and declarative, while doing the same with Django REST Framework requires adding pages of plumbing code to build an equivalent REST API.
Re: GraphQL being a pain for a backend development - I don't quite understand that argument. If you want to keep a tighter control over the schema, you can have it - the API schema does not need to be a reflection of the underlying data schema with all its relationships - it should be the front-end developer's job to convince you that this or that complex relationship needs to be exposed via GraphQL and let you make sure that it's served efficiently.
This is a flawed comparison. GraphQL is probably a superior query language. The more salient question is how the complexity of back-end API endpoint implementation differs.
The answer is that it varies widely between ecosystems (i.e. you are going to have a totally different experience developing a GraphQL endpoint in PostGraphile, Graphene, and Apollo)...and the paper doesn't mention which back-end technology was used.
The results are intuitive and in favor of GraphQL, especially as query complexity increases. GraphQL makes loads of sense for modern javascript thick clients:
The front end developer is going to want the same thing that a back end developer has: an expressive, general query language.
And the API developer should want to give it to them so they aren't dragging their API around all over the place as fiddly UI & application logic changes are made[1].
I'm glad to see javascript client-server apps heading this direction, and leaving REST/HATEOAS to hypertext-oriented approaches[2].
I designed a REST API with a "well-documented" HATEOAS representation (collection+json). The primary requirement was to ensure there was no caller-side language dependency because we had multiple teams that would be calling the API and those teams might use the command line, Python, Ruby, Javascript, etc clients to consume the API results.
API adoption was low in the end: teams wanted libraries rather than a well-defined representation with embedded hyperlinks. Every consumer I heard about simply parsed the JSON bits they wanted from responses and hard-coded URL links in their client apps.
I enjoyed the design and writing of the app (I used Clojure and the liberator library). Probably wasted engineering effort, but a totally fun project to implement.
> I have never seen how does HATEOAS deliver benefit over a wiki with good documentation - that seems like better effort.
Have you used a web browser? Do you see how it handles known content types from a new website that you haven't previously visited seamlessly without you reading docs and telling it what to do with it?
That is exactly HATEOAS.
In general, understanding REST is often easiest if you think about what browsers do, since REST is the key principle underlying the design of HTTP/1.1 and was largely a rationalization of the evolved behavior of the pre-HTTP/1.1 web.
> Have you used a web browser? Do you see how it handles known content types from a new website that you haven't previously visited seamlessly without you reading docs and telling it what to do with it?
I want to get my job done with an API, not browse it. I spend most of my time figuring out what order endpoints need to be called in and what data they need to be passed, not what the endpoints are.
The shittiest HTTP APIs I've used had full lists of endpoints but nonexistent documentation of arguments. If HATEOAS solves that I can get behind it, but I don't think it does.
JSON is obviously not a native hypertext, any more than plain text or CSV is. You can, of course, embed URLs in it (or any other format) and call it a hypertext. That's usually pointless, as the links will be consumed by code (rather than a human, in the canonical HTML->browser interaction) so if the URLs aren't simply ignored, they will be dealt with generically.
Curious to me is that you seem to argue that XML is a valid hypertext; but isn't it possible to implement almost any valid XML document as a JSON document, even considering attributes?
And, have you seen Github's API implementation? They have the links and self-referential details in their particular json schemas that you seem to imply are necessary and unique for that.
If the argument is that JSON can't be rendered, isn't it arguable that XML has the same problem? We just happen to have a program with a common syntax reader that renders pages equivalently, and something equivalent could be written for a particular JSON schema.
I do not think that XML is a hypertext. So I agree with you.
HTML is a hypertext.
Yes, Github probably has one of the most as-REST-like-as-possible JSON apis, and it's mostly wasted effort. It would be just as usable, as an API, without all that stuff, since it is consumed by code, not a browser/human.
But, without clients that treat the data as a uniform interface, it's all mostly pointless, which is why most JSON APIs stop at Level 2 of the Richardson Maturity Model, and why things like GraphQL are becoming more and more popular.
In the original model of the web (which is what Fielding was describing w/ REST) where HTML was being delivered to a browser, the uniform interface works: a browser has no idea what the HTML document means but can render it to a form that the human looking at it can make sense of.
The site 42papers.com that is serving this link is built with Super Graph, a GraphQL-to-SQL compiler. Tech like this makes GraphQL clearly the easier, faster, and better option. I built Super Graph because I wanted something better than REST while building apps and did not want to go down the rabbit hole of GraphQL frameworks, where I would still need to write and maintain all the database query code for the app.
I started out not liking GraphQL because it didn't really reduce the code I needed to write, but on digging deeper it seemed to be the best way to represent a data query from an app, if only something could convert that into SQL automagically. That was the motivation for Super Graph.
Social and Information Networks category description is: Covers all aspects of computing with sound, and sound as an information channel. Includes models of sound
But over in Audio and Speech Processing the description says: General methodological and applied contributions to economics.
Admittedly I skimmed, but I observed that the paper dove really deep into the time to perform queries and implement desired behavior, yet didn't spend much time on the implementation of GraphQL vs a REST API.
One statement made that stood out to me:
> it is clear that the availability of a type system—expressed as a schema—is one of the key benefits provided by GraphQL
So I wonder what the results would look like if you had an OAS spec file for the API you were hitting (with autocomplete in VSCode), seems to me that's the best of both worlds: ease of implementation on the server side, ease of use on the client side.
This to me feels like an argument of better tooling and (self) documentation rather than one technique over another.
Yes, I've had this discussion on here before. While in REST you can more or less optimize a service to be performant at what it does, the moment there is another service from which you need to fetch some additional data, the browser is required to orchestrate that, which means you're also waiting for the round trip of the first call before you can even request the secondary data. This alone makes REST far less favorable in my opinion.
The second is that operations are standardized in GQL, whereas in REST I can pass data in through:
Query params
Body
Headers
Path variables
Each team will do this differently. In GraphQL these inputs are pretty much standardized, and you have the option to implement security through headers. This makes GQL decouple-able from HTTP (at some point). That's a great incentive by itself, since at some point the web may just be media distribution + services, where media distribution is just HTML/CSS/JS/etc. As time moves forward, there are fewer reasons for anything other than media distribution to take place over HTTP.
The top response to this is caching. In other words, every device on the planet knows HTTP, so caching in GQL becomes a secondary thought. So yeah, there's that. GQL has some variously applied caching semantics, so YMMV, but personally I still feel the gains from the first point are so dramatic that most people won't really care about this.
There's no law against putting joined data into a REST endpoint. As an example, I had to consume an API for a customer's subscriptions. They did it by the book, where Customer has Subscriptions, which have Items, which have Prices. That was a huge performance issue for us because it was 4 round trips. But they didn't need GraphQL; they just needed a fullDetails endpoint for subscriptions.
Essentially, a graph of objects that must be treated as a single entity.
I've seen quite a few REST implementations that have separate calls for the root and branches, like Order and LineItems. In the Aggregate pattern, you treat them as an entity that travels and transacts as one.
Regarding decoupling GraphQL from HTTP - do we need this? cause are there plans to use GraphQL on different protocols like raw TCP? Isn't GRPC better for raw TCP?
When we are using HTTP, diluting HTTP standards (e.g. using POST for both creating and updating, returning HTTP 200 for errors) seems hacky in GraphQL.
I always found REST simpler and more straightforward. With the progression of HTTP itself (HTTP/2, HTTP/3, etc.), the chatty behavior of REST may not be expensive.
Implementing roles and permissions in GraphQL is tricky. We need clear schema definitions to control it. There's a huge learning curve as well.
Frameworks like WCF (Windows Communication Foundation) used to support different transport level protocols, like TCP, but IMO that discussion is long over, with HTTP the obvious winner. And of course, the overhead is even less these days with HTTP/2.
I find GraphQL not being coupled to HTTP to be a problem, not a benefit. As you say, returning HTTP 200 for everything is just daft.
You're not wrong, and to resolve this, I've implemented something of an in-between for APIs I work on, which I call a "Multi-Endpoint Query". It's a RESTful(-ish) API, but there's an additional endpoint called "query", which accepts an array of endpoints to gather information from sequentially. The most useful bit is that information gathered earlier in the array can be referenced later in the array.
When the endpoint gets this query, it runs a GET on the first endpoint. It then uses the results from that first endpoint to fill in the URI template on the second endpoint, which might become something like "/api/products/2,4,17/images". The results of all previous endpoints are merged into the response and can be used to fill subsequent URI templates as well.
As far as the client is concerned this is all done in a single API request. The first version was making literal HTTP requests to localhost, but now is built to call the endpoints directly (internally, no HTTP). This system requires some convention in the responses to make it work, but that convention ends up making responses predictable and easily reusable. Overall it's worked very well and requires almost no additional effort.
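The sequential template-filling mechanism described above can be sketched in a few lines. The `{field}` placeholder syntax and function names here are assumptions for illustration, not the author's actual API:

```typescript
// A "fetcher" stands in for running a GET against an internal endpoint.
type Fetcher = (path: string) => Record<string, unknown>;

// Run each endpoint in order, filling {key} placeholders in the URI
// template from everything gathered so far, and merge all responses.
function runQuery(endpoints: string[], get: Fetcher): Record<string, unknown> {
  let merged: Record<string, unknown> = {};
  for (const template of endpoints) {
    const path = template.replace(/\{(\w+)\}/g, (_, k) => String(merged[k] ?? ""));
    merged = { ...merged, ...get(path) };
  }
  return merged;
}
```

With a fetcher that returns `{ productIds: "2,4,17" }` for `/api/products`, a query of `["/api/products", "/api/products/{productIds}/images"]` resolves the second URI from the first response - all in a single client-facing request.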
I have a similar endpoint that also allows "POST"s which makes for seriously complex single-request interactions, but I don't use that as much due to the inherent complexity of it - at least not for client-side APIs.
When I first looked into GQL, it looked pretty fancy, but seemed to require quite a bit more maintenance to define the query space. This was before a lot of today's libraries were available. So I hacked together the Multi-Endpoint Query, assuming if it failed, I could go back to GQL when there were more libraries available. But I haven't really needed to do so since.
And there's nothing wrong with that. You essentially invented your own GQL because it contained some awesome ideas. Also I think it would be OK if new things similar to GQL came out and challenged it. I don't think GQL is perfect, and I personally wish it had namespacing and more thought about caching built in.
That may be true, but has no bearing on the fact that being able to add a “fields” param is not by any measure an alternative to what GraphQL provides.
Out of all the API implementations I've created in my career I've had the most success implementing BFF (Backends for Frontends) with simple RPC'ish endpoints. Everything else never quite fit right and wasted so many hours.
Not specifically, but it could be used for it. GraphQL is just a message spec (like, dare I say, SOAP). But if you use it with a BFF, then the dynamic query aspects seem less valuable. With a BFF, your app/UI and endpoints can be tightly coupled. If a specific page/component/action needs data structured in a specific way, then you just make the endpoint do exactly that. You can accomplish this over GraphQL, but it seems like an unnecessary extra layer at that point.
GraphQL has two notable advantages, neither of which are measured here. First by allowing the client to specify precisely what to retrieve the backend isn't catering to a specific client. This allows for decoupling so each team can develop quicker and with fewer changes down the road.
Of course REST can also be decoupled only needing to follow conventions and defining the resource formats. What it doesn't provide is the second benefit of GraphQL which is eliminating round trips for accessing related resources. This is sometimes done by creating an intermediate 'BFF' service on the back end to serve requests as the client would like them but now that's sensitive to both client and REST API changes.
In short, it's not about the development speed of the first client/server. If that were the case use gRPC etc, generate the interface, implement and you're done. It's about the supporting ongoing changes. Perhaps gRPC or similar will become prevalent enough to replace both REST and GraphQL.
GraphQL vs. REST somewhat resembles SQL vs. (document-oriented) NoSQL. If you can agree with your clients on certain access patterns (queries), you can design your RESTful API and your database collections, respectively, to suit these.
If clients want full flexibility in what they query and avoid over-/underfetching, GraphQL and SQL (databases) might be better.
I have never implemented GraphQL, so I can't speak from experience, there, but it does look pretty cool (if a bit "heavy").
I have implemented REST. I don't like "pure" REST, where POST, PATCH, and PUT statements are submitted as XML and/or JSON.
I tend to do what I term "REST-like," sort of the way most APIs seem to do, these days, where the sending methods are done similar to standard URI GET/POST stuff (URI arguments).
I tend to have responses come back as both JSON and XML, with a translator between them. XML is useful, because I can publish a schema, but otherwise, I prefer JSON.
But REST-like is quite simple. I don't need to bring in any dependencies to provide the API, so it's fast and lightweight.
But then, I have fairly humble servers. I suspect that GraphQL would be something I'd want for more ambitious servers.
Think about an API like a control panel for a machine. It is precisely designed for an operator abstracting away the details of the machine's electrical schematic and every component. It is user focused. GraphQL is like opening up the entire machine, stripping off the panels and working on live circuits.
IMO GraphQL is great to quickly develop something. If your front-end is making 18 queries to REST API and you're making a case for how awesome GraphQL is, I think perhaps your REST API design needs to be properly thought out. I personally like to start with GraphQL, once everything is ready and I know what I want, write proper API endpoints. They can be REST-ful or REST-less. One API call for one function the front-end needs to perform.
If the wiring in the machine changes and fancy new brushless motors are added, the operator doesn't need to worry about it. They can always replace the control panel with a new v2 version if they want additional features, but we guarantee that the v1 version will continue to function. This is beautiful design.
APIs don't have to be REST-ful and they should not be some kind of an analog of the backend database schema. API - it is in the name, "Interface".
> IMO GraphQL is great to quickly develop something. If your front-end is making 18 queries to REST API and you're making a case for how awesome GraphQL is, I think perhaps your REST API design needs to be properly thought out. I personally like to start with GraphQL, once everything is ready and I know what I want, write proper API endpoints. They can be REST-ful or REST-less. One API call for one function the front-end needs to perform.
I think a good way of doing this is to use Postgraphile/Hasura to make a quick and easy GQL API, and then when you move to your "proper API endpoints", you can use Postgraphile in schema-only mode [0] at first, and then you have the ability to arbitrarily change endpoints to use custom SQL queries instead of using Postgraphile to build your query, while leaving other, simpler endpoints to still use Postgraphile.
My problem with graphql always comes down to permissions and performance.
I feel like we went through the thought process of: okay, front-ends are smart enough to ask for the data that they want, so why not just let them structure their own queries? But my problem is, the only time a client is smart enough to just ask for whatever it wants is if I'm building an admin application; otherwise, all sorts of access control needs to be added at various levels of most queries, something I've never felt GraphQL handles well.
Then there's the performance stuff, hitting caches or doing joins, it's much harder to make a GraphQL backend know when it can do these things, short of simple cache lookups by very limited keys.
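For what it's worth, the usual mitigation for the join/N+1 side of this is request-scoped batching (the DataLoader pattern): concurrent resolver calls within one request are collected and served by a single backend query. A minimal sketch of the idea, not the real library:

```typescript
// Collects keys requested in the same tick and resolves them with one
// batched call, instead of one backend query per resolver invocation.
class BatchLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      // Schedule one flush per tick so concurrent calls share a batch.
      if (this.queue.length === 1) queueMicrotask(() => this.flush());
    });
  }

  private async flush() {
    const batch = this.queue.splice(0);
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}
```

This helps, but it only batches within one request; it doesn't give you the cross-request cacheability that a fixed REST endpoint gets for free, which is the commenter's point.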
With REST, I'm somewhere in the middle. I'm saying, okay, these are the resources, and these are the actions. You might get a little more or a little less data than you wanted with your response, but I know exactly what you're getting, can optimize it, and test the shit out of it.
I'm by no means resistant to new technology, I have just never worked on a project that avoided these pitfalls.
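For context, the common answer to the access-control concern above is to enforce permissions inside individual resolvers rather than per endpoint, which is exactly the scattering the commenter dislikes. A minimal sketch; the `Context` shape and role names are made up for illustration:

```typescript
// Request context carrying the caller's role, as a GraphQL server
// typically threads it through to every resolver.
interface Context { role: "admin" | "user" }

const resolvers = {
  User: {
    // Field-level access control: only admins may read email addresses.
    // Every sensitive field needs a check like this, at every level of
    // the graph a query can reach - which is the maintenance burden.
    email: (user: { id: string; email: string }, _args: unknown, ctx: Context) => {
      if (ctx.role !== "admin") throw new Error("Forbidden");
      return user.email;
    },
  },
};
```

Compare this with REST, where a single authorization check on the endpoint covers the whole fixed response shape.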
The two are not exclusionary. For me it is like having a higher abstraction API that provides for 80% of client developer needs, and a 'closer to the metal' API that provides for the remaining 20%. REST gives the popular/frequent queries optimized and canned, while GraphQL gives the power to create completely custom adhoc queries as needed.
If you are creating a data service then I think you really need to have both. Which you do, and how much of it, is more likely to be dictated by time and budget. Many will need to figure out who their 80% audience is and prioritize based on that.
Still, a few REST services can be done first, then GraphQL added with no mutators, then mutators and more queryable data added, etc. Start with the bare bones, just like you would a startup MVP, and grow organically from there.
A while ago I decided to build another one of those Hacker News clones, and decided to use GraphQL. Everything was fine, until I got to the comments...
You cannot ask for all the children of a main comment or a thread; something like "give me all the comments and sub-comments of thread X" is impossible. I was quite shocked, because I read nowhere about this limitation. I solved it by adding an extra "father" field to my response, so I needed to organize and sort the data on the front-end, instead of using an already-sorted JSON like would be possible with REST.
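The workaround described above - returning a flat list with a parent pointer and rebuilding the thread client-side - can be sketched like this (field and function names are assumptions, mirroring the "father" field idea):

```typescript
// Flat comment as returned by the API: a parent pointer instead of nesting.
interface FlatComment { id: number; parent: number | null; text: string }
interface CommentNode extends FlatComment { children: CommentNode[] }

// Rebuild the thread tree on the front-end from the flat list.
function buildTree(flat: FlatComment[]): CommentNode[] {
  const byId = new Map<number, CommentNode>();
  for (const c of flat) byId.set(c.id, { ...c, children: [] });
  const roots: CommentNode[] = [];
  for (const node of byId.values()) {
    if (node.parent !== null && byId.has(node.parent)) {
      byId.get(node.parent)!.children.push(node); // attach under its parent
    } else {
      roots.push(node); // top-level comment
    }
  }
  return roots;
}
```

This works to arbitrary depth in a single query, at the cost of moving the sorting/nesting work into the client.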
With all due respect, I find that most GraphQL tests and examples return mostly simple data that is kinda easy anyway.
And if someone else is interested to take a look at my clone project (it's in Clojure), for the GraphQL issue you can search for "request recursively nested objects".
Am I missing something, or was this only about implementing queries with a fixed backend?
What are the numbers on implementing these, in languages other than JavaScript? Last time I looked, implementing GraphQL with Django was pretty janky and limited.
Serious question: given how difficult correctly parsing SELECT commands (and avoiding SQLi and other horrible issues like db parser impl bugs) is, why hasn't someone implemented a db whose query language is graphQL?
GraphQL is great to iterate with once you have built up the schema. However, after some time your queries will stop evolving and at that point you are better off with a REST API for the security and performance benefits.
The real cost of GraphQL is resource usage/performance on the backend.
On the development side, the cost is that GraphQL allows developers to get away with writing really messy code whereas REST basically forces them to follow a more structured approach with better separation of concerns between components (to mirror the separation of concerns which REST endpoints naturally have). But I don't think this concern is that important because developers CAN design good software with either approach if they have the right mindset.
Using REST, it's dead easy to enable caching at the load balancer or CDN level with background updates.
But with GraphQL it's not so trivial. You'll have to do workarounds by caching POST requests (which IMO is bad practice) or use GET requests, which might not work because the URL gets too long.
Or use named queries, but then - what did you gain compared to REST?
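One concrete version of the named-query idea is a hash-based persisted query sent via GET, in the style of Apollo's Automatic Persisted Queries: the client sends a short hash of the query instead of the (potentially URL-breaking) query text, and the server looks the query up by hash. The exact wire format below follows the APQ convention but should be treated as an assumption:

```typescript
import { createHash } from "node:crypto";

// Build a cacheable GET URL carrying only a hash of the query text,
// so a CDN can key on a short, stable URL instead of a huge query string.
function persistedQueryUrl(endpoint: string, query: string): string {
  const hash = createHash("sha256").update(query).digest("hex");
  const ext = encodeURIComponent(
    JSON.stringify({ persistedQuery: { version: 1, sha256Hash: hash } })
  );
  return `${endpoint}?extensions=${ext}`;
}
```

Which somewhat proves the commenter's point: once queries are pinned down by hash, you have effectively re-created named endpoints.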
Another problem is that you expose internals and may more easily make yourself vulnerable to overload attacks.
The problems you described can be easily solved: https://wundergraph.com/features/caching
What you gain is the flexibility of GraphQL together with the performance and cacheability of REST.
On my servers I do away with either. Instead, the client submits a JSON object and gets a JSON answer - simple HTTP-based RPC. But my servers, of course, are not some generic storage of knowledge for mining; they do some particular processing and administration. So clients are not given as much freedom as in GraphQL, and I am free from the headache of implementing some generic backend query service for no benefit.
There is some kind of toast on this site that never goes away telling me all kinds of info I care nothing about (new member, so and so bookmarked this etc). I would like to fallaciously suggest that only a site with such poor judgement would host papers such as this that come to such erroneous conclusions
Interesting thought because I see GraphQL as removing a layer of abstraction. Instead of modelling the domain activities and abstracting away the storage model, GraphQL can end up just being a way to express a relational model and SQL as JSON. Except it's not as featureful or complete as the real relational system.
Won't graphql shift a lot of the logic to the frontend?
Personally I don't like having business logic in the frontend (though I have had "discussions" with other devs who believe it is no problem to have logic scattered about your app).
I've read a lot about how Relay is better than the Apollo GraphQL client, but haven't found much info on the adoption of Relay vs. Apollo, and Relay is too complicated. Now this has added even more to the GraphQL vs. REST complexity.
I think they're too different for that to be meaningful. GraphQL lets you do way more in a query, which means you need fewer queries. It would be like comparing PostgreSQL with Redis, or something.
Yes, GraphQL would require fewer round trips than corresponding individual queries via REST, but that logic of mapping a GraphQL query to individual SQL queries from the DB resides in the GraphQL layer.
So, the question is how does that overhead compare to the overhead caused by multiple REST api calls?
And, in addition, how do the GraphQL layers scale as the GraphQL queries become complicated and larger.
This is comically meaningless as far as evidence for making a decision goes. It reminds me of the various garbage studies purporting to show that functional programming leads to lower defect rates.