Move application/json responses to Appendix A #379
martinbonnin wants to merge 11 commits into main

Conversation
|
For most of the last 10 years, there's been an unofficial GraphQL-over-HTTP spec which a great many custom servers implemented. There are major servers such as Apollo Server, Yoga, etc. that many people depend upon and that have been updated to conform with the latest GraphQL-over-HTTP spec, but serving GraphQL over HTTP is typically so simple that a GraphQL server can be implemented in very few lines of code, and a great many organizations have gone this route rather than adding a dependency.
This might look something like the following (scratched out of my head directly into the GitHub editor, probably don't use this in production I guess?):

```js
import { graphql } from "graphql";
import express from "express";
import schema from "./schema";

export const app = express();
app.use(express.json());
app.post("/graphql", async (req, res) => {
  if (typeof req.body !== "object" || req.body == null)
    return res.status(400).json({ error: "Invalid GraphQL request" });
  const { query, variables, operationName } = req.body;
  if (typeof query !== "string")
    return res.status(400).json({ error: "Missing query document" });
  if (variables != null && (typeof variables !== "object" || Array.isArray(variables)))
    return res.status(400).json({ error: "Invalid variables" });
  if (operationName != null && typeof operationName !== "string")
    return res.status(400).json({ error: "Invalid operation name" });
  res.json(
    await graphql({ schema, source: query, variableValues: variables, operationName }),
  );
});
```

This is all that's needed to serve GraphQL over HTTP, and it has been fine for the vast majority of users. It is the opposite of complex: it's incredibly simple. The GraphQL-over-HTTP spec as it currently stands allows this kind of very simple server.

HTTP status codes are here to serve a very specific audience: namely intermediary services on the server side of the network that do not understand GraphQL, or do not want to parse the response body, but still want to know what's going on. This can be reverse proxies, caches, logging, anomaly detection, intrusion detection, etc. HTTP status codes do not exist to benefit GraphQL clients. Clients don't need them; they simply need to know:
They should not use HTTP status codes for any other purpose: they can determine everything else from the payload. GraphQL's new HTTP status codes are only there for the backend folks: backend developers, operations, caching, SRE, etc. If a GraphQL adopter does not need these, then it feels wrong to thrust upon them the requirement to use a pre-built GraphQL-over-HTTP package that complies with all the status codes in the spec when they could solve it with the few lines of code above.

This pattern has been demonstrated since the very introduction of GraphQL in 2015 (ref: https://github.com/graphql/express-graphql/blob/ef4cff8ebf6cda2280c086e5f88de4d04e9d90f2/src/index.js) and it's just as valid today as it has always been. Your proposal effectively forces GraphQL adopters to use a more complex implementation, one that honors the various status codes, even if they won't benefit from it, because if they don't then over time the GraphQL clients they depend upon may no longer work with their custom servers, since support is optional. In my opinion, this move forces additional complexity on GraphQL adopters.

GraphQL adopters should be encouraged to use status codes, because it will likely benefit them in the long term. GraphQL server libraries and frameworks should be encouraged to support these status codes out of the box. GraphQL client libraries and frameworks should be encouraged to send requests that are compatible with status codes even if their servers don't yet support them, because it enables the server team to enable this additional functionality (at the cost of a more complex server). But we should not mark all existing servers as GraphQL-over-HTTP non-compliant as a way of "forcing" them to adjust to what we currently feel is the best practice. If they don't need it, they should not have to adopt that complexity. |
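To illustrate the "clients can determine everything else from the payload" point: the only distinctions a client actually needs to draw come from the body, not the status code. A minimal sketch (the function and category names are mine, not from the spec):

```js
// Classify a parsed GraphQL response body, ignoring the HTTP status code.
// "request-error": no "data" key at all (parse/validation/request problems).
// "partial": data present alongside field errors.
// "ok": data present and no errors.
function classifyResult(body) {
  if (!("data" in body)) return "request-error";
  if (Array.isArray(body.errors) && body.errors.length > 0) return "partial";
  return "ok";
}
```

This is a sketch of the argument being made, not library code: everything the client needs is derivable from the payload shape alone.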
|
IMO the golden path is a better way to "enforce" this. All "golden path" servers MUST support the new media type and status code. But the GraphQL-over-HTTP spec, like the GraphQL Spec itself, should remain flexible and widely compatible. |
|
@benjie can you share examples of projects that would be negatively affected by this move? Both Apollo Server and Yoga have been supporting `application/graphql-response+json` for a long time. |
benjie left a comment
For the reasons outlined in #379 (comment) I do not think we should make this change.
Any project that doesn't use an off-the-shelf GraphQL server and instead wrote their own GraphQL middleware will be impacted by this in the long run. I cannot share specific examples (I treat client engagements and private discussions with confidence) but I can tell you that I've seen quite a number of GraphQL adopters who write middleware like the above into their codebase rather than adding a dependency - who needs an extra dependency when it's so simple, that's just going to get you more dependabot alerts!
Indeed, I noted this above:
Essentially, this move requires the use of a GraphQL-over-HTTP-capable library because it increases the complexity of supporting GraphQL-over-HTTP from a few lines of code to a significantly larger undertaking that must break out parse/validate and be more precise with status codes. If you don't need that, the complexity is unnecessary, and I think mandating it would be a net negative for GraphQL. This would be akin to requiring a GraphQL client rather than just allowing plain HTTP requests. |
|
Thanks for the discussion. In general, I agree with what @benjie wrote, but I would love to hear more and see these examples of the simple servers you are talking about: who are they? Where do you see them? I would also love to add to the discussion the benefits of @martinbonnin's change: what does it help with, and for whom? It seems like we need to choose between people building custom implementations and people who use tools? |
|
My 2 cents: I agree with this statement:
and
and
But still, I'd love to weigh the pros and cons. Why's having |
|
Thanks all for the comments. I will need a bit of time to process everything, but wanted to give a few comments while we're looking at this. First off, we live in a GraphQL bubble. We read specs all day long (or at least from time to time), but that's not the case for everyone. This issue is the most upvoted issue in this repo, which is some indication that simplification would be welcome. Reading all of this, I believe there are 2 different considerations at play:
I'm mostly concerned about 2. Ten years from now, I don't want to still receive my GraphQL responses with `Content-Type: application/json`. Clarity and good defaults are what make this spec valuable. I think I'm down to rephrase this. Some parting thoughts:
I believe the past 10 years are proof that GraphQL observability is not that simple. Sure, you can make a simple implementation, but that might come back to bite you down the road, and then you end up on Twitter complaining about "200 OK". Whether this is a spec thing or a golden-path thing is an open question. It comes down to how opinionated we want to be at the end of the day. |
I can't find great public examples, because it's very hard to search for, and these are the kinds of things that people generally just put together in their own apps, which aren't typically open source. However, it's possible to find tutorials including copy-pasteable example handlers that don't use a server framework. I don't know how to enumerate the people who actually followed these tutorials, though!
Our very own "serving over HTTP" page has walked people through the general concepts of doing this themselves since September 2015, so anyone who followed that in a language other than JS (where express-graphql/Apollo Server were recommended at the time) may well have rolled their own. I wouldn't know how to go about enumerating that either!
The majority are almost certainly the latter indeed; but I don't think that means we should exclude the former... I don't think we need to choose between them at all. The current version of the spec states:
and:
Together these require clients to support at least the new version, and they should¹ support the legacy version. As mentioned above, it doesn't really make much difference to them: they just need to do the "is 200 or application/graphql-response+json" check when receiving a response and include both media types in their `Accept` header.

Servers have the much harder time, so they may choose which version they support. For maximal compatibility (i.e. if you're building a server library that others will use) they should support both, but they don't have to. This allows server implementers, the people who benefit (or not) from HTTP status codes, to make the decision as to whether they want status codes (in which case they'd probably use an off-the-shelf library that offers this by default) or whether they want simplicity/minimal bundle size (in which case they might just add a few lines of code themselves, or have their LLM do it). It allows us to serve both people rolling their own and people who use tools.
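The client-side check described above really is small. A rough sketch of what it amounts to (hypothetical helper, not taken from any client library), assuming a fetch-style status and `Content-Type` header:

```js
// Decide whether an HTTP response should be parsed as a GraphQL response,
// per the "is 200 or application/graphql-response+json" check described above.
function isGraphQLResponse(status, contentType) {
  const mediaType = (contentType || "").split(";")[0].trim().toLowerCase();
  // New media type: any status code carries a well-formed GraphQL response.
  if (mediaType === "application/graphql-response+json") return true;
  // Legacy media type: only trust the body when the server said 200.
  return mediaType === "application/json" && status === 200;
}

// Clients include both media types in their Accept header,
// preferring the new one (q-value from the spec's recommendation).
const ACCEPT = "application/graphql-response+json, application/json;q=0.9";
```

Anything failing this check would be surfaced as a transport-level error rather than parsed as an execution result.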
This still requires servers to be more complex than is required today.
If and when that happens, you as the server developer can switch out your 10 line JS function for a GraphQL server that implements the HTTP status codes, it should be very simple to achieve when you need it - add the dependency, switch out your middleware, nothing else needs to change.
And people can respond: "just use a decent GraphQL HTTP library, they handle this!" because that's what the spec will encourage of all HTTP libraries now. Hence the
Agree, and we have precedent for this. The spec should focus on compatibility and interoperability. The documentation should focus on best practices. The golden path should focus on guiding people to the pit of success (which would, indeed, require usage of the new media type). Footnotes
|
This was partially addressed by the removal of the watershed following the discussions; the spec is already simpler than it was when that issue was filed. |
* Removing Appendix A: My brain is fried and I won't have the time to process graphql/graphql-over-http#379 in time
* Add aim for other GraphQL over HTTP items
* Add Can I use.
|
@benjie been looking at this a bit over the weekend. What about changing your example like so?

```js
export const app = express();
app.use(express.json());
app.post("/graphql", async (req, res) => {
  // change response content-type to application/graphql-response+json
  res.set("Content-Type", `application/graphql-response+json; charset=utf-8`);
  // Wrap errors in an array
  if (typeof req.body !== "object" || req.body == null)
    return res.status(400).json({ errors: [{ message: "Invalid GraphQL request" }] });
  const { query, variables, operationName } = req.body;
  if (typeof query !== "string")
    return res.status(400).json({ errors: [{ message: "Missing query document" }] });
  if (variables != null && (typeof variables !== "object" || Array.isArray(variables)))
    return res.status(400).json({ errors: [{ message: "Invalid variables" }] });
  if (operationName != null && typeof operationName !== "string")
    return res.status(400).json({ errors: [{ message: "Invalid operation name" }] });
  res.json(
    await graphql({ schema, source: query, variableValues: variables, operationName }),
  );
});
```

That's one more line for setting the response content-type and a few extra chars for wrapping errors in an array. I believe it is also compliant with the current GraphQL-over-HTTP state? Sure, by returning 400 always it does not follow the recommendations, but in terms of MUST, I think it clears the bar. All of that while paving a nice future where `application/graphql-response+json` is the default. What am I missing? |
|
Isn't the snippet by @benjie intended to demonstrate what is typically found in many GraphQL servers today, rather than what could be written today? For instance, the GraphQL.NET main library includes an extremely simple server implementation within its sample, as it has always done, which just returns `application/json`. Even if this were changed, it is impossible to know how many users have simply copied the server implementation from the sample without utilizing the recommended implementation within the GraphQL.Server.All NuGet package (and updated it to v7+). Since the GraphQL package has been downloaded 74M times, and GraphQL.Server.Transports.AspNetCore only 19M times, it stands to reason that many of the other 55M downloads have their own server implementation, probably based on the sample. Further, here's a relatively new GraphQL server implementation that returns `application/json`. So, correct me if I'm wrong, but I think the point is that if |
|
@Shane32 It's all a tradeoff and I'd rather focus on the future than the past on this occasion. A clear, consistent media type with observable status codes will help the ecosystem a lot. I thought Benjie's point was that
But I can't really see the extra complexity. Are we talking about the extra line to set the response content-type and the extra chars to wrap the errors in an array? I'm down with that if it means we have a single media type for GraphQL responses. Edit: @Shane32 fun times, I just bumped into this PR of yours that is basically the same as this one, minus moving to the appendix, and was also approved by @enisdenjo. |
|
@martinbonnin No, your example above is not compliant. But @Shane32 is right, my argument against this is not complexity¹. My key concern is backwards compatibility and the innumerable APIs out there that use GraphQL servers derived from the recommended practices of the past. This WG was set up to standardize this existing pattern first and foremost, and to guide to a better future as a secondary objective. If we're changing the mission of the WG to "break the past to simplify the future" then that needs to go back to the primary working group to gain consensus on the new direction. Footnotes
|
Can you elaborate? The current spec says:
My reading of this sentence is that the meaning of "appropriate" is left to the reader. So 400 is "appropriate" if I decide so for my implementation. Sure, it's not as precise as using 405, etc., but it's what I decided made sense for my server. That would make the above implementation compliant. If that's not the intent of the current spec, I suggest we rephrase it.
For the record, current implementations can, and will, continue to function. Clients are encouraged to support both media types for the time being; I'm certainly planning to do so for Apollo Kotlin. The upgrade story is quite good and the end state is clean. I'm down to go back to the drawing board to get clarity on this. I scheduled a meeting today if anyone wants to talk. If not, we can discuss in the primary working group in a couple of weeks.
To answer @Urigo's question, I believe the "implementer vs end user" distinction does not apply here. The HTTP headers and status code are very observable behaviour for everyone doing GraphQL. Every end user has opened the network tools at least once to see the details of the HTTP request. A strong and clear path forward would help everyone there. |
Rephrased in #389, but the first issue is not related to your use of HTTP 400, it's that your example will return HTTP 200 for a GraphQL document that does not parse.
If we want to invalidate many existing custom servers out of the box then that's a concretely different purpose to that which this WG was set up for; that change in direction will need to be approved at the primary WG. Worth discussing at HTTP WG first to see if there's consensus. I've already stated my position a few times, I think breaking backwards compatibility would be a really bad move and bad look for GraphQL - it's one of our major selling points - instead, I think we should encourage people to do the right thing more gradually. I can't make today's WG because I have a school commitment, but I'll be at the next one on 30th. |
Right 👍 Updated:

```js
app.post("/graphql", async (req, res) => {
  res.set("Content-Type", `application/graphql-response+json; charset=utf-8`);
  if (typeof req.body !== "object" || req.body == null)
    return res.status(400).json({ errors: [{ message: "Invalid GraphQL request" }] });
  const { query, variables, operationName } = req.body;
  if (typeof query !== "string")
    return res.status(400).json({ errors: [{ message: "Missing query document" }] });
  if (variables != null && (typeof variables !== "object" || Array.isArray(variables)))
    return res.status(400).json({ errors: [{ message: "Invalid variables" }] });
  if (operationName != null && typeof operationName !== "string")
    return res.status(400).json({ errors: [{ message: "Invalid operation name" }] });
  const result = await graphql({ schema, source: query, variableValues: variables, operationName });
  if (!('data' in result)) return res.status(400).json(result);
  res.json(result);
});
```

The diff compared to the original snippet:

```diff
 export const app = express();
 app.use(express.json());
 app.post("/graphql", async (req, res) => {
+  res.set("Content-Type", `application/graphql-response+json; charset=utf-8`);
   if (typeof req.body !== "object" || req.body == null)
-    return res.status(400).json({ error: "Invalid GraphQL request" });
+    return res.status(400).json({ errors: [{ message: "Invalid GraphQL request" }] });
   const { query, variables, operationName } = req.body;
   if (typeof query !== "string")
-    return res.status(400).json({ error: "Missing query document" });
+    return res.status(400).json({ errors: [{ message: "Missing query document" }] });
   if (variables != null && (typeof variables !== "object" || Array.isArray(variables)))
-    return res.status(400).json({ error: "Invalid variables" });
+    return res.status(400).json({ errors: [{ message: "Invalid variables" }] });
   if (operationName != null && typeof operationName !== "string")
-    return res.status(400).json({ error: "Invalid operation name" });
-  res.json(
-    await graphql({ schema, source: query, variableValues: variables, operationName }),
-  );
+    return res.status(400).json({ errors: [{ message: "Invalid operation name" }] });
+  const result = await graphql({ schema, source: query, variableValues: variables, operationName });
+  if (!('data' in result)) return res.status(400).json(result);
+  res.json(result);
 });
```

I still believe it's very acceptable. If this is still non-compliant, let me know and I'll update it.
Right, this is the biggest question. My thinking is that omitting `application/json` is fine. I don't see this as a big bang; I expect everything to keep working. If at some point down the road something breaks, we should examine the cost of fixing the errors vs the cost of having to support both media types forever. Let's discuss this on Apr 30th. |
Not having been involved with the WG, I do not know the original intent behind it. From an outside perspective, when looking at the spec, IMHO it should describe the intended end result and not the status quo, as otherwise it will never move the line (I never quite understood the need for the watershed in previous drafts). I am also unsure why "every" existing GraphQL server out there has to be compatible with a spec that was not yet released. TL;DR: +1 to simplify. |
jerelmiller left a comment
Other than the link destination, this looks great!
|
I want to lay out where I stand on this and why, since I think the discussion has been circling the same points without converging.

First, let's be clear about what this PR actually does. It does not remove `application/json`; it moves it to Appendix A. A few things I want to address directly.

On backward compatibility. GraphQL's BC story has always been about schemas. You add fields, old clients keep working. That is the promise we make and it is worth protecting. HTTP transport details have never been part of that promise. Conflating the two muddies the conversation. And as @dariuszkuc already pointed out, graphql-over-http is a supplemental spec. Servers that do not conform to it do not break. They were not conforming to it before this PR either. They will continue to function exactly as they always have. Saying we are "invalidating existing servers" is not accurate. What we are saying is that going forward the spec's compliance bar is `application/graphql-response+json`.

On the mission of the WG. The spec has already moved on this once: the watershed for legacy `application/json` handling was removed following earlier discussions. So this is NOT a violation of the WG's mission, it is the same kind of decision being made with more information. If we need primary WG approval to confirm direction, let's get it. I would rather have that conversation explicitly than keep relitigating inside this PR.

On complexity. @martinbonnin's diff already settled this. Going from `application/json` to `application/graphql-response+json` is a handful of lines. What is not low is the cost of requiring every server, forever, to support both. That is where the actual complexity lives. That is why with Hot Chocolate 16.0.0 we have dropped `application/json`.

On "200 for everything". This is one of the most common complaints about GraphQL. Issue #329 in this repo, the most upvoted issue we have, is about it. The whole reason `application/graphql-response+json` exists is to make errors observable at the HTTP layer. We finally have a fix. The major server implementations adopted it years ago. The major clients adopted it years ago. This PR is the spec catching up to what the ecosystem already did. Calling that "breaking the ecosystem" gets it backwards.

On the simple servers. I know there are real codebases with handwritten 10-line GraphQL middleware. I am not trying to break them. The appendix exists exactly so they have something to point to. What I push back on is the idea that the spec's main normative section should be defined by what the most minimal possible implementation can do. That is not how a spec moves an ecosystem forward. The main spec defines the direction. The appendix preserves the past.

Bottom line. I am for this PR. Nothing is being deleted. The procedural concern actually cuts the other way. The BC argument is about schemas, not transports. And the spec's main section should describe where we are going, not freeze where we have been. If the answer to "why is |
Backwards compatibility is about clients, not schema.
No. GraphQL's BC story has always been about clients. Old, deployed, non-updatable clients keep working no matter what changes you make. What's the point of keeping the schema backwards compatible if the old clients can't issue requests against it any more because you've broken the transport layer?

Asymmetry is an accommodation for legacy

I was explaining on the last call that the reason for the asymmetry in the spec as it stands currently is that proper GraphQL clients must use the new media types, but there's a huge number of weird and wonderful clients out there that don't.

Ad-hoc clients

There are an unknown number of ad-hoc clients out there, for example command line scripts.

Legacy software/utilities

There are also an unknown number of abandoned GraphQL utilities (SDL downloaders, documentation generators, IDE integrations, command-line clients, clients in less popular languages, etc.) that dropping support for `application/json` would break.

Breaking the legacy
This is one of my key concerns: we say servers don't have to support the old media type since all clients MUST support the new media type, and now all those ad-hoc clients and random abandoned software no longer work. Now people go on Reddit, and instead of complaining about 200 OK (to which we can reply "use a more modern client/server and you'll automatically use the right status codes"), they complain that they're following some tutorial (or LLM answer) but X command is returning Y error and it's all really confusing. GraphQL seems too complicated; they're going back to REST.

Old content is an asset; with this change it becomes a liability

There are lots of clients, tutorials, utilities, articles, books, leaflets, podcasts, videos and so on out there that will or can never be updated. This is all useful content for GraphQL, but won't be if we make it so people can't use it anymore. Worse: it all becomes a liability rather than an asset.

Sunset
No, the spec text was always meant to be asymmetric: clients must follow the recommended practices (when you write a client, it must follow the new way of doing things), but servers should accept legacy requests for interoperability, because we can't change the past.

Sanctioning "broken" behavior
No, it explicitly requires clients to support the new behavior. It does however accommodate a legacy, since we can't change the past. If you're complaining "my car from 1920 doesn't have seatbelts - this is unsafe, why is this allowed?!" then that's kind of your problem, is it not? That's not the law sanctioning the lack of seatbelts - the law requires all new cars to have seatbelts - but it is allowing for the legacy: cars built before the rules came into effect may continue to use the roads, knowing the drivers of those cars are taking the risk. The law isn't forcing those drivers to take on the cost of retrofitting their cars with seatbelts, because that would be a large burden.

Legacy support even for new servers/schemas?!

You might think that we can say that for new servers there's no need to support the legacy media type since there are no legacy clients, but I'm not sure that's the case. I'm worried that if servers drop support for the legacy media type, the adoption story for some users will become more complex: tools that used to work and were referenced in tutorials or READMEs or similar will no longer work. New adopters who hit these speed bumps need to search out more up-to-date information; if they're using a more esoteric language they might need to build a more modern client if the existing ones are abandoned and no longer work. It may lead to more user frustration, and the ultimate reason: GraphQL explicitly chose to stop supporting old clients.

New servers should implement the new media type

I agree with making it so servers MUST support the new media type to be spec-compliant (though I would personally do this in a v2 release, so people can say they implement the v2 release of the spec).

Servers may reject any request for any reason

I also want to stress: any server may reject any request for any reason.
If a user doesn't want their server to support `application/json`, they are free to reject those requests.

Specs are about interoperability, not best practices

The focus of a specification like this is interoperability. Dropping support for a large swathe of ad-hoc and obsolete clients is a significant trade-off, and I don't find "people writing articles about 200 OK" to be a compelling reason to do so. Just tell them they're holding it wrong and they should use a more modern client/server setup. Best practices are for documentation, the golden path, and so on. As you can probably tell, this post was written without any LLM assistance, and the headings were added at the end for easier reference. |
|
I think we have actually narrowed the disagreement quite a bit with your post, and there are a few things I want to grant outright before getting into where I still see this differently. Where I agree with you.
Yes. That is the right frame. The question is which structure produces more interoperability over the long term, and that is a real question worth working through. Interoperability includes a wide array of components, from proxies to servers to clients.
I think we are not actually disagreeing about the destination, we are disagreeing about how to get there and where the legacy material lives in the document. That is a much smaller gap than this thread suggests.
Agreed. This is also consistent with what this PR enables: a server that wants to only serve `application/graphql-response+json` can do so.

On the seatbelt analogy. I want to push back on this one because I think it actually argues for the appendix structure, not against it. Your framing is that the law requires new cars to have seatbelts, but accommodates legacy cars on the road without forcing them to retrofit. That is exactly what this PR does. New servers MUST support `application/graphql-response+json`. The servers you argue for are hobby servers that crafted their middleware by themselves. If we require the new response type, and they want to comply with the spec, they can add it without dropping the old response type. Everything is fine. No break. With Hot Chocolate, we simply respond with the new media type even when the client sends `application/json` in the `Accept` header.

The disagreement is not whether to accommodate legacy. It is whether the main normative section of the spec should be defined by the legacy accommodation or by the contract going forward. The seatbelt analogy points to the second one. Driving manuals do not lead with "your car may not have seatbelts." They lead with the current rule and document the legacy case where it matters. This also takes the weight off the legacy concern. If a hobby server really needs `application/json`, the appendix is there for it.

On "BC is about clients, not schema." I think we are using "backwards compatibility" to mean different things, and that is muddying the conversation. GraphQL's BC story, as it has been communicated for a decade, is about the schema contract. Add fields, do not remove them, do not change types. That is what gets put in talks, blog posts, and design guides. "Old clients keep working" is a consequence of schema BC, it is not the definition. The GraphQL spec itself is explicit that it is not concerned with transport. Clients like Relay do not even provide a network implementation. That has always been treated as an implementation detail. People can plug in whatever they want. You are now extending "BC" to mean the HTTP transport surface.
That is a move worth making explicitly if you want to make it, but it is not a shared starting assumption that I can just nod along to. The transport layer has its own evolution mechanics: media types, content negotiation, and status codes.

On the ad-hoc clients.
And as I noted above, a server is free under HTTP to respond with whichever media type it chooses. The thing that would break ad-hoc scripts is a status code change, where an error response now returns 400 instead of 200.

On Hot Chocolate 16 dropping `application/json`. I want to clarify this because I think it has been read as "Michael wants to force every server to do what HC did." That is not the proposal. Hot Chocolate's decision to drop `application/json` was made for its own users and its own reasons. What this PR changes is whether servers are required to serve `application/json` to remain spec-compliant. |
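A server that wants to keep serving both media types needs only a tiny piece of content negotiation on top of the handlers shown earlier. A rough sketch (my own simplification: it ignores q-values and wildcards, and the default-to-legacy choice is one possible policy, not something the spec mandates):

```js
// Pick a response media type from the request's Accept header.
// Defaults to the legacy type when the client didn't opt in to the new one,
// which keeps ad-hoc clients working.
function selectMediaType(acceptHeader) {
  const accepted = (acceptHeader || "")
    .split(",")
    .map((part) => part.split(";")[0].trim().toLowerCase());
  if (accepted.includes("application/graphql-response+json")) {
    return "application/graphql-response+json";
  }
  return "application/json";
}
```

In an express handler this would drive the `res.set("Content-Type", ...)` call and the choice of status code behavior for request errors.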
|
I do very much agree with not focusing on application/json heavily, but rather defining it as legacy and treating it as an appendix. The library/framework authors will need to make the ultimate decision on how much legacy support is required. For existing ones, they might want to support application/json for some time, or maybe forever. If I start a new one, I might start without it. We need to support that decision-making by making sure the legacy format is clearly documented and can be referenced, which I believe we do. |
This is what I'm concerned with, as legacy (and sometimes even new) GraphQL servers often only support `application/json`.
This is a prime example. In my opinion, there's no reason to drop support for `application/json`. Here at GraphQL.NET, we do our best to support the desires of the developer, whatever that may be. As an example, GraphQL.NET targets .NET Standard 2.0: the developer can choose to run the latest version of GraphQL.NET on .NET Framework, .NET Core 2.1, 3.1, or any newer version. It runs on them all. (Hot Chocolate only supports .NET 8+.) However, we recommend best practices. So while CSRF protection is enabled by default, it can be easily disabled. Not trying to bash Hot Chocolate here at all -- it certainly supports some features that GraphQL.NET doesn't, such as incremental delivery -- just pointing out a difference in viewpoint between the library developers' teams. As they said, "they did it for their users and their reasons". Fine. If we can specify that GraphQL clients MUST accept `application/json` responses from these legacy servers, then nothing breaks. Is there any way this could be written into the spec? |
> `application/json` and the `application/graphql-response+json` responses. When
> doing this, it is recommended that the client set the `Accept` header to
> `application/graphql-response+json, application/json;q=0.9`.
|
|
| Regardless, a client MUST accept 2xx responses with the `application/json`
| media type, even if not specified in the `Accept` header, processing them
| as if they had the `application/graphql-response+json` media type.
Something like this.
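For illustration, the rule in that suggestion boils down to a few lines of client code. A sketch (the `interpretResponse` helper and its result shape are invented here, not from any real library):

```typescript
// Hypothetical client-side helper implementing the suggested rule:
// - `application/graphql-response+json` bodies are GraphQL responses at any status;
// - 2xx `application/json` bodies are processed the same way;
// - anything else is a transport-level error.
type Interpreted =
  | { kind: "graphql"; result: unknown }
  | { kind: "transport-error"; status: number };

function interpretResponse(status: number, contentType: string, body: string): Interpreted {
  if (contentType.includes("application/graphql-response+json")) {
    // Modern media type: trust the body regardless of status code.
    return { kind: "graphql", result: JSON.parse(body) };
  }
  if (contentType.includes("application/json") && status >= 200 && status < 300) {
    // Legacy media type: a 2xx body is processed as if it were modern.
    return { kind: "graphql", result: JSON.parse(body) };
  }
  return { kind: "transport-error", status };
}
```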
Yeah, I'm much more concerned about client libraries supporting `application/json` than about servers. But when client libraries stop supporting `application/json`, legacy servers break.
|
Valid argument. But again, the server operator has a relationship with their users. They can choose to break support or not. Old GraphQL servers didn't require CSRF headers for 'basic' requests. New servers might. This breaks client support, but it would be a purposeful decision made by the server operator. Perhaps a company wants to log GraphQL errors at the transport layer via status codes. Perhaps they value this over ensuring 100% backwards compatibility for existing clients. They issue a notice that they are transitioning, give client devs a certain number of months to transition, then discontinue `application/json`.

So I think it's much more important to have modern client libraries and tooling continue to support `application/json`.
|
For a new server, they can cater to their new clients. Not so important to support old clients for a new server. Much more important for new clients to support old servers. |
I have no problem with this -- specifically, that handwritten 10-line GraphQL middleware doesn't meet spec. But I think tooling/clients should support these "legacy" servers, which is very easily achieved by simply requiring them to interpret 2xx `application/json` responses as GraphQL responses.
I know, but everyone seems to want |
|
To add to this by @Shane32
I actually think the reality is even stronger: every general-purpose client library or tool will have to support `application/json` basically indefinitely. GraphiQL, for example, will probably never be allowed to remove `application/json` support. But I do think that is fine.
|
Right, and the best way to address this is by enforcing client compatibility (imo).
|
@Shane32 Can you elaborate on that point?
|
|
Here is a sample from the popular urql library:

```ts
response = await (operation.context.fetch || fetch)(url, fetchOptions);

const contentType = response.headers.get('Content-Type') || '';
let results: AsyncIterable<ExecutionResult>;
if (/multipart\/mixed/i.test(contentType)) {
  results = parseMultipartMixed(contentType, response);
} else if (/text\/event-stream/i.test(contentType)) {
  results = parseEventStream(response);
} else if (!/text\//i.test(contentType)) {
  results = parseJSON(response);
} else {
  results = parseMaybeJSON(response);
}
```

As you can see, the content type is validated before attempting to parse the result. In this case there is a fallback to parse any content type as JSON, regardless of status code.

Here's a sample from the graphql-client C# library:

```csharp
/// <summary>
/// Delegate to determine if GraphQL response may be properly deserialized into <see cref="GraphQLResponse{T}"/>.
/// Note that compatible to the draft graphql-over-http spec GraphQL Server MAY return 4xx status codes (401/403, etc.)
/// with well-formed GraphQL response containing errors collection.
/// </summary>
public Func<HttpResponseMessage, bool> IsValidResponseToDeserialize { get; set; } = DefaultIsValidResponseToDeserialize;

private static readonly IReadOnlyCollection<string> _acceptedResponseContentTypes = new[] { "application/graphql+json", "application/json", "application/graphql-response+json" };

public static bool DefaultIsValidResponseToDeserialize(HttpResponseMessage r)
{
    if (r.Content.Headers.ContentType?.MediaType != null && !_acceptedResponseContentTypes.Contains(r.Content.Headers.ContentType.MediaType))
        return false;

    return r.IsSuccessStatusCode || r.StatusCode == HttpStatusCode.BadRequest;
}
```

In this case the validation is more strict; the response is refused if the response content type was specified but does not match a known/whitelisted content type. The response is also rejected if it is not 2xx or 400 (contrary to the included comment).

Here's the Apollo GraphQL client code:

```ts
function parseJsonEncoding(response: Response, bodyText: string) {
  if (response.status >= 300) {
    throw new ServerError(
      `Response not successful: Received status code ${response.status}`,
      { response, bodyText }
    );
  }
  try {
    return JSON.parse(bodyText);
  } catch (err) {
    throw new ServerParseError(err, { response, bodyText });
  }
}

function parseGraphQLResponseJsonEncoding(
  response: Response,
  bodyText: string
) {
  try {
    return JSON.parse(bodyText);
  } catch (err) {
    throw new ServerParseError(err, { response, bodyText });
  }
}

function parseResponse(response: Response, bodyText: string) {
  const contentType = response.headers.get("content-type");
  if (contentType?.includes("application/graphql-response+json")) {
    return parseGraphQLResponseJsonEncoding(response, bodyText);
  }
  return parseJsonEncoding(response, bodyText);
}
```

This implementation works differently still: `application/graphql-response+json` responses are parsed regardless of status code, while all other responses are rejected unless the status code is below 300.

As stated above, I suggest we require 2xx `application/json` responses to be interpreted as GraphQL responses. If the requirement was not added, as an example, perhaps a future client release could drop `application/json` support entirely.
|
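Adjusting the Apollo-style code above to satisfy that suggested requirement would only change where the status code matters. A hypothetical rework (names and error shapes are simplified for illustration, not Apollo's actual API):

```typescript
// Hypothetical rework of a client's parse step: the status code only gates
// the legacy media type, and only non-2xx legacy responses become
// transport-level errors.
function parseResponseCompliant(status: number, contentType: string, bodyText: string): unknown {
  if (contentType.includes("application/graphql-response+json")) {
    return JSON.parse(bodyText); // a GraphQL response at any status code
  }
  if (status >= 200 && status < 300) {
    return JSON.parse(bodyText); // 2xx application/json: also a GraphQL response
  }
  throw new Error(`Response not successful: Received status code ${status}`);
}
```

Under this shape, a legacy server returning 200 with `application/json` keeps working unchanged, while a 401/403 without the modern media type still surfaces as a transport error.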
|
Thanks @Shane32 for providing those examples. My question was more in the context of what the spec can realistically enforce on clients. I agree that those clients are very compatible today and that dropping `application/json` would be a breaking change. But there's no silver bullet there. Either we:
As a maintainer of Apollo Kotlin, I will support both content types for the foreseeable future: it's a small branch, and if it can make Apollo Kotlin more compatible, I'm all for it. I do like that. When that happens, I will make the change in a new major version of Apollo Kotlin and will tell my users. For users that still have a legacy server, I'll tell them to either:
There are solutions. But if we burn |
|
Glossing over most of the debate¹, I think Michael's raised a really good point about ad-hoc and legacy clients.

If we can assert this is the case for the vast majority of these kinds of ad-hoc and legacy clients (those in unmaintained clients/tooling/etc.), then responding with `application/graphql-response+json` should be safe even for them.

Since Michael is already following this approach, I've gone over @martinbonnin's changes with an editorial comb, and have also factored this new recommendation into the text in #398 and recommend its inclusion into this pull request.

I'm still uncomfortable with the change, because there are so many unknown unknowns, but I think this change massively offsets the main risk: server implementers refusing to respond to legacy clients.

Footnotes
|
Follow up from #370 and #327.

Move the `application/json` responses to a separate appendix to keep the spec easy to parse and implement, and to broadcast a strong signal on the preferred path moving forward (see also #329).

The main normative change is the requirement to support `application/graphql-response+json`. In a nutshell:

All the rest is moving legacy parts of the spec to an appendix for better readability.

Servers can (and probably should) support both media types for better compatibility.

This is our one opportunity at a clean slate; let's keep the entropy and LLM contexts low.