Hacker News | davidmccabe's comments

GraphQL is really intended to model UI concepts rather than directly exposing your database schema.

First of all, your GraphQL schema is basically append-only due to older clients that may remain in the wild. So you don't want to expose implementation details that may change.

Second, you want to write client code that handles mutations. This is easier if the data the client receives is organized in a UI-centric way. I'll give you a simple example that came up at work recently: a single conceptual category (a "user account") that, due to implementation details, was spread across two different tables with different columns. Because the GraphQL schema in this case mapped each table to its own GraphQL type, somebody was then able to write client code that only handled one type and not the other, causing an inconsistent UI.

I would suggest thinking carefully about your GraphQL schema, treating it as an API, and not auto-generating it. Of course, you want it to be convenient to construct, just not fully automatic and thoughtless.
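For instance, a UI-centric schema might expose the conceptual "user account" as a single type, even if it happens to be stored across two tables. A sketch (all type and field names here are hypothetical):

```graphql
type User {
  id: ID!
  email: String!        # might live in an auth/credentials table
  displayName: String!  # might live in a separate profile table
}

type Query {
  user(id: ID!): User
}
```

Clients only ever see one User, so there is no way to handle half of the account and forget the other half.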


I imagine a lot of people don't share this particular opinion. GraphQL was created precisely to enable composing ad-hoc queries to feed into the UI, without being tied to a particular data model or mapping. It allows applications to change at a faster pace without requiring centralized API modifications.

If you create those UI-centric models when exposing data through your GraphQL schemas, you've just moved the modelling work elsewhere without actually facilitating anything, and it's still centralized. At this point you're better off embracing the 'BFF' (backend-for-frontend) architecture and skipping GraphQL altogether.

There may be a useful middle ground (for example, ensuring a single User type), but it's a slippery slope to stand on.


I think if you look at it from the other side it makes more sense.

A common problem is that we send JSON blobs over the wire that aren't purpose-fit. With GraphQL, we can construct a query graph that makes those blobs slimmer and purpose-fit, by writing queries that return exactly the data we need to represent some particular business concern in the UI.

I get the argument that mutations need to match things more closely, but I'd argue you can compose mutations with resolvers too.

I think this is what they're getting at. Just throwing the database schema over the wall via GraphQL isn't that much better than REST, and takes zero advantage of the ability to use resolvers to construct purpose-built queries.
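As a sketch of what "purpose-built" looks like on the client side, a query can ask for exactly the fields a view needs instead of whole records (type and field names hypothetical):

```graphql
query OrderSummary($id: ID!) {
  order(id: $id) {
    status
    customer { displayName }
    items { title quantity }
  }
}
```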


This is what I constantly tell people. Use GraphQL to efficiently create permutations of access so the data is shaped for the client more or less exactly how they need it. To that end, I often recommend that at least one UI developer is involved in approving schema changes and regularly dogfoods the GQL setup before it's deployed.

I have been met with a lot of resistance around this notion for some reason.

To be honest, I fought the same argument with OpenAPI (Swagger) too.

API developers seemingly just want to chuck their schema over the wall and walk away.


I think part of this is that when an API developer tries to write frontend code, they'll naturally use the underlying schema as their frontend domain model because they're already thinking in those terms.

I definitely suffer from that problem when trying to work out what a good API for frontend would be, and frontend people often have their own blind spots, which makes working together to find something actually good a tricky process.

Then again, getting REST/GraphQL/API-of-choice designs 'actually good' is hardly a trivial problem at the best of times, so how much of this is developer biases and how much inherent difficulty isn't something I'd be confident trying to estimate.


Yeah, this dynamic definitely exists. I think this is because there's a big mismatch between what frontend wants ("we'd prefer a single, super fast endpoint with no params") and what backend wants ("we automatically exposed our database tables complete with RBAC, query away"). Backend engineering has more or less become all about implementing this mapping layer, whether it's with GraphQL resolvers, DB views, or whatever, and something like 90% of the work to be done lives there.
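A minimal sketch of that mapping layer in the resolver style. The `db` object, table names, and fields here are hypothetical stand-ins (a real resolver would typically be async and hit an actual database):

```javascript
// Merge two backing tables into the single User shape the UI wants.
// `db.accounts` and `db.profiles` are hypothetical in-memory stores.
function resolveUser(id, db) {
  const account = db.accounts.get(id);   // e.g. { email }
  const profile = db.profiles.get(id);   // e.g. { displayName }
  if (!account || !profile) return null;
  return { id, ...account, ...profile }; // one UI-facing object
}

// Example usage with Map-backed stand-ins:
const db = {
  accounts: new Map([[1, { email: 'a@example.com' }]]),
  profiles: new Map([[1, { displayName: 'Ada' }]]),
};
```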


I’ve been kicking around a project idea for years that would be a niche fit for these considerations. It’s effectively an extremely niche reference corpus of very field-specific data, represented by JSON documents with a few relationships that don’t require transactions. It wouldn’t be editable by clients at all. Every client would be nothing more than a consumer, and the database would be versioned and only edited by the maintainer.

Having an application that is read-only for every client is certainly not a common use of web-based APIs, but the reason I came to this thread is that I’ve considered SQLite for this idea in the past.


Could you explain that “two tables” issue again? I’m trying to think of a way that DB constraints (I’m using Postgres, fwiw) couldn’t handle denormalization, but imo there’s a constraint for everything. Force a 1:1 relationship, or add a check constraint so you literally cannot denormalize.


Hey, this is David from the React team at Meta. I've been working on revamped documentation for Relay, our GraphQL data fetching library. Relay is an incredible technology but has been virtually undocumented up until now. This tutorial covering the basics is just a down payment on improving matters. Looking forward to your feedback!


React doesn't have to commit to the DOM if there are no changes, but it still has to render each component and compare the output. Even if the component is memoized, it still has to compare the props. This is fast enough most of the time but it can become a bottleneck in some cases.
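The prop comparison that memoization performs is a shallow one. A standalone sketch of the idea (React's actual implementation differs in details):

```javascript
// Shallow equality: same keys, each value identical by Object.is.
// This is why a memoized component still pays a per-prop comparison on
// every parent render, and why a freshly-created object or array prop
// defeats the memoization entirely.
function shallowEqual(prev, next) {
  const prevKeys = Object.keys(prev);
  if (prevKeys.length !== Object.keys(next).length) return false;
  return prevKeys.every(key => Object.is(prev[key], next[key]));
}
```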


Yes, and component memoization (and PureComponent and shouldComponentUpdate) only prevents re-rendering of the component's descendants. But every component that uses a context value will still re-render every time that context value changes, which could potentially be way too many.

For example, if every Cell component in a big Table component uses the same context value, they will all need to re-render when any cell value changes, regardless of whether the Cell component or some of its descendants are memoized. This is a common pattern in Redux, which solves this problem by re-rendering only the Cells whose cell values have changed. Recoil would provide the same benefit.


It’s actually designed to make server rendering easy. We’ll add a guide about this eventually. Atoms aren't global singletons: their values are scoped to React trees.


Can you talk about SSR here just a little to give us an idea of how recoil makes it easy?


Well, I should say "doesn't make it harder". The `RecoilRoot` component accepts a prop called `initializeState` that lets you specify the state of all atoms in the initial render.
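A minimal JSX sketch of seeding an atom that way (the `Counter` component is hypothetical; `initializeState` receives a `set` function for assigning atom values before the first render):

```jsx
import { RecoilRoot, atom } from 'recoil';

const countAtom = atom({ key: 'count', default: 0 });

function App({ initialCount }) {
  return (
    <RecoilRoot initializeState={({ set }) => set(countAtom, initialCount)}>
      <Counter /> {/* hypothetical component that reads countAtom */}
    </RecoilRoot>
  );
}
```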


Shouldn't the app be allowed to "pull" the required state from the db, rather than having to "push" the initial state into the root of the app? It's not like we should dump the whole database into atoms just in case the app needs to look up one item, right?

I'm mostly curious how this might tie into a server-side DB. Recoil's API provides the fundamentals for a firebase-like persistence system that allows people to skip the complexity of GraphQL, and just use the type system provided by the language (flow/ts/reason).

In any case, congrats on launching such an elegant API. This is one of the nicest reactive systems I've seen for React :-D


We are planning in the next few weeks an overhauled persistence API. Among other things this will add the ability to provide a callback in `initializeState` rather than a value. However, this doesn't help if you need to do async work to retrieve the value. The way we generally think of SSR is that you get a single render pass and then you're done, no time for async work. So for hitting a database you want something like Relay that statically knows your data dependencies and can do a single request to initialize. It's true that there's some complexity there, but there are also great solutions to really hard problems. Recoil doesn't attempt to address that space.


How exactly does the default state work if that's the case? Is it just up to the user to treat the state as immutable and copy it rather than modifying it?


The term 'atom' is a Clojure-ism; that's where both I and re-frame get it from.


Re-frame's atoms are actually Reagent's 'ratoms' ('reactive atoms'). They're built on Clojure atoms, but can reactively prompt a re-render of any component that depends on them when the content of the ratom changes.

Re-frame wraps Reagent, and introduces the concept of "subscriptions". A subscription either returns the content of an atom, or state derived from it, equivalent to Recoil's selectors. Re-frame also introduces the concept of a global application DB, a single atom that contains various pieces of state, such that you can develop your entire app around it.

I haven't tried Recoil, but I'll give it a shot on my next JS project - I tend to use ClojureScript for front-end precisely because I find the Reagent/Reframe approach simpler and more effective than any of the plain JS React approaches (for complex apps, anyway).


Lots of software has "atomic" concepts. Like SQL databases.


If you update multiple atoms within a React batch, they will be updated at the same time, just as with React local state. You don't need to wrap the changes in anything to have them occur together.

In other words, this updates both of the atoms together:

  const [a, setA] = useRecoilState(atomA);
  const [b, setB] = useRecoilState(atomB);
  ...
  onClick={() => {
    setA(a => a + 1);
    setB(b => b + 1);
  }}

If the new values of multiple atoms are interdependent on each other's current values, it's possible to update multiple atoms together using the transactional/updater/function form, but we need to add a nicer interface for doing this. Coming soon! The nicer interface would probably look something like this:

  const update = useTransactionalUpdate([atomA, atomB]);
  ...
  onClick={() => {
    update(([a, b]) => [a + b, b - a]);
  }}

It's then easy to make a hook that provides a reducer-style interface over a set of atoms. But now, unlike with Redux or useReducer, each atom still has its own individual subscriptions!


It seems like something could be written around useRecoilCallback() to watch/get the current value of an atom outside of a React component. Does that sound right?


For a specific set of atoms you could subscribe to them with a component and then use an effect to send the values out. For all atoms you could use useTransactionObservation.


Well, I know that on one tool we saw a 20x or so speedup compared to using Redux. This is because Redux is O(n) in that it has to ask each connected component whether it needs to re-render, whereas we can be O(1).
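A hypothetical sketch of the difference in subscription models (not the actual Redux or Recoil internals):

```javascript
// Redux-style: one store; every subscriber is notified on any change,
// and each must then decide (e.g. via a selector compare) whether it
// actually needs to re-render -- O(n) in the subscriber count.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    subscribe(fn) { listeners.add(fn); return () => listeners.delete(fn); },
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach(fn => fn(state)); // every subscriber runs
    },
  };
}

// Atom-style: each atom carries its own listener set, so an update only
// notifies subscribers of that atom, independent of how many other
// atoms and subscribers exist.
function createAtom(value) {
  const listeners = new Set();
  return {
    get: () => value,
    subscribe(fn) { listeners.add(fn); return () => listeners.delete(fn); },
    set(next) { value = next; listeners.forEach(fn => fn(value)); },
  };
}
```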

useReducer is equivalent to useState in that it works on a particular component and all of its descendants, rather than being orthogonal to the React tree.

I think if you can model something with pure functions, you should. That's the approach we try to take for asynchronous processes: just a pure function that you happen to evaluate on some other process. This obviates the need for things like sagas. So I agree with you there I guess.

If you post an example of what you don't think it could handle I will tell you how we would handle it.


I've never had my bottlenecks end up being because of Redux, but that could just be me.

I'd love to take a look at a larger project using Recoil though, just to get a sense for how it looks with a relatively complex state setup. My first impression is that it would get messy pretty quick, but I've been wrong many times before :)

Also, I'm not trying to shit all over your project, congrats on rethinking state management. Regardless of how I feel about your library, that's still awesome.


The app in question had thousands of connected components, so that was a huge bottleneck for them. For many apps it doesn't matter.

The app that Recoil was originally extracted from has an extremely complex set of state and interdependent derived processes -- also heavily hooked into and modified by third-party plugins. This type of complexity is exactly what Recoil was designed to handle.

Thanks for your kind words.


That's awesome. Best of luck going forward!


You can hoist useReducer out to context and end up with essentially a very lightweight redux. I’ve had some success with that in smaller apps. For something large or long term I’d probably avoid that approach though.
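A sketch of that pattern (names are hypothetical; this is just useReducer state lifted into a context so any descendant can read state and dispatch):

```jsx
import React from 'react';

const StoreContext = React.createContext(null);

// Wrap the app in this provider to get a tiny Redux-like store.
function StoreProvider({ reducer, initialState, children }) {
  const [state, dispatch] = React.useReducer(reducer, initialState);
  return (
    <StoreContext.Provider value={{ state, dispatch }}>
      {children}
    </StoreContext.Provider>
  );
}

// Any descendant component can call this to read state or dispatch.
const useStore = () => React.useContext(StoreContext);
```

The caveat from above applies: every consumer of the context re-renders on every dispatch, which is why this works better for smaller apps.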


>Well, I know that on one tool we saw a 20x or so speedup compared to using Redux. This is because Redux is O(n) in that it has to ask each connected component whether it needs to re-render, whereas we can be O(1).

Don't I get the same perf with selectors?


I could be mistaken but I think that you still have to do a shallow compare on the connected props of every component.


Nah, you're completely correct

