Rationale
I love React and I love Redux, but one of the things I struggle with a lot is how complicated the latter can make codebases. You add it and look away for five seconds and suddenly it’s all boilerplate and wiring code, sometimes to do very simple things. But we’ve had pretty cool tech advances in the last few years and some of them, like GraphQL, seem like they could solve our problems — so what if we gave that a shot?
Speaking from experience, many of the “problems” attributed to Redux really come down to how you use it and for how many things. But it remains true that the logic in and around your store grows in parallel with your domain code: the larger your domain, the more boilerplate you’ll have (actions, reducers, selectors, thunks, sagas, and so on). As long as that ecosystem of logic stays in the background, it’s not an issue; the trouble starts when it leaks into your components layer. Components should be treated like controllers in the backend: transports that are as pure and as ignorant of what’s going on as possible, focusing merely on input and output (interactions aside, of course).
But how do you keep your components ignorant of all this Redux logic when it’s there specifically to help them consume your domain and state? Usually, your component will end up making use of actions, selectors, or thunks — all things that involve the component knowing how your Redux store is structured to operate. And once you start to handle relationships, bridging entities together, filtering, sorting, pagination, and so on, the Redux layer usually grows drastically in complexity and becomes harder to consume, cluttering your components with more and more wiring.
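To make that concrete, here is a minimal sketch of the kind of plumbing a component ends up depending on. The state shape and names here are hypothetical, but the pattern is the usual one: a selector that knows how the store is structured, plus the action/reducer/thunk trio needed to get data into it.

```javascript
// A hypothetical normalized slice of state: the component has to know
// this shape exists somewhere to use the selector and thunk below.
const initialState = { users: { byId: {}, allIds: [] } };

// Selector: encodes knowledge of how users are stored.
const getUsers = state => state.users.allIds.map(id => state.users.byId[id]);

// Action + reducer boilerplate for a single "received users" event.
const RECEIVE_USERS = "RECEIVE_USERS";
const receiveUsers = users => ({ type: RECEIVE_USERS, payload: users });

const reducer = (state = initialState, action) => {
  if (action.type !== RECEIVE_USERS) return state;

  return {
    users: {
      byId: Object.fromEntries(action.payload.map(user => [user.id, user])),
      allIds: action.payload.map(user => user.id),
    },
  };
};

// Thunk: encodes knowledge of how users are fetched
// (fetchJson is a stand-in for your HTTP client).
const fetchUsers = fetchJson => async dispatch => {
  const users = await fetchJson("/api/users");
  dispatch(receiveUsers(users));
};
```

A component consuming this has to import both the thunk and the selector, and know when to dispatch the former and subscribe to the latter; that is exactly the knowledge we want to pull out of the components layer.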
Redux, meet GraphQL
On a recent project, we had this exact problem of our Redux logic becoming increasingly complex and harder to consume. We had a whole pipeline to construct and destruct objects and to hydrate the components. It was not only a lot of boilerplate every time, but it muddied the water of what the components were trying to achieve in the first place.
The crux of the issue is this: the more your components are aware of where your data comes from and how it’s returned from your data source, the more you will try to bend them to the specific usage instead of keeping the API pure and their use cases open. This is usually very visible if you compare a component that was made “on the job” with the actual real-world data to a component that was designed in isolation (in Storybook for example) with predefined use cases in mind.
Since I was very interested in Gatsby at the time and how concise components can be when using it (thanks to GraphQL), I thought about introducing something similar but for our Redux store. I wanted a way to centralize all data fetching and building into a clear and simple query that would make the components ignorant of where the data comes from and how it was fetched.
How it works
When working with GraphQL, there is one library in particular that stands out from the rest, and that’s of course Apollo. It’s an ecosystem of libraries written for various frameworks and comes with everything you’d need. More interestingly, despite being predominantly “GraphQL branded”, Apollo lets you use the GraphQL query language with other, more traditional data sources such as REST APIs, databases (SQL, Mongo), and so on. Again, very similar to what you find in Gatsby.
When using Apollo, you usually have two sides: the server and the client. The client lives in your application and passes queries to your server, which answers them. So far so good. But what we built is a bit different; it uses a feature of Apollo called “local state management” which allows the client part of Apollo to both make and answer queries itself. This means no actual server is involved and no HTTP request is made. It’s a “fake GraphQL server” running inside the application itself, whose purpose is to query data from that same application (here, from our Redux store).
This feature wasn’t made with Redux in mind; it was made to use Apollo as your store, writing to and reading from an InMemoryCache instance. But on this Apollo client, you can also define resolvers, which tell GraphQL how to retrieve the data being queried for.
For example, if I wrote a query to get the IDs of all users and of the groups they’re in:
query {
  users {
    id
    groups {
      id
    }
  }
}
I could then tell Apollo Client how to get that data through resolvers. And because our Redux store is accessible there (it’s a global singleton), we can make that query functional by doing this:
import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

import store from "./Store";

const typeDefs = gql`
  type Query {
    users: [User]
  }

  type User {
    id: ID
    groups: [Group]
  }

  type Group {
    id: ID
  }
`;

const client = new ApolloClient({
  cache: new InMemoryCache(),
  typeDefs,
  resolvers: {
    Query: {
      users: () => store.getState().users,
    },
    User: {
      groups: user =>
        store
          .getState()
          .groups.filter(group => user.group_ids.includes(group.id)),
    },
  },
});
And that’s it! That was the proof of concept and, surprisingly enough, it worked. As you might have noticed, you still have to provide a schema through the typeDefs option, but that schema is only used to know which resolvers to call. It is never used to validate requests or responses: validation is too heavy an operation performance-wise and is disabled in Apollo Client (only Apollo Server performs it, and we don’t have one here).
The advantage we had on this project was that we had already set up a whole slew of selectors (functions that receive the state and return a piece of it) to query various parts of the state. This meant we were able to easily make the whole state queryable through GraphQL by using selectors extensively:
const resolveSelector = selector => selector(store.getState());

const client = new ApolloClient({
  cache: new InMemoryCache(),
  typeDefs,
  resolvers: {
    Query: {
      users: () => resolveSelector(getUsers),
    },
    User: {
      groups: user => resolveSelector(getUserGroups(user)),
    },
  },
});
1. Getting the data into the store
This is a great first step, as it makes the components unaware of the shape of the store and of the selectors; they just declare what they need to render, and they get it. But I saw I could take it one step further and also make the components unaware of how to fetch that data and get it into the store in the first place. For this, since we were using thunks (which return promises), we tied one thunk to every resolver:
const resolveSelector = selector => selector(store.getState());

const fetchAndResolve = async (thunk, selector) => {
  // Thunks receive (dispatch, getState): pass getState itself, not its result.
  await thunk(store.dispatch, store.getState);

  return resolveSelector(selector);
};
const client = new ApolloClient({
  cache: new InMemoryCache(),
  typeDefs,
  resolvers: {
    Query: {
      users: () => fetchAndResolve(fetchUsers(), getUsers),
    },
    User: {
      groups: user =>
        fetchAndResolve(
          fetchUserGroups(user),
          getUserGroups(user),
        ),
    },
  },
});
With this in place, our components were able to go from this:
const Users = ({ fetchUsers, fetchUserGroups, users, groups }) => {
  useEffect(() => {
    fetchUsers();

    if (users.length) {
      users.forEach(fetchUserGroups);
    }
  }, [fetchUsers, fetchUserGroups, users]);

  return (
    <div>
      {users.map(user => (
        <div key={user.id}>
          <h1>{user.name}</h1>
          <h2>Groups</h2>
          <ul>
            {user.group_ids.map(groupId => (
              <li key={groupId}>{groups[groupId].name}</li>
            ))}
          </ul>
        </div>
      ))}
    </div>
  );
};

const mapStateToProps = state => ({
  users: getUsers(state),
  groups: getGroups(state),
});

export default connect(
  mapStateToProps,
  { fetchUsers, fetchUserGroups },
)(Users);
To this:
const QUERY = gql`
  {
    users @client {
      id
      name
      groups {
        id
        name
      }
    }
  }
`;

const Users = () => {
  const { data: { users } } = useQuery(QUERY);

  return (
    <div>
      {users.map(user => (
        <div key={user.id}>
          <h1>{user.name}</h1>
          <h2>Groups</h2>
          <ul>
            {user.groups.map(group => (
              <li key={group.id}>{group.name}</li>
            ))}
          </ul>
        </div>
      ))}
    </div>
  );
};

export default Users;
As you can notice, the component is aware of much, much less than before. It simply asks for the data it needs to render, gets it, and renders it. The useQuery hook comes from Apollo and is the hook counterpart of the Query render-prop component, which does the same thing. Both variations come with a bunch of built-in goodies such as the ability to:
- Refresh your query at any time (useful after stateful actions)
- Display loading states
- Handle errors
The query uses the @client directive to tell Apollo it’s a query meant for Apollo Client and not for Apollo Server, i.e. that it should not leave the current application. This is the most important part: without it, Apollo will try to execute your request against a real GraphQL server. This is because you could use both this local layer and an actual server, and query either indiscriminately depending on whether you pass @client or not, which is an interesting idea too.
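As a sketch of that idea (field names here are made up), a single query can mix fields resolved locally with fields sent over the wire; only the parts marked @client stay in the browser:

```graphql
query {
  # Resolved locally from the Redux store, never leaves the browser
  users @client {
    name
  }
  # Resolved by an actual GraphQL server over HTTP
  auditLog {
    message
  }
}
```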
2. Query arguments
This is nice for the standard use case but what about when things need to be queried with arguments, such as in the case of pagination? Well, you can define arguments on your query and they’ll be received by the resolvers allowing you to write a component like this:
const QUERY = gql`
  query($page: Int, $perPage: Int) {
    users(perPage: $perPage, page: $page, orderBy: "age") @client {
      name
      groups {
        name
      }
    }
  }
`;

const Users = ({ perPage = 15 }) => {
  const {
    data: { users },
    loading,
    refetch,
  } = useQuery(QUERY, { variables: { page: 1, perPage } });

  return (
    <Table
      columns={...}
      rows={users}
      loading={loading}
      perPage={perPage}
      onPageChange={page => refetch({ page, perPage })}
    />
  );
};

export default Users;
And power it like this (as long as your selectors and thunks are already pagination-aware, which ours were):
Query: {
  users: async (_, { perPage, page, orderBy }) => {
    const users = await fetchAndResolve(
      fetchUsers(page, perPage),
      getUsers,
    );

    return sortBy(users, orderBy);
  },
},
As you can see, since Apollo allows us to refetch a query with different arguments, we can easily fetch intricate sets of data and arrange it precisely how the component wants, without having to make the component aware of how the data is manipulated.
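If your selectors and thunks are not pagination-aware yet, the resolver is also a reasonable place for that logic. A minimal sketch (helper names hypothetical) of the sort-and-slice step a resolver like the one above relies on:

```javascript
// sortBy: order a list of objects by one of their fields, ascending.
const sortBy = (items, field) =>
  [...items].sort((a, b) =>
    a[field] > b[field] ? 1 : a[field] < b[field] ? -1 : 0,
  );

// paginate: slice a sorted list into 1-indexed pages.
const paginate = (items, page, perPage) =>
  items.slice((page - 1) * perPage, page * perPage);

const users = [
  { name: "Ada", age: 36 },
  { name: "Grace", age: 85 },
  { name: "Alan", age: 41 },
];

// Page 1 of two users, youngest first: Ada (36) and Alan (41).
const firstPage = paginate(sortBy(users, "age"), 1, 2);
```

Keeping this arrangement logic in the resolver means the component only ever expresses *what* page it wants, never *how* a page is produced.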
3. Writing in addition to reading
While we focused purely on the read layer in our application (since our write layer was already very terse thanks to thunks), you could very well move the write layer into your GraphQL client as well by using mutations:
const QUERY = gql`
  query($id: Int) {
    user(id: $id) @client {
      id
      name
    }
  }
`;

const MUTATION = gql`
  mutation($id: Int, $name: String) {
    updateUser(id: $id, name: $name) @client {
      id
    }
  }
`;

const UserForm = ({ id }) => {
  const { data: { user } } = useQuery(QUERY, { variables: { id } });
  const [updateUser] = useMutation(MUTATION);

  return (
    <Formik
      initialValues={{ ...user }}
      onSubmit={values =>
        updateUser({ variables: { id: values.id, name: values.name } })
      }
    >
      {() => null /* Form fields go here */}
    </Formik>
  );
};

export default UserForm;
You could then define the matching resolver, again reusing your Redux logic and thunks:
const client = new ApolloClient({
  cache: new InMemoryCache(),
  typeDefs,
  resolvers: {
    // ...
    Mutation: {
      updateUser: async (source, { id, name }) => {
        await updateUserThunk(id, { name })(store.dispatch);

        return resolveSelector(getUser(id));
      },
    },
    // ...
  },
});
Doing this, your components would be able to read and write to your store without ever being aware of where that data actually is, how it’s fetched, how it’s structured, and so on. This keeps your components somewhat implementation agnostic in that you could swap your resolvers midway through with a real GraphQL API and your components wouldn’t see any difference.
4. Testing
Next comes the question of testing: we’ve decoupled our components from the Redux store but coupled them to GraphQL, so how would we test this? There are two approaches. The first and most evident one is to simply export two components:
export const UsersTable = ({ users }) => (
  <Table columns={...} rows={users} />
);

const Users = () => {
  const { data: { users } } = useQuery(QUERY);

  return <UsersTable users={users} />;
};

export default Users;
Then we could simply import { UsersTable } from "./Users" and test it by providing dummy props directly. That’s precisely what we were doing before, but in doing so you don’t test the full story either. Thankfully, you can easily mock GraphQL queries and mutations in tests thanks to a MockedProvider exposed by Apollo:
const mocks = [
  {
    request: { query: QUERY, variables: {} },
    result: {
      data: { users: myDummyUsers },
    },
  },
];
it("can render a list of users", () => {
  const result = render(
    <MockedProvider mocks={mocks}>
      <Users />
    </MockedProvider>,
  );

  // Test the component
});
As you see, you mock responses to individual queries. This gives you more assurance that your component queries the data correctly, since Apollo will match each query exactly and will match it only once. This might seem cumbersome, but it allows us to:
- Mock responses differently to a first fetch and a refetch
- Mock responses differently depending on query arguments
- Simulate errors, failures, and so on, since you can also return failed requests by providing an error field
This is a much more complete way to test your components since you mock your data source directly instead of bypassing it and testing the lower layer.
5. Killing Redux
Now this isn’t something we did on our project but one of the benefits of this approach is that suddenly, it becomes that much easier to get data from your API to your component. Things don’t necessarily need to pass through your store.
We’ve mostly implemented resolvers using thunks and selectors so far, but you don’t really need either for this to work; the following would be completely acceptable:
const client = new ApolloClient({
  cache: new InMemoryCache(),
  typeDefs,
  resolvers: {
    // ...
    Query: {
      groups: () => axios.get("my/api/groups").then(response => response.data),
    },
    // ...
  },
});
This works if the API returns the data in the correct format, but even if that weren’t the case, you could just wrap the call in a normalizeGroups function. My point is that not everything has to be in the store: if a piece of data is only used on the page that requests it, why bother? Why do a whole round trip through the store? By avoiding the store as often as possible, you’ll often notice how redundant it becomes and how you’ve subconsciously trained yourself to shove everything in it “just in case you need it later.”
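As a sketch of that wrapping (the raw API shape here is made up), normalizing at the resolver boundary is just a small mapping function, so the rest of the app never sees the raw payload:

```javascript
// Hypothetical raw payload, as a REST API might return it.
const rawGroups = [
  { group_id: 1, group_name: "admins", members_count: "3" },
  { group_id: 2, group_name: "editors", members_count: "10" },
];

// normalizeGroups: map the raw API shape to the shape our queries expect.
const normalizeGroups = groups =>
  groups.map(group => ({
    id: group.group_id,
    name: group.group_name,
    memberCount: Number(group.members_count),
  }));

// In the resolver, this would look something like:
// groups: () => axios.get("my/api/groups").then(r => normalizeGroups(r.data))
const groups = normalizeGroups(rawGroups);
```

The components keep consuming clean, query-shaped data whether it came from the store, a REST API, or anywhere else.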
This is of course very dependent on how complex your app is; I’m not advocating for No Redux. On the project I mentioned, we definitely needed to make entities go through the store first to normalize everything, but on smaller projects that may not be a constraint.
Final Words
Doing this introduced a barrier to entry for working with our components, but not a greater one than learning how thunks, selectors, and the rest of Redux work. While knowledge of both the components layer and the Redux layer is required to make the GraphQL client return new data, writing new components on top of existing data requires very minimal knowledge.
Overall, we’ve made our components much purer and we can easily reuse and abstract our whole fetching layer by exporting query components:
const QUERY = gql`
  {
    users @client {
      name
      groups {
        name
      }
    }
  }
`;

export const UsersQuery = ({ children }) => {
  const { data: { users } } = useQuery(QUERY);

  return children(users);
};

// Somewhere else
<UsersQuery>
  {users => renderSomethingWithUsers(users)}
</UsersQuery>
Since the queries also return information about loading states and errors, it’s also very easy to centralize all that handling to a common render prop component (instead of using hooks):
const LoadedQuery = ({ children, ...props }) => (
  <Query {...props}>
    {({ data, ...results }) => {
      if (results.loading) {
        return <Loader />;
      }

      if (results.error) {
        return <ErrorMessage error={results.error} />;
      }

      return children(data, results);
    }}
  </Query>
);
There are a lot of interesting directions you can go from this query layer and I, for one, am very excited to see GraphQL being used more and more outside of an actual GraphQL context. I think we’ve gotten so used to having our hands in the engine to retrieve data for our components that we’ve forgotten how pure they should be in the first place. It’s great to be able to bring back that simplicity without leaking implementation details everywhere.