Why immutability?
When working on a React application that needs to handle state, one of the main pitfalls to watch out for is accidental mutations. This is fancy talk for mistakenly modifying stuff you didn’t want to change:
let user = { name: "foo" };
let updated = user;
updated.name = "bar";
updated.name; // "bar"
user.name; // "bar"
In this case, imagine we’re editing a Profile form: user holds the current user’s information and we want to create an updated user object with the modified attributes. Because updated is just another reference to the same object, modifying it also modifies the original, which can cause all kinds of weirdness and hard-to-track bugs.
There are usually two ways to solve this:
- By creating new objects every time you need to modify existing ones. This example uses built-in spread syntax to “copy over” the properties of the existing object to a new one, so no link remains between the two (at the top level at least, since this is a shallow copy).
let user = { name: "foo" };
let updated = { ...user, name: "bar" }; // object spread (ES2018)
// Or
let updated = Object.assign({}, user, { name: "bar" }); // Object.assign (ES2015)
updated.name; // "bar"
user.name; // "foo"
This is a fairly lightweight and straightforward approach, but it starts to break down as you need to do more and more complex manipulations such as deep updates or selective updates (e.g. copying over only some of the previous object’s properties):
let user = {
  name: "foo",
  role: "admin",
  preferences: { home: ["foo"], profile: ["bar"] },
};
let updated = {
  ...user,
  preferences: {
    ...user.preferences,
    profile: user.preferences.profile.filter(
      preference => preference != "bar",
    ),
  },
};
delete updated.role;
- By using a library which lets you handle immutable structures and/or immutable changes on top of Javascript. One example of such a library is Immutable.js, which we’ve used extensively at madewithlove in the past. It approaches things differently by wrapping native structures into immutable ones that have a slew of methods which allow for easy manipulation.
import { Map } from "immutable";
let user = { name: "foo" };
let updated = Map(user); // wrap the plain object in an Immutable.js Map
updated = updated.set("name", "bar");
updated = updated.toJS(); // convert back to a plain object
updated.name; // "bar"
user.name; // "foo"
We can see that compared to the previous approach, this is a lot heavier: you need all your structures to be wrapped in third-party objects and you need to convert them back to plain objects to access the underlying data. But this approach does have the advantage of making more complex manipulations a breeze compared to the native approach:
import { fromJS } from "immutable";
let user = {
  name: "foo",
  role: "admin",
  preferences: { home: ["foo"], profile: ["bar"] },
};
let updated = fromJS(user) // deeply convert so nested values are Immutable too
  .delete("role")
  .updateIn(["preferences", "profile"], preferences =>
    preferences.filter(preference => preference != "bar"),
  );
The best of both worlds
Both approaches have their pros and cons, depending on how you use them, and mixing them raises practical questions: is your whole Redux state an Immutable.js structure? Are you passing plain objects or Immutable collections to your components?
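To make that second question concrete, here is a minimal sketch, assuming a Redux state kept as an Immutable.js Map and a react-redux style mapStateToProps selector (both hypothetical here): components that expect plain objects force a conversion at the boundary.

import { Map } from "immutable";

// Hypothetical Redux state kept entirely as Immutable.js structures
const reduxState = Map({ user: Map({ name: "foo" }) });

// Components usually want plain objects, so a mapStateToProps-style
// selector ends up converting at the boundary:
const mapStateToProps = state => ({
  user: state.get("user").toJS(),
});

Note that toJS() creates a fresh object on every call, which can also defeat shallow-equality checks further down the tree.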
There are other libraries which have since attempted to tackle this issue in different ways, but one of the most popular is a newer package named immer. It takes a hybrid approach to the problem: it lets you make immutable updates while still writing them with the native object syntax. Let’s go back to our deep update example and see what that means:
import { produce } from "immer";
let user = {
  name: "foo",
  role: "admin",
  preferences: { home: ["foo"], profile: ["bar"] },
};
let updated = produce(user, draft => {
  delete draft.role;
  draft.preferences.profile = draft.preferences.profile.filter(
    preference => preference != "bar",
  );
});
How it works is that you pass the object you want to modify as the first argument; from it, immer creates a draft object, which is a proxy of the original. You make mutable changes to the draft like you would to any other object, and when you’re done immer uses those changes to produce a brand new object, leaving the original untouched.
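A quick sketch of that behaviour, reusing the user shape from above: the original is left alone, and the parts of the tree you did not touch are reused as-is in the result (immer calls this structural sharing).

import { produce } from "immer";

let user = {
  name: "foo",
  preferences: { home: ["foo"], profile: ["bar"] },
};

let updated = produce(user, draft => {
  draft.preferences.profile.push("baz");
});

user.preferences.profile; // ["bar"] (the original is untouched)
updated.preferences.profile; // ["bar", "baz"]
updated.preferences.home === user.preferences.home; // true (unchanged branches are shared)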
There are many benefits to this approach. The most obvious one is that you can use normal mutable syntax to modify your draft object, which is much more straightforward than either previous option. A second big benefit is that since your object is still a good ol’ plain object, it’s incredibly easy to type and to provide IntelliSense/completion for. The draft object is whatever type user is:
interface User {
  name: string;
  preferences: { [key: string]: string[] };
}

let updated = produce(user, (draft: User) => {
  delete draft.role; // Property 'role' does not exist on type 'User'
});
Taking things to the next level
Now you’ve seen how to use immer in its most basic way, but because we’ve boiled immutable updates down to a simple input → output, it’s incredibly easy to leverage this to create more complex patterns. The most evident use is writing state reducers like the ones you’d find in Redux:
const reducer = (state, action) =>
  produce(state, draft => {
    switch (action.type) {
      case UPDATE_USER:
        draft.name = action.name;
        break;
    }
  });
By default, any change made to the draft is applied; you don’t need to return anything from produce. In fact, if you do, whatever you returned will be used instead of the draft (for cases where you simply want to replace the state with a completely new value).
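As a minimal sketch of that escape hatch (initialState and resetUser are names made up for this example), returning a value from the recipe replaces the state wholesale, which is handy for reset-style updates:

import { produce } from "immer";

const initialState = { name: "", role: "guest", preferences: {} };

// Returning a value from the recipe replaces the draft entirely
const resetUser = user => produce(user, () => initialState);

resetUser({ name: "foo", role: "admin" }); // { name: "", role: "guest", preferences: {} }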
You can also create standalone producers by omitting the state argument, giving you reusable functions for one or more operations:
let removePreference = produce((draft, type, removed) => {
  draft.preferences[type] = draft.preferences[type].filter(
    preference => preference != removed,
  );
});
const reducer = (state, action) => {
  switch (action.type) {
    case REMOVE_PREFERENCE:
      return removePreference(
        state,
        action.preferenceType,
        action.preference,
      );
    default:
      return state;
  }
};
As you can see, extra arguments passed to the producer are forwarded to the draft callback, which lets you create parametrised producers and reuse them in myriad places, such as setState:
let showUserForm = user =>
  produce(draft => {
    draft.showForm = true;
    draft.user = user;
  });
this.setState(showUserForm(user));
There are a few more advanced concepts in immer worth exploring, such as combining curried producers with array methods, or using the void operator to write one-line producers without accidentally returning a value:
users.map(produce(
  draft => void (draft.name = "foo", draft.age += 1)
));
Using immer: what’s the catch?
The library has a few limitations to keep in mind before you use it in your next project. The most evident is that it relies on Proxy objects to create its drafts, so your target environment needs to support them. If it doesn’t, everything still works, but immer falls back to a slower ES5 implementation.
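Depending on which immer version you use, that fallback may need to be enabled explicitly (in immer 6 through 9 it is an opt-in plugin; earlier versions detect the missing Proxy support automatically):

import { enableES5 } from "immer";

// Opt in to the slower ES5 fallback for environments without Proxy support,
// e.g. Internet Explorer 11
enableES5();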
The library also does not support the most complex object types; most of the time you will want to feed it plain objects and arrays (newer versions also handle Maps and Sets). Class instances and other exotic structures are left untouched unless they are explicitly marked as draftable.
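To illustrate, here is a small sketch assuming a recent immer version; the Preferences class is made up for the example. A class instance is only drafted once it carries the immerable marker:

import { produce, immerable } from "immer";

class Preferences {
  constructor(home = []) {
    this[immerable] = true; // without this marker, produce() would leave the instance alone
    this.home = home;
  }
}

const prefs = new Preferences(["foo"]);
const next = produce(prefs, draft => {
  draft.home.push("baz");
});

prefs.home; // ["foo"]
next.home; // ["foo", "baz"]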