Updated: Nov 16, 2018
Recycling previously allocated rows that went off-screen is a very popular optimization technique for list views implemented natively in iOS and Android. The default ListView implementation of React Native avoids this specific optimization in favor of other cool benefits, but this is still an awesome pattern worth exploring. Implementing this optimization under the “React state-of-mind” is also an interesting thought experiment.
Lists Are a Big Part of Mobile Development
Lists are the heart and soul of mobile apps. Many apps display lists — whether it’s the Facebook app with a list of posts in your feed, Messenger with lists of conversations, Gmail listing emails, Instagram listing photos, Twitter listing tweets… As your lists become increasingly complex, with larger data sources, thousands of rows, and rich memory-hungry media, they also become harder to implement. On the one hand, you want to keep your app fast: scrolling at 60 FPS has become the gold standard of native UX. On the other hand, you want to keep a low memory footprint—and mobile devices are not known for their abundance of resources. Winning on both of these fronts is not always a simple task.
Searching for the Perfect List View Implementation
It’s a common rule of thumb in software engineering that you can’t optimize in advance for every scenario. Let’s borrow from a different field—there is no single perfect database to hold your data. You’re probably familiar with SQL databases that excel in some use cases, and NoSQL databases that excel in others. Because it’s unlikely you would be implementing your own DB, as a software architect, you need to choose the right tool for the job. The same rule holds for list views. You probably won’t find a single list view implementation that will win in every use case—keeping both FPS high and memory consumption low.
Two Types of Lists
Roughly speaking, we can characterize two types of use cases for lists in mobile:
Nearly identical rows with a very large data source. Every contact row probably looks the same and has the same structure. We want to let users browse through many rows quickly until they find what they’re looking for. Example: a contact directory.
High variation between rows and a smaller data source. Every row here is different and includes a variable amount of text. Some hold media. Users will typically read messages progressively and not browse through the whole thread. Example: a chat conversation thread.
The benefit of splitting the world into different use cases is that we can offer different optimization techniques for each one.
The Stock React Native List View
So why doesn’t the stock React Native ListView recycle rows? There are probably many reasons, but I would guess it has to do with the use cases we’ve mentioned earlier. iOS’s UITableView and Android’s ListView use similar optimization techniques that perform very well under the first use case: nearly identical rows with a very large data source. The stock React Native ListView is simply optimized for the second use case instead.
The flagship of lists in the Facebook ecosystem is the Facebook feed. The Facebook app had been implemented natively on iOS and Android long before React Native existed. The initial implementation of the feed probably relied on the native UITableView on iOS and ListView on Android, and, as you can imagine, did not perform as well as expected. The feed is a classic example of the second use case. There is high variation between rows because each post is different, with varying amounts of content, types of media, and structure. Users read through the feed progressively, and they normally don’t browse through thousands of rows in a single sitting.
Aren’t We Supposed to Talk About Recycling?
If the second use case applies to you—high variation between rows and a smaller data source — you should probably stick with the stock ListView implementation. If your use case falls under the first, and you’re unhappy with how the stock implementation performs, it might be a good idea to experiment with alternatives.
As a reminder, the first use case is nearly identical rows with a very large data source. In this scenario, the main optimization technique that has proven useful is recycling rows.
Because our data source is potentially very large, we obviously can’t hold all the rows in memory at the same time. To keep memory consumption to a minimum, we would only hold in memory rows that are currently visible on screen. As the user scrolls, rows that are no longer visible will be freed, and new rows that become visible will be allocated.
However, it is very CPU-intensive to constantly free and allocate rows as the user scrolls. This naive approach will probably prevent us from reaching our 60 FPS target. Fortunately, under the current use case, the rows are nearly identical. This means that instead of freeing a row that went off-screen, we can repurpose it for a new row. We are simply going to replace the data it displays with data from the new row, thus avoiding new allocations altogether.
Time to Get Our Fingers Dirty
Let’s set up a simple example to experiment with this use case. We will offer a sample of 3,000 rows of data similar in structure to the following:
UITableView as a Native Base
The actual wrapping will be done in RNTableView.m, and it will mostly revolve around passing the props forward and using them in the correct places. There’s no need to dive too deeply into this implementation since it’s still missing the interesting parts.
The Key Concept—Connecting Native and JS
The best way to pass React components to our native component is as children. When we use our native component from JS by adding our rows in JSX as children, we’ll make React Native transform them to UIViews that will be provided to the native component.
The trick is that we don’t need to make components out of all the rows in the data source. We only need a small number of rows to display on-screen, since the entire point is to keep recycling them. Let’s estimate a maximum of 20 rows that will be displayed on-screen at the same time. One way to make this estimate is to divide the screen height (736 logical pixels on the iPhone 6 Plus) by the height of every row (50 in our case), which amounts to about 15, and then add a few extra rows for good measure.
When these 20 rows are passed to our component as subviews on initialization, we won’t actually display them yet. We’ll just hold them in a bank of “unused cells”.
Now comes the interesting part. The native UITableView recycling works by trying to “dequeueReusableCell”. If a cell can be recycled (from a row off-screen), this method will return the recycled cell. If no cell can be recycled, our code needs to allocate a new one. Allocation of new cells only happens in the beginning until we fill the screen with visible rows. So how will we allocate a new cell? We’ll simply take one of the unused cells in our bank:
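In JavaScript terms, the dequeue-or-take-from-bank strategy might look like the following sketch. The real logic lives in the native Objective-C implementation; names like `CellBank`, `recycle`, and `dequeue` are assumptions for illustration:

```javascript
// Sketch of the recycling strategy: prefer a cell recycled from a row
// that went off-screen; if none is available, take one of the
// pre-allocated unused cells from the bank (this only happens at
// startup, until the screen fills with visible rows).
class CellBank {
  constructor(cells) {
    this.unusedCells = [...cells]; // subviews passed in on initialization
    this.recycledCells = [];       // cells whose rows scrolled off-screen
  }
  // Called when a row scrolls off-screen: its cell becomes reusable.
  recycle(cell) {
    this.recycledCells.push(cell);
  }
  // Called when a new row becomes visible.
  dequeue() {
    if (this.recycledCells.length > 0) {
      return this.recycledCells.pop(); // repurpose an off-screen cell
    }
    return this.unusedCells.pop();     // "allocate" from the bank
  }
}
```

Once the bank is exhausted and the screen is full, every dequeue is satisfied by a recycled cell, so no further allocations occur while scrolling.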
Tying It All Together
There’s one additional optimization we want to do. We want to minimize the number of re-renders. This means we only want to re-render a row after it has been recycled and re-bound.
That’s the purpose of ReboundRenderer. This simple JS component takes as props the data source row index that this component is currently bound to (the boundTo prop). It only re-renders itself if the binding changes (using the standard shouldComponentUpdate optimization):
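A sketch of that component follows. A minimal stand-in replaces `React.Component` so the snippet runs on its own; in the real component you would extend `React.Component`, and the `renderRow` prop is an assumed callback for producing the row’s content:

```javascript
// Minimal stand-in for React.Component so this sketch is self-contained;
// the actual component would `extends React.Component` instead.
class Component {
  constructor(props) { this.props = props; }
}

// ReboundRenderer re-renders only when the recycled view is re-bound
// to a different data source row (the boundTo prop changes).
class ReboundRenderer extends Component {
  shouldComponentUpdate(nextProps) {
    return this.props.boundTo !== nextProps.boundTo;
  }
  render() {
    // Assumed prop: renderRow(index) returns the content for the bound row.
    return this.props.renderRow(this.props.boundTo);
  }
}
```

Scrolling updates that leave a cell bound to the same row are therefore skipped entirely, keeping re-renders to the minimum.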
This post was written by Tal Kol
You can follow him on Twitter