Memory management in mobile apps

Throughout its lifetime, an app makes use of the device’s memory, as object instances obtain parts of it for temporary use and then return it. When an object no longer uses the memory but refuses to let it go, a memory leak is created.

Common causes of memory leaks

At the beginning of a leak the memory footprint might be small, but the constant, gradual loss of memory will eventually make the application unresponsive.

There are a few common causes of memory leaks.

Improper object disposal

Complex objects need to be disposed of properly. It’s usually a good idea to implement the IDisposable interface, which provides the Dispose() method, where we can write our custom disposal logic. This is useful because an object might need to dispose of its child objects in addition to its own resources; otherwise, a reference to the parent object might still be kept in memory. IDisposable objects are best used in conjunction with the using statement.
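As a minimal sketch (the Parent and Child classes below are hypothetical), disposing the parent inside a using block also disposes its child, so neither lingers in memory:

```csharp
using System;

// A hypothetical child resource that records whether it was disposed.
class Child : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}

// A hypothetical parent that owns a disposable child.
class Parent : IDisposable
{
    public Child Child { get; } = new Child();

    public void Dispose()
    {
        // Dispose children first, then release any resources of our own.
        Child.Dispose();
    }
}

class Program
{
    static void Main()
    {
        Child child;
        // The using statement guarantees Dispose() runs even if an
        // exception is thrown inside the block.
        using (var parent = new Parent())
        {
            child = parent.Child;
        }
        Console.WriteLine(child.Disposed); // True
    }
}
```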

Holding references to managed objects

Holding a reference to an object that we no longer use will take up unnecessary space. Objects can keep references to other objects, even indirectly. Failing to release unmanaged resources is also an issue.
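A small sketch of the problem, using a hypothetical Payload class held by a cache list: as long as the cache references the payload, the garbage collector cannot reclaim it.

```csharp
using System;
using System.Collections.Generic;

class Payload { public byte[] Data = new byte[1024]; }

class Program
{
    // A hypothetical cache holding strong references to its entries.
    static readonly List<Payload> Cache = new List<Payload>();

    static void Main()
    {
        var payload = new Payload();
        Cache.Add(payload);
        var weak = new WeakReference(payload);
        payload = null;

        // The cache still references the payload, so it survives collection.
        GC.Collect();
        Console.WriteLine(weak.IsAlive); // True: the cache keeps it alive

        // Removing it from the cache releases the last strong reference;
        // after the next collection it can be reclaimed (collection timing
        // is up to the runtime, so the second result may vary in debug builds).
        Cache.Clear();
        GC.Collect();
        Console.WriteLine(weak.IsAlive);
    }
}
```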

Not unsubscribing from events

Not removing event listeners when they are no longer used is a top cause of leaks. If an object (let’s call it a subscriber) hooks up to another object’s event (publisher), the publisher object will hold a reference to the subscriber via an event handler delegate. If the publisher lives longer than the subscriber, it will keep the subscriber alive even when there are no other references to the subscriber.

Any event listener that is created with an anonymous method or lambda expression that references an outside object will keep that object alive.
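A sketch with hypothetical Publisher and Subscriber classes shows the fix: unsubscribing with -= removes the delegate, so the publisher no longer references the subscriber.

```csharp
using System;

// A hypothetical long-lived publisher.
class Publisher
{
    public event EventHandler Tick;
    public void RaiseTick() => Tick?.Invoke(this, EventArgs.Empty);
    public bool HasSubscribers => Tick != null;
}

// A hypothetical short-lived subscriber.
class Subscriber
{
    public int Received { get; private set; }
    public void OnTick(object sender, EventArgs e) => Received++;
}

class Program
{
    static void Main()
    {
        var publisher = new Publisher();
        var subscriber = new Subscriber();

        publisher.Tick += subscriber.OnTick;    // publisher now references subscriber
        publisher.RaiseTick();
        Console.WriteLine(subscriber.Received); // 1

        // Unsubscribing removes the delegate, so the publisher no longer
        // keeps the subscriber alive.
        publisher.Tick -= subscriber.OnTick;
        Console.WriteLine(publisher.HasSubscribers); // False
    }
}
```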

An easy way to solve this is by using weak event references. Generally, weakly referenced objects can be removed from memory if they are not in use, but can also be quickly recreated when the application needs them.
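In .NET, the weak-reference idea can be sketched with WeakReference&lt;T&gt;: the reference does not keep its target alive on its own, and the target can be recreated on demand if it has been collected (the buffer below is just an illustrative stand-in).

```csharp
using System;

class Program
{
    static void Main()
    {
        var data = new byte[1024];
        // A weak reference does not root its target.
        var weak = new WeakReference<byte[]>(data);

        if (weak.TryGetTarget(out var target))
            Console.WriteLine(target.Length); // 1024 while 'data' is alive

        data = null;
        GC.Collect();

        // After a collection the target may be gone; recreate it on demand.
        if (!weak.TryGetTarget(out target))
            target = new byte[1024];
        Console.WriteLine(target.Length); // 1024 either way
    }
}
```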

Static and thread local fields

Static fields have been considered evil for good reason, as they tend to stay around throughout the entire application lifecycle. Thread local variables do not fall far behind, as they can cause memory leaks too if they are not properly disposed of.
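A small illustration (with hypothetical names): a static list roots everything added to it for the life of the app, while ThreadLocal&lt;T&gt; implements IDisposable and should be disposed when its per-thread values are no longer needed.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Program
{
    // A static field lives for the entire application lifetime; anything
    // added here is rooted until it is explicitly removed.
    static readonly List<byte[]> History = new List<byte[]>();

    static void Main()
    {
        for (int i = 0; i < 3; i++)
            History.Add(new byte[1024]); // these buffers cannot be collected

        Console.WriteLine(History.Count); // 3 — still rooted

        // ThreadLocal<T> implements IDisposable; dispose it when the
        // per-thread values are no longer needed.
        using (var local = new ThreadLocal<byte[]>(() => new byte[1024]))
        {
            Console.WriteLine(local.Value.Length); // 1024
        }

        History.Clear(); // release the rooted buffers when done
    }
}
```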

Memory profiling techniques and tools

There are two main ways to profile an app’s memory usage.

Reference counting

To be collected by the garbage collector, objects must no longer be referenced anywhere. The GC works by looking at the objects on the managed heap and seeing which are still rooted (accessible by the application via an application root). An object that is not disposed of properly will have a high number of references pointing to it.
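Rootedness can be illustrated with a WeakReference, which reports whether its target is still alive; the static Root field below is an illustrative stand-in for an application root.

```csharp
using System;

class Program
{
    static object Root; // a static field is an application root

    static void Main()
    {
        Root = new object();
        var weak = new WeakReference(Root);

        GC.Collect();
        Console.WriteLine(weak.IsAlive); // True: still rooted via the static field

        Root = null; // no roots left; the object is eligible for collection
        GC.Collect();
        // Typically False now, though collection timing is up to the runtime.
        Console.WriteLine(weak.IsAlive);
    }
}
```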

Heap analysis

Heap analysis consists of taking memory snapshots at key points in time and comparing them. Usually the culprit can be found pretty quickly, since an increased number of object instances between two snapshots indicates a memory leak.
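The idea can be approximated in code with GC.GetTotalMemory, which acts as a crude heap "snapshot": comparing two snapshots reveals roughly how much memory is retained between them (the Leak list below is an illustrative stand-in for leaking code, not a profiler API).

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // A static list simulating a leak: everything added stays rooted.
    static readonly List<byte[]> Leak = new List<byte[]>();

    static void Main()
    {
        // "Snapshot" 1: total managed heap size after a full collection.
        long before = GC.GetTotalMemory(forceFullCollection: true);

        for (int i = 0; i < 100; i++)
            Leak.Add(new byte[10_000]); // rooted allocations, ~1 MB total

        // "Snapshot" 2: the difference shows roughly how much was retained.
        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine(after - before > 900_000); // True: ~1 MB retained
    }
}
```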

Memory profiling in Xamarin

We have the option of doing this in the IDE or with a stand-alone app.

Xamarin Studio & Monotouch Profiler

Xamarin.iOS has had the advantage of profiling via the Monotouch Profiler, available in Xamarin Studio. It is based on the Mono Log Profiler and, in addition to memory profiling, it provides performance profiling. Memory snapshots can be taken at regular intervals or on demand.

Xamarin Profiler

The Xamarin Profiler comes as a standalone application, which you can download. It works for both iOS and Android, providing mostly the same functionality as the old Monotouch Profiler. Its main screen consists of two tabs: Time and Allocations. It is also capable of saving profiles and loading them for later analysis.

The Time tab is useful for recording execution times, as it indicates the methods where the app spent the most time, causing bottlenecks.

The Allocations tab shows how objects are created and disposed of. Generally, the object that leaks will have the highest number of allocated instances. In this tab, we can take memory snapshots for later comparison, and inspecting the stack trace (via Call Tree/Inverted Call Tree) to see where the allocations took place will lead us to our leak.
