Exploring Images in Jetpack Compose

Jetpack Compose was announced at Google I/O 2019 and is going to change how we do UI development on Android. I’ve been working a lot with the Android image APIs while developing Coil, and I was curious how Compose treats images and how the concept of an image loader would fit in.

In the current Android UI framework, ImageView is the main way to show an image on screen. Images are represented as Drawables, which often (though not always) wrap Bitmaps. Bitmaps are raw, uncompressed pixel data stored in memory.
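
As a minimal sketch of that relationship in the classic View framework (assuming we’re inside an Activity with an imageView on screen; the resource id is just a placeholder):

// Classic framework: a Bitmap holds the raw pixels, a Drawable wraps it,
// and an ImageView draws the Drawable on screen.
val bitmap: Bitmap = BitmapFactory.decodeResource(resources, R.drawable.header) // placeholder resource
val drawable: Drawable = BitmapDrawable(resources, bitmap)
imageView.setImageDrawable(drawable)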

In Jetpack Compose there is no ImageView because there is no View. Instead, views are replaced by Composables, which define a composable piece of UI to add to the view hierarchy. Likewise, there are no Drawables. Instead, they’re replaced by Image, which is a minimal interface that wraps a NativeImage. At the moment NativeImage is defined as a typealias to Bitmap. Interestingly, NativeImage is prefixed with a commented-out expect declaration, which is a Kotlin Multiplatform keyword 🤔. Currently there isn’t any support for animated images, though I’d expect AnimatedImage to be added later. Here are the rough API analogs:

View → Composable
ImageView → the DrawImage Composable (used below)
Drawable → Image
Bitmap → NativeImage
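
Roughly, the types described above look something like this (a simplified sketch; the members I list are my assumption, not the exact dev-release API):

// Simplified sketch only; not the actual dev-release source.
// expect class NativeImage      // the commented-out multiplatform declaration mentioned above
typealias NativeImage = Bitmap   // on Android the underlying type is a Bitmap

interface Image {
    val nativeImage: NativeImage // assumed accessor for the wrapped NativeImage
    val width: Int
    val height: Int
}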

Creating Images

At the moment, there is only one Image creation function, imageResource, which synchronously loads an image from resources. The API is noted as transient and will be replaced with an asynchronous API (likely using Coroutines) in the future. However, if we want to create an Image from a URL, a file, a URI, or another data source, we’ll have to write it ourselves for now. Fortunately, we can offload the heavy lifting to an image loading library.
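
For example, loading a bundled drawable inside a Composable currently looks roughly like this (DrawImage is the composable that draws an Image on screen, which we’ll also use below; the resource name is a placeholder):

@Composable
fun Logo() {
    // imageResource synchronously decodes the drawable resource into an Image.
    // The `+` prefix is how effects are invoked in this dev release of Compose.
    val image = +imageResource(R.drawable.logo) // placeholder resource
    DrawImage(image)
}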

The easiest way to accomplish this is to write an effect. Effects are positionally memoized blocks of code that have a return value. They can be called from a Composable and will cause the composition (basically the view hierarchy) to rebuild when the effect’s return value changes. Here’s an image effect implementation backed by Coil:

/**
 * A simple [image] effect, which loads [data] with the default options.
 */
@CheckResult(suggest = "+")
fun image(data: Any) = effectOf<Image?> {
    // Positionally memoize the request creation so
    // it will only be recreated if data changes.
    val request = +memo(data) {
        Coil.loader().newGetBuilder().data(data).build()
    }
    +image(request)
}

/**
 * A configurable [image] effect, which accepts a [request] value object.
 */
@CheckResult(suggest = "+")
fun image(request: GetRequest) = effectOf<Image?> {
    val image = +state<Image?> { null }

    // Execute the following code whenever the request changes.
    +onCommit(request) {
        val job = CoroutineScope(Dispatchers.Main.immediate).launch {
            // Start loading the image and await the result.
            val drawable = Coil.loader().get(request)
            image.value = AndroidImage(drawable.toBitmap())
        }

        // Cancel the request if the input to onCommit changes or
        // the Composition is removed from the composition tree.
        onDispose { job.cancel() }
    }

    // Return null on the initial composition; the loaded Image is
    // returned once the request completes and the state updates.
    image.value
}

What do these functions do?

The first overload positionally memoizes building a Coil GetRequest from data, so the request is only recreated when data changes, then delegates to the second overload. The second overload holds the result in a state value that starts as null, launches a coroutine in onCommit to execute the request and convert the returned Drawable into an Image, and cancels the in-flight request in onDispose if the request changes or the Composable leaves the composition tree. Once the state value is set, the composition rebuilds with the loaded Image.

Cool, now let’s take a look at the JetNews sample app. At the moment it eagerly loads all its resources in MainActivity.onCreate. Using image, we can replace all the eager loading with lazy, non-blocking, asynchronous calls. Additionally, we can replace all the hardcoded resources with URLs! Here’s what PostImage looks like after being converted:

@Composable
fun PostImage(post: Post) {
    val image = +image(post.imageThumbUrl) ?: +imageResource(R.drawable.placeholder_1_1)

    Container(width = 40.dp, height = 40.dp) {
        DrawImage(image)
    }
}

Great! We’re done, right? While this will work in theory (coroutines don’t currently work with the Jetpack Compose compiler), the image function is still missing a number of features and leaves room for optimization.

Most of these issues stem from the clean separation between Compose and the traditional UI framework classes like View and Drawable. That said, separating from those classes is absolutely the right idea since they are tied to the platform, rely on inheritance, and hold a lot of internal state (View is almost 30k lines long!). Composable and Image aren’t tied to the platform, favour composition over inheritance, and hold minimal to no internal state.

Overall I’m extremely excited by the progress on Jetpack Compose and look forward to ensuring Coil works effortlessly with Compose (when it’s ready). Also, if you want to see my fork of the JetNews app with the lazy loading changes, you can find it here.

Colin White
