2013-05-23

In this tutorial, we’ll look at some of the advanced features and usage patterns of the humble UIImage class in iOS. By the end of this tutorial, you’ll have learned the following: how to make images in code, how to create resizable images for UI elements like callouts, and how to create animated images.

Theoretical Overview

If you’ve ever had to display an image in your iOS app, you’re probably familiar with UIImage. It’s the class that allows you to represent images on iOS. By far the most common way of using UIImage is quite straightforward: you have an image file in one of several standard image formats (PNG, JPEG, BMP, etc.) and you wish to display it in your app’s interface. You instantiate a new UIImage instance by sending the imageNamed: message to the class. If you have an instance of UIImageView, you can set its image property to the UIImage instance, and then stick the image view into your interface by adding it as a subview of your on-screen view:
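A minimal sketch of this procedure (the image name "photo" is a placeholder, and the code assumes it runs inside a view controller method):

```objc
// Load an image file from the app bundle (assumes "photo.png" is in the bundle).
UIImage *image = [UIImage imageNamed:@"photo"];

// Display it in an image view, and add the image view to the view hierarchy.
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.frame = self.view.bounds;
[self.view addSubview:imageView];
```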

You can also carry out the equivalent of the above procedure directly in Interface Builder.

There are some other ways of instantiating an image, such as from a URL, or from an archived image that was stored as an NSData type, but we won’t focus on those aspects in this tutorial.

Before we talk about creating images in code, recall that at the most primitive level, a 2D image really is a two-dimensional array of pixel values. The region of memory representing the pixel array for an image is often referred to as its bitmap store. This is sometimes useful to keep in mind when thinking about memory usage. However, it is important to realize that a UIImage is a higher-level abstraction than a raw pixel array, one that has been optimized according to the demands and usage scenarios of a mobile platform. While it is theoretically possible to create an image by populating an entire pixel array with values, or to reach into an existing image’s bitmap and read or modify the value of an individual pixel, it is rather inconvenient to do so on iOS and is not really facilitated by the API. However, since the majority of app developers seldom need to mess with images at the pixel level, it’s usually not an issue.

What UIImage (or more generally, UIKit and Core Graphics) does make easy is creating a new image by compositing existing images in interesting ways, or generating an image by rasterizing a vector drawing constructed with UIKit’s UIBezierPath class or Core Graphics’ CGPath... functions. If you want to write an app that lets the user create a collage of their pictures, it’s easy to do with UIKit and UIImage. If you’ve developed, say, a freehand drawing app and you want to let the user save their creation, then the simplest approach would involve extracting a UIImage from the drawing context. In the first section of this tutorial, you’ll learn exactly how both of these ideas can be accomplished!

It is important to keep in mind that a UIImage constructed this way is no different from an image obtained by opening a picture from the photo album or downloading one from the Internet: it can be archived, saved to the photo album, or displayed in a UIImageView.

Image resizing is an important type of image manipulation. Obviously, you’d like to avoid enlarging an image, because that causes the image’s quality and sharpness to suffer. However, there are certain scenarios in which resizable images are needed, and there are sensible ways to resize that don’t degrade image quality. UIImage caters to this situation by permitting images that have a resizable inner area and “edge insets” along the borders that either resize in a single direction or don’t resize at all. Furthermore, the resizing can be carried out either by tiling or by stretching the resizable portions, two somewhat different effects that can be useful in different situations.

The second section of this tutorial will show a concrete implementation of this idea. We’ll write a nifty little class that can display any amount of text inside a resizable image!

Finally, we’ll talk a bit about animating images with UIImage. As you can probably guess, this means “playing” a series of images in succession, giving rise to the illusion of animation much like the animated GIFs that you see on the Internet. While this might seem a bit limited, in simple situations UIImage‘s animated image support might be just what you need, and all it takes is a couple of lines of code to get up and running. That’s what we’ll look at in the third and final section of this tutorial! Time to roll up our sleeves and get to work!

1. Starting a New Project

Create a new iOS project in Xcode, with the “Empty Application” template. Call it “UIImageFun”. Check the option for Automatic Reference Counting, but uncheck the options for Core Data and Unit Tests.

A small note, before we proceed: this tutorial uses several sets of images, and to obtain these you’ll need to click where it says “Download Source Files” at the top of this page. After downloading and unzipping the archive, drag the folder named “Images” into the Project Navigator – the leftmost tab in the lefthand pane in Xcode. If the left pane isn’t visible, then press the key combination ⌘ + 0 to make it visible and ensure the leftmost tab – whose icon looks like a folder – is selected.

Ensure that the “Copy items into destination group’s folder (if needed)” option in the dialog box is checked.

The downloaded file also contains the complete Xcode project with the images already added to the project, in case you get stuck somewhere.

2. Creating an Image in Code

Create a new file for an Objective-C class, call it ViewController and make it a subclass of UIViewController. Ensure that the options related to iPad and XIB are left unchecked.

Replace all the code in ViewController.m with the following:
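The complete listing ships with the download; here’s a sketch of what ViewController.m contains, based on the walkthrough below (the specific colors, circle radius, number of rock images, and rock count are assumptions on my part; the numbered comments correspond to the discussion that follows):

```objc
#import "ViewController.h"

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // 1. Create an image context the size of our view.
    //    Opaque = YES, scale = 0.0 (use the device's native scale).
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, YES, 0.0);

    // 2. Fill the entire context with a background color.
    [[UIColor yellowColor] setFill];
    UIRectFill(self.view.bounds);

    // 3. Draw a circle with a thick outline and a different fill color.
    CGPoint center = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds));
    CGFloat radius = 130.0; // assumed radius
    UIBezierPath *circle = [UIBezierPath bezierPathWithArcCenter:center
                                                          radius:radius
                                                      startAngle:0.0
                                                        endAngle:2.0 * M_PI
                                                       clockwise:YES];
    [[UIColor brownColor] setFill];
    [circle fill];
    circle.lineWidth = 10.0;
    [[UIColor blackColor] setStroke];
    [circle stroke];

    // 4. Load the rock images. imageNamed: picks the @2x versions
    //    automatically on Retina devices; don't include the suffix.
    NSMutableArray *rocks = [NSMutableArray array];
    for (NSUInteger i = 1; i <= 4; i++) { // assumes four rock images
        [rocks addObject:[UIImage imageNamed:[NSString stringWithFormat:@"rock%lu", (unsigned long)i]]];
    }

    // 5. Draw randomly chosen rocks at random points inside the circle.
    for (NSUInteger i = 0; i < 100; i++) {
        UIImage *rock = rocks[arc4random_uniform((u_int32_t)rocks.count)];
        CGFloat angle = (CGFloat)arc4random_uniform(360) * (CGFloat)M_PI / 180.0f;
        CGFloat distance = (CGFloat)arc4random_uniform((u_int32_t)radius);
        CGPoint point = CGPointMake(center.x + distance * cosf(angle),
                                    center.y + distance * sinf(angle));
        [rock drawAtPoint:point];
    }

    // 6. Extract a UIImage from the contents of the context.
    UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();

    // 7. End the image context and clean up.
    UIGraphicsEndImageContext();

    // 8. Display the composed image in an image view.
    UIImageView *imageView = [[UIImageView alloc] initWithImage:composite];
    [self.view addSubview:imageView];
}

@end
```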

Configure the App Delegate to use an instance of ViewController as the root view controller by replacing the code in AppDelegate.m with the following:
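The app delegate needs only the standard boilerplate, adjusted to install our view controller; something along these lines:

```objc
#import "AppDelegate.h"
#import "ViewController.h"

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    // Install our ViewController as the root view controller.
    self.window.rootViewController = [[ViewController alloc] init];
    self.window.backgroundColor = [UIColor whiteColor];
    [self.window makeKeyAndVisible];
    return YES;
}

@end
```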

Let’s examine the code for viewDidLoad, where all the action happens. We’ll refer to the numbered comments in the code.

We want to start by drawing an image, which means we need a “canvas”. In proper terminology, this is called an image context (or bitmap context). We create one by calling the UIGraphicsBeginImageContextWithOptions() function. This function takes three arguments: a CGSize, which we’ve set to the size of our view controller’s view, meaning the entire screen; a BOOL that says whether the context is opaque; and a scale factor. An opaque context is more efficient, but you can’t “see through” it. Since there’s nothing of interest underneath our context, we set the BOOL to YES. The scale factor is a float that we set to 0.0, a value that ensures a device-specific scale: on a device with a Retina display the scale factor will be 2.0, and on a non-Retina device it will be 1.0. I’ll talk a bit more about the scale factor shortly, but for a comprehensive discussion, I’ll refer you to the official documentation (specifically, the “Points vs. Pixels” section in the Drawing and Printing Guide for iOS).

Once we create an image context this way, it becomes the current context. This is important because to draw with UIKit, we must have a current drawing context where all the implicit drawing happens. We now set a fill color for the current context and fill in a rectangle the size of the entire context.

We now create a UIBezierPath instance in the shape of a circle, which we stroke with a thick outline and fill with a different color. This concludes the drawing portion of our image creation.

We create an array of images, with each image instantiated via the imageNamed: initializer of UIImage. It’s important to observe here that we have two sets of rock images: rock1.png, rock2.png, … and rock1@2x.png, rock2@2x.png, …, the latter being twice the resolution of the former. One of the great features of UIImage is that at runtime, on a Retina device, the imageNamed: method automatically looks for an image with the @2x suffix, presumed to be of double resolution. If one is available, it is used! If the suffixed image is absent or if the device is non-Retina, then the standard image is used. Note that we don’t specify the suffix of the image in the initializer. The use of single- and double-resolution images in conjunction with the device-dependent scale (as a result of setting the scale to 0.0) ensures that the actual size of the objects on screen will be the same. Naturally, the Retina images will be crisper because of the higher pixel density.
If you view the rock images, you’ll notice that the double-resolution images are flipped with respect to the single-resolution ones. I did that on purpose so we could confirm that at runtime the correct resolution images were being used – that’s all. Normally the two sets of images would be the same (apart from the resolution).
We compose our image in a loop by placing a randomly chosen rock from our picture set at a random point (constrained to lie in a circle) in each iteration. The UIImage method drawAtPoint: draws the chosen rock image at the specified point into the current image context.

We now extract a new UIImage object from the contents of the current image context, by calling UIGraphicsGetImageFromCurrentImageContext().

The call to UIGraphicsEndImageContext() ends the current image context and cleans up memory.

Finally, we set the image we created as the image property of our UIImageView and display it on screen.

Build and run. The output should look like the following, only randomized differently:

By testing on both Retina and non-Retina devices, or by changing the device type in the Simulator under the Hardware menu, you can verify that the rocks in one version are flipped with respect to the other. Once again, I only did this so we could easily confirm that the right set of images was being picked at runtime. Normally, there’s no reason for you to do this!

To recap – at the risk of belaboring the point – we created a new image (a UIImage object) by compositing existing images on top of a drawing we made in code.

On to the next part of the implementation!

3. Resizable Images

Consider the figure below.

The left image shows a callout or “speech bubble” similar to the one seen in many messaging apps. Obviously, we would like the callout to expand or shrink according to the amount of text in it. Also, we’d like to use a single image from which we can generate callouts of any size. If we magnify the entire callout equally in all directions, the entire image gets pixelated or blurred, depending on the resizing algorithm being used. However, note the way that the callout image has been designed. It can be expanded in certain directions without loss of quality simply by replicating (tiling) pixels as we go along. The corner shapes can’t be resized without changing image quality, but on the other hand, the middle is just a block of pixels of uniform color that can be made any size we like. The top and bottom sides can be stretched horizontally without losing quality, and the left and right sides vertically. All this is shown in the image on the right-hand side.

Luckily for us, UIImage has a couple of methods for creating resizable images of this sort. The one we’re going to use is resizableImageWithCapInsets:. Here the “cap insets” represent the dimensions of the non-stretchable corners of the image (starting from the top margin and moving counterclockwise) and are encapsulated in a struct of type UIEdgeInsets composed of four floats:
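In code, you typically build the insets with the UIEdgeInsetsMake() function and pass them to resizableImageWithCapInsets:; a small sketch with placeholder values (the image name "callout" and the 15-point insets are assumptions):

```objc
// UIEdgeInsets is declared in UIKit roughly as:
//   typedef struct { CGFloat top, left, bottom, right; } UIEdgeInsets;

// Cap insets of 15 points on every side (placeholder values).
UIEdgeInsets insets = UIEdgeInsetsMake(15.0, 15.0, 15.0, 15.0);
UIImage *chrome = [[UIImage imageNamed:@"callout"] resizableImageWithCapInsets:insets];
```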

The figure below should clarify what these numbers represent:

Let’s exploit resizable UIImages to create a simple class that lets us enclose any amount of text in a resizable image!

Create a NSObject subclass called Note and enter the following code into Note.h and Note.m respectively.
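Both files are included in the download; here’s a sketch of what they contain, based on the description that follows (details such as the line-break mode and the label styling are assumptions):

```objc
// Note.h
#import <UIKit/UIKit.h>

@interface Note : NSObject

// The image view (chrome plus text) that gets displayed on screen.
@property (nonatomic, strong, readonly) UIImageView *noteView;

- (id)initWithText:(NSString *)text
          fontSize:(CGFloat)fontSize
        noteChrome:(UIImage *)chrome
        edgeInsets:(UIEdgeInsets)insets
      maximumWidth:(CGFloat)maximumWidth
     topLeftCorner:(CGPoint)topLeftCorner;

@end
```

```objc
// Note.m
#import "Note.h"

@implementation Note

- (id)initWithText:(NSString *)text
          fontSize:(CGFloat)fontSize
        noteChrome:(UIImage *)chrome
        edgeInsets:(UIEdgeInsets)insets
      maximumWidth:(CGFloat)maximumWidth
     topLeftCorner:(CGPoint)topLeftCorner
{
    self = [super init];
    if (self) {
        UIFont *font = [UIFont systemFontOfSize:fontSize];

        // Compute the size the text will occupy, constrained to the width
        // left over after subtracting the left and right insets.
        CGFloat maxTextWidth = maximumWidth - insets.left - insets.right;
        CGSize textSize = [text sizeWithFont:font
                           constrainedToSize:CGSizeMake(maxTextWidth, CGFLOAT_MAX)
                               lineBreakMode:NSLineBreakByWordWrapping];

        // Make the chrome resizable; the default resizing mode is tiling.
        UIImage *resizableChrome = [chrome resizableImageWithCapInsets:insets];

        // Size the image view so the text fits inside the insets.
        _noteView = [[UIImageView alloc] initWithImage:resizableChrome];
        _noteView.frame = CGRectMake(topLeftCorner.x, topLeftCorner.y,
                                     textSize.width + insets.left + insets.right,
                                     textSize.height + insets.top + insets.bottom);

        // Lay the text label on top of the interior area of the image.
        UILabel *label = [[UILabel alloc] initWithFrame:
                          CGRectMake(insets.left, insets.top, textSize.width, textSize.height)];
        label.text = text;
        label.font = font;
        label.numberOfLines = 0;
        label.lineBreakMode = NSLineBreakByWordWrapping;
        label.backgroundColor = [UIColor clearColor];
        [_noteView addSubview:label];
    }
    return self;
}

@end
```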

The initializer method for Note, -initWithText:fontSize:noteChrome:edgeInsets:maximumWidth:topLeftCorner:, takes several parameters: the text string to be displayed, the font size, the note “chrome” (the resizable UIImage that will surround the text), its cap insets, the maximum width the note’s image view may have, and the top-left corner of the note’s frame.

Once initialized, the Note class’s noteView property (of type UIImageView) is the user interface element that we’ll display on the screen.

The implementation is quite simple. We exploit a very useful method from NSString’s UIKit additions, sizeWithFont:constrainedToSize:lineBreakMode:, which computes the size that a block of text will occupy on screen, given certain parameters. Once we’ve done that, we construct a text label (UILabel) and populate it with the provided text. By taking into account the inset sizes and the calculated text size, we assign the label an appropriate frame and make our noteView’s image large enough (using the resizableImageWithCapInsets: method) that the label fits comfortably on top of the interior area of the image.

In the figure below, the image on the left represents what a typical note containing a few lines worth of text in it would look like.

Note that the interior has nothing of interest. We can actually “pare” the image down to its bare minimum (as shown on the right) by getting rid of all the pixels in the interior with image editing software. In fact, in the documentation Apple recommends that for best performance the tiled interior area should be only 1 x 1 pixel in size. That’s what the funny little image on the right represents, and that’s the one we’re going to pass to our Note initializer. Make sure it was added as squeezednote.png when you dragged the Images folder into your project.

In ViewController.m, enter the #import "Note.h" statement at the top. Comment out the previous version of viewDidLoad and enter the following:
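A sketch of the replacement (the note text, font size, insets, and positions are placeholder values):

```objc
- (void)viewDidLoad
{
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor whiteColor];

    UIImage *chrome = [UIImage imageNamed:@"squeezednote"];
    UIEdgeInsets insets = UIEdgeInsetsMake(15.0, 15.0, 15.0, 15.0); // assumed cap insets

    // A short note...
    Note *shortNote = [[Note alloc] initWithText:@"Hi there!"
                                        fontSize:17.0
                                      noteChrome:chrome
                                      edgeInsets:insets
                                    maximumWidth:300.0
                                   topLeftCorner:CGPointMake(10.0, 30.0)];
    [self.view addSubview:shortNote.noteView];

    // ...and a longer one, to watch the chrome grow around the text.
    Note *longNote = [[Note alloc] initWithText:@"This is a longer note whose chrome stretches to fit several lines of text without any blurring or pixelation."
                                       fontSize:17.0
                                     noteChrome:chrome
                                     edgeInsets:insets
                                   maximumWidth:300.0
                                  topLeftCorner:CGPointMake(10.0, 120.0)];
    [self.view addSubview:longNote.noteView];
}
```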

We’re simply creating Note objects with different amounts of text. Build, run, and observe how nicely the “chrome” around each note resizes to accommodate the text inside its boundaries.

For the sake of comparison, here’s what the output would look like if “squeezednote.png” were configured as a “normal” UIImage instantiated with imageNamed: and resized equally in all directions.

Admittedly, we wouldn’t actually use a “minimal” image like “squeezednote” unless we were using resizable images in the first place, so the effect shown in the previous screenshot is greatly exaggerated. However, the blurring problem would definitely be there.

On to the final part of the tutorial!

4. Animated Images

By animated image, I actually mean a sequence of individual 2D images that are displayed in succession. This is basically the sprite animation used in many 2D games. UIImage has a class method, animatedImageNamed:duration:, to which you pass a string that represents the prefix of the sequence of images to be animated; so if your images are named “robot1.png”, “robot2.png”, …, “robot60.png”, you’d simply pass in the string “robot”. The duration of the animation is also passed in. That’s pretty much it! When the image is added to a UIImageView, it continuously animates on screen. Let’s implement an example.

Comment out the previous version of viewDidLoad and enter the following version.
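A sketch of this final version (the handler name handleTap: and the 1.5-second duration are assumptions; the essential call is the animatedImageNamed:duration: line):

```objc
- (void)viewDidLoad
{
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor blackColor];

    // Detect taps anywhere on the view.
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
    [self.view addGestureRecognizer:tap];
}

- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    CGPoint tapPoint = [recognizer locationInView:self.view];

    // If an explosion is already going on at this point, remove it.
    for (UIView *subview in self.view.subviews) {
        if (CGRectContainsPoint(subview.frame, tapPoint)) {
            [subview removeFromSuperview];
            return;
        }
    }

    // Otherwise, start a new explosion animation centered on the tap.
    // animatedImageNamed: loads the numbered explosion images in sequence.
    UIImage *explosion = [UIImage animatedImageNamed:@"explosion" duration:1.5];
    UIImageView *explosionView = [[UIImageView alloc] initWithImage:explosion];
    explosionView.center = tapPoint;
    [self.view addSubview:explosionView];
}
```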

We added a set of PNG images to our project, explosion1.png through explosion81.png, which represent an animated sequence of a fiery explosion. Our code is quite simple. We detect a tap on the screen and either place a new explosion animation at the tap point or, if there was already an explosion going on at that point, remove it. Note that the essential code consists of just creating an animated image via animatedImageNamed:duration:, to which we pass the string @"explosion" and a float value for the duration.

You’ll have to run the app on the simulator or a device yourself in order to enjoy the fireworks display, but here’s an image that captures a single frame of the action, with several explosions going on at the same time.

Admittedly, if you were developing a fast-paced action game such as a shoot ‘em up or a side-scrolling platformer, UIImage‘s support for animated images would seem quite primitive, and it wouldn’t be your go-to approach for implementing animation. That’s not really what UIImage is built for, but in other, less demanding scenarios it might be just the ticket! Since the animation runs continuously until you remove the animated image or the image view from the interface, you can make an animation stop after a prescribed time interval by sending a delayed message with -performSelector:withObject:afterDelay: or by using an NSTimer.
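For example, a single delayed message (using the explosionView variable from the sketch above) would remove an explosion after one pass through its frames:

```objc
// Remove the explosion after its duration has elapsed (1.5 seconds here).
[explosionView performSelector:@selector(removeFromSuperview)
                    withObject:nil
                    afterDelay:1.5];
```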

Conclusion

In this tutorial, we looked at some useful but less well-known features of the UIImage class that can come in handy. I suggest you take a look at the UIImage Class Reference, because some of the features we discussed have more options. For example, images can be composited using one of several blending options. Resizable images can be configured in one of two resizing modes, tiling (which is the one we used implicitly) or stretching. Even animated images can have insets. We didn’t talk about the underlying CGImage opaque type that UIImage wraps. You need to deal with CGImages if you program at the Core Graphics level, the C-based API that sits one level below UIKit in the iOS framework stack. Core Graphics is more powerful than UIKit to program with, but not quite as easy. We also didn’t talk about images created with data from a Core Image object, as that would make more sense in a Core Image tutorial.

I hope you found this tutorial useful. Keep coding!
