Many applications allow users to process images by applying filters and to create variations of the originals. Usually that's possible thanks to the Core Image framework and the APIs it provides. Core Image contains a long list of built-in filters that we can use to modify images, and the required steps to do so are specific and straightforward.
In this post you are going to learn how to make use of built-in filters, how to supply parameters in order to achieve the desired results, as well as how to chain filters for a combined result. On top of outlining the various stages of the process, I'm also presenting a simple SwiftUI app that demonstrates how everything fits together in the context of a project. With no further delay, let's get straight to the point.
How to apply a filter
Well, it all starts with an image, which you should obtain in the most appropriate way for your app. Most often that will be a UIImage in iOS or an NSImage in macOS. In this example we are going to use a UIImage initialized with an actual image taken from the app bundle:
```swift
let image = UIImage(named: "sample_image")
```
Notice that image is an optional UIImage object.
The next step is to get a CIImage object from the original image you already have. Note that a CIImage instance (where CIImage is a class in the Core Image framework) does not represent an actual image; it contains image data that is not directly displayable, for example in a SwiftUI Image view or a UIImageView in UIKit.
To come up with a CIImage object it's necessary to perform an intermediate step: get the original image as a CGImage first, and then use that representation to initialize a CIImage instance. Doing so takes just a couple of lines of code, but be warned; the resulting CGImage is an optional object, meaning it can be nil, and don't forget that image in the previous snippet is also optional. So, the safest way to go from the original UIImage to the desired CIImage is a guard let statement (or an if let, if you prefer):
```swift
guard let cgImage = image?.cgImage else { return }
let ciImage = CIImage(cgImage: cgImage)
```
Preparing the filter
With the CIImage object handy, we can then initialize a filter using the CIFilter class. Right at initialization time we provide the name of the built-in filter we want to use as an argument. For example, the following initializes a filter that will later produce a sepia effect:
```swift
let filter = CIFilter(name: "CISepiaTone")
```
This particular filter accepts two parameter values: the first is the image the filter will be applied to as a CIImage object, while the second is the intensity of the sepia effect as a double value in the closed range [0.0, 1.0].
Important: You can find the full list of built-in filters along with their parameters, possible values and examples on this page from Apple.
Note that CIFilter is a KVO-compliant class, which means we provide parameter names and values as key-value pairs using the setValue(_:forKey:) method.
There are two ways to specify keys for filter parameters. The first is as string literals:
```swift
filter?.setValue(ciImage, forKey: "inputImage")
filter?.setValue(0.9, forKey: "inputIntensity")
```
The second way is to use the appropriate constant value contained in the Core Image framework:
```swift
filter?.setValue(ciImage, forKey: kCIInputImageKey)
filter?.setValue(0.9, forKey: kCIInputIntensityKey)
```
It really doesn't matter which approach you follow to provide keys, as long as you ensure they are correct. For the latter especially, there is a "recipe" to construct the name of the required constant from a parameter name found in the previously linked page:
- Prefix the parameter name with "kCI",
- capitalize the first letter of the parameter name,
- add the "Key" suffix at the end.
A few examples:
- inputImage: kCIInputImageKey
- inputIntensity: kCIInputIntensityKey
- inputRadius: kCIInputRadiusKey
- inputAngle: kCIInputAngleKey
In the code that follows I'm going to use the second option with the constant values, but feel free to use the other one if you're more comfortable with it.
Getting the filtered image
With the selected filter configured, the next step is to get the modified image as a CIImage object. That's easy, as there is a specific property in the CIFilter instance we can get it from, called outputImage:
```swift
guard let output = filter?.outputImage else { return }
```
Make sure to unwrap as shown above or with an if let statement, because besides the filter in this demo, outputImage is optional as well.
The final move is to get the image as a UIImage or NSImage object. To do that properly we need a Core Image context, which is represented programmatically by a CIContext instance. Creating such an instance is computationally expensive, so make sure to initialize it once and then reuse it as many times as needed:
```swift
let context = CIContext()
```
Using the context we can now get a CGImage representation of the modified image:
```swift
guard let cgImage = context.createCGImage(output, from: output.extent) else { return }
```
The output argument is the modified image as a CIImage object. The output.extent property gives us the image frame (rectangle) needed as the second argument. The resulting CGImage is an optional object and it can be nil, so make sure to proceed safely by optionally unwrapping it.
Getting a UIImage from the above is as simple as this:
```swift
let modifiedImage = UIImage(cgImage: cgImage)
```
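For reference, all the previous steps can be gathered into a single helper function. The following is a sketch of my own; the function name applySepia(to:intensity:) is an assumption and not part of this post's demo project:

```swift
import CoreImage
import UIKit

// Sketch: the whole single-filter pipeline in one function.
// The name applySepia(to:intensity:) is hypothetical, not from the demo.
func applySepia(to image: UIImage, intensity: Double = 0.9) -> UIImage? {
    // 1. UIImage -> CGImage -> CIImage
    guard let cgImage = image.cgImage else { return nil }
    let ciImage = CIImage(cgImage: cgImage)

    // 2. Configure the built-in sepia filter.
    let filter = CIFilter(name: "CISepiaTone")
    filter?.setValue(ciImage, forKey: kCIInputImageKey)
    filter?.setValue(intensity, forKey: kCIInputIntensityKey)

    // 3. Render the filter output through a CIContext.
    guard let output = filter?.outputImage else { return nil }
    let context = CIContext()
    guard let rendered = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: rendered)
}
```

In a real app you would keep the CIContext outside the function and reuse it, for the performance reason mentioned above; it is created inline here only to keep the sketch self-contained.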
All the above summarizes how to apply a single built-in filter to images in Swift. If that's what you have been looking for, you can stop here. However, if you want to see how all of this fits into the context of a simple project, keep reading. Note that the discussion about chaining filters comes right after the next part.
Applying filters in a demo SwiftUI app
The application demonstrated here is pretty simple. It displays an image, followed by a picker with various filters to select and apply:
The starting point towards its implementation is a simple enumeration where we’ll list all filters we would like to present and use:
```swift
enum Filters: CaseIterable, CustomStringConvertible {
    case none, sepia, motionBlur, colorInvert, crystallize, comic

    var description: String {
        switch self {
            case .none: return "None"
            case .sepia: return "Sepia"
            case .motionBlur: return "Motion Blur"
            case .colorInvert: return "Color Invert"
            case .crystallize: return "Crystallize"
            case .comic: return "Comic Effect"
        }
    }
}
```
There are five filters in total, as you can see in the above snippet:
- Sepia
- Motion Blur
- Color Invert
- Crystallize
- Comic
Each of the above filters accepts a different number and kind of parameters, which can be found in the link provided above.
The Filters enum conforms to the CaseIterable protocol so we can iterate through its cases in a loop. It also conforms to CustomStringConvertible in order to make the description computed property available, which provides a user-friendly text for each filter.
Besides the Filters enum, we'll also define the following ObservableObject type:
```swift
class FilterMaker: ObservableObject {
    @Published var image = UIImage()
}
```
The image property, marked with the @Published property wrapper, is going to contain the image that will be displayed in the SwiftUI part. We're about to add some quite interesting content to this class in a few moments.
On the frontend side, a single SwiftUI view is sufficient to produce the simple visual results we need. Its initial implementation is given next:
```swift
struct ContentView: View {
    @State var selectedFilter = Filters.none
    @StateObject var filterMaker = FilterMaker()

    var body: some View {
        VStack {
            Image(uiImage: filterMaker.image)
                .resizable()
                .frame(height: 250)
                .scaledToFit()

            Picker("", selection: $selectedFilter) {
                ForEach(Filters.allCases, id: \.self) { filter in
                    Text(filter.description)
                }
            }
            .padding(.top)
        }
        .padding()
    }
}
```
The currently selected filter is kept in the selectedFilter stored property, annotated with the @State property wrapper. Its value is set to none initially. Also, the filterMaker property holds a FilterMaker instance. Notice that it's marked with the @StateObject property wrapper so it remains unchanged during subsequent renderings of the view.
In a VStack container you can find both the image and the picker view. See that the source for the former is the image property of the FilterMaker instance.
Besides that, it's interesting to observe the use of the selectedFilter property, as well as how all available cases of the Filters enum are listed in the picker; the ForEach container iterates through them by accessing the allCases property, which is exposed by the CaseIterable protocol. In addition, the name of each filter is taken from the description property.
Implementing the filters
The initial implementation of the FilterMaker class is quite short. However, this is about to change, as it will contain all the filter-related code. For starters we'll add a couple of stored properties, with the first one keeping a CIContext instance:
```swift
let context = CIContext()
```
Remember what I mentioned previously: a CIContext object should be initialized only once, and then be used in multiple places.
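One common way to honor that advice app-wide is to keep the context in a single shared place. A minimal sketch, assuming a wrapper type of my own naming (not part of the demo project):

```swift
import CoreImage

// Sketch: one app-wide CIContext, created once and reused everywhere.
// The enum name ImageRendering is an assumption, not from the demo app.
enum ImageRendering {
    static let context = CIContext()
}
```

Any code that renders filter output could then call ImageRendering.context.createCGImage(_:from:) instead of creating a fresh context each time. In this demo, storing the context as a property of FilterMaker achieves the same goal on a smaller scale.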
Next, we'll add a property that will store the original, unmodified image as a UIImage object:
```swift
var originalImage: UIImage
```
In addition to the above two, we'll also define a computed property that will return the CIImage representation of the original image:
```swift
var ciImage: CIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    return CIImage(cgImage: cgImage)
}
```
Following the steps presented in the first part of this post, see that we get the CGImage representation first, which we then use to initialize and return a CIImage object.
Next, let's add an initializer to the FilterMaker class where we'll perform two actions:

- We'll load the original image from the assets catalog.
- We'll assign it to the image property so it can be displayed in the SwiftUI view.
```swift
init() {
    originalImage = UIImage(named: "sample_image") ?? UIImage()
    image = originalImage
}
```
Now, right before we implement the filters, we'll add a method that is going to be reused several times. It will accept a CIFilter as an argument, construct a new UIImage, and assign it to the image property of the FilterMaker class. Without this method, we would need to repeat the code you see next once for every filter we want to implement:
```swift
fileprivate func getImage(usingFilter filter: CIFilter?) {
    guard let output = filter?.outputImage else { return }
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return }
    image = UIImage(cgImage: cgImage)
}
```
The getImage(usingFilter:) method does nothing but follow the steps described at the beginning of this tutorial. To recap:
- In the first line we optionally unwrap the modified image and get it as a CIImage object by accessing the outputImage property of the filter object. The unwrapped value is stored in the output constant.
- In the second line we create a CGImage using the output from the previous step. Here's where the context property (the Core Image context) comes into play. The createCGImage(_:from:) method returns an optional value, so it's necessary to optionally unwrap once again. Besides the output that we provide as the first argument, the second argument is the frame of the image, fetched from the extent instance property of the CIImage class.
- Finally, in the last line we initialize a UIImage object using the CGImage representation, which is then assigned to the image property.
Time to add the filter implementations. To keep things simple, we're going to have one method per filter. Notice that the last line in each of the following methods is a call to getImage(usingFilter:) in order to get the filtered image:
```swift
func applySepia() {
    guard let ciImage else { return }

    let filter = CIFilter(name: "CISepiaTone")
    filter?.setValue(ciImage, forKey: kCIInputImageKey)
    filter?.setValue(0.9, forKey: kCIInputIntensityKey)
    getImage(usingFilter: filter)
}

func applyMotionBlur() {
    guard let ciImage else { return }

    let filter = CIFilter(name: "CIMotionBlur")
    filter?.setValue(ciImage, forKey: kCIInputImageKey)
    filter?.setValue(30.0, forKey: kCIInputRadiusKey)
    filter?.setValue(20.0, forKey: kCIInputAngleKey)
    getImage(usingFilter: filter)
}

func applyColorInvert() {
    guard let ciImage else { return }

    let filter = CIFilter(name: "CIColorInvert")
    filter?.setValue(ciImage, forKey: kCIInputImageKey)
    getImage(usingFilter: filter)
}

func applyCrystallize() {
    guard let ciImage else { return }

    let filter = CIFilter(name: "CICrystallize")
    filter?.setValue(ciImage, forKey: kCIInputImageKey)
    filter?.setValue(35, forKey: kCIInputRadiusKey)
    filter?.setValue(CIVector(x: 200, y: 200), forKey: kCIInputCenterKey)
    getImage(usingFilter: filter)
}

func applyComicEffect() {
    guard let ciImage else { return }

    let filter = CIFilter(name: "CIComicEffect")
    filter?.setValue(ciImage, forKey: kCIInputImageKey)
    getImage(usingFilter: filter)
}
```
See that all the above methods look quite similar; it's just the name of the filter and the provided key-value pairs that change. Despite any differences, they all accept the CIImage instance as the input for the filter they define.
I will intentionally avoid discussing the parameters of each filter here. I'll point you to the official documentation once again, where you can explore them thoroughly and see examples of the respective filters.
All the methods presented in the previous snippet cover all cases in the Filters enum, except one: the none case.
This case is special, as we should not apply any filter. Instead, what we need to do is much simpler: assign the original image to the image property. And for that, here's the last method in the FilterMaker class:
```swift
func removeFilter() {
    image = originalImage
}
```
Eventually, here's the FilterMaker class in its entirety:
```swift
class FilterMaker: ObservableObject {
    @Published var image = UIImage()

    var originalImage: UIImage

    let context = CIContext()

    var ciImage: CIImage? {
        guard let cgImage = originalImage.cgImage else { return nil }
        return CIImage(cgImage: cgImage)
    }

    init() {
        originalImage = UIImage(named: "sample_image") ?? UIImage()
        image = originalImage
    }

    func removeFilter() {
        image = originalImage
    }

    func applySepia() {
        guard let ciImage else { return }

        let filter = CIFilter(name: "CISepiaTone")
        filter?.setValue(ciImage, forKey: kCIInputImageKey)
        filter?.setValue(0.9, forKey: kCIInputIntensityKey)
        getImage(usingFilter: filter)
    }

    func applyMotionBlur() {
        guard let ciImage else { return }

        let filter = CIFilter(name: "CIMotionBlur")
        filter?.setValue(ciImage, forKey: kCIInputImageKey)
        filter?.setValue(30.0, forKey: kCIInputRadiusKey)
        filter?.setValue(20.0, forKey: kCIInputAngleKey)
        getImage(usingFilter: filter)
    }

    func applyColorInvert() {
        guard let ciImage else { return }

        let filter = CIFilter(name: "CIColorInvert")
        filter?.setValue(ciImage, forKey: kCIInputImageKey)
        getImage(usingFilter: filter)
    }

    func applyCrystallize() {
        guard let ciImage else { return }

        let filter = CIFilter(name: "CICrystallize")
        filter?.setValue(ciImage, forKey: kCIInputImageKey)
        filter?.setValue(35, forKey: kCIInputRadiusKey)
        filter?.setValue(CIVector(x: 200, y: 200), forKey: kCIInputCenterKey)
        getImage(usingFilter: filter)
    }

    func applyComicEffect() {
        guard let ciImage else { return }

        let filter = CIFilter(name: "CIComicEffect")
        filter?.setValue(ciImage, forKey: kCIInputImageKey)
        getImage(usingFilter: filter)
    }

    fileprivate func getImage(usingFilter filter: CIFilter?) {
        guard let output = filter?.outputImage else { return }
        guard let cgImage = context.createCGImage(output, from: output.extent) else { return }
        image = UIImage(cgImage: cgImage)
    }
}
```
Using the filters in the SwiftUI view
Every time a different option is selected in the Picker view, the selectedFilter property is updated with the new value. As a refresher, this is the selectedFilter property in the SwiftUI view of this sample app:
```swift
@State var selectedFilter = Filters.none
```
To detect changes occurring in this particular property, we'll use the onChange(of:perform:) view modifier. We'll apply it to the outermost view, the VStack container, as it doesn't actually modify any view:
```swift
VStack {
    ...
}
.onChange(of: selectedFilter) { newValue in

}
```
The newValue parameter in the closure contains the new value of the selectedFilter property. Depending on it, we'll call the proper method that applies the respective filter to the image. All that will take place in a switch statement:
```swift
.onChange(of: selectedFilter) { newValue in
    switch newValue {
        case .none: filterMaker.removeFilter()
        case .sepia: filterMaker.applySepia()
        case .motionBlur: filterMaker.applyMotionBlur()
        case .colorInvert: filterMaker.applyColorInvert()
        case .crystallize: filterMaker.applyCrystallize()
        case .comic: filterMaker.applyComicEffect()
    }
}
```
This is the only necessary addition to the SwiftUI view. Here's its entire implementation:
```swift
struct ContentView: View {
    @State var selectedFilter = Filters.none
    @StateObject var filterMaker = FilterMaker()

    var body: some View {
        VStack {
            Image(uiImage: filterMaker.image)
                .resizable()
                .frame(height: 250)
                .scaledToFit()

            Picker("", selection: $selectedFilter) {
                ForEach(Filters.allCases, id: \.self) { filter in
                    Text(filter.description)
                }
            }
            .padding(.top)
        }
        .padding()
        .onChange(of: selectedFilter) { newValue in
            switch newValue {
                case .none: filterMaker.removeFilter()
                case .sepia: filterMaker.applySepia()
                case .motionBlur: filterMaker.applyMotionBlur()
                case .colorInvert: filterMaker.applyColorInvert()
                case .crystallize: filterMaker.applyCrystallize()
                case .comic: filterMaker.applyComicEffect()
            }
        }
    }
}
```
The demo app is now ready, illustrating how to make use of built-in filters and modify an image on demand.
Note: You can download the sample project from this link.
Chaining filters
It's possible to chain multiple built-in filters and end up with a combined result. The process is probably easier than you might expect: all you need to do is provide the output of one filter as the input to the next one! The final image is taken from the last filter using the CIContext instance, as already presented in the previous parts.
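Because each filter's output simply becomes the next filter's input, this idea generalizes to any number of filters. Here's a sketch of such a generic helper; the function name and shape are my own, not part of the demo project:

```swift
import CoreImage

// Sketch: pipe a CIImage through an ordered list of pre-configured filters.
// Each filter receives the previous stage's output as its input image.
func chain(_ input: CIImage, through filters: [CIFilter?]) -> CIImage? {
    var current: CIImage? = input
    for filter in filters {
        filter?.setValue(current, forKey: kCIInputImageKey)
        current = filter?.outputImage
    }
    return current
}
```

The final CIImage returned by a helper like this would still be rendered through the CIContext, exactly as shown earlier.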
To see an example, we are going to chain two filters together: the sepia effect and the motion blur. The sample application presented previously is where this final addition will take place.
In the FilterMaker class we'll add a new method:
```swift
func chainFilters() {

}
```
We'll start by configuring a sepia filter, exactly as we have already seen earlier:
```swift
func chainFilters() {
    guard let ciImage else { return }

    let sepiaFilter = CIFilter(name: "CISepiaTone")
    sepiaFilter?.setValue(ciImage, forKey: kCIInputImageKey)
    sepiaFilter?.setValue(0.9, forKey: kCIInputIntensityKey)
}
```
As a reminder, the source ciImage is taken from this computed property:
```swift
var ciImage: CIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    return CIImage(cgImage: cgImage)
}
```
Now, instead of creating a UIImage object from the modified image produced by the above filter, we'll first ensure that we can get its output CIImage:
```swift
func chainFilters() {
    ...

    guard let sepiaOutput = sepiaFilter?.outputImage else { return }
}
```
Next, we'll initialize a new CIFilter instance and configure it to produce a motion blur effect. Here's the most important part: see that the input image provided to this new filter is the sepiaOutput fetched in the previous snippet:
```swift
func chainFilters() {
    ...

    let blurFilter = CIFilter(name: "CIMotionBlur")
    blurFilter?.setValue(sepiaOutput, forKey: kCIInputImageKey)
    blurFilter?.setValue(30.0, forKey: kCIInputRadiusKey)
    blurFilter?.setValue(20.0, forKey: kCIInputAngleKey)
}
```
Lastly, and given that the blurFilter is the last one in the chain, we can get the new UIImage object:
```swift
func chainFilters() {
    ...

    getImage(usingFilter: blurFilter)
}
```
To refresh your memory once again, this is the implementation of the getImage(usingFilter:) method we already talked about:
```swift
fileprivate func getImage(usingFilter filter: CIFilter?) {
    guard let output = filter?.outputImage else { return }
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return }
    image = UIImage(cgImage: cgImage)
}
```
Here is the chainFilters() method in one piece:
```swift
func chainFilters() {
    guard let ciImage else { return }

    let sepiaFilter = CIFilter(name: "CISepiaTone")
    sepiaFilter?.setValue(ciImage, forKey: kCIInputImageKey)
    sepiaFilter?.setValue(0.9, forKey: kCIInputIntensityKey)

    guard let sepiaOutput = sepiaFilter?.outputImage else { return }

    let blurFilter = CIFilter(name: "CIMotionBlur")
    blurFilter?.setValue(sepiaOutput, forKey: kCIInputImageKey)
    blurFilter?.setValue(30.0, forKey: kCIInputRadiusKey)
    blurFilter?.setValue(20.0, forKey: kCIInputAngleKey)

    getImage(usingFilter: blurFilter)
}
```
Using the chained filters
To keep things simple, we'll just add a Button to the SwiftUI view that will trigger the above method. We'll add it as the last view in the VStack container:
```swift
VStack {
    ...

    Button {
        filterMaker.chainFilters()
    } label: {
        Text("Chain Filters")
    }
    .padding(.top)
}
// ... view modifiers ...
```
Here's the result of the above: as soon as the "Chain Filters" button is tapped, the image is processed and the two chained filters are applied to it.
Conclusion
The steps to apply built-in filters to images in Swift are specific; the only things that change are the filter name and its parameters. It's also easy to chain filters together and come up with an image that combines them all. Hopefully, the sample app presented above helps you better understand how to use filters in a SwiftUI based project. As a final note, I would recommend taking a look at the official documentation about Core Image filters for further study on this subject.
Thank you for reading!