# CoreImage in Color and Depth Workshop
The workshop has been created for a local meetup in Copenhagen [Peer Lab](https://www.meetup.com/CopenhagenCocoa/events/260892812/) and the Swift conference [Swift Aveiro 2019](http://swiftaveiro.xyz) in Portugal.
The authors are [Kalle](https://twitter.com/kkabell) and [Tobias](https://twitter.com/tobiasdm) from [Kabell & Munk](https://twitter.com/kabellmunk).
......@@ -8,8 +8,8 @@ This project contains workshop material that besides this document includes a sa
This file includes:
- a theoretical introduction to Core Image,
- an introductory guide to the tasks to complete in the sample project, and
- a list of references.
## Prerequisites
The framework is also highly extensible by enabling integration of custom filters. This allows the processing graphs to stay highly performant by utilizing advanced software and hardware optimizations built into Core Image.
The closest integration of custom logic into Core Image can be accomplished using [`CIKernel`](https://developer.apple.com/documentation/coreimage/writing_custom_kernels)s. The custom logic is written in a subset of the [Metal Shading Language](https://developer.apple.com/metal/MetalCIKLReference6.pdf) specifically for Core Image Kernels.
Core Image can also be extended using [`CIImageProcessorKernel`](https://developer.apple.com/documentation/coreimage/ciimageprocessorkernel)s, which enable the integration of custom image processors into the Core Image filter chain. This includes using other image processing technologies available on Apple's platforms, such as [Metal Performance Shaders](https://developer.apple.com/documentation/metalperformanceshaders), [Core Graphics](https://developer.apple.com/documentation/coregraphics), and [Accelerate vImage](https://developer.apple.com/documentation/accelerate/vimage). One can also create completely custom CPU-based processing logic in Swift or Objective-C.
To create a filter, use:

```swift
let filter = CIFilter(name: "CIPhotoEffectNoir")
```
where the filter name is provided as a string, here `"CIPhotoEffectNoir"`.
All built-in filters can be looked up at [CIFilter.io](https://cifilter.io) by [Noah Gilmore](https://twitter.com/noahsark769). Apple's own documentation ([Core Image Filter Reference](https://developer.apple.com/library/archive/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html)) is outdated, so it lacks filters introduced in recent years.
Some simple filters just take an input image and no other arguments. Set the input image using the designated string key `"inputImage"` like this:
```swift
filter.setValue(originalImage, forKey: "inputImage")
```

```swift
filter.setValue(10, forKey: "inputRadius")
```
Here the radius is set for a filter named `"CIBoxBlur"` that also takes an `"inputImage"`. For this filter, each pixel in the `outputImage` is computed by taking the average of a square with a side length of 2 times the radius (here `10` pixels), centered at the pixel. The documentation for the keys for the built-in filters (including this one) can be found at [CIFilter.io](https://cifilter.io).
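As a rough sketch of that computation, here is a plain-Swift version for a single pixel of a grayscale grid. The function name and the clamping edge handling are illustrative assumptions, not Core Image API; the built-in filter's exact window size and edge behavior may differ.

```swift
// Average the pixels in a square window centered at (x, y),
// clamping coordinates at the image edges (illustrative edge handling).
func boxBlurPixel(_ image: [[Double]], x: Int, y: Int, radius: Int) -> Double {
    var sum = 0.0
    var count = 0
    for dy in -radius...radius {
        for dx in -radius...radius {
            let ny = min(max(y + dy, 0), image.count - 1)
            let nx = min(max(x + dx, 0), image[0].count - 1)
            sum += image[ny][nx]
            count += 1
        }
    }
    return sum / Double(count)
}
```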
The subtasks are ready for you in `Task2.swift`.
## Task 3
In tasks 1 and 2 we created a foreground and a background, and the objective now is to combine the two.

Each pixel in an image consists of four components that represent the color channels _red_, _green_, and _blue_ and the opacity channel _alpha_, often shortened to RGBA. Each channel has a value between `0` and `1`.
For example, a solid red pixel has a value `1` in the `R` and `A` channels and `0` for all other channels. A purple pixel would also have a value of `1` in its blue channel.
Let's say we are combining two images, and for a particular position we have a dark red and a dark purple pixel respectively. The `"CIMultiplyCompositing"` filter multiplies the pixel values. This is done separately in each RGBA channel, so each channel of the output pixel is the product of the corresponding channels of the two input pixels.
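As a sketch of that arithmetic in plain Swift: the `Pixel` type and the channel values below are made up for illustration; Core Image performs this per pixel on the GPU.

```swift
// Hypothetical helper type for illustration only (not Core Image API).
struct Pixel {
    var r, g, b, a: Double
}

// Multiply blending: each output channel is the product of the input channels.
func multiply(_ front: Pixel, _ back: Pixel) -> Pixel {
    Pixel(r: front.r * back.r, g: front.g * back.g,
          b: front.b * back.b, a: front.a * back.a)
}

let darkRed = Pixel(r: 0.5, g: 0.0, b: 0.0, a: 1.0)
let darkPurple = Pixel(r: 0.5, g: 0.0, b: 0.5, a: 1.0)
let blended = multiply(darkRed, darkPurple)
// blended.r == 0.25; green and blue stay 0; alpha stays 1
```

Note that multiplying always darkens: the product of two values in `0...1` is at most the smaller of the two.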
Head to the `Task3.swift` file now to find your favorite blending filter.
## Task 4
In task 3 we used the blending filters that come built-in to Core Image. Your job now is to recreate the blending filter that you have chosen, by writing the logic yourself.
To create custom image processing logic, Core Image has kernels that can be used in place of `CIFilter`s. The kernels are defined using pure functions written in Metal.
One kind of kernel is the `CIBlendKernel` whose Metal function is called once for each pixel, with the corresponding color values from a foreground image and a background image. It must return a single color value which will be used in the final image:
```c++
float4 aBlendKernel(sample_t foregroundImage, sample_t backgroundImage) {
    return float4( /* color components */ );
}
```
A color value is of type `float4`. `sample_t` is also a `float4`. The color channels are accessed as the properties `r`, `g`, `b` and `a`.
A full blend kernel function could look like this:
```c++
float4 subtract(sample_t foregroundImage, sample_t backgroundImage) {
    // Blend the images by subtracting the color components of the
    // foreground from the corresponding components in the background
    float red = backgroundImage.r - foregroundImage.r;
    float green = backgroundImage.g - foregroundImage.g;
    float blue = backgroundImage.b - foregroundImage.b;
    float alpha = backgroundImage.a;  // keep the background's alpha
    // Return a new float4
    return float4(red, green, blue, alpha);
}
```
With a kernel function, you have full control of how two images should be blended together. You are free to do any math on the color values to achieve the desired effect.
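The subtract blend above can be simulated in plain Swift for a single pixel. This is a sketch with made-up channel values; the real kernel runs in Metal once per pixel.

```swift
// Simulates the subtract blend kernel for one RGBA pixel.
// Channels are ordered [r, g, b, a] with values in 0...1.
func subtractBlend(foreground: [Double], background: [Double]) -> [Double] {
    [
        background[0] - foreground[0],
        background[1] - foreground[1],
        background[2] - foreground[2],
        background[3]  // keep the background's alpha
    ]
}

let result = subtractBlend(foreground: [0.2, 0.1, 0.0, 1.0],
                           background: [0.9, 0.5, 0.3, 1.0])
// result is approximately [0.7, 0.4, 0.3, 1.0]
```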
Go on to `Task4.swift` now and follow the steps.
## Task 5
Let's try to build our own blending filter, using the image's depth map. We will selectively decide which areas of the output image should be sampled from the two images created in task 1 and 2.
To achieve this, we will use a color kernel. A `CIColorKernel` wraps a Metal function that is called once for each pixel with the corresponding color value(s) from the input image(s), and it must return a single color value to be used in the final image.
A full color kernel function could look like this:
```c++
float4 darken(sample_t input) {
    // Halve each color component to darken the pixel (0.5 is an
    // illustrative factor)
    return float4(input.r * 0.5, input.g * 0.5, input.b * 0.5, input.a);
}

// A color kernel can take extra inputs, such as a depth map
// (illustrative body):
float4 redAndBlue(sample_t input, sample_t depthMap) {
    float depth = depthMap.r;  // assuming smaller values are closer
    return float4(1.0 - depth, 0.0, depth, input.a);
}
```
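The depth-based selection between the two images can be sketched in plain Swift for a single pixel. The function name, threshold, and the convention that smaller depth values mean closer to the camera are illustrative assumptions; the real kernel runs in Metal.

```swift
// Choose between a foreground and a background pixel based on depth.
// depth is in 0...1; pixels closer than the threshold take the foreground.
func depthBlend(foreground: [Double], background: [Double],
                depth: Double, threshold: Double = 0.5) -> [Double] {
    depth < threshold ? foreground : background
}

let near = depthBlend(foreground: [1, 0, 0, 1], background: [0, 0, 1, 1], depth: 0.2)
// near pixels sample the foreground: [1, 0, 0, 1]
let far = depthBlend(foreground: [1, 0, 0, 1], background: [0, 0, 1, 1], depth: 0.9)
// far pixels sample the background: [0, 0, 1, 1]
```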
Take your new knowledge about Metal Shading Language and depth maps with you to `Task5.swift` and follow the instructions to play around with your own image kernels.
# References
- [Core Image](https://developer.apple.com/documentation/coreimage) by Apple
- [CIFilter.io](https://cifilter.io) by [Noah Gilmore](https://twitter.com/noahsark769)
- [Processing an Image Using Built-in Filters](https://developer.apple.com/documentation/coreimage/processing_an_image_using_built-in_filters) by Apple