Swift is Open Source

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Awesome man, very interesting and cool concepts.

Semi related to this I am currently reading "Learn You a Haskell for Great Good!" for fun
Great book to get started.

What's surprisingly since Swift's launch is how many of those learning Swift have also started to learn Haskell in order to better grasp the concepts of functional programming.

Monoids in the category of endofunctors.:sick:
One of the biggest mind f..ks in functional programming is trying to understand what a monad is, and even more impossible trying to explain this to someone else; in the Swift community these articles are the least confusing (hence successful) attempts at this:

 

_kabal_

Executive Member
Joined
Oct 24, 2005
Messages
5,923
[)roi(];16793229 said:
Great book to get started.

What's surprisingly since Swift's launch is how many of those learning Swift have also started to learn Haskell in order to better grasp the concepts of functional programming.

Monoids in the category of endofunctors.:sick:
One of the biggest mind f..ks in functional programming is trying to understand what a monad is, and even more impossible trying to explain this to someone else; in the Swift community these articles are the least confusing (hence successful) attempts at this:


I tried to read those while I am sitting in transit after a very long flight. Gave up shortly after. Another time :)
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
endofunctor topics are probably more effective than counting sheep.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Here's an article I published on the challenges in light of the Swift evolution / deprecation approvals:
  • SE-0003 Removing var from Function Parameters and Pattern Matching
  • SE-0004 Remove the ++ and — operators
  • SE-0007 Remove C-style for-loops with conditions and incrementers

I focus on the impact on the removals, a possible reason why SE-0003 was approved, and code solutions to recreate the functionality of SE-0004 and SE-0007.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
I previously talked about protocol extensions coupled with generics; I thought for those who are interested it might be useful to give a few examples:

Code:
public protocol Tapable {}

extension Tapable where Self: UIView {
    init(@noescape block: Self -> Void){
        self.init()
        block(self)
    }
}

extension UILabel: Tapable {}
extension UIButton: Tapable {}

let label = UILabel() {
    $0.frame = CGRectMake(0.0, 0.0, 150.0, 24.0)
    $0.font = UIFont.boldSystemFontOfSize(13.0)
    $0.text = "Hello, World"
    $0.textAlignment = .Center
    $0.backgroundColor = UIColor.whiteColor()
}

let button = UIButton() {
    $0.frame = CGRectMake(0.0, 0.0, 150.0, 24.0)
    $0.setTitle("Click me!", forState: UIControlState.Normal)
    $0.setTitleColor(UIColor.blackColor(), forState: UIControlState.Normal)
    $0.backgroundColor = UIColor.greenColor()
}
The above example creates a new protocol called Tapable in order replicate the behaviour of the Ruby tap method, we basically do the following:
  1. define a new protocol
  2. extend the protocol with a default initialiser which is a closure marked as @noescape to indicate to the compiler that we do not intend to reprocess the closure outside of this method, but more importantly it allows us to refer to self implicitly within the closure. Note: there are two types of self: Self (initial caps) and self (lowercase). Initial caps Self refers to the type, and lowercase self refers to the current instance.
  3. we then extend to types that we want to add this functionality to with, for example:
    Code:
    extension UILabel: Tapable {}
  4. Then its ready for use, as shown in the 2 examples, which initialise instances on UILabel and UIButton. The curly braces is where the closure is contained, and in this closure we reference self by $0 (sugar syntax).
Note: If self was a tuple, we would be able to access each successive value with its numeric index or parameter name (if it had one), for example: $0.0 (references the 1st parameter in the self tuple), $0.name (references the name parameter in a tuple self)

We can further refine the inheritance of the Tapable protocol, by extending a class common to both UILabel and UIButton i.e. UIView; this would then also add this behaviour to all the UI objects that are based on UIView, which in the case of iOS is almost all of them. i.e.
Code:
extension UIView: Tapable {}

Here's another example of this behavior using a more functional then construct; also similar to Ruby's tap method, with a slightly different implementation i.e. not bound to the initialiser:
Code:
public protocol Then {}

extension Then {
    public func then(@noescape block: Self -> Void) -> Self {
        block(self)
        return self
    }
}

extension UILabel: Then {}
extension UIButton: Then {}

let label1 = UILabel().then {
    $0.frame = CGRectMake(0.0, 0.0, 150.0, 24.0)
    $0.font = UIFont.boldSystemFontOfSize(13.0)
    $0.text = "Hello, World"
    $0.textAlignment = .Center
    $0.backgroundColor = UIColor.whiteColor()
}

Again we can apply this protocol to our own objects either directly or through their protocol inheritance chain.
Btw if you wanted this behaviour to be available globally, then you could just do this:
Code:
extension NSObject: Then {}
extension NSObject: Tapable {}
Which would add this functionality to every object that inherits from NSObject, which in this case is the majority of iOS and OSX objects i.e. would apply even to NSString, NSArray, NSRegularExpression, etc. but also to all the UI objects: UILabel, UIButton, UIColor, NSColor, NSImage, etc.

Hopefully this helps you to grasp a bit of the power behind generics and POP (protocol oriented programming)

If you're unfamiliar with UIKit or AppKit, then this is the way it would have been without the Tapable or Then protocols:
Code:
var label1 = UILabel()
label1.frame = CGRectMake(0.0, 0.0, 150.0, 24.0)
label1.font = UIFont.boldSystemFontOfSize(13.0)
label1.text = "Hello, World"
label1.textAlignment = .Center
label1.backgroundColor = UIColor.whiteColor()
Now while that might not seem like you haven't saved all that much ito typing, you should at least agree its better contained in the POP examples and obvious what goes together. The biggest benefit however is what happens under the covers, as I said the @noescape is a clear instruction to the compiler that it can optimize the closure as no additional changes will be made, reducing the stack and registers, and results in faster and more optimized code both visually and in the CPU bitcode.
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Picking up from the theme "Get off Your Horse and Stop Coding like a Cowboy"; I'm considering doing a post or 2 on brevity vs. clarity, or more specifically the Swift constructs that assist in simplifying functions without reverting to unnecessarily segmented ravioli code.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
The complexity of Strings

In Swift one of the things that most programmers new to Swift will find confusing / irritating is how Swift strings are so different from Strings in other languages.

Let's start with an example:
Code:
String input = "a quick movement"

In C# and Java, we can extract the word quick respectively as follows:
Code:
String output = input.Substring(2, 5)
String output = input.substring(2, 5)

In Swift:
Code:
let input = "a quick movement"

let start = input.startIndex.advancedBy(2)
let end = start.advancedBy(5)
let range = start..<end
let output = input[range]

or all in 1 line like this:
Code:
let output2 = input[input.startIndex.advancedBy(2)..<input.startIndex.advancedBy(2).advancedBy(5)]

This should immediately bring about a WTF :sick: moment; why so complicated?

The short answer is that Swift doesn't treat characters the same way that C# and Java do. The String type in Swift is a collection of Character values. A Swift Character represents one perceived character (what a person thinks of as a single character, called a grapheme). Since Unicode often uses two or more code points (called a grapheme cluster) to form one perceived character, this implies that a Character can be composed of multiple Unicode scalar values if they form a single grapheme cluster. (Unicode scalar is the term for any Unicode code point except surrogate pair characters, which are used to encode UTF-16.)

Ok, that's a mouth full, show me an example:

C#
Code:
string nfc = "\u03D4"; // equals ϔ
string nfd = "\u03D2\u0308"; // equals ϔ
var b = nfc == nfd; // false

Swift
Code:
var nfc: String = "\u{03D4}" // equals ϔ
var nfd: String = "\u{03D2}\u{0308}" // equals ϔ
var b = nfc == nfd // true

In the above example, we assign a Greek upsilon with diaeresis and hook symbol (ϔ) to the variables nfc and nfd. We do this using different Unicode code points. Both examples do the same thing, but as you can see the results differ.

In .NET, comparison is done at byte level, so it is actually important to either normalize it before comparison or to use the IsNormalized method to check that both strings use the same Normalization Form. In Swift, the result is true because “their extended grapheme clusters are canonically equivalent”.

An extended grapheme cluster is a sequence of Unicode scalars as illustrated by the variable nfd in both examples. So when are they canonically equivalent? Apple provides the following explanation: “they have the same linguistic meaning and appearance, even if they are composed from different Unicode scalars behind the scenes.” In short Swift Character indexes are consistent irrespective of the method used to construct an extended grapheme cluster i.e. that's why Swift's index does not equal an int byte offset as it does it C# and Java.

  • characters is a collection of Character values, or extended grapheme clusters.
  • unicodeScalars is a collection of Unicode scalar values.
  • utf8 is a collection of UTF–8 code units.
  • utf16 is a collection of UTF–16 code units.
If we take the word “café”, comprised of the decomposed characters [ c, a, f, e ] and [ ´ ], here's what the various string views would consist of:
Screen Shot 2016-01-09 at 1.21.16 AM.png

As you can see depending on your point of view (or the your language's default), the above is not the same. In Swift, you can easily break a String down into its extended grapheme clusters (Character), or UTF8, UTF16 or even Unicode Scalars; these are simple .method calls on a String.

Now for some code fun:
With Swift's consistent support of extended grapheme clusters, you can even use these as part of your Swift code, for example:
Screen Shot 2016-01-09 at 1.11.16 AM.png ...and that's how you load an ark.


Ok, as a final bit let's see if Swift really has to be so complex when cutting up a String; in short no it doesn't as you can extend the language to simplify this. First thing we're going to extend string to support an Int range e.g. 2...5
Code:
extension String
{
    subscript (range: Range<Int>) -> String?
    {
        guard range.startIndex >= 0 &&
            range.endIndex <= self.characters.count else { return nil }

        let subStart = self.startIndex.advancedBy(range.startIndex)
        let subEnd = self.startIndex.advancedBy(range.endIndex)
        return self[subStart...subEnd]
    }
}
Ok whew, what does that give us, well let's take the first example, and see how the substring will now work in Swift:
Code:
let input = "a quick movement"
let output = input[2...5]  // this is now our substring command
Ok if it's that easy, you might ask why doesn't Apple just include it as part of the standard library; well the short answer is that because of the differences between Swift String and other languages, they want programmers to make an informed choice by working at index level.

In the long run this will most likely be included, but for now we're all just building our own custom standard library extensions.
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Swift Emoji Magic

Let's start with a family emoji:
Screen Shot 2016-01-09 at 3.15.06 AM.png

Now let's split them in to their parts:
Screen Shot 2016-01-09 at 4.20.10 AM.png

The result of this is:
Screen Shot 2016-01-09 at 4.01.13 AM.png

Now if we create those parts using their hexadecimal unicode character value, and join those parts together using the unicode zero width joiner:
Screen Shot 2016-01-09 at 4.17.14 AM.png


The result is, the family emoji:
Screen Shot 2016-01-09 at 3.15.06 AM.png

Note:
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Project build - Custom NSImage Filters
As mentioned before I'm going to introduce some of the Swift constructs that help with refactoring and generally improving code layout, but to hopefully keep this interesting I'm also going to do quite a bit of a ground up build of image filters, probably will cover at most two, however if you're interested I will point you at the end to a play project I'm busy with that includes many more filters, including pixel dithering.

Reason for deciding on filters for the demonstration:
Now those who have been developing for a while would know that most UI frameworks come already fully stocked with image filters so why would I want to tackle this? Although iOS and OSX provide many filters they do not cover every eventuality and there will be time where you might want a very specific type of filter i.e. used as part of a custom animation. Apple's CI framework allows you to build your own custom filter kernels, but without an understanding of how to build a filter in the first place, you're going to probably won't get it right.

Note: This is not a tutorial for CI Filter construction, but rather a back to basics filter build i.e. we're going to manipulate the pixels read directly from an image file, modify these using a formula and then writing the results back to an image file.

Construction of this will basically follow this process:
  1. Build the extensions onto NSImage to access the pixels in their most basic form i.e. RGBA (red, green, blue & alpha) values stored as UInt8 (Unassigned 8 bit Integer); 8 bits per pixel because each component value has a minimum value of 0 and maximum value of 255)
  2. Convert the pixel memory to an accessible array and tie this into a struct for easy access
  3. Reverse process i.e. pixel array back to pixel memory
  4. Build a save method for NSImage to easily write out the result to disk
  5. Build filter 1
  6. Build filter 2
  7. Refactor filter to remove duplication
Note:
  • The filters I will build will focus on applying an algorithm to a single pixel using only that pixels values i.e. for simplicity I will avoid tackling complex filters which typically utilise the surrounding pixels values as part of the algorithm.
  • If however there is interest after this; I'll be more than happy to tackle a complex filter example or two.
  • I will be trying to explain the code as I proceed, plus trying to demonstrate before and after refactoring, this process will take quite a few posts; probably a minimum of 6 as shown above.
  • Before and after filter pictures will accompany this tutorial.
  • The assumption is that the readers have some programming background, but don't let that scare you off asking questions.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Filter Project - NSImage: Routines to access raw pixel memory

In order to manipulate pixel colour component values directly we first need to load the source image into memory by creating a NSBitmapImageRep, then locking the graphic's state, drawing a composite copy of the image into the NSBitmapImageRep, flushing the buffers, releasing the graphic's state lock and finally return an instance of NSBitmapImageRep.

Code:
import AppKit

// MARK: - NSImage to NSBitmapImageRep -
public extension NSImage
{
    ///  Convert NSImage to NSBitmapImageRep
    ///  - returns: NSBitmapImageRep
    public func bitmapImageRep() -> NSBitmapImageRep?
    {
        let width = Int(self.size.width)
        let height = Int(self.size.height)
        
        guard let bitmapImageRep = NSBitmapImageRep(
            bitmapDataPlanes: nil,
            pixelsWide: width,
            pixelsHigh: height,
            bitsPerSample: 8,
            samplesPerPixel: 4,
            hasAlpha: true,
            isPlanar: false,
            colorSpaceName: NSCalibratedRGBColorSpace,
            bytesPerRow: width * 4,
            bitsPerPixel: 32) else { return nil }
        
        let graphicsContext = NSGraphicsContext(bitmapImageRep: bitmapImageRep)
        
        NSGraphicsContext.saveGraphicsState()
        NSGraphicsContext.setCurrentContext(graphicsContext)
        
        self.drawAtPoint(NSZeroPoint,
            fromRect: NSZeroRect,
            operation: NSCompositingOperation.CompositeCopy,
            fraction: CGFloat(1.0))
        
        graphicsContext?.flushGraphics()
        NSGraphicsContext.restoreGraphicsState()
        return bitmapImageRep
    }
}
Note:
  • The NSBitmapImageRep class renders an image from bitmap data.
  • Bitmap data formats supported include GIF, JPEG, TIFF, PNG, and various permutations of raw bitmap data.
  • 8 bits per sample -> the allocation for each component (red, green, blue, alpha) i.e. 0 to 255
  • 4 samples per pixel -> how many components (red, green, blue, alpha)
  • 32 bits per pixel -> 8 (bits per sample) x 4 (samples per pixel)
  • The func bitmapImageRep() -> NSBitmapImageRep? meaning it returns an optional i.e. it may or may not have an image; this we will have to test for later.

In order to make sense of these 32 bits, we need a object structure to store and manage the pixel bits; we'll call this Pixel and define it as follows:

Code:
// MARK: - NSImage Pixel Components -
public extension NSImage {
    ///  Pixel Components
    public struct Pixel {
        var red: UInt8
        var green: UInt8
        var blue: UInt8
        var alpha: UInt8

        private static func toUInt8(value value: Double) -> UInt8
        {
            return value > 1.0 ? UInt8(255) : value < 0 ? UInt8(0) : UInt8(value * 255.0)
        }

        init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8)
        {
            self.red = red
            self.green = green
            self.blue = blue
            self.alpha = alpha
        }

        init(red: Double, green: Double, blue: Double, alpha: Double)
        {
            self.red = Pixel.toUInt8(value: red)
            self.green = Pixel.toUInt8(value: green)
            self.blue = Pixel.toUInt8(value: blue)
            self.alpha = Pixel.toUInt8(value: alpha)
        }
    }
}
Note:
  • We've define 4 properties (as var because we'll want to manipulate them later), 1 for each of the components. It's important when working with image data to know the order in which the component resides in memory. From the documentation we've can be certain the order is going to be RGBA (red, green, blue and alpha), hence the struct is design to match.
  • We also implemented 2 initializers one from UInt8 and the other from Double (this is something we'll be using later); we also provide a function to convert from Double to UInt8, this work in conjunction with the second initialiser, converting from Double to UInt8. The reason will become apparent later.
  • The Double component values have a minimum value of 0.0 and a maximum value of 1.0; this happens to be the default for the way that NSColor stores in components. To convert from UInt8 to Double we simply divide by 255; the reverse Double to UInt8 is achieved by multiplying by 255

Now we need to convert our instance of NSBitmapImageRep to something we can work with i.e. something we can manipulate. This is done by retrieving data from one of it's properties, namely: bitmapData

Code:
// MARK: - NSImage to UnsafeMutablePointer<Pixel> -
public extension NSImage {
    ///  Converts NSImage to UnsafeMutablePointer<Pixel>
    ///  Used for pixel component access and/or manipulation
    ///  - returns: UnsafeMutablePointer<Pixel>
    public func pixelArray() -> UnsafeMutablePointer<Pixel>? {
        // Convert to NSBitmapImageRep For Pixel Access
        guard let imageRep = self.bitmapImageRep() else
        {
            return nil
        }
        return UnsafeMutablePointer<Pixel>(imageRep.bitmapData)
    }
}
Note:
  1. First this we do is use Swift guard statement to check if we can unwrap the optional i.e. is there a valid image stored in NSBitmapImageRep. If its ok we continue, else we return nil and exit.
  2. Swift's guard statement is similar a reverse if statement; meaning we check for the positive condition we want as opposed to the negative; it's behavior is similar to an assert in that failure must result in a break out of the current context.
  3. You can think of guard as physical body guard who checks whether conditions are favorable to continue.
  4. We then retrieve the value of the bitmapData property. This is a read only memory pointer to the bitmap data; a C array of UInt8. As we saw above one pixel represents 32 bits or in this case 4 x UInt8.
  5. Finally we recast the C memory pointer to something we can use as an array in Swift (recast to the an array of the Pixel struct we create previously): UnsafeMutablePointer<Pixel>(imageRep.bitmapData)
  6. The UnsafeMutablePointer simply implies this is a C allocated space in memory i.e. there are not safe guards that prevent you from accessing invalid data for indexes beyond the bounds of the Memory pointer; meaning we need to build this into our code.

The only thing left as part of this first part of the tutorial, is converting the Swift UnsafeMutablePointer array of Pixel back into an NSImage. To achieve this we will be using another Core Graphics function called CGBitmapContextCreate, which will allow use to reconstitute an image from an array of UInt8.

Code:
// MARK: - UnsafeMutablePointer<Pixel> to NSImage -
public extension NSImage {
    ///  Recomposites UnsafeMutablePointer<Pixel> Back To NSImage
    ///  Works in conjunction with pixelArray() functions.
    ///  - parameter pixelData: UnsafeMutablePointer<Pixel>
    ///  - parameter size: NSSize of image data contained in UnsafeMutablePointer<Pixel>, can't be computed
    ///  - returns: NSImage
    public static func recompositePixelData(
        pixelData: UnsafeMutablePointer<Pixel>,
        size: NSSize) -> NSImage? {
            let width = Int(size.width)
            let height = Int(size.height)
            let colorSpace = NSColorSpace.genericRGBColorSpace().CGColorSpace
            let bytesPerRow = sizeof(Pixel) * width
            let bitsPerComponent = 8
            let bitmapInfo = CGBitmapInfo.ByteOrder32Big.rawValue |
                CGImageAlphaInfo.PremultipliedLast.rawValue
            
            /* Create empty CGcontext */
            guard let bitmapContext = CGBitmapContextCreate(
                pixelData,
                width,
                height,
                bitsPerComponent,
                bytesPerRow,
                colorSpace,
                bitmapInfo) else
            {
                return nil
            }
            
            guard let cgImage = CGBitmapContextCreateImage(bitmapContext) else
            {
                return nil
            }

            let imageSize = NSSize(width: width, height: height)
            return NSImage(CGImage: cgImage, size: imageSize)
    }
}
Note:
  • CGBitmapContext requires a pointer to the destination in memory where the drawing is to be rendered from. The size of this memory block is bytesPerRow * height.
  • We calculate the bytes per row by simply multiply the image's width by the size (memory) of the Pixel struct.
  • The rest of the data is simply a rehashing of the NSBitmapImageRep information i.e.8 bits per component, etc...
  • We then use the CGBitmapContextCreateImage function to convert the CGBitmapContext to a CGImage
  • Finally we convert the CGImage back into an NSImage, ready for further manipulation or saving to disk.

...and that's it for this 1st part of the tutorial. The next one will deal with building a filter to manipulate the pixels for a desired result.
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Filter Project - NSImage: Sepia Filter

Before we start with the Sepia Filter, we're going to define a component struct which stores it values as Double; the reason why we are doing this is two fold:
  1. For future compatiblity with NSColor, which as previously mentioned stores its value as Double between 0.0 and 1.0
  2. The Pixel struct components are UInt8; this is a problem because during the calculation process we could easily overrun this allocation. So to avoid any unnecessary issues we convert to Double, and allocation that is far less restrictive than UInt8

Here's the code for the Color Components defined as Double, including initialisers for both UInt8 and Double, also fun to convert between UInt8 and Double i.e. divide by 255 or multiply by 255
Code:
// MARK: - Color Components -
public extension NSImage
{
    ///  Color Components
    internal struct Components
    {
        let red: Double
        let green: Double
        let blue: Double
        let alpha: Double

        private static func toUInt8(value value: Double) -> UInt8
        {
            return value > 1.0 ? UInt8(255) : value < 0 ? UInt8(0) : UInt8(value * 255.0)
        }

        private static func toDouble(value value: UInt8) -> Double
        {
            return value > 255 ? 1.0 : value < 0 ? 0.0 : Double(value) / 255.0
        }

        init(pixel: Pixel)
        {
            self.red = Components.toDouble(value: pixel.red)
            self.green = Components.toDouble(value: pixel.green)
            self.blue = Components.toDouble(value: pixel.blue)
            self.alpha = Components.toDouble(value: pixel.alpha)
        }
    }
}
Now let's look at how to implement the Sepia Filter
A very common camera-effect found in most digital camera devices is Sepia. This is a beautiful brownish oldie effect is achieved by manipulating the pixel components with these algorithms:

red.png
green.png
blue.png
In the implementation I have slightly adjusted the algorithms (sepia component value x level x 5) to allow control of the brightness using a level parameter: 0.0 = black to 1.0 = white; default is 0.2

Ok let's have a look at the code to do this:

Code:
// MARK: - Sepia Filter -
public extension NSImage
{
    ///  Convert NSImage To Sepia
    ///  - parameter level: Sepia brightness adjustment 0.0 to 0.1 (default is 0.2)
    ///  - returns: New Sepia Instance Of NSImage
    private func sepia(level level: Double = 0.2) -> NSImage?
    {
        // Create a 2D pixel Array for pixel processing
        guard let pixelArray = self.pixelArray() else
        {
            return nil
        }

        let width = Int(self.size.width)
        let height = Int(self.size.height)

        // Loop through each pixel and apply dither
        for rowIndex in 0 ..< height
        {
            for columnIndex in 0 ..< width
            {
                let offset = rowIndex * width + columnIndex
                let currentColor = pixelArray[offset]
                let components = Components(pixel: currentColor)

                let red = components.red * 0.393 * level * 5 +
                     components.green * 0.769 * level * 5 +
                     components.blue * 0.189 * level * 5

                let green = components.red * 0.349 * level * 5 +
                     components.green * 0.686 * level * 5 +
                     components.blue * 0.168 * level * 5

                let blue = components.red * 0.272 * level * 5 +
                     components.green * 0.534 * level * 5 +
                     components.blue * 0.131 * level * 5

                pixelArray[offset] = Pixel(
                           red: red,
                           green: green,
                           blue: blue,
                           alpha: components.alpha)
            }
        }

        // Recomposite image from pixelArray
        return NSImage.recompositePixelData(pixelArray, size: self.size)
    }
}
Notes:
  1. We start off by getting a Pixel array from the function we previously created called pixelArray()
  2. The we retrieve the width and height from the NSImage properties size.width and size.height
  3. We then create 2 for loops to run through all the pixels: 0 to height and 0 to width (arrays are zero indexed).
  4. We calculate the memory offset for pixel at row / column as follows: let offset = rowIndex * width + columnIndex
  5. We retrieve the values at this index with the Pixel struct (UInt8) and then convert it to the Component struct (Double)
  6. We then apply our three algorithms to the component values, creating new variables called red, green and blue
  7. The algorithms have been slightly adjusted to allow control of the brightness using the level parameter: 0.0 to 1.0 (default = 0.2)
  8. We update the memory offset with new values by linking it to an updated Pixel struct, using the variables red, green, blue and the unaltered alpha value from components.alpha
  9. After the loops complete, we recompose a NSImage from the pixel array using the function we created before called NSImage.recompositePixelData

And that's it. Here's before and after examples of an image with this filter applied:
Before:
zork3.png

After:
zork3.png
Performance of this filter is pretty quick if we consider it involves no GPU or SIMD code i.e. it runs purely on the CPU. Execution time is between 10 to 14ms (8 Core)

...and that's it for now. Next filter will be image tinting, and after that refactoring the code to improve the overall API, functionality of the filters, removing any duplication, and to cut back on the length of our functions without introducing unnecessary confusion.

Level adjustment example: level set to 0.1 (default is 0.2)
zork3a.png
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Filter Project - NSImage: Tint Filter
red.png
green.png
blue.png
In the formula above iR iG and iB represents the original color component input values for a pixel,
R% G% and B% in turn represents a colour component tint percentage.
Note that tint values are expressed as fractional values (-0.1 to 0.1).

Code:
// MARK: - Tint Filter -
public extension NSImage
{
    ///  Convert NSImage To Tint
    ///  - parameter red: amount of red to add or remove -1.0 to 1.0 (default is 0.0)
    ///  - parameter green: amount of green to add or remove -1.0 to 1.0 (default is 0.0)
    ///  - parameter blue: amount of blue to add or remove -1.0 to 1.0 (default is 0.0)
    ///  - returns: New Tint Instance Of NSImage
    private func tint(
          red red: Double = 0.0,
          green: Double = 0.0,
          blue: Double = 0.0) -> NSImage?
    {
        // Create a 2D pixel Array for pixel processing
        guard let pixelArray = self.pixelArray() else
        {
            return nil
        }

        let width = Int(self.size.width)
        let height = Int(self.size.height)

        // Loop through each pixel and apply dither
        for rowIndex in 0 ..< height
        {
            for columnIndex in 0 ..< width
            {
                let offset = rowIndex * width + columnIndex
                let currentColor = pixelArray[offset]
                let components = Components(pixel: currentColor)

                let redAdd = red > initial ? 
                          (1.0 - components.red) * red : components.red * red

                let greenAdd = green > initial ? 
                          (1.0 - components.green) * green : components.green * green

                let blueAdd = blue > initial ? 
                          (1.0 - components.blue) * blue : components.blue * blue

                pixelArray[offset] = Pixel(
                     red: red == initial ? components.red : components.red + redAdd,
                     green: green == initial ? components.green : components.green + greenAdd,
                     blue: blue == initial ? components.blue : components.blue + blueAdd,
                     alpha: components.alpha)
            }
        }
        // Recomposite image from pixelArray
        return NSImage.recompositePixelData(pixelArray, size: self.size)
    }
}
Notes:
  1. Again we start off by getting a Pixel array from the function we previously created called pixelArray()
  2. Then we retrieve the width and height from the NSImage properties size.width and size.height
  3. We then create two nested for loops to run through all the pixels: 0 ..< height and 0 ..< width (arrays are zero indexed).
  4. We calculate the memory offset for the pixel at row / column as follows: let offset = rowIndex * width + columnIndex
  5. We retrieve the values at this index with the Pixel struct (UInt8) and then convert them to the Components struct (Double)
  6. We then apply our three algorithms to the component values, creating new variables called redAdd, greenAdd and blueAdd, i.e. the amounts by which we are going to adjust each component.
  7. We have included a check to see if the adjustments are still at their initial value of 0.0, i.e. no adjustment requested; in that case we just keep the original component value.
  8. We update the memory offset with new values by assigning an updated Pixel struct, using the variables red, green, blue and the unaltered alpha value from components.alpha
  9. After the loops complete, we recompose an NSImage from the pixel array using the function we created before called NSImage.recompositePixelData
... and as you can see there is a bit of code repeated between this filter and the previous one; we'll deal with that as part of the refactor.
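The per-channel arithmetic in notes 6 and 7 can be sketched in isolation. tintComponent is a hypothetical standalone helper (not part of the filter code), shown purely to illustrate the formula:

```swift
/// Apply a tint amount (-1.0 to 1.0) to a single colour component (0.0 to 1.0).
/// Positive tints move the component toward 1.0 by a fraction of its remaining
/// headroom; negative tints scale the component down toward 0.0.
func tintComponent(component component: Double, tint: Double) -> Double
{
    guard tint != 0.0 else { return component } // no adjustment requested
    let delta = tint > 0.0
        ? (1.0 - component) * tint // headroom left, scaled by the tint
        : component * tint         // current value, scaled down
    return component + delta
}

tintComponent(component: 0.5, tint: 0.5)  // 0.75 — halfway toward full intensity
tintComponent(component: 0.5, tint: -0.5) // 0.25 — halved
```

Note that a 1.0 tint always yields full intensity and a -1.0 tint always yields 0.0, regardless of the starting value, which is why the inputs never need clamping.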

Ok let's play with the tints:
tint.png
Ok that's it for now; next step: refactoring and API design considerations.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Btw I will be posting the conclusion later tonight, and I'll also include a github link to a working Xcode playground of this.


For those that don't have OSX and/or a Mac:
I am also working on getting some GUI code to work with GTK+/GDK on Linux; however, there is a blocking bug in the LLVM compiler on Linux related to includes, so a working Linux solution will have to wait for now.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Ok at this stage we've extended NSImage with quite a bit of code, and I've shown you the results of the filters, but I forgot to show you how we use them in our own code. Another thing I neglected to include is how we save the results of our filter to file.

Btw if you wanted to know where I obtained the image: Nestor Marinero deviant art collection.

Here's the code to write an NSImage to disk:
Code:
// MARK: - Save NSImage -
public extension NSImage {
    ///  Save NSImage to file
    ///  - parameter filename:  String filename (with or without path)
    ///  - parameter imageType: NSBitmapImageFileType, e.g. NSBitmapImageFileType.NSPNGFileType
    public func save(filename: String, imageType: NSBitmapImageFileType) {
        self.bitmapImageRep()?
            .representationUsingType(imageType, properties: [:])?
            .writeToFile(filename, atomically: false)
    }
}

Here's how I ran the Tint filter (these settings produce the pinkish image):
Code:
let filename = "zork3"
let filepath = "~/Desktop/images/\(filename).png"

if let image = NSImage(contentsOfFile: filepath)?.filterTint(red: 0.5, blue: 0.5) 
{
    image.save("\(filename)-tint.png", imageType: NSBitmapImageFileType.NSPNGFileType)
}
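One caveat worth noting: Foundation's file APIs do not expand "~" automatically, so NSImage(contentsOfFile:) needs an absolute path. If the image fails to load, expand the tilde explicitly first (Swift 2-era NSString API shown):

```swift
import Foundation

// "~" is not expanded by NSImage(contentsOfFile:); expand it explicitly
// before handing the path to the initialiser.
let rawPath = "~/Desktop/images/zork3.png"
let expanded = NSString(string: rawPath).stringByExpandingTildeInPath
// expanded now starts with the user's real home directory, e.g. "/Users/..."
```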
Pinkish outcome from 50% red and 50% blue added to the image.
zork3.png


Filters can also be chained together (run one after the other):
Code:
let filename = "zork3"
let filepath = "~/Desktop/images/\(filename).png"

if let image = NSImage(contentsOfFile: filepath)?
     .filterSepia(level: 0.15)?
     .filterTint(red: 0.5, blue: 0.5)  
{
    image.save("\(filename)-sepiatint.png", imageType: NSBitmapImageFileType.NSPNGFileType)
}
Sepia 15%, red 50% and blue 50% result in an image that has brown sepia undertones and a pinkish overall tint.
zork3.png

A quick summary of what's happening in this code:
  1. We declare an immutable String variable called filename to store the source image name.
  2. We declare an immutable String variable to store the path to the source file; the code \(filename) interpolates the filename variable into the path, i.e. filepath now includes the filename.
  3. We wrap the entire initialisation of NSImage in an if let image statement because initialisation is not a guaranteed process; it could fail if the image is not found at the path specified. This construct ensures we do not execute the code in the if block when the NSImage initialisation fails.
  4. You might also be confused by the ?.filterSepia and ?.filterTint portions of the if let statement; in essence they are the same safeguards as the if let statement, i.e. they are inline guards ensuring that no filter is applied to an NSImage that didn't load and that no additional filter is executed after a previous failure. In C# or Java these constructs are similar to if (x != null) { } checks.

Finally if the initialisation was successful and the filters did not fail; we save the image to the current working folder using the original filename appended with "-sepiatint.png".
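To make step 4 concrete, here is roughly what the ?. chain desugars to: a sketch that assumes the filterSepia, filterTint and save extensions defined earlier in this thread (so it is not standalone-runnable).

```swift
import AppKit

let filename = "zork3"
let filepath = "~/Desktop/images/\(filename).png"

// Equivalent nested if-lets: each step runs only if the previous one succeeded.
if let loaded = NSImage(contentsOfFile: filepath)
{
    if let sepia = loaded.filterSepia(level: 0.15)
    {
        if let tinted = sepia.filterTint(red: 0.5, blue: 0.5)
        {
            tinted.save("\(filename)-sepiatint.png", imageType: NSBitmapImageFileType.NSPNGFileType)
        }
    }
}
```

Optional chaining collapses the three nesting levels into one expression while preserving exactly the same short-circuit behaviour.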

API:
As you can see, the API is already quite easy to use. What you might not have noticed is that all parameters with default values are optional in Swift. E.g. in the Tint filter we had three parameters: red, green and blue; as I didn't need green I simply left it out, because its declaration specifies a default value.
Note: Even defaulted parameters must be specified in the original order in which they were declared, i.e. blue cannot be entered before red or green.
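A minimal sketch of defaulted parameters, using a standalone tint stub (a hypothetical free function, not the NSImage method):

```swift
// Parameters with default values may be omitted at the call site,
// but the arguments that are supplied must keep their declared order.
func tint(red red: Double = 0.0, green: Double = 0.0, blue: Double = 0.0)
    -> (Double, Double, Double)
{
    return (red, green, blue)
}

tint(red: 0.5, blue: 0.5) // green falls back to 0.0 → (0.5, 0.0, 0.5)
```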

In the next part I will focus on the choices for the API: why this is even important, and secondly on refactoring the code (methods to improve the code, add safety, simplify, and remove duplication).
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Thought I'd share this...

[)roi(] said:
XennoX said:
Hi Droid,
So I'm seeing your posts about Swift. Can you, in a few succinct lines, explain why you consider it to be one of the greatest, if not the greatest, programming language of all time? Keep in mind, I'm a developer at heart but not an actual developer. So industry jargon might go over my head.

- X
Hi XennoX,
The "few succinct lines" is quite a big ask, especially considering you say you're "not an actual developer", as that makes it difficult to assume anything about your background. Nevertheless I'll try; I can't promise it'll be very succinct though.

Maybe a good place to start is to group languages by their inherent design limitations. For this I'm going to group some popular languages by the extent of their capability, let's start by describing the two main categories I'll be using for this split:

Systems Programming Languages (Capabilities):
  • Compiled ahead of time to native machine code; no mandatory runtime or virtual machine required.
  • Direct, deterministic control over memory and hardware.
  • Fast enough to build operating systems, drivers and other low-level components.

Non Systems Programming Languages:
  • Either cannot do, or have only limited ability to perform, the capabilities of a Systems Programming Language.
  • Either scripted or reliant on Just In Time (JIT) compilation.
  • Little or no manual memory management; most employ an automatic mechanism called Garbage Collection.
  • Slow when compared against Systems Programming Languages.

Ok now let's list the Systems Programming Languages (it's a small group):
  • ESPOL, PL/I, PL360, C, PL/S, BLISS, PLS/8, PL-6, SYMPL, C++, Ada, D, Go, Rust, Assembler
Of those, only Assembler, C, C++, Go and D are still very actively used, with C & C++ being the most popular.

The Non Systems Programming List is huge; it includes:
  • Java, C#, Python, PHP, Visual Basic, JavaScript, Ruby, and many more...

Swift is a Systems Programming Language.

Now that we've categorised Swift, the only thing left to do is to say why Swift is better than C or C++ i.e. the two category leaders in the Systems Programming Space:
  • Its speed profile after 1 year already exceeds Objective-C and C, and in many tests it is very comparable to C++. If you consider that C++ is 32 years old, then you should appreciate that Swift is only going to get faster; by all measures it should match, if not exceed, C++ performance. The reasons why Swift can get faster are technically complicated, but in short: Swift is younger and therefore has less baggage.
  • Between C, C++ and Swift, it is the only memory-safe language. What this means is that Swift will by design avoid most of the bugs that are inherent in C & C++; fewer bugs mean safer, more secure and more reliable applications.
  • It is a multi-paradigm language, supporting not only Object Oriented Programming but also Functional Programming and a new style called Protocol Oriented Programming, i.e. flexibility surpassing what C or C++ currently offer.

Ok so let me try to summarise:
  • Swift is a Systems Programming Language
  • Swift is very fast; comparable to C++
  • Swift is a safer language; it won't allow you to make most of the mistakes programmers make in C and C++
  • Swift is a new style multi paradigm language; supporting OOP, FP, and POP
  • Swift's language syntax is easy to grasp, allowing very complex models and computation to be expressed simply; hence adoption will be easier.
  • Swift after 1 year has already been ported to Linux, with rough versions working on many other operating systems and hardware: e.g. Android, CHIP, Arduino, etc.
  • Swift has been rated the most loved language.

Hope that helps.

It's much longer than XennoX wanted, but it's difficult to answer a question when I'm unsure of the person's background. If I knew they were very technical:

I probably would just have said:
"Swift is a new age Systems Programming Language with the performance characteristics of C++, offering the OOP, FP and POP paradigms whilst circumventing most of the programmer bugs inherent in C and C++"

Oops I missed an important plus point:
It's Open Source and one of the largest and most active projects on GitHub; anybody can fix bugs, write code, propose how it evolves, etc...
 