Swift is Open Source

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Got stuck with some work last night and today; hence the delay in completing the filter project.

I'll push to try to complete some of this tonight, or hopefully everything by tomorrow. The Xcode Playground I'll share via GitHub is almost complete, and I've taken the liberty of adding most of the basic filters:
  • Gamma
  • Inversion
  • Black/White (binary)
  • Grayscale
  • Brightness
  • Color Shading
  • Solarization
  • Contrast
  • Tint
  • Sepia

I'm also considering including an example of more advanced / complex filters: pixel dithering using the following algorithms:
  • Atkinson
  • Floyd Steinberg
  • Burkes
  • Sierra
  • Sierra Two Row
  • Sierra Lite
  • Stucki
  • Jarvis Judice Ninke

Hope you like it.
 

[)roi(]

Filter Project Final leg - Refactoring

Ok let's start off with a review of what refactoring means; according to Wikipedia:
Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior. Refactoring improves nonfunctional attributes of the software. Advantages include improved code readability and reduced complexity; these can improve source code maintainability and create a more expressive internal architecture or object model to improve extensibility.

You may ask why we even need to worry about a separate refactoring step: "I'm a skilled programmer and I tend to do all of this while I'm coding."

The trouble is that when you are trying to get a program to work, you're not thinking about a future developer. Yet it's likely that someone will eventually try to read your code to make a change. Refactoring with that person in mind can make the difference between the change taking them a week and taking them an hour.

Remember, computers really only need this:
[attached screenshot: machine code]

Language syntax (variable names, classes, functions, ...) exists so that programmers can reason about and make sense of code.
Any fool can write code that a computer can understand. Good programmers write code that humans can understand. (Martin Fowler)

Now let's have a look at some of the approaches programmers take to refactoring:
  • A function should do just one thing; a class may group several related ones.
  • Anything longer than "x" lines is a target for refactoring.
  • Focus on implementing test cases, finding bugs, and improving performance.
Pragmatically, refactoring should be a combination of all of these; in this write-up, however, I will not be showing anything about:
  • the testability of code
  • the performance of code (e.g. SIMD routines and/or writing GPU kernels)
Occasionally you may also hear a programmer say: "why do I need to bother with refactoring when I write lots of comments; they can just read those". But then we should remember:
  • Comments get outdated first; programmers rarely go back and update their comments.
  • Long comments are usually a good indication that code is too complex and/or badly named.
[attached image: "the real coder"]

Ok in the next post we'll look at some of the original code vs. a refactored one.
 

[)roi(]

Filter Project Final leg - Refactoring: Tint vs Sepia Filter (Part 1)

If we look at the Sepia and Tint functions, we basically see two loops encapsulating two different algorithms; the loops themselves are, for all intents and purposes, the same, i.e. we have duplication. So let's look at how we avoid that:

Code:
for rowIndex in 0 ..< height
{
    for columnIndex in 0 ..< width
    {
         // Unique filter algorithm goes here
    }
}

In object oriented programming, there are many publicised ways to tackle this; these approaches are more commonly referred to as patterns, for example: Builder, Bridge, Factory, Visitor, etc...

Well, this is not going to be just another post about OOP; this is Swift, so we're instead going to find a more functional way to approach this. However, before we jump the gun, let's make sure we look at all the parts of the two functions, i.e. are they really the same except for the algorithm part?

If we look at their definitions:
Code:
private func tint(red red: Double = 0.0, green: Double = 0.0, blue: Double = 0.0) -> NSImage?
versus:
Code:
private func sepia(level level: Double = 0.2) -> NSImage?

We can clearly see that whilst most of the internal mechanics are similar, the input is not:
  • Tint requires three values: red, green and blue
  • Sepia requires one: the level
This is usually where traditional OOP modelling comes unhinged, because we essentially need to construct parts of the problem using different inputs, then pass that on to something that is the same (the loop). Ok, so what now? What does functional programming do in these cases?

Currying to the rescue...
Wikipedia says: "In mathematics and computer science, currying is the technique of translating the evaluation of a function that takes multiple arguments (or a tuple of arguments) into evaluating a sequence of functions, each with a single argument."

In mathematical terms:
Given a function f of type (X * Y) --> Z , currying it makes a function curry(f) : X --> (Y --> Z)
That is, curry(f) takes an argument of type X and returns a function of type Y --> Z .


Basically it allows us to break something up into components, and then, by using a defined construction sequence, we can build different things. Many originally non-functional languages have adopted these functional paradigms, for example Java and JavaScript.
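In Swift terms, with a toy add function (current syntax; this is my illustration, not code from the filter project), the difference between f and curry(f) looks like this:

```swift
// The (X * Y) -> Z form: both arguments at once
func add(_ x: Int, _ y: Int) -> Int { return x + y }

// The curried X -> (Y -> Z) form: take x, return a function awaiting y
func curriedAdd(_ x: Int) -> (Int) -> Int {
    return { y in x + y }
}

let addFive = curriedAdd(5)   // an (Int) -> Int function with x already fixed
print(addFive(3))             // 8
print(curriedAdd(1)(2))       // 3
```

The partially applied addFive can be stored and passed around before it's finished; that's exactly the property we'll exploit for the filters.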

Ok so after the theory let's move onto exploring this in Swift:
Swift currently has two ways to define a function for currying; however, because one of these is being deprecated in version 3, I will only discuss the remaining method, which is, by the way, the one I prefer as it mirrors the mathematical syntax.

Ok, let's try to make sense of this. Our functions are basically (X * Y) --> Z:
  • X is the input parameters used together with a unique algorithm
  • Y is the internal process they share, including the two loops
  • Z is the output, i.e. NSImage?

Ok, so basically we need to take out the algorithm and package it with the unique input X; then we pass this as a lambda-style function to the shared function (the processing loops).

Btw, a lambda function is a function that you can write inline in your source code (usually to pass in to another function, similar to the idea of a functor or function pointer). A good example is passing an array to a function along with a comparison lambda, and getting back a sorted array.
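That "pass an array and a lambda, get back a sorted array" example looks like this in current Swift:

```swift
let numbers = [5, 2, 9, 1]
// The lambda (closure) is written inline and passed straight to sorted(by:)
let ascending = numbers.sorted(by: { a, b in a < b })
print(ascending)  // [1, 2, 5, 9]
```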

Due to forum post length limits, this will be continued in another post.
 

[)roi(]

Filter Project Final leg - Refactoring: Tint vs Sepia Filter (Part 2)
continued ...

Ok, let's try not to confuse matters with more category theory than we actually need.

First, let's start with the Sepia filter: extracting what is specifically unique (input + algorithm) and then recreating this as a compositional-style function (lambda).

Here's the original function for the Sepia filter; the unique sections (the level parameter in the signature, and the per-pixel algorithm inside the loops) are what we'll use to construct our new lambda function:
Code:
// MARK: - Sepia Filter -
public extension NSImage
{
    ///  Convert NSImage To Sepia
    ///  - parameter level: Sepia brightness adjustment 0.0 to 1.0 (default is 0.2)
    ///  - returns: New Sepia Instance Of NSImage
    private func sepia(level level: Double = 0.2) -> NSImage?
    {
        // Create a 2D pixel Array for pixel processing
        guard let pixelArray = self.pixelArray() else
        {
            return nil
        }

        let width = Int(self.size.width)
        let height = Int(self.size.height)

        // Loop through each pixel and apply sepia
        for rowIndex in 0 ..< height
        {
            for columnIndex in 0 ..< width
            {
                let offset = rowIndex * width + columnIndex
                let currentColor = pixelArray[offset]
                let components = Components(pixel: currentColor)

                let red = components.red * 0.393 * level * 5 +
                     components.green * 0.769 * level * 5 +
                     components.blue * 0.189 * level * 5

                let green = components.red * 0.349 * level * 5 +
                     components.green * 0.686 * level * 5 +
                     components.blue * 0.168 * level * 5

                let blue = components.red * 0.272 * level * 5 +
                     components.green * 0.534 * level * 5 +
                     components.blue * 0.131 * level * 5

                pixelArray[offset] = Pixel(
                           red: red,
                           green: green,
                           blue: blue,
                           alpha: components.alpha)
            }
        }

        // Recomposite image from pixelArray
        return NSImage.recompositePixelData(pixelArray, size: self.size)
    }
}
...and here's our lambda Sepia function:
Code:
static func sepia(level level: Double) -> Components -> Pixel
{
    return { (components: Components) -> Pixel in
         let red = components.red * 0.393 * level * 5 +
              components.green * 0.769 * level * 5 +
              components.blue * 0.189 * level * 5

         let green = components.red * 0.349 * level * 5 +
              components.green * 0.686 * level * 5 +
              components.blue * 0.168 * level * 5

         let blue = components.red * 0.272 * level * 5 +
              components.green * 0.534 * level * 5 +
              components.blue * 0.131 * level * 5

         return Pixel(red: red, green: green, blue: blue, alpha: components.alpha)
    }
}
Ok, first off you must remember that lambda functions are designed to do only one thing, to consistently provide the same result given the same input, and to use no external references, i.e. the function should depend only on its inputs so that this consistency is guaranteed.

Note: it is this consistency that is the appeal of functional programming; we make simple functions that provide consistent results for the same input, and they're extremely easy to test. In comparison, OOP is difficult to test, especially when you're dealing with code that needs to support multithreading. Pure FP functions are multithreading-ready -- no extra code required.

Now let's try to make sense of the new bits:
Code:
static func sepia(level level: Double) -> Components -> Pixel
Here we define a func called sepia (it's static to exclude internal references); the next part says we provide a level as input, and get back an inner function which expects Components before it can apply the algorithm, the result of which is a Pixel. This same pattern is replicated internally with this:
Code:
return { (components: Components) -> Pixel in
i.e. this internal block of code (the algorithm) will not be executed until it also receives the Components, after which it adjusts the values and returns a Pixel. This can be made simpler to read by using an alias for the compounded type; here's the code rewritten using a type alias:
Code:
typealias FilterTransform = (Components -> Pixel)

static func sepia(level level: Double) -> FilterTransform
{
    return { (components: Components) -> Pixel in
         let red = components.red * 0.393 * level * 5 +
              components.green * 0.769 * level * 5 +
              components.blue * 0.189 * level * 5

         let green = components.red * 0.349 * level * 5 +
              components.green * 0.686 * level * 5 +
              components.blue * 0.168 * level * 5

         let blue = components.red * 0.272 * level * 5 +
              components.green * 0.534 * level * 5 +
              components.blue * 0.131 * level * 5

         return Pixel(red: red, green: green, blue: blue, alpha: components.alpha)
    }
}
Here you can see we added the line typealias FilterTransform = (Components -> Pixel); this essentially creates a new type which is equivalent to the function type (Components -> Pixel). This becomes useful when you start working with many complex input and output currying scenarios, but even in this case it makes it easier to understand that this lambda function returns a FilterTransform, i.e. a Components -> Pixel transform process.

Yes, I know this can be very confusing at first, but the more you use it the easier it becomes; Swift in that regard has one of the easier representations of this.
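To see the two-step application in isolation, here's a self-contained sketch with stand-in types and a made-up brightness-style transform (the names here are illustrative, not the project's actual declarations), written in current Swift closure syntax:

```swift
// Stand-in types so this sketch compiles on its own; the real Components
// and Pixel types live in the filter project.
struct Components { var red, green, blue, alpha: Double }
struct Pixel { var red, green, blue, alpha: Double }
typealias FilterTransform = (Components) -> Pixel

// A made-up brightness-style transform in the same curried shape
func brighten(amount: Double) -> FilterTransform {
    return { c in
        Pixel(red: c.red + amount, green: c.green + amount,
              blue: c.blue + amount, alpha: c.alpha)
    }
}

// Step 1: fix the input, producing a FilterTransform ...
let transform = brighten(amount: 0.1)
// Step 2: ... then apply it later, once per pixel
let result = transform(Components(red: 0.2, green: 0.3, blue: 0.4, alpha: 1.0))
print(result.red)  // ≈ 0.3
```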

Now let's do the same for Tint:
Code:
static func tint(red red: Double, green: Double, blue: Double) -> FilterTransform
{
   // Default / Initial value
   let initial = 0.0

   return { (components: Components) -> Pixel in
     let redAdd = red > initial ? (1.0 - components.red) * red : components.red * red
     let greenAdd = green > initial ? (1.0 - components.green) * green : components.green * green
     let blueAdd = blue > initial ? (1.0 - components.blue) * blue : components.blue * blue

     return Pixel(
        red: red == initial ? components.red : components.red + redAdd,
        green: green == initial ? components.green : components.green + greenAdd,
        blue: blue == initial ? components.blue : components.blue + blueAdd,
        alpha: components.alpha)
   }
}
Note: aside from the lambda extraction, I have also fixed a bug in the original code, i.e. to avoid accidentally applying a tint to a color component that wasn't specified (see the code related to the let initial = 0.0 declaration in the return).

Ok, last step: we need to create a function that accepts the FilterTransform type, runs it through the pixel-processing loop, and then outputs a new NSImage.
Code:
private func processFilterTransform(filter filter: FilterTransform) -> NSImage?
{
     // Create a 2D pixel Array for pixel processing
     guard let pixelArray = self.pixelArray() else 
     {
         return nil
     }

     let width = Int(self.size.width)
     let height = Int(self.size.height)

     // Loop through each pixel and apply the filter transform
     for rowIndex in 0 ..< height
     {
        for columnIndex in 0 ..< width
        {
            let offset = rowIndex * width + columnIndex
            let currentColor = pixelArray[offset]
            let components = Components(pixel: currentColor)
            pixelArray[offset] = filter(components)
        }
     }

     // Recomposite image from pixelArray
     return NSImage.recompositePixelData(pixelArray, size: self.size)
}
As you can see, this is the code that was duplicated, except that we now receive as input a function of type FilterTransform, which we then apply internally to each pixel (the filter(components) call).

In the next article, hopefully out later today, I will start to explore the API, i.e. how we call this in our code; plus I'll get to some more of the logic I applied to the refactoring, i.e. are we done, or do we make it smaller?
 

[)roi(]

Filter Project Final leg - Refactoring: Tint vs Sepia Filter (Part 3)
Ok, so now that we've applied currying to our filters and removed the duplication, does that mean we're finished?

Nope, unfortunately not; we still need to:
  • Check our code for bugs, including implementing mechanisms to trap errors at runtime (e.g. incorrect input values)
  • Assess the ease of using our API
  • Hide unnecessary implementation details from the visible API with access control (public, private, internal)

Ok let's review how we used to call our filters, and how that looks after the currying refactor.
Before currying:
Code:
let image = NSImage(contentsOfFile: filePath)?.filterTint(red: 0.5, blue: 0.5)
After currying:
Code:
let image = NSImage(contentsOfFile: filePath)?.processFilterTransform(filter: NSImage.Transform.tint(red: 1.0, green: 0.1, blue: 0.1))
Whew, that's a long line of code; and what's worse, we've changed the API, meaning any code that was using this API would now also have to be updated. So how do we fix this?
We build a wrapper function that behaves the same as the original one:
Code:
public func tint(red red: Double = 0.0, green: Double = 0.0, blue: Double = 0.0) -> NSImage?
{
    let filter = Transform.tint(red: red, green: green, blue: blue)
    return processFilterTransform(filter: filter)
}
What's more, we can now mark all the currying functions as private, i.e. hide the implementation from our public API.

Ok, all the parameters should only accept values between -1.0 and 1.0; but if you look at the code, there is nothing preventing higher or lower values, and by not preventing this we could end up with unexpected results, or even worse, a crash.

Here's the code reworked for safety. I've also introduced another parameter called threshold, which allows us to control which pixels we're going to affect. For example, a value of 0.3 means that any component value below 0.3 will not be affected by the filter, which, for a brightness filter, lets us avoid turning black colors into greys.

Code:
private extension NSImage.Transform
{
    // Color Tint
    static func tint(
        red red: Double,
        green: Double,
        blue: Double,
        threshold: Double) -> FilterTransform
    {
        // Default / Initial value
        let initial = 0.0

        return { (components: Components) -> Pixel in
            guard components.red >= threshold
                && components.green >= threshold
                && components.blue >= threshold else
            {
                return components.toPixel()
            }

            let redAdd = red > initial ? (1.0 - components.red) * red : components.red * red
            let greenAdd = green > initial ? (1.0 - components.green) * green : components.green * green
            let blueAdd = blue > initial ? (1.0 - components.blue) * blue : components.blue * blue

            return Pixel(
                red: red == initial ? components.red : components.red + redAdd,
                green: green == initial ? components.green : components.green + greenAdd,
                blue: blue == initial ? components.blue : components.blue + blueAdd,
                alpha: components.alpha)
        }
    }
}
The guard in this example is used to ensure the filter is not applied to pixel values below a nominated value.

Then in the wrapper function we also want to validate that all input values are within range:
Code:
// MARK: - Tint Filter -
public extension NSImage
{
    public func filterTint(
        red red: Double = 0.0,
        green: Double = 0.0,
        blue: Double = 0.0,
        threshold: Double = 0) -> NSImage?
    {
        precondition(red.isWithinLimits() && green.isWithinLimits() &&
            blue.isWithinLimits() && threshold.isWithinLimits(),
            "Input values out of scope -1.0 to 1.0")

        let filter = Transform.tint(
            red: red,
            green: green,
            blue: blue,
            threshold: threshold)
        return processFilterTransform(filter: filter)
    }
}
Swift's precondition is a perfect way to catch API misuse at runtime: if the check fails, execution stops with the supplied message. The isWithinLimits function is implemented as follows:
Code:
private extension Double {
    func isWithinLimits() -> Bool
    {
        return self >= -1.0 && self <= 1.0
    }
}
This is an extension on Double, i.e. we can use it to validate that any Double is within the -1.0 to 1.0 range; however, because it's marked private, Swift's access control ensures that nothing outside the current .swift file will see this function, i.e. it's only available to the filters, thereby not cluttering Double's API for the rest of your .swift files or for the users of your API.

That pretty much wraps up the refactoring for the filters; suffice to say that only our wrapper functions are exposed as part of our API, so we achieved the following:
  • Removed duplication by using function currying
  • Hid implementation details by applying access controls: public & private
  • Implemented safeguards with guard and precondition
  • Simplified compound function types by using typealias
  • Extended Double privately to add validation code for use only in the current .swift file

I have published a full Xcode project on GitHub; as mentioned previously, this covers not only an extended set of simple single-pixel filters, but also an example of a more complex dither filter using a pixel error matrix:

Basic filters:
  • Gamma
  • Inversion
  • Black/White (binary)
  • Grayscale
  • Brightness
  • Color Shading
  • Solarization
  • Contrast
  • Tint
  • Sepia

Dither filter algorithms:
  • Atkinson
  • Floyd Steinberg
  • Burkes
  • Sierra
  • Sierra Two Row
  • Sierra Lite
  • Stucki
  • Jarvis Judice Ninke

The project presents a single scrollable window which I randomly load with 1 of 6 pictures:
[attached screenshot of the project window]

To these I've applied 7 different filter combinations:
Code:
        //  Sepia Filter
        let imageFilter1 = image
            .filterSepia(level: 0.34, threshold: 0.00)?
            .border(inset: Helper.borderInset, radius: Helper.borderRadius)

        //  Color Tint Filter
        let imageFilter2 = image
            .filterTint(red: 0.5, blue: 0.5, threshold: 0.01)?
            .border(inset: Helper.borderInset, radius: Helper.borderRadius)

        //  Color Shading Filter
        let imageFilter3 = image
            .filterShading(green: -0.8, blue: 0.9, threshold: 0.01)?
            .border(inset: Helper.borderInset, radius: Helper.borderRadius)

        //  Gamma Filter
        let imageFilter4 = image
            .filterGamma(level: 0.8, threshold: 0.00)?
            .border(inset: Helper.borderInset, radius: Helper.borderRadius)

        //  Atkinson Dither Filter, Binary Filter (Black & White)
        let imageFilter5 = image
            .dither(NSImage.DitherMethod.Atkinson)?
            .filterBinary(level: 0.335, transparent: false)?
            .border(inset: Helper.borderInset, radius: Helper.borderRadius)

        //  Color Solarize Filter
        let imageFilter6 = image
            .filterSolarize(red: 0.2, green: 0.2, blue: 0.1, threshold: 0.01)?
            .border(inset: Helper.borderInset, radius: Helper.borderRadius)

        //  Jarvis Judice Ninke Dither Filter, Binary Filter, Tint Filter
        let imageFilter7 = image
            .dither(NSImage.DitherMethod.JarvisJudiceNinke)?
            .filterBinary(level: 0.98, threshold: 0.0, transparent: true)?
            .filterTint(red: 0.5, blue: 0.5)?
            .border(inset: Helper.borderInset, radius: Helper.borderRadius)

Project GitHub link: https://github.com/AfricanSwift/FilterPlay
 

[)roi(]

A few examples of images and filters. I suggest you download the project and play with it: changing settings, etc. Let me know if you need any help getting it to compile or run.
[attached images: six filter output examples]
 

[)roi(]

In the Filter project I mentioned that the speed of the filters was quite fast, even considering I wasn't using SIMD or GPU kernels. So in thinking about what could be a good topic for the next article, I thought: why not tackle a feature that not many programmers know about or have used in their code? Plus, it would be a chance to see if we can improve on the speed of the filters.

Warning: it's going to be more complicated, as we'll be accessing C APIs, passing Array memory pointers, and very likely doing some pointer arithmetic.

So what does Wikipedia say about SIMD:
Single instruction, multiple data (SIMD), is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. Thus, such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but only a single process (instruction) at a given moment. SIMD is particularly applicable to common tasks like adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions in order to improve the performance of multimedia use.

So what does Wikipedia say about DSP:
A digital signal processor (DSP) is a specialized microprocessor (or a SIP block), with its architecture optimized for the operational needs of digital signal processing.
[image: a DSP chip]
The goal of DSPs is usually to measure, filter and/or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but dedicated DSPs usually have better power efficiency thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs often use special memory architectures that are able to fetch multiple data and/or instructions at the same time.
These are hidden performance gems that many programmers just don't exploit; don't be fooled by all the talk about multimedia and signal-processing stuff. Many well-known applications, e.g. Microsoft Office, exploit this for uses far wider than just media.

Here's a simple example:
Let's say you have an array of Double values, and you want to find the sum of all of the values. How much faster will SIMD / DSP do this versus standard CPU routines?
[attached screenshot: sum benchmark timing table]
Note: times are shown in milliseconds

Wait, that can't be true? You're probably wondering just how bad Swift's standard CPU routines are. Actually, they tend to run faster than C code, and in most cases are comparable to C++, i.e. fast by normal standards; but clearly, as this table shows, the parallel processing ability of SIMD and DSP blows this out of the water.

Well, what's the catch? Why isn't everyone using this?
  • Some just don't know about it.
  • It's far more complicated to use than standard C#, Java or Swift code.
  • As you'll see in the upcoming article, its use requires a lot of finesse, meaning that adapting your applications to it is not always easy.
  • That difficulty means there's more chance to trip yourself up, and your code will be more complex, i.e. it can become a maintenance concern if you're the only one who understands SIMD and DSP programming.
Many of these issues can however be overcome by creating your own wrapper functions that mirror common built-in functions in both name and behaviour (e.g. reduce, map, sort, etc.)
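As a sketch of that wrapper idea (this assumes macOS with the Accelerate framework; fastSum is a made-up name, not an existing API):

```swift
import Accelerate

// A hypothetical wrapper that hides the vDSP call behind a familiar,
// reduce-like name on Array.
extension Array where Element == Double {
    func fastSum() -> Double {
        var result = 0.0
        // Vectorized sum of the whole array, stride 1
        vDSP_sveD(self, 1, &result, vDSP_Length(self.count))
        return result
    }
}

let values = [1.0, 2.0, 3.0, 4.0]
print(values.fastSum())  // 10.0
```

Callers get the speed without having to know anything about vDSP.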

Ok, so to conclude this bit: I will be attempting to improve the performance of the previous filters using SIMD / DSP coding; the application will be built in Xcode with Swift, using Apple's aptly named Accelerate framework.

For those who don't have a Mac but would like to attempt this in their own code: have a look at the OpenCV (Open Source Computer Vision) framework, which provides similar functionality to Apple's Accelerate framework for Windows, Linux, Mac, Android and iOS, with framework access from C++, C, Python, Java, C#, VB.Net, etc.
 

[)roi(]

The previous sum results were taken from a test conducted in July 2015, so I decided to rerun the test to see if any improvements had been made in the Swift standard libraries:

Steps to reproduce:
  1. Double Array loaded with 100 Million randomly selected numbers in the range 0 to 10 Million
  2. Sum the values and return the result.

All solutions successfully produce the same summed result; however, SIMD / DSP (Digital Signal Processing) was, even against its closest competitor, 11 times faster, and ridiculously faster than the higher-order Reduce function and the index-generator style for-in loop.

I also tried a 1 Billion number array, with the random numbers generated from 0 to 100 Million:
  • SIMD / DSP completed that in ~3 seconds, whereas I gave up on the Reduce function after 10 minutes.
Here's a Gist link to the code for this test. The winning code looks surprisingly simple, but this was an easy case; it's going to get worse with a more complex process like the filters.
Code:
var sumDSP = 0.0
vDSP_sveD(array, 1, &sumDSP, vDSP_Length(array.count))

n            maxValue    Sum Method          time (ms)         sum result         Delta
100,000,000  10,000,000  DSP                 63.9510154724121  737678700000000.0  Fastest
100,000,000  10,000,000  Reduce              37653.8549661636  737678700000000.0  ~588x
100,000,000  10,000,000  For in Loop         31799.5789647102  737678700000000.0  ~497x
100,000,000  10,000,000  C-Style For Loop    786.953985691071  737678700000000.0  ~11x
100,000,000  10,000,000  For in Indices      795.915007591248  737678700000000.0  ~11x
100,000,000  10,000,000  While C-Style Loop  3426.88101530075  737678700000000.0  ~49x
100,000,000  10,000,000  While Loop          770.466029644012  737678700000000.0  ~11x
Here's another example: sorting data:
Code:
//Load Array with Random Data
let arraySize = 100000000
let largestRandom = 10000000

// This returns an Array of Doubles (100 Million, randomly picked values between 0 to 10 Million)
var array = (0 ..< arraySize).map { _ in
    return Double(arc4random_uniform(UInt32(largestRandom)))
}

Now the code for Swift's standard library sort versus DSP:
Code:
// Swift sort
let swiftSort = array.sort(<)

//Swift sortInPlace
array.sortInPlace(<)

// DSP Sort
vDSP_vsortD(&array2, UInt(arraySize), 1)

n            maxValue    Sort Method        time (ms)         Delta
100,000,000  10,000,000  DSP                21342.4749970436  Fastest
100,000,000  10,000,000  Swift sort         382038.519978523  ~18x
100,000,000  10,000,000  Swift sortInPlace  372151.732981205  ~17x
Clearly one of the reasons DSP is faster (not the only one) is that the sort is done in place; unfortunately the Accelerate framework doesn't provide a sort that isn't in place. Swift's stdlib does have an in-place sort, so that's included in the comparison; the result was only about 3% better than its immutable copying sort.
 

[)roi(]

Well, so far SIMD / DSP probably looks so easy to use that you might be wondering why we continue with scalar CPU operations at all; that's what I'll try to explain in this post.

So how does SIMD operate?
SIMD is short for Single Instruction, Multiple Data; the term SIMD operations refers to a computing method that processes multiple data sets with a single instruction. That's in contrast with conventional scalar operations, which use one instruction to process each individual data set.
[diagram: four scalar adds versus a single SIMD add]
With conventional scalar operations, four add instructions must be executed one after another to obtain the sums shown in the diagram above. SIMD uses only one add instruction to achieve the same result. As we've seen with the last two DSP examples (sum and sort), in these circumstances SIMD operations yield far higher efficiency than scalar operations.
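As an aside: Swift has since gained SIMD vector types in its standard library (Swift 5, well after this post was written) that express the "one instruction, many lanes" idea directly:

```swift
// Four lanes of Doubles in a single vector value
let a = SIMD4<Double>(1, 2, 3, 4)
let b = SIMD4<Double>(10, 20, 30, 40)

// One vector add across all four lanes at once
let c = a + b  // (11, 22, 33, 44)
print(c)
```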

Sounds too good to be true, what's the catch with SIMD Operations?
Despite the advantage of being able to process multiple data sets per instruction, SIMD operations can only be applied to certain predefined processing patterns. The previous diagram revealed one such pattern where the same add operation is performed for all the supplied data, but that's not always the case as shown in this next diagram:
image010.jpg
SIMD operations cannot process multiple data in different ways in a single step; the diagram above shows a typical scenario where SIMD is ineffective. Taking the filters as an example: we have a set of pixel data we want to apply an algorithm to, but we must not forget that each pixel consists of four components:
  • Red
  • Green
  • Blue
  • Alpha

On top of that, the algorithm is usually different for each component, so to take advantage of SIMD for these filters we would need to:
  • Separate the components
  • Run each component set through SIMD to adjust the values
  • Represent the adjustment as a single operation, so that each component set passes through SIMD only once
  • Finally, remerge the split components back into pixel data.
image015.jpg
Understandably this is complex, as we need to maintain each pixel's component positional integrity throughout the process. Whether all this additional work nullifies the SIMD benefits is unclear; that's something I should be able to confirm once I start tackling this project.
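The four steps above can be sketched in plain (scalar) Swift; a real SIMD version would replace these loops with vDSP calls, but it must preserve exactly the same positional layout:

```swift
// Two RGBA pixels, interleaved as they sit in memory
let rgba: [UInt8] = [10, 20, 30, 255,  40, 50, 60, 255]
let pixelCount = rgba.count / 4

// 1. Separate the components into planes
var red = [Float](), green = [Float](), blue = [Float]()
for i in 0..<pixelCount {
    red.append(Float(rgba[i * 4]))
    green.append(Float(rgba[i * 4 + 1]))
    blue.append(Float(rgba[i * 4 + 2]))
}

// 2 & 3. Apply a single-operation adjustment per component set (red only here)
red = red.map { $0 * 1.5 }

// 4. Remerge, preserving each component's original position (alpha untouched)
var merged = rgba
for i in 0..<pixelCount {
    merged[i * 4] = UInt8(max(0, min(255, red[i])))
}

print(merged)  // [15, 20, 30, 255, 60, 50, 60, 255]
```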

Note:
  • One of the filter types I built in the previous project was a group of dithering filters; these are not easily implemented with SIMD or GPU kernels because they are error-diffusion algorithms: the error calculated for the current pixel (using a matrix over its adjacent pixels) is shifted forward / downwards onto the next adjacent set of pixels, so the error cannot be calculated for each pixel in isolation.
  • Blurring, in contrast, while also using an adjacent-pixel matrix, carries no error forward to the next pixels, which makes SIMD and GPU kernels feasible for blurring filters; this process of using a matrix to determine the current pixel's adjustment is commonly called convolution.
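To illustrate the difference, here's a naive scalar sketch of convolution: a 3x3 box blur where every output pixel is an average of the original neighbours only, so each pixel can be computed independently (which is exactly what makes it SIMD/GPU friendly). The function name and the grayscale-grid representation are my own illustrative choices:

```swift
// Naive 3x3 box blur on a grayscale grid; edges average only the
// neighbours that exist. Each output depends solely on the ORIGINAL
// image, so no diffused error is carried forward between pixels.
func boxBlur(_ image: [[Float]]) -> [[Float]] {
    let h = image.count, w = image[0].count
    var output = image
    for y in 0..<h {
        for x in 0..<w {
            var sum: Float = 0
            var count: Float = 0
            for dy in -1...1 {
                for dx in -1...1 {
                    let ny = y + dy, nx = x + dx
                    if ny >= 0 && ny < h && nx >= 0 && nx < w {
                        sum += image[ny][nx]
                        count += 1
                    }
                }
            }
            output[y][x] = sum / count
        }
    }
    return output
}

let blurred = boxBlur([[4, 4], [4, 4]])
print(blurred)  // [[4.0, 4.0], [4.0, 4.0]]
```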
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Procedural steps to implement a single pixel filter in SIMD:

As you can hopefully see in the diagram below, the process is quite a bit more complicated than the previous scalar one, in that it involves bit-level manipulation:
  • A total of six memory copies are executed, the last three in situ:
    1. Red, Green and Blue: UInt8 --> Float
    2. Red, Green and Blue: Float --> UInt8
  • Memory is modified in place a total of six times:
    1. Red, Green and Blue: color component algorithm
    2. Red, Green and Blue: floor/ceiling bounds adjustments.
SIMD Filter Process.jpg

Some decisions you might be wondering about:
  1. Why did you convert from UInt8 to Float?
  2. Why did we need to offset the pointer?

Why did you convert from UInt8 to Float?
Basically the space in memory is too limited; a UInt8 has a total of 8 bits (1s and 0s), which means the largest number it can store is 255, as shown in the diagram below:
Screen Shot 2016-01-18 at 2.59.01 AM.png
Why that is a problem might not be immediately clear, so let me explain: pixel color component values range from 0 (black) to 255 (white).

If, for example, I apply a brightness filter of 10%, I am boosting the component values by 10%:
  • If the current value is 20, a 10% boost makes it 20 + (20 * 10%) = 22. That doesn't seem to be a problem.
Now ask yourself what happens when you boost a component value of 240 by 50%?
  • 240 + (240 * 50%) = 360; but 360 cannot be stored in a UInt8, which can only hold a maximum value of 255.
This is what we refer to as an overflow, i.e. when the allocated space is too small to accommodate the number we're trying to store. Try to imagine squeezing a size 10 foot into a size 6 shoe: the CPU simply chops off everything that doesn't fit, and like the bloody mess that would ensue if you chopped off your toes, an overflow on the computer is never a good result. I'll try to demonstrate why:

Number | 512 | 256 | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1
240    |  0  |  0  |  1  |  1 |  1 |  1 | 0 | 0 | 0 | 0
360    |  0  |  1  |  0  |  1 |  1 |  0 | 1 | 0 | 0 | 0

The 512 and 256 columns are the bloody zone, i.e. the bits that get chopped off to fit into the UInt8 allocation space. As you can see, 240 fits perfectly (it loses no toes), but 360 loses a big toe: the 1 bit that represents 256. After the chop (losing this bit), 360 becomes just 104 (the sum of the remaining bits). This means that by applying a 50% brightness filter your picture would end up darker than it was, i.e. 104 is closer to 0 (black). What should have happened is that the value 240 is increased only until it reaches the maximum value 255 (white), and then stops.

Converting to Float helps us avoid this, because Float can store much larger values, which we can later clamp to the bounds: 0 to 255.
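A quick illustration of the chop versus the clamp; the modulo here mimics what discarding the high bits does to 360, and the clampToByte closure is my own illustrative helper:

```swift
// Overflow: the CPU keeps only the low 8 bits, so 360 wraps to 104
let boosted = 240 + 240 / 2        // 360
let chopped = boosted % 256        // what a UInt8 overflow would leave behind

// The fix: work in Float, then clamp back into the 0...255 range
let clampToByte: (Float) -> UInt8 = { value in
    UInt8(max(0, min(255, value)))
}
let clamped = clampToByte(Float(240) * 1.5)

print(chopped, clamped)  // 104 255
```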

Why did we need to offset the pointer?
If you look at the following diagram, you will see that green is to the right of red, and blue is to the right of green. Stated in memory terms we would say:
  • Red is located at the target address (what we call an address pointer)
  • Green is located at the same address pointer + 1
  • Blue is located at the same address pointer + 2
Pointer offset.png
This is the standard way to address data stored in memory, i.e. by its offset from a base address. It's not limited to SIMD; it's the standard way all data is addressed, and it helps explain why arrays typically start at index 0.
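In code, that offset arithmetic looks like this: for pixel i in an interleaved RGBA buffer, each component lives at a fixed offset from the pixel's base index (i * 4):

```swift
let pixels: [UInt8] = [10, 20, 30, 255,  40, 50, 60, 255]

// Component index = pixel base (i * 4) + fixed component offset
let i = 1                        // second pixel, base index 4
let red   = pixels[i * 4]        // base + 0
let green = pixels[i * 4 + 1]    // base + 1
let blue  = pixels[i * 4 + 2]    // base + 2
let alpha = pixels[i * 4 + 3]    // base + 3

print(red, green, blue, alpha)  // 40 50 60 255
```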
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Here's a rough example of what this process looks like using SIMD in Swift.
The output follows the code; as you can see, the alpha values remain unaffected by the 70% boost, while the values for red, green and blue have been adjusted as expected.
Code:
import Accelerate

// Build an input Array of UInt8 with incremental sets of 4 values
var input = 0.stride(to: 60, by: 10).map { (v: Int) -> [UInt8] in
        (1...4).map { UInt8(v + $0) }
    }.flatMap { $0 }

// Calculate the number of pixels (can also be done by multiplying width * height)
let componentQuantity = input.count / 4

// Create Float arrays for each component
var red = Array(count: componentQuantity, repeatedValue: Float(0))
var green = Array(count: componentQuantity, repeatedValue: Float(0))
var blue = Array(count: componentQuantity, repeatedValue: Float(0))

// Advance input pointer to match component offset
let greenOffset = UnsafePointer<UInt8>(input).advancedBy(1)
let blueOffset = UnsafePointer<UInt8>(input).advancedBy(2)

// Extract Components into separate Float arrays
vDSP_vfltu8(input, 4, &red, 1, UInt(componentQuantity))
vDSP_vfltu8(greenOffset, 4, &green, 1, UInt(componentQuantity))
vDSP_vfltu8(blueOffset, 4, &blue, 1, UInt(componentQuantity))

// Example of boosting values by 70%
let redFactor = 1.7
let greenFactor = 1.7
let blueFactor = 1.7

// Define component adjustment for filter
let redAdjust = [Float(redFactor)]
let greenAdjust = [Float(greenFactor)]
let blueAdjust = [Float(blueFactor)]

// Multiply components by their filter adjustments
vDSP_vsmul(red, 1, redAdjust, &red, 1, UInt(componentQuantity))
vDSP_vsmul(green, 1, greenAdjust, &green, 1, UInt(componentQuantity))
vDSP_vsmul(blue, 1, blueAdjust, &blue, 1, UInt(componentQuantity))

// Floor and Ceiling limit clipping
let lowerLimit = [Float(0.0)]
let upperLimit = [Float(255.0)]
vDSP_vclip(red, 1, lowerLimit, upperLimit, &red, 1, UInt(componentQuantity))
vDSP_vclip(green, 1, lowerLimit, upperLimit, &green, 1, UInt(componentQuantity))
vDSP_vclip(blue, 1, lowerLimit, upperLimit, &blue, 1, UInt(componentQuantity))

// Advance output pointer to match component offset
let greenShift = UnsafeMutablePointer<UInt8>(input).advancedBy(1)
let blueShift = UnsafeMutablePointer<UInt8>(input).advancedBy(2)

// Convert Float components to UInt8 using offset
vDSP_vfixu8(red, 1, &input, 4, UInt(componentQuantity))
vDSP_vfixu8(green, 1, greenShift, 4, UInt(componentQuantity))
vDSP_vfixu8(blue, 1, blueShift, 4, UInt(componentQuantity))
Output:
Code:
----------------------------------------------------------------------------------------------------
input: [1, 2, 3, 4, 11, 12, 13, 14, 21, 22, 23, 24, 31, 32, 33, 34, 41, 42, 43, 44, 51, 52, 53, 54]

// Here we separated the 3 color components 
----------------------------------------------------------------------------------------------------
  red: [1.0, 11.0, 21.0, 31.0, 41.0, 51.0]
green: [2.0, 12.0, 22.0, 32.0, 42.0, 52.0]
 blue: [3.0, 13.0, 23.0, 33.0, 43.0, 53.0]

// Here we apply the 70% boost of the 3 color components, we separate this because some 
// filters allow us to apply different boost % to each component
----------------------------------------------------------------------------------------------------
  red: [1.7, 18.7, 35.7, 52.7, 69.7, 86.7]
green: [3.4, 20.4, 37.4, 54.4, 71.4, 88.4]
 blue: [5.1, 22.1, 39.1, 56.1, 73.1, 90.1]

// Here you can see the 3 color components have changed, but the 4th component (alpha) keeps its original value
----------------------------------------------------------------------------------------------------
input: [1, 3, 5, 4, 18, 20, 22, 14, 35, 37, 39, 24, 52, 54, 56, 34, 69, 71, 73, 44, 86, 88, 90, 54]
The next step will be to convert one of the previous filters to employ this approach.
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
SIMD Filters - Conclusion
Finally found some spare time, so decided to complete a SIMD conversion of the Tint filter.

The results:
Image          | Size   | Dimensions   | Scalar time (ms) | SIMD time (ms)   | Delta
Yosemite       | 81.2Mb | 10000 x 7000 | 19040.6669974327 | 14288.6779904366 | 1.33
Tiger          | 11Mb   | 7680 x 4320  | 11387.0120048523 | 7013.61399888992 | 1.62
Mona Lisa      | 8.2Mb  | 2008 x 2478  | 2343.37902069092 | 1700.62202215195 | 1.38
Zork 1         | 1.3Mb  | 771 x 1000   | 398.251950740814 | 276.935994625092 | 1.44
Calavera skull | 243Kb  | 275 x 396    | 112.679958343506 | 72.2289681434631 | 1.56
As you can see, the improvement is not as substantial as for the sum loop and the sort; the reason, as I explained in the previous post, is the 12 SIMD operations we need to execute to implement the filter, whereas the sort and the sum loop each required only one. Given the complexity alone, I'm not sure it's really worth the effort for image processing where all three channels need to be altered separately.

Note: I suspect we'd get far better results from a multi-threaded implementation of these filters than from SIMD; that could be another posting.

Contrast, brightness and binary-type filters should however be substantially faster in SIMD, as all three channels are affected by the same formula, i.e. we should only require 4 SIMD operations as opposed to 12.
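As a scalar sketch of why the same-formula case is cheaper: one pass, one formula for red, green and blue, with only the alpha bytes skipped. The function name and the clamp are my own illustrative choices:

```swift
// Apply one brightness formula to R, G and B, skipping every alpha byte
func brighten(_ pixels: [UInt8], by factor: Float) -> [UInt8] {
    var output = pixels
    for i in 0..<output.count where i % 4 != 3 {
        let boosted = Float(output[i]) * factor
        output[i] = UInt8(max(0, min(255, boosted)))
    }
    return output
}

let result = brighten([100, 100, 100, 255], by: 1.5)
print(result)  // [150, 150, 150, 255]
```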

Anyway here's the code for this:

Code:
import Accelerate
import AppKit

// MARK: - Tint Filter -
public extension NSImage
{
    ///  Convert NSImage To Tint
    ///  - parameter red: amount of red to add or remove -1.0 to 1.0 (default is 0.0)
    ///  - parameter green: amount of green to add or remove -1.0 to 1.0 (default is 0.0)
    ///  - parameter blue: amount of blue to add or remove -1.0 to 1.0 (default is 0.0)
    ///  - returns: New Tint Instance Of NSImage
    private func filterTintDSP(
        red red: Double = 0.0,
        green: Double = 0.0,
        blue: Double = 0.0) -> NSImage?
    {
        // Create a 2D pixel Array for pixel processing
        guard let pixelArray = self.pixelData() else
        {
            return nil
        }

        let tally = UInt(self.size.width * self.size.height)

        // Create Float arrays for each component
        var redPixels = Array(count: Int(tally), repeatedValue: Float(0))
        var greenPixels = Array(count: Int(tally), repeatedValue: Float(0))
        var bluePixels = Array(count: Int(tally), repeatedValue: Float(0))

        // Advance input pointer to component offset
        let redInputOffset = UnsafePointer<UInt8>(pixelArray)
        let greenInputOffset = UnsafePointer<UInt8>(pixelArray).advancedBy(1)
        let blueInputOffset = UnsafePointer<UInt8>(pixelArray).advancedBy(2)

        // Extract Components into separate Float arrays
        vDSP_vfltu8(redInputOffset, 4, &redPixels, 1, tally)
        vDSP_vfltu8(greenInputOffset, 4, &greenPixels, 1, tally)
        vDSP_vfltu8(blueInputOffset, 4, &bluePixels, 1, tally)

        // Define component adjustments
        let adjust = (red: [Float(red)], green: [Float(green)], blue: [Float(blue)])

        // Multiply components by their adjustments
        vDSP_vsmul(redPixels, 1, adjust.red, &redPixels, 1, tally)
        vDSP_vsmul(greenPixels, 1, adjust.green, &greenPixels, 1, tally)
        vDSP_vsmul(bluePixels, 1, adjust.blue, &bluePixels, 1, tally)

        // Floor and Ceiling limit clipping
        let limit = (lower: [Float(0)], upper: [Float(255)])
        vDSP_vclip(redPixels, 1, limit.lower, limit.upper, &redPixels, 1, tally)
        vDSP_vclip(greenPixels, 1, limit.lower, limit.upper, &greenPixels, 1, tally)
        vDSP_vclip(bluePixels, 1, limit.lower, limit.upper, &bluePixels, 1, tally)

        // Convert Float components to UInt8 and merge back into pixelArray
        let pixels = UnsafeMutablePointer<UInt8>(pixelArray)
        let greenOutputOffset = pixels.advancedBy(1)
        let blueOutputOffset = pixels.advancedBy(2)

        vDSP_vfixu8(redPixels, 1, pixels, 4, tally)
        vDSP_vfixu8(greenPixels, 1, greenOutputOffset, 4, tally)
        vDSP_vfixu8(bluePixels, 1, blueOutputOffset, 4, tally)

        // Recomposite image from pixelArray
        return NSImage.recomposite(pixels, size: self.size)
    }
}

If there's any interest in my demonstrating how to build a GPU kernel filter, let me know; the complexity is quite similar to SIMD.
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Here's a brief update on Swift since going Open Source in December:

Language Evolution Update:
http://ericasadun.com/2016/01/12/whats-up-in-swift-evolution/

Swift News Letter
https://github.com/pepperdog/TWISt-shout/blob/master/2016/TWISt-shout-2016-01-18.md

Swift 2.2 has been released for both OS X and Linux:
It will be a mostly source-compatible release with Swift 2.1, containing a large number of core improvements (e.g., bug fixes, enhancements to diagnostics, faster generated code) without many significant visible changes to the language itself.

Swift 3 API Design Guidelines
Swift 3 is going to break your code. It's a given; it was a given when the Swift team launched the open-source effort in December.
Much of the code that is going to break will already warn about deprecation in Swift 2.2; the breakage is the result of a global Swiftification of the OS X and iOS frameworks (you can read about this in the Swift 3 API Design Guidelines). All these changes should be fixable by an automatic Swift 3 code-update menu option.

The other major parts of the Swift 3 timeline (roughly September) are the completion of:
A Few Extras:
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Interesting stuff dude. Thanks for posting.
My pleasure; I find it's a great way to explore a new language and build my familiarity with it.

If there's anything you'd like explored just let me know.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Functional Programming
So for today's topic I thought it might be interesting to talk about Functional Programming; specifically what it is, why it's become so popular and which languages support it.

First a bit of history:
The concepts behind Functional Programming are nothing new; its roots go back to lambda calculus, a formal system developed in the 1930s to investigate:
  • Computability,
  • Entscheidungsproblem,
  • Function definition,
  • Function application,
  • Recursion
Computability
Computability is the ability to solve a problem in an effective manner.
It is a key topic of the field of computability theory within mathematical logic and the theory of computation within computer science. The computability of a problem is closely linked to the existence of an algorithm to solve the problem. The most widely-studied models of computability are the Turing-computable and μ-recursive functions, and the lambda calculus, all of which have computationally equivalent power. Other forms of computability are studied as well: computability notions weaker than Turing machines are studied in automata theory, while computability notions stronger than Turing machines are studied in the field of hypercomputation.

Entscheidungsproblem or simply "decision problem"
"A decision problem: a way to decide whether a formula is true or provable within a given system"
The Entscheidungsproblem (German, "decision problem") is a famous problem of mathematics. David Hilbert formulated the problem in 1928: Is there an algorithm that will take a formal language, and a logical statement in that language, and that will output "True" or "False", depending on the truth value of the statement? The algorithm does not tell how it reaches the answer, nor prove it, as long as the answer is always correct. In 1936 and 1937, Alonzo Church and Alan Turing showed independently, that there can be no answer to the Entscheidungsproblem. They showed that it is impossible for an algorithm to decide whether arithmetic statements are true or false. For this reason, there can be no solution for the Entscheidungsproblem.

Function Definition
A function is a special relationship that associates an input to a single output according to some rule.
Functions of various kinds are "the central objects of investigation" in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could be described implicitly, for example as the inverse to another function or as a solution of a differential equation.
220px-Function_machine2.svg.png

Function Application
A function applied to (some of) its arguments.
Function application is usually depicted by juxtaposing the variable representing the function with its argument enclosed in parentheses. An example of this is something I discussed previously, namely function currying.
8678d9a38b70c18ff730a4326baf87b9.png
can be written like this
936d3616ae1788839acf2cbb7228fad2.png
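A minimal Swift sketch of currying (the names are illustrative): a two-argument addition rewritten as a chain of single-argument applications:

```swift
// Uncurried: both arguments at once
func add(_ x: Int, _ y: Int) -> Int {
    return x + y
}

// Curried: applying the first argument returns a function awaiting the second
func curriedAdd(_ x: Int) -> (Int) -> Int {
    return { y in x + y }
}

let addFive = curriedAdd(5)
print(add(5, 3), addFive(3))  // 8 8
```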

Recursion
Process of defining a function or calculating by the repeated application of an algorithm.
A function may be partly defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must lead to non-recursively defined values, in this case F(0) = 0 and F(1) = 1.
cabe91689f6a1af616ace02827c6e89c.png
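The Fibonacci definition above translates directly into code; the two base cases come first, then the recursive rule:

```swift
func fibonacci(_ n: Int) -> Int {
    if n == 0 { return 0 }
    if n == 1 { return 1 }
    return fibonacci(n - 1) + fibonacci(n - 2)
}

print((0...8).map { fibonacci($0) })  // [0, 1, 1, 2, 3, 5, 8, 13, 21]
```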


A famous recursive function is the Ackermann function, which, unlike the Fibonacci sequence, cannot easily be expressed without recursion. It is notable especially because it is not a primitive recursive function, and it grows very quickly in value, as does the size of its call tree. The Ackermann function is usually defined as follows:
0ae4053de098cc9554752b190a38bc56.png


Here's an example of Ackermann's function in code:
Code:
func ackermann(m: Int, n: Int) -> Int
{
    if m == 0 {
        return n + 1
    } else if n == 0 {
        return ackermann(m - 1, n: 1)
    } else {
        return ackermann(m - 1, n: ackermann(m, n: n - 1))
    }
}

TL;DR
In short many functional programming languages can be viewed as elaborations on the lambda calculus, with the following features:
  • Computability: ability to solve a problem in an effective manner.
  • Entscheidungsproblem: "A decision problem", a way to decide whether a formula is true or provable within a given system
  • Function Definition: relationship that associates an input to a single output according to some rule.
  • Function Application: ability to juxtapose the variables representing the function e.g. currying
  • Recursion: calculation by the repeated application of an algorithm

Next article(s) will be less theory and more practice:
  • Benefits: why you would want to do this in your code?
  • Popular Languages that support this, with some comparison examples.
  • Finally, if I find some extra free time I'll try to demonstrate the resolution of a problem the traditional OOP way vs. a Functional Approach.
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Functional Programming...continued
Every function that you typically write has two sets of inputs and two sets of outputs, and I'm not talking about the parameter count.
That's a little confusing right?



So how did I arrive at two inputs and two outputs?
Let's take a look at the first pair of inputs:
Code:
func square(x: Int) -> Int 
{
    return x * x
}
NOTE: The language doesn't matter; I've picked Swift, considering that's the premise of this thread, but this applies to any language with explicit input & output types, for example Java, C#, ....

Here the input you're used to thinking about is x, an Int, and the output is also something you're used to: it returns an Int. That's the first set of inputs & outputs, the obvious set. Now let's see an example of the second set of inputs and outputs:

Code:
func processNext() 
{
    guard let message = InboxQueue.popMessage() else { return }
    process(message)
}
According to the syntax, this function takes no inputs and returns no output, and yet it's obviously dependent on something, and it's obviously doing something: it has a hidden set of inputs and outputs. The hidden input is the state of the InboxQueue before the popMessage() call, and the hidden outputs are whatever process does, plus the state of InboxQueue after this function completes. The behaviour of processNext is unknown without knowing the queue's state, and its output is a mystery too; the result of calling processNext can't be understood without considering the state of InboxQueue, or what process actually does.

So the second piece of code has hidden inputs and outputs. It requires things, causes things and affects things, but you could never guess what just by looking at the API, or even at this function.

This design is the very antithesis of Functional Programming.

The official name for hidden inputs and outputs is "side-effects", but we should probably think of this as two things:
  • "side-effects" for the hidden outputs and effects,
  • "indeterminate state" for the hidden inputs.
For the rest of this post, and for simplicity, I'll just use "side-effects", but you should remember that it covers both these scenarios.
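To show what removing the hidden pair looks like, here's a hedged sketch (the Message type and signature are mine, not the original API): the queue state becomes a visible input, and the new state a visible output, so the signature now tells the whole story.

```swift
struct Message { let body: String }

// Hidden inputs/outputs made explicit: state in, (result, new state) out
func processNext(_ queue: [Message]) -> (processed: Message?, remaining: [Message]) {
    guard let first = queue.first else {
        return (nil, queue)
    }
    return (first, Array(queue.dropFirst()))
}

let (message, remaining) = processNext([Message(body: "a"), Message(body: "b")])
```

Nothing here depends on or mutates outside state; calling it twice with the same queue always yields the same answer.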

Side-Effects are like Icebergs.
852486dc2d97d87a6605da50d9a8d722.jpg
the top gives you a false impression of simplicity, all the while hiding its sheer complexity under the water

When functions have side-effects, you can look at a function like this:
Code:
func processMessage()
{
   channel.popMessage()
   channel.processMessage()
}
…and just when you think you've got an idea of what it's doing, you end up being totally wrong. There's just no way to know what it requires or what it will do without looking inside.
  • Does it take a message off the channel and process it? Probably.
  • Does it close your channel if some condition is true? Maybe.
  • Does it update a count in the database somewhere? Perhaps.
  • Does it explode if it can't find the logging directory path it was expecting? It might.
When coupled with Cowboy Refactoring, this turns into a veritable spaghetti-code experience: following one function to a class, to another function, which leads only to more and more hidden complexity...

Aaah... this is about the time I want to pull my hair out.
man-pulling-hair-out-21.jpg

Beneath the surface of this API is a potentially vast block of extra complexity.
To grasp it, you'll only have three options:
  • dive down into the function definition
  • bring the complexity to the surface,
  • or ignore it and hope for the best.
And in the end, ignoring it is usually a titanic mistake.

Revelling in the complexity of object graphs
Wow, you should see the object graph of the project I'm working on; its UML diagram is huge... blah.... blah...
Yawning.jpg
Many people, when shown a mathematical formula, almost instantly get this glassy-eyed look about them, followed by a subtle yawn. Almost as if their mind had unwillingly shifted into neutral: the engine's on, but the wheels ain't turning.
maths.png
As I'm sure you've realised, I don't like this style of programming much; call it the bad side of OOP. OOP has many positive aspects and I use it frequently in my code; however, I'm not a big fan of hidden complexity.

Why you ask?
It's kind of simple; my brain never gets the opportunity to shift into neutral until I'm already pulling my hair out. Cognitively it's a mess, and certainly nothing to boast about. With Functional Programming the complexity is immediately visible, so my brain can choose whether or not to shift into neutral.

Pragmatically this style of OOP is a problem because:
  • It's untestable; can you really be sure you've covered all the eventualities?
  • It's new-age spaghetti code; spaghetti was never popular in the 70s or 80s, so why would it make sense now?
  • It's difficult to train for; a huge hurdle when new developers join your team, and even worse when you hand over to maintenance.
  • It breaks easily, for most of the same reasons it's difficult to test.
  • It's a pain in the derrière to multi-thread.
  • .... I could go on, but that would just be me venting...

Now do yourself a favour and go back to the first function example in this post; do you immediately understand what it does, where it gets its input, and what output it returns? Yes? That is the essence of Functional Programming:

To create functions that are designed simply, specifically avoiding mutable state and hidden inputs and outputs (side-effects), so your brain can easily make that choice

So, a final question: if Functional Programming is better than OOP, why does OOP still need to exist?
Well, for one, you can't design certain aspects of an application without mutable state. UIs (user interfaces) and NICs (network interface controllers) are examples of indeterminate, mutable state, meaning you need a way to model events, changing states, etc. But that doesn't mean the entire system needs to be mutable; hence the drive towards extended language support for Functional Programming.

In my experience the ratio of OOP to FP in an application varies; however, I typically find that most of the object graph is better modelled as immutable state (FP), i.e. I challenge why something needs to remain mutable, rather than the other way around.

Next post: I'll compare this across a few popular languages, and as I said I might even tackle a before-and-after scenario (assuming I can find a simple enough one)
 
Last edited:
