The article discusses two Swift versions of the same program, one that uses UnsafeMutablePointer (which is basically direct C-style memory access), and one that uses Swift's built-in Array type. The former runs well, the latter is slow.
If you look at the Objective-C code, BackingBuffer is a C struct containing a C array of Pixels (themselves each a 32-bit C struct).
The problem is that the best analogy to Swift's Array type is not the C array in Objective-C - it's the Foundation class NSArray. For the given use case, most Objective-C developers would eschew NSArrays and classes serving as data bags in favor of fixed-length C arrays containing C structs. C arrays don't perform bounds checking, they require you to manually decide when to allocate, copy, and free the backing memory, they require you to know how much space each piece of data requires and work with the data accordingly, and so forth.
The fact that NSArrays can't hold value types, only references to classes, complicates things a bit, but I doubt the Objective-C code would be anywhere as fast if the backing store was an NSArray holding NSValues or something similar. Comparing the Swift implementation using Arrays to the Objective-C implementation using C arrays is not really fair.
We've essentially been told to use Swift arrays as we would C arrays, so your analogy is flawed. Another premise of Swift is that it should be usable for low-level, high-performance programming while being "safer than C". This is something we don't see in practice.
Furthermore, using UnsafeMutablePointers in Swift makes the program an order of magnitude larger than the corresponding C program, with no additional gain in performance, readability, or safety. It is a last resort, not something you should use all of the time.
It's odd that you said "We've essentially been told ...". Will Apple reject apps using C arrays or something?
In any case, I don't really see the problem here. Every feature of every language has some kind of performance trade off. And it's natural to expect higher level abstractions in higher level languages will have some amount of performance overhead versus lower level abstractions.
In this case, there's no shortage of alternatives, so pick one and get on with it. If it ends up being too slow, go back and look at the alternatives and swap out Swift arrays for C arrays or NSArray or whatever.
No, but we have been told that the performance characteristics of the native Swift array should be the same as an array in C.
Judging from what the compiler team at Apple regularly writes in forums and on Twitter, it seems that they consider the performance issues for arrays (this actually happens in other circumstances as well) to be a very real issue.
When you say "swap for a C array", then I suppose that you mean we allocate a chunk of memory like in C, then access it using unsafe methods (as there is no "C array" in Swift). Unfortunately, working with UnsafeMutablePointer and friends is extremely verbose and clumsy, which makes it an extremely bad experience.
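For anyone who hasn't tried it, the verbosity in question looks roughly like the following. This is a minimal sketch in modern Swift spelling (the 2014 API used `.alloc`/`.dealloc`), not code from the article:

```swift
// Manual memory management with UnsafeMutablePointer: allocation,
// initialization, and deallocation are all the programmer's job,
// exactly as with malloc/free in C.
let count = 1024
let buffer = UnsafeMutablePointer<UInt32>.allocate(capacity: count)
buffer.initialize(repeating: 0, count: count)

var checksum: UInt32 = 0
for i in 0..<count {
    buffer[i] = UInt32(i)   // raw store: no bounds check, no retain/release
    checksum &+= buffer[i]
}

buffer.deinitialize(count: count)
buffer.deallocate()
```

Forget the `deallocate()` and you leak; read past `count` and you get undefined behaviour, just as in C.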
Isn't there toll-free bridging between a Swift Array and an NSArray? It would make a lot of sense for the performance characteristics of the two to be the same if that were the case.
Yeah, and swapping to a C array isn't that simple in Swift. It seems easier to just write Objective-C code and call that from Swift when needed.
Actually, a Swift array is pretty much identical to a C array. You get an unsafe pointer to the array and treat it exactly as if it were one. NSArray, on the other hand, is toll-free bridged to a CFArray.
The current "solution" is to keep the array a Swift array, then in any performance critical code simply cast that into an unsafe pointer and work with that directly as if it was C array.
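A sketch of that workaround (names here are illustrative, not from the article):

```swift
// Keep a normal Swift array, but drop to an unsafe buffer pointer inside
// the hot loop so each store behaves like a raw C-array write instead of
// a checked, retain/release-wrapped subscript.
var pixels = [UInt32](repeating: 0, count: 64)
pixels.withUnsafeMutableBufferPointer { buf in
    for i in 0..<buf.count {
        buf[i] = 0xFF00_00FF
    }
}
```

Outside the closure, `pixels` is an ordinary Swift array again, with all the usual semantics.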
There's bridging, but it's not toll-free. You have to convert between them, which is pretty slow. Toll-free bridging means that no conversion is necessary, e.g. you can take a CFArray and just treat it like an NSArray and everything still works with no conversion.
Conversion may or may not be slow, depending on the elements involved. Specifically, if you're converting from Array<T> to NSArray where T is a class or @objc protocol type, it's guaranteed to be O(1). Converting from other types (such as Array<Int>) will do an O(n) copy, as it has to bridge each element (e.g. Int would bridge to NSNumber).
Conversely, treating any NSArray as an Array is always just a call to -copyWithZone (which is O(n) for mutable arrays and O(1) for immutable arrays), although the docs say that upon the first element access it type-checks the elements of the array (though presumably converting to [AnyObject] is always free).
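A sketch of the casts being discussed; the O(1)-vs-O(n) costs described above happen behind the `as` and aren't visible in the code itself:

```swift
import Foundation

// Array -> NSArray always succeeds; the cost depends on the element type.
let objects: [NSString] = ["a", "b"]
let cheap = objects as NSArray      // class elements: storage can be shared

let ints = [1, 2, 3]
let boxed = ints as NSArray         // value elements: each Int boxed as NSNumber

// NSArray -> Array goes back through a (checked) cast:
let roundTripped = boxed as! [Int]
```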
The conversion is only supposed to happen once, but there was a bug (don't know if it is fixed yet; it was discovered quite a while ago) which caused the Swift <-> ObjC NSDictionary conversion to be performed on every access.
Can you give some more information? This is the first I've heard of that. The only thing that comes to mind that would explain this sort of behavior is if you actually make a copy of the Dictionary first and then access that copy; since it's a copy, the conversion would presumably invoke the normal copy-on-write behavior and make a copy of the backing storage, while leaving the original Dictionary un-converted. In that case, any subsequent copy + access would have to re-convert. And if that's what's going on, then that sounds like expected behavior, not a bug.
An example of what I'm talking about would be something like
let dict = self.someDictionary // dict is a copy, not a reference
let x = dict["foo"] // this would then convert
let y = dict["bar"] // but this wouldn't
If you mean that the `let y = dict["bar"]` would convert as well, then I'm a bit skeptical, because it seems implausible that the conversion code in Dictionary would even be capable of "converting" a native storage to a native storage (i.e. it's reasonable to expect that the conversion code explicitly converts an NSDictionary to a native storage).
I am not sure of the exact behaviour. I only read something Lattner wrote as a comment on very slow behaviour when passing things back and forth into CoreGraphics that relied on bridging Swift arrays/dictionaries into their CF counterparts.
In the debug trace, it appeared that the Swift hash map got bridged over and over again every time it was to be read. Since this was a set of attributes for CoreGraphics rendering, the dict/array conversion ended up costing orders of magnitude more than the rendering itself did.
From what I remember, Lattner (might have been someone else on the compiler team) said that such conversion should only have needed to happen once, and not at every access to an element. Something like that.
This issue started to occur when it was no longer possible to directly create NSDictionary/NSArray from Swift.
I may also misremember things. Best would be to ask on the Apple developer forums.
> This issue started to occur when it was no longer possible to directly create NSDictionary/NSArray from Swift.
Huh? You can still create NSDictionary / NSArray in Swift using the exact same API you would in Obj-C. That's always been possible. And in fact Swift even extends NSArray / NSDictionary to support the Swift literal syntax.
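For example, all of the following compiles from Swift (a minimal sketch):

```swift
import Foundation

// Creating Foundation collections directly from Swift, including via
// the literal syntax mentioned above.
let array: NSArray = ["x", "y"]              // Swift literal syntax
let dict: NSDictionary = ["key": 42]
let mutable = NSMutableArray(array: ["one"]) // same initializer API as Obj-C
mutable.add("two")
```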
> Every feature of every language has some kind of performance trade off.
That's simply untrue (unless meant as a pointless tautology :)). There are plenty of free features and abstractions, from syntactic sugar to really advanced stuff like Rust's borrow checker. Even using lambdas in Rust can be cost-free. (It almost seems like Rust and LLVM have created the sufficiently smart compiler.)
And even really high level languages like Haskell can do surprisingly great optimizations, so that, say, a pipeline of projections and filters costs the same as just writing the loop by hand.
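Swift has a weaker analogue in `lazy` sequences, which at least avoid materializing intermediate arrays; whether the optimizer then fuses the chain down to a plain loop isn't guaranteed. A sketch:

```swift
// A map/filter pipeline over a lazy sequence: no intermediate arrays
// are built; elements flow through the chain one at a time.
let pipeline = (1...1_000_000).lazy
    .map { $0 * 2 }
    .filter { $0 % 3 == 0 }
let firstFive = Array(pipeline.prefix(5))   // [6, 12, 18, 24, 30]
```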
For a safe array, the usually acceptable overhead is one bounds check per loop/sequence, right?
(Anyways, the article discusses non optimized builds, at which point nothing's guaranteed to be cheap, so he's really asking for more developer support tools.)
Until you can declare a fixed-length Swift array, or a Swift array that isn't copy-on-write by default, or a fixed-length Swift array that doesn't live on the heap, this can't possibly be true, no matter what you think you've been told.
> it should be usable for low level, high performance programming but "safer than C".
With optimizations turned on, the 'Swift []' code comes within 3x of the performance of the ObjC code. Lots of room for improvement, but not abysmal either.
> Furthermore, using UnsafeMutablePointers in Swift makes the program a magnitude larger than the corresponding C program...It is a last resort, not something you should use all of the time.
From what we're told, copy-on-write doesn't happen unless it has to (i.e. unless the compiler discovers that the storage is referenced elsewhere).
In any case, the issue here with debug builds does not seem to be due to memory access nor copy-on-write, but simply because of ARC sprinkling the accesses to the array with release/retain pairs.
The reason it works OK with optimizations turned on is that the optimizer removes the unnecessary retain/release pairs; in -Onone, Swift will not remove them.
This was not a great problem in ObjC, since it only affected ObjC method dispatch - which you were unlikely to do in a tight loop. Plus, ObjC ARC actually emits retain/release pairs differently.
The issue with ARC emits was discovered with the very first beta, but as of yet there has been no fix.
The problem here is probably this:
1. In order to perform a local optimization on ARC retain/release, even at -Onone, you need to change the retain/release emits to the style used by ObjC.
2. But that retain/release often incurs an unnecessary autorelease, which would prevent even an optimized Swift build from eliminating certain retain/releases.
In other words, they either get significantly better -Onone performance, or slightly worse -O/-Ounchecked performance.
I disagree. Swift arrays are analogous to both NSArray and C arrays. If they're so extremely slow then I'd consider that a bug in the compiler to hopefully be fixed as Apple improves things. There's no reason this code should be this slow.
The fact that NSArrays can't hold value types doesn't complicate things "a bit," it destroys the analogy entirely. Swift arrays can hold a bunch of references, like NSArray, or they can hold a contiguous chunk of values, like C arrays. The second option may be slow in some circumstances now, but there's no fundamental reason it has to be.
The code isn't quite as terrible if compiled with optimizations on. I like your hypothesis elsewhere in the thread, that the optimization pass is probably removing unneeded array copying that hinder the performance of the debug build.
Semantically, an Array is an automatically resizing, reference-counted, copy-on-write, bounds-checking sequence of elements, none of which I want if I'm building a buffer for pixel values. So there is a significant mismatch between my desired use case and the characteristics of Array. I really do think there should be a fixed-size non-copy-on-write collection type at some point added to the language.
From an operational perspective, you're right, the analogy is quite leaky if you examine it closely.
Note that automatic resizing and reference counting won't hurt you in these cases. Bounds checking can, but eliminating bounds checks by proving that your accesses are always in-bounds at compile time is old hat by this point. Copy-on-write semantics are trickier to make fast, which is why I'm guessing they're the root cause here.
Which he somehow dismisses, but which is the corresponding thing to an unsafe C array.
I'm not a big fan of Swift, but this rant is rather unfounded. Sometimes you have to give up high-level abstraction for performance. In Obj-C he was willing to do so by using a C array. In Swift, on the other hand, he insists on using the high-level array implementation.
The code he is trying to port to Swift is pure C/C++ (www.handmadehero.org), so this solution is probably the easiest, apart from being the most appropriate.
> "I cannot do it because the performance is so flipping terrible in debug mode. Trying to debug your app with optimized code is just a pain."
I've been shipping PC and console games in C++ since 2000. Debug builds have ALWAYS been utterly unusable for us - what we do is run in "release" mode with optimizations, ASSERTs, and debug prints enabled, then when the problem arises, turn off optimizations in a handful of related files and recompile. Then we ship another, "goldmaster" version which strips out the asserts and prints.
He has the solution to his own problem. He just needs to use the unsafe array.
People have been writing quality games for decades using unmanaged memory access. It is not some ticking time bomb that will inevitably blow up your app. It just requires a little more care to get right. Considering this guy is manipulating pixels directly and writing his own blitter, I believe he has the skills to pull it off.
I can understand why he would prefer to use managed code but it's not compatible with his goals and he has a clear alternative. His issue is not Swift's fault, it's his refusal to use the tools that Swift gives him.
In theory Swift's arrays are little more than a C array plus a length, and this is how we have been encouraged to use them.
The issue here is nothing about "managed code", but about how the Swift compiler behaves without any optimization passes. Currently (and it should be noted that this is a recognized issue), the unoptimized builds are orders of magnitude slower than ObjC because Swift will auto refcount code that in ObjC is either not ref counted at all, or uses manual RC. After optimization, Swift can eliminate many uses and put things on the stack etc, but we are at the mercy of the optimizer for that.
Compare this to the (comparatively) predictable behaviour you have in C.
Yes we can in theory use UnsafeMutablePointer everywhere, but in that case it's much more efficient to write the function in C instead.
I would wager that the problem is that Swift arrays aren't just a pointer plus a length, but rather a pointer plus a length plus copy-on-write semantics. You probably need some smart optimizations to eliminate the copies, and I'd put good odds on the performance problems here coming from lots of unnecessary array copies.
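The suspected behavior is easy to demonstrate in miniature (a sketch, not the article's code):

```swift
// Copy-on-write in action: assignment is cheap and shares storage;
// the real O(n) copy happens at the first mutation through the new name.
var original = [UInt8](repeating: 0, count: 4)
var copy = original   // no copy yet: both names share one buffer
copy[0] = 255         // copy-on-write triggers here
// `original` is unaffected; the two arrays now have separate storage.
```

If the optimizer can't prove such a copy is unnecessary inside a hot loop, the cost adds up fast.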
I'd have to check in detail, but just looking at the profiler, about half of the time is in the actual setter code and half in bounds checking. One would think that bounds checking is a faster operation than the set if copy-on-write is the culprit? But maybe it does some other magic as well. Something you notice though is that release/retains are everywhere.
The final keyword means that at least one indirection is avoided for each access to the buffer. The speedup is significant. I do wonder if there is a way that the lookup could be cached though if the compiler was a bit smarter.
One significant thing slowing this down is all the access to the the shared mutable buffer. I'm confident it would go quicker if renderWeirdGradient was changed to be a pure function that either returned a new populated buffer or had the buffer as an argument and returned the result.
An even quicker fix that would bring some improvement would be to make the buffer final, so that fewer indirections are required to read and write from it.
As others have already said, an unsafe mutable buffer pointer may be appropriate for performance-critical inner loops.
I've just tried making the buffer final and it results in a speedup of 8 times (not rigorously measured): from ~0.02 to between 0.0045 and 0.0025 on my computer.
Of the many bugs I reported at 8.0 release, not one has been fixed. Basic intern level stuff to do with messed up orientation bugs, broken locale APIs, text rendering and layout issues.
I honestly can't be fucked any more; they can hire some QA with the billions they're earning. They can build all the fancy APIs they want: I'm not touching any of it until they've shown they won't discard it after a year, and until some other poor suckers have suffered through the eternal September of shitty buggy newness, fighting with the App Store review team every second version for the privilege of making burger-flipping money.
> they can hire some QA with the billions they're earning
I don't see how that would help at all. Many of these bugs HAVE been discovered and reported on Radar. If Apple's deadline is September XY for iOS Z, and November XY for OS X 10.Z, then they will ship it, no matter what their QA team says, and no matter how big it is.
iOS 8.0.1 was a QA glitch, but I'd say it was an outlier.
Someone posted this to Hacker News a while ago about how to submit bugs and interact with Apple. When I compare them to other software providers I have interacted with (development tools/platforms/libraries/etc.), the only team looking worse than Apple is Google. It's a coping guide for an abusive relationship.
It usually takes weeks or months for reactions to bug reports. I have three open bugs (one for Xcode, one for iOS, one for OS X). The iOS bug is a regression introduced in the beta and reported in August. It contains screen shots and sample code. No reaction yet.
I filed a bug report for Mac mail about 3 months ago. From the bug tracker UI (which isn’t informative at all) I don’t think the ticket has even been touched.
I strongly object to this notion that I have to give free work to Apple in order to "be a good citizen."
I understand that if I expect stuff to get fixed that my odds are improved if I file a bug report. But I don't owe Apple anything and I'm perfectly within my rights, and still a "good citizen" if I just do my best to work around the problems and let Apple figure stuff out on their own.
Separately, I got excellent optimised performance with a final class and this renderWeirdGradient method, which outperforms the Objective-C version by 40% in optimised builds:
    func renderWeirdGradient(blueOffset: Int, _ greenOffset: Int) {
        let height = buffer.height
        let width = buffer.width
        buffer.pixels.withUnsafeMutableBufferPointer { pixels -> () in
            for y in 0..<height {
                let row = y * width
                for x in 0..<width {
                    let i = row + x
                    pixels[i].green = Byte((y + greenOffset) & 0xFF)
                    pixels[i].blue = Byte((x + blueOffset) & 0xFF)
                }
            }
        }
        self.needsDisplay = true
    }
I can't help but feel like we continue to just go backwards; Swift doesn't look like they spent much time really thinking about what they wanted to solve and how to solve it. I thought this was an interesting read: http://owensd.io/2014/09/24/swift-experiences.html
I know saying this is a bit terse, but a native Clojure compiler for iOS would have been the best thing Apple could have done -- let someone else with far better language skills who has already done a lot of thinking about values, identity, state and time solve it and then just use the work.
Would the problems outlined there be any better in Clojure?
Type inference: at least in Swift you have types; the author dismisses "click on it in XCode", but that's still better than you get with Clojure.
nils in the Objective C bridging APIs - you're going to have to solve that somehow, even in Clojure. I don't know how Clojure invokes Java code, but you have two choices, neither of them good: either you treat every "FFI" call as returning Option, in which case all your code that calls libraries is littered with Options and you better have a syntax for easily "casting" out of Option (like the one the article complains about), or you allow nils to cross the divide into your nice language and anything that calls the "FFI" has to worry about them.
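In Swift terms, the Option-everywhere choice looks like the following sketch; `lookup` is a made-up stand-in for a nullable Obj-C call:

```swift
// A nullable Obj-C return surfaces in Swift as an Optional, which every
// caller must unwrap, test, or default away.
func lookup(_ key: String) -> String? {
    key == "known" ? "value" : nil
}

let unwrapped = lookup("known") ?? "<none>"   // nil-coalescing default
let fallback = lookup("missing") ?? "default"
```

Every call site pays this small tax, which is exactly the trade-off being described.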
I don't have a lot of Swift experience, but it looks like a decent incremental improvement, applying some of the lessons of modern language design in a conservative Algol-style syntax that many programmers are happy to use. (I'd've been happier to see Scala support, but I can see why Apple would want their own language). Clojure isn't that (and from Apple's PoV, anyone dedicated enough to use Clojure is probably also dedicated enough to use a non-first-party language).
Remember that clojure is dynamically typed. `nil` is its own type, but in a dynamically typed language, it can still be used anywhere.
I have a slight preference for statically typed languages, but FWIW I find I make a lot fewer type errors in Clojure (and other lisps) than in, say, Javascript.
And for users who can't stand dynamic typing, there's core.typed
I don't think Clojure would have been a sane choice for Apple, though. It's too memory-hungry for mobile, and developers would avoid it because of how different it is from Objective-C.
Clojure has some very nifty features for interacting with class libraries. It has a number of macros for easy access to methods and static methods, for calling a series of methods on an object, and so on. There's also the `proxy` macro, which lets you create a subclass of any class you want, conforming to any number of interfaces, but providing sane defaults for methods you don't need or want to implement. There's also reify, which lets you easily create classes which implement some interfaces without any overhead. There are some features which correspond to interfaces but are somewhat more flexible (defprotocol, extend-type, reify), and then you get multiple dispatch and more.
In short I found Clojure and ClojureScript to be very easy to fit into any OO framework. It's sometimes hard to convince yourself to use those features, because you know that idiomatic Clojure solution would be better, but other than that I saw no problems at all.
But I'm not a heavy Clojure user, so it could be that I somehow lucked out and not encountered any problems which more experienced users face sometimes.
Don't really see it being a problem; I don't have a lot of experience with UIKit but Clojure interacts fine with Java UI toolkits like Swing. I've also dabbled with the OpenGL Clojure wrappers and found them easy to use.
>really thinking about what they wanted to solve and how to solve it.
What they wanted to solve was how to introduce a new more modern language that worked seamlessly with existing Objective-C libraries. People suggesting alternatives tend to forget the ObjC part of that.
Considering that Apple created Dylan I'd argue that they have all the "language skills" needed. But Dylan was a failure, in part because of politics, but also because most programmers are conservative (I'd say dumb code monkeys, but this isn't exactly true) and don't like their "new" languages and tools to be "too new".
Clojure is a really great language, solving real problems in a nice way. But in the language popularity charts I just checked, it's well outside the top 20, close to Forth(!) on one side and Erlang on the other. Clojure (and Erlang, and Forth of course, but also Haskell, OCaml, F#, Smalltalk and Io and many more) is just too new, too unfamiliar, too intimidating for our conservative programmers to consider using. And for Apple that was probably the reason for rolling out a somewhat "normal" language instead of something really good.
For mainstream programmers they are, in that they have features they never saw before. You know, like in "it's something new for me" or something like that.
The fact that objectively some of those languages come from decades ago (like Lisp in '58 and ML in '73, IIRC), which makes them ancient by today's standards, doesn't matter. It's really sad. I devoted a couple of years to learning about and trying to use such languages (see here[1] if you want), but I'm in a very small minority; most young programmers never use anything other than the 1-3 core languages they learned. The number of known languages increases with years of experience, but it's still biased towards currently mainstream languages. Which are mostly crap.
Anyway, that's how it is: most programmers are very conservative in their choice of tools they use and feel no need to look for alternative tools to use.
I don't want to spend too much time discussing this, but I read Swift guide[1] and even played with it in Playground, and I'm 97% sure that every single Swift feature is borrowed from currently mainstream languages.
There's a difference between "oh, it's like feature X in C#" and "well, didn't Common Lisp implement Y 30 years ago?". Feature X will be perceived as nothing new, merely catching up; Y will be seen as new, dangerous, cryptic and best avoided. I think Swift designers intentionally packed their language with Xes and added almost no Ys, exactly because they didn't want it to seem as "too new, esoteric, unproven" and so on so forth. This is a sane business decision, by the way, I just happen to dislike it.
I’m not one that gets excited about enforcing that every item in your collection is of type String. Why? Because it does not actually matter. If an Int gets in my array, it’s because I screwed up and likely had very poor testing around the scenario to begin with.
I wish I could paste in the citizen kane slow clap gif.
Me as well. Is it in a congratulatory way or just "ok, now shut up, take your consolation prize and let the people that know what they are talking about speak" ?
Somehow it seems that currently everything happening in the USA is "Dead on Arrival" - including Swift.
This time due to shipping an utterly broken compiler for an unfinished language, without our being able to contribute to it. Because if we could contribute to it, Swift might yet become something that people love to use. But I somehow really don't believe that this will ever truly happen.
The last benchmarks[1] I saw for Swift showed it's generally pretty slow without optimisations on, but various differences in the language from Objective C mean the compiler can optimise it much more, so generally it's faster than Objective C (and actually comparable to pure C at times[2]).
Those benchmarks (part 2) were ludicrously flawed, comparing native integers in Swift with NSNumber objects in Objective-C.
That the standard library qsort() routine has overhead is also obvious: it works through function pointers and pointers to the objects being sorted in the array. A truly native implementation is still much better[1].
I like how he blew past the suggestion to turn off bounds checking in the comments (which I assume otherwise shouldn't affect the ability to debug), when that looks like almost certainly the reason why the array version is a couple orders of magnitude slower when unoptimized.
Instead of wholly claiming Swift is slow (with your limited expertise in the language, and with no optimizations enabled), why don't you reach out to see what you can do to improve performance, and whether you are doing things the Swift way?
This blog post is a reaching out, and the article clearly points out that with optimizations it runs fine, but that you can't always run with optimizations enabled, e.g. when debugging or stepping through sections.
A gentle introduction to Apple marketing: "Swift" doesn't mean "swift" - terms and conditions apply (not to mention the name was stolen from the original Swift language - modifications made to Wiki were swift indeed).
My problem with swift is even more simple than this guy's.
I have a new client build and really wanted to use Swift for the compile-time safety features. Then I started looking at iOS 7 compatibility... total fucking joke. After isolating what is still a very popular device, especially by revenue in the case of my client, Apple has basically made it impossible to use this language as long as people have customers who use an iPhone 4. In this case, for my client, that's fucking heaps of them.
It doesn't have to be this way though; they could prioritise adding support for static libs, then all developers would be able to switch over, as we'd be able to use the existing libraries we rely on. But they just don't seem to give even half a shit about making it something people might actually use.
I don't get it, what's the problem with Swift, iOS 7 compatibility and iPhone 4? I've been working on an app for iOS 7+ devices for a few months and everything is working ok. The pains I'm facing with Swift are far from compatibility with iOS (debugging is hell compared to Objective-C).
The only thing that doesn't work in iOS7 (except for the new iOS8 APIs obviously) is frameworks. This can cause an issue on larger projects because the Swift compile can take some time but there are workarounds for during development. Supporting iOS7 and iPhone 4S's is no real problem in Swift.
This frameworks issue is not Swift specific though, it also applies to Objective-C although that has support for incremental builds so it isn't such an issue.
I think the puck is going in the direction of an ever increasing number of people who buy used iPhones. So I don't think that the iPhone 4 is going away soon.
Looking around, I see that a lot of people who can't afford (or don't want to afford) new iPhones are starting to buy used iPhones instead of cheap new phones.
I think Apple's only priority for Swift is to make sure future products using future versions of iOS and OSX are more attractive to buyers than other companies' future products using future versions of Android and Windows. I think Apple is fine with the idea that the best way to target their legacy devices is with their legacy language. They're probably even okay with losing people who don't upgrade quickly as customers altogether. "Let the slow upgraders switch to Android and further fragment their market instead of ours," I can imagine them saying.
I don't get the feeling that Apple is willing to sacrifice any future benefits (including simplicity) in order to improve legacy support. More than any other company I see, they want to encourage customers to abandon the past and buy something new. If Swift doesn't work very well on iOS 7, I can't imagine that ever changing except as a side effect of fixing something for iOS 8+. It's strategic focus, not incompetence.