How Swift is Swift?

Exactly how Swift is Swift? The language was designed for speed, but looking at comparisons, there are obvious places where code could be further optimized. In this talk, Joseph Lord shares some performance lessons, highlights what to watch out for, and shows us how to get the most from Swift.

Using his own optimization experiments with other people’s code, including comparisons with C and C++, we learn about the recent history of Swift’s performance improvements, and the things you can do to make your Swift as Swift as possible today.

You can read additional commentary on Joseph’s blog.


Designed for Speed (0:13)

Swift is designed to be fast, very fast. Static typing helps, as do value types. There is no pointer aliasing unless you actually use the pointer types, which gives the compiler more freedom to optimize. Constants and copy-on-write all help. And so on.

I started looking at performance in Swift back in August, and I thought it should be faster. People were experimenting, but compiled code wasn’t running as fast as they expected compared to other languages. So I started playing with Swift myself to see what helped performance, and I found a number of things that made it run faster. By looking at other people’s code, it’s slightly clearer where the optimizations can be found: I would look for bottlenecks and adjust the code to see what helped, which is slightly more revealing than working with my own codebase. I looked for situations where developers had problems and were complaining; I’d look at their solutions, and see what wasn’t performing well.

I’ve been looking at the recent betas of Swift, so most things in this talk are for the latest Xcode beta at the time I’m speaking — 6.3 beta 2. They also apply to earlier betas, and in some cases were even more significant in the earlier days of Swift. As a mark of Swift’s impressive developmental pace, there are performance tricks that were necessary in Swift 1.1 that are often done automatically by Swift 1.2. Swift 1.2 beta 1 made things substantially faster, particularly for debug builds, and beta 2 basically removed the need for several things that I had been doing before. Good progress!

What to Optimize (2:32)

The ability to statically dispatch, and so avoid indirection, allows the inlining of code, which is really important for real performance gains. Much of your optimizing will be about enabling that inlining.

Most applications don’t really need much optimization. Most of an app’s time is spent in API calls, network calls, and blocking library calls, so if you’re looking to optimize, those are the first things to sort out. I won’t be looking at those here. The next thing, particularly in Swift, is the build settings you need to set up. Debug builds are hugely slower than release builds, because the compiler doesn’t take any shortcuts: all the data must be there for you to step through and debug properly, so the performance is wildly different.

The next thing you do is profile and measure which parts are slow, and where it really matters. Quite obviously, if you speed something up by a hundred times but it was only running for a fraction of a percent of the time, you haven’t done much for your program. Get to know the profiler in Xcode Instruments if you’re doing anything performance related; I won’t walk through reading its output here, but it’s worth knowing that it’s there. It’s quite useful for finding where the problems are — for example, you can spot when a call is happening from a function that you hoped would be inlined.
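If you just want a quick number before (or alongside) reaching for Instruments, a rough timing wrapper is often enough. Here is a minimal sketch — the measure function and the label are my own invention for illustration, not something from the talk:

```swift
import Foundation

// A rough-and-ready timing harness (illustrative only — Instruments gives far
// richer data, but a quick number like this is enough to spot regressions).
func measure(label: String, block: () -> Void) {
    let start = CFAbsoluteTimeGetCurrent()
    block()
    let elapsed = CFAbsoluteTimeGetCurrent() - start
    print("\(label): \(elapsed) seconds")
}

measure(label: "grid update") {
    // ... the performance-critical code under test ...
}
```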

On top of the things I recommend, there are a few other things that will make your Swift code run faster. If you’re going for serious performance, for many applications you will want to step out of your code for a second, and use Metal, Accelerate, or full-on parallelism.

Build Settings (4:30)

Debug builds default to -Onone. Even in 1.2, this is typically about a hundred times slower for performance-critical code, which means really quite slow. For release builds the default is -O, which is pretty fast. Choosing -Ounchecked removes any precondition checks (and a few other checks) you might include in your code. Right now it doesn’t make a huge difference, as unchecked is usually only about 10% faster.
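To make that trade-off concrete, here is a minimal sketch — the scale function is invented for this example, not code from the talk:

```swift
// The precondition is checked under -Onone and -O, and traps on bad input.
// Under -Ounchecked it is compiled away entirely (along with overflow checks),
// so invalid input would silently produce garbage instead of stopping.
func scale(_ values: [Double], by factor: Double) -> [Double] {
    precondition(factor.isFinite, "factor must be finite")  // skipped with -Ounchecked
    return values.map { $0 * factor }
}
```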

New in Swift 1.2 is a setting for whole module optimization, which turns off incremental builds. This slows down the build because, if you enable it, the entire module or application is compiled in one big step, as if it were all in one file. Your builds will get slower, but it allows inlining and optimization across all those files. With it turned on, you no longer have to move code to the right file to get the best performance, which can make quite a big difference. It also allows the compiler to create specialized versions of generic functions where they are used, which can be helpful.
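As a rough illustration, with invented file and function names: under whole module optimization the compiler can specialize and inline a generic function at a call site that lives in a different file of the same module.

```swift
// Helpers.swift — hypothetical file in the module
func maxOf<T: Comparable>(_ a: T, _ b: T) -> T {
    return a > b ? a : b
}

// Main.swift — another file in the same module. With whole module optimization
// the compiler can emit an Int-specialized version of maxOf and inline it here;
// without it, this is an opaque call into a separately compiled file.
let larger = maxOf(3, 7)
```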

If you look at side-by-side comparisons of software running in different builds, with the main object changed from a struct to a class, you will see that the class builds run significantly slower. In situations where you are using the CPU intensively, optimization becomes really important. To emphasize: in my test scenario, the entirely unoptimized version takes half an hour to reach a point that takes the optimized version only 40 seconds. This should give you a clear idea of just how much difference a debug build makes. The lesson here: if you have a big project, you might want to consider doing debug builds in some parts of the code, and not in others.

Make Two Targets (7:30)

To help with this, you can make two targets for your application. In one of them, make a framework target that will be used by the main application. Put the performance-critical code, and all the things it calls into, in that framework. This lets you optimize the framework, run it with whole module optimization, and separate out the remaining code into a debug build calling into it. When you do a release build you can optimize the whole thing. It’s a way of setting up your project to get the best of both worlds; extra hassle, sure, but if you’re really having problems with the slowness of the builds, or you need the debug in other areas of the code, it can be very useful. And if something really didn’t work in Swift, you could always drop into C or C++ in the inner loop.

Is Swift Faster than Objective-C? (8:46)

So, the big question: is Swift faster than Objective-C? The question is simple; the answer is not. C is valid Objective-C, and it’s fast. NSObject-based code with NSArrays, retain and release from ARC, and so on, will be a lot slower. So, the unhelpful philosopher’s answer: it depends what you mean by Objective-C. If you’re using Swift with structs, it will usually be nearly as fast as C, and quite a lot faster than Objective-C, so for now, let’s say it’s likely to be somewhere between the two.

In comparisons between Swift, C, and C++, I’ve been tweaking other people’s projects online to see how close I can get the performance. In most cases I find that Swift gets within around 20%, but there are exceptions. Primate Labs, who make Geekbench, published the source code for their Swift performance-testing implementations, along with an article detailing the results of a comparison with their C++ benchmark code. After some work with the latest betas, two of the three tests, FFT and Mandelbrot, were near enough equal between Swift and C++. But the third test, GEMM, was four times faster in C++. Part of the reason, I believe, is that the C++ test used “fast math”, which is less accurate, may have allowed further optimizations, and is not available in Swift.

One other test was done by David Owens, with a gradient render that performed seven times faster in C. He detailed his results in a series of posts, demonstrating that there definitely are cases where C is a lot faster. Those tests used -Ofast optimization in C; when built with -Os (which is perhaps more typical), C was only two times faster than Swift — still significantly faster, but certainly within bounds for the Swift compiler to improve. Indeed, as a general point about Swift, the knowledge the compiler possesses about your code should let it do at least as good a job as C.

Simple Optimizations (11:35)

So, how do you get your Swift to be swift? I have already mentioned a few things. To start with the simple ones: use structs where you can, as they will be much faster than objects. If you must use objects, make as many of them final as you can; that allows much more direct calls into the code, and removes steps of indirection. Use constants with let, which is good not only for optimization but, in my view, also for code design. With most classes, unless you design them really carefully, your methods are at risk of being overridden by others in ways that don’t meet expectations — especially if a subclass mutates the object in a different way than you intended. So final is, I think, good for correctness as well. And structs are good for correctness, as are constants.
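Here is a small sketch of those three points together. The Cell and Grid types are invented for illustration (loosely in the spirit of a cellular automaton), not code from the talk:

```swift
// Value type: no reference counting, no dynamic dispatch, can live on the stack
// or inline inside an array's storage.
struct Cell {
    var alive: Bool
    var neighbours: Int
}

// final lets the compiler call (and potentially inline) these methods directly,
// because no subclass can ever override them.
final class Grid {
    let width: Int            // constants: set once, never mutated
    let height: Int
    private var cells: [Cell]

    init(width: Int, height: Int) {
        self.width = width
        self.height = height
        self.cells = Array(repeating: Cell(alive: false, neighbours: 0),
                           count: width * height)
    }

    func cell(atX x: Int, y: Int) -> Cell {
        return cells[y * width + x]
    }
}
```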

Then there are some dangerous optimizations you could do. The operators &+, &-, and &* don’t do overflow checks; you’ll have to manage that yourself, or risk the overflow. With Unmanaged<T>, you can avoid ARC calls, which helps with speed, but you need to make sure that the object isn’t released while you are using it.
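A brief sketch of both, with invented types and values; in each case you are trading a safety check for speed:

```swift
// &+ / &- / &* skip the overflow check and wrap around instead of trapping.
let big = Int32.max
let wrapped = big &+ 1        // -2147483648: wraps silently, no runtime trap

// Unmanaged avoids retain/release traffic in a hot loop, but YOU must guarantee
// the object stays alive for as long as the unmanaged reference is used.
final class Lookup {
    func value(for key: Int) -> Int { return key * 2 }
}

let lookup = Lookup()                                // strong reference keeps it alive
let unmanaged = Unmanaged.passUnretained(lookup)     // no retain happens here
let result = unmanaged.takeUnretainedValue().value(for: 21)   // no ARC calls
```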

What to Avoid (13:39)

A caveat: this only matters for highly performance-critical code. For most of the code you write, do what’s appropriate for the code style, and so on. Don’t be a slave to optimization. However, when you come to the performance-critical parts, some things are very bad to do! Global and static variables can be very slow to access, because the compiler can’t optimize away the access — the value could be changed from other locations. For performance-critical code, avoid function calls that can’t be inlined; ideally, you don’t want any function calls at all, because they’re quite expensive. And if you can, avoid access via protocols, or to non-final class methods or properties, as well as Objective-C-compatible ones.
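To make the dispatch differences concrete, here is a hypothetical sketch — none of these types come from the talk — contrasting a global, a protocol, a non-final class, and a final class in a hot loop:

```swift
protocol Renderer { func render() }        // calls go through a protocol witness table
class OpenRenderer { func render() {} }    // non-final: dynamic dispatch, may be overridden
final class FastRenderer {                 // final: direct call, a candidate for inlining
    func render() {}
}

var sharedCounter = 0                      // global: the compiler must assume it can change
                                           // from elsewhere, so it can't cache the value
                                           // across calls

func hotLoop(renderer: FastRenderer) {
    var local = 0                          // prefer a local in the hot path...
    for _ in 0..<1_000_000 {
        renderer.render()                  // direct (inlinable) call on a final class
        local += 1
    }
    sharedCounter += local                 // ...and touch the global once at the end
}
```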

Checklist (14:49)

Check your build settings. Profile it. Measure! An obvious thing to say, perhaps, but set up and measure the critical parts of your code, so you know what changes are occurring and what effects you’re having, and can see whether your changes actually help.

And thank you!


For additional commentary on this talk, please see Joseph’s blog. Thank you to Simon Gladman for the original Cellular Automata code used as a basis for some of these performance optimizations.

Q&A (15:15)

Q: How do you make sure functions can be inlined?
Joseph: You have to think about whether the compiler can know where the call is going. That means the function needs to be in the same file — or the same module, if you have whole module optimization turned on. It can’t be via a protocol; there might be some exceptions where the compiler can get away with that, but I don’t know them. If it’s in a class, it needs to be final, otherwise the compiler has to worry about inheritance. Although in the latest two betas, it will automatically mark things as final if they’re in the same module and internal, or in the same class and private — the compiler can work that out if the declaration can’t be inherited from. You have to try to determine whether the compiler can know exactly what function it’s going to call. Then you can check whether it has worked by looking at the profiler, at the SIL code, or at the assembler.

Q: Are there things that we have to worry about when we optimize our Swift code — particular behaviors that might change? And can we test that somehow?
Joseph: I think the behavior only changes for the unchecked case, where the preconditions are skipped but the compiler can assume that they are true. I don’t believe any results will change in the fast case. There may be other differences in the unchecked case, but I don’t have a full handle on exactly what it skips, and what it then does in that case.

Q: Last year you found an issue with a float that had been put into an int, or vice versa, and since then Apple have made big improvements to that. Is this still relevant?
Joseph: Yes, there was a case I found where, because of the bridging to Foundation classes, there was a typo: someone had assigned a literal seven to a float. I think it worked by converting to NSNumber and then back again, and things like that. I think there might still be some edge cases, but it’s improved in the new beta because it doesn’t automatically cast from Foundation types to Swift types. If you do include Foundation, especially in Swift 1.1, be careful! It changes the type safety, because things will cast through NSNumber between different types. That’s slow because it goes via an object, as well as being potentially incorrect in places.

Q: Have you run the same test across multiple Swift versions like 1.0, 1.1, 1.2, and was there a difference?
Joseph: I haven’t done a direct comparison. On the Geekbench tests, there are some results on their website. What I seem to remember is that between 1.1 and the first 1.2 beta, a couple of the tests became a couple of times faster. Between beta one and beta two I could remove some of the ugly things I had to do previously: final became automatic in that case, and I had also been using an UnsafeMutableBufferPointer instead of an array, because it provided big performance improvements in 1.1. In 1.2 that seems to have largely gone away, and arrays seem faster — I think they were over-checking quite a lot of things. I hope they’re still checking enough, but I think they’ve sped up arrays a lot. It depends on what point in Swift’s development you started from, to see how big a jump it was. If you started without my optimizations, beta 2 compared to 1.1 could be 10 times faster or more, but that gain would be a lot less if you had already optimized in the ugly ways I suggested.



Joseph Lord
