3D Touch is BIG - it’s the most significant new input method added to the platform since the original iPhone. Alexis discusses the API for Quick Actions, but focuses on the exciting potential of 3D Touch’s access to raw pressure data. With this powerful new tool you can create completely novel user experiences. We examine the limits of the new touch sensors, and speculate wildly on what they could mean for your apps.
My name is Alexis Gallagher (@alexisgallagher) and I’m a freelance iOS developer. I’m really excited to be talking about 3D Touch, because I think it’s one of the most interesting things to come to mobile devices for a long time.
I’m going to go through some of the API for the more standard and sensible ways to use it, but I’m also going to spend a little bit of time talking about aspects that aren’t quite as clear but have a lot of potential.
What’s Exciting About 3D Touch (0:30)
3D Touch is the first new general-purpose input method we’ve had since the release of the original iPhone. There have been incremental improvements since then, for example: the addition of GPS with the iPhone 3G, and the front-facing camera with the iPhone 4. Other additions have been more specialized: Touch ID on the iPhone 5S, and the barometer introduced with the iPhone 6. These are not very general-purpose input methods, but 3D Touch is! It’s as general purpose as capacitive touch.
While it’s only available on the 6s and 6s Plus now, it’s probably only a matter of time before it expands to the rest of the iOS lineup. You can imagine using it pervasively, throughout the interface. This is the first time we’ve had a significant advance in a general purpose input method; everything before this was just the pre-3D Touch era.
The 3 Main APIs (1:58)
There are three main new APIs: Home Screen Quick Actions and Peek and Pop are the sensible, important ones. But the third part of the API, Force Properties, is the one that has the mesmerizing secrets. This is like Christmas for iOS developers: it’s an amazing gift and we haven’t unwrapped it yet.
Home Screen Quick Actions (2:41)
Home Screen Quick Actions are a mechanism for launching an app from the home screen with a shortcut to an action you want to perform.
Since 3D Touch wasn’t introduced until after WWDC, there were no sessions devoted to it this year. However, we can imagine what Apple was going for. I would say that home screen quick actions were designed for purposeful launch, when you want to perform a specific action right away.
There are two kinds of home screen quick actions: static actions and dynamic actions. Static actions are always presented the same way, whereas dynamic actions can be presented differently over time. For instance, creating a new photo is a static item, but viewing a recent photo is a dynamic item, because it depends on the state of the app.
Apple already supports home screen quick actions in about two-thirds of their own apps, so make sure you’re supporting them too if you want to keep up.
Quick actions appear as a title, an optional subtitle, and an image. Apple provides 30 standard system images to choose from, representing standard actions like play, pause, and share. You can also provide your own image, but it will be rendered as a template, so multi-colored images are out. You may also use a photo, but you’re limited to contact photos only.
The API is similar to other APIs where you don’t present a UIView, but rather a structured bit of data that iOS takes care of rendering for you. It’s a bit like UIBarButtonItem, except here it’s UIApplicationShortcutItem.
Static quick actions are added to the Info.plist, where all those properties are present. The dynamic items are assigned as objects on the application object: you create a UIMutableApplicationShortcutItem, providing the title, the optional subtitle, and the icon, and you assign an array of those items to the shortcutItems property. One other noteworthy property here is type, a string identifier you assign to the item so you can recognize it later. It could be a UUID or any other meaningful string identifier.
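Here’s a minimal sketch of registering a dynamic item (Swift 2 era; the type string and titles are made up for illustration):

```swift
import UIKit

// A sketch of a dynamic quick action. "com.example.recent-photo" is a
// hypothetical identifier; any meaningful string works.
let item = UIMutableApplicationShortcutItem(
    type: "com.example.recent-photo",        // recognized again at launch
    localizedTitle: "Recent Photo",
    localizedSubtitle: "Taken yesterday",
    icon: UIApplicationShortcutIcon(type: .Share),
    userInfo: nil)
UIApplication.sharedApplication().shortcutItems = [item]
```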
You respond to quick actions in the app delegate launch methods, willFinishLaunchingWithOptions: and didFinishLaunchingWithOptions:. The quick action is delivered in the launchOptions dictionary, where we can check the type string we assigned earlier.
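A sketch of that check at launch, using the hypothetical type string from above:

```swift
func application(application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
    // If the app was launched via a quick action, the item is in launchOptions.
    if let shortcut = launchOptions?[UIApplicationLaunchOptionsShortcutItemKey]
        as? UIApplicationShortcutItem
        where shortcut.type == "com.example.recent-photo" {
        // Route straight to the relevant screen.
    }
    return true
}
```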
If the app is already running, there’s a new app delegate method that may get called instead: application:performActionForShortcutItem:completionHandler:.
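Sketched out, with the same hypothetical type string:

```swift
func application(application: UIApplication,
    performActionForShortcutItem shortcutItem: UIApplicationShortcutItem,
    completionHandler: (Bool) -> Void) {
    // Called instead of the launch path when the app is already running.
    let handled = (shortcutItem.type == "com.example.recent-photo")
    if handled {
        // Perform the action.
    }
    completionHandler(handled)
}
```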
Peek and Pop (9:06)
Peek and pop is essentially a mechanism for faster navigation. You can preview what you’re navigating into; that’s the peek. The first stage is the hint, the blur effect the system provides. The second stage is the peek, which is a minimal preview of the content. Then if you keep pushing, you get the pop, which is basically where you would have gone if you had tapped.
The optional flow is to peek and then swipe up, which reveals peek quick actions: a predefined list of things you could do, a lot like the home screen quick actions.
The peek is a view controller that you provide; I’ll call it the previewViewController. And then the pop is the destinationViewController that you would normally go to. Under peek there are preview action items: the preview view controller is also responsible for declaring the quick actions that may be available on the peek screen, by overriding a method that’s now part of UIViewController.
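For example, the preview view controller might declare its peek quick actions like this (a sketch; the Share action is just an example):

```swift
// In the preview (peeked) view controller. These items appear when the
// user swipes up on the peek.
override func previewActionItems() -> [UIPreviewActionItem] {
    let share = UIPreviewAction(title: "Share", style: .Default) {
        action, viewController in
        // Handle the share action here.
    }
    return [share]
}
```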
Now, how do you provide the preview view controller and the destination view controller to the system? Well, as is often the case in iOS, the answer is a delegate. There’s a new method, registerForPreviewingWithDelegate:, and this is where you provide an object that will be the UIViewControllerPreviewingDelegate.
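A minimal sketch of the whole arrangement, assuming a hypothetical DetailViewController as the pop destination:

```swift
class ListViewController: UITableViewController, UIViewControllerPreviewingDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Only register on hardware that actually supports 3D Touch.
        if traitCollection.forceTouchCapability == .Available {
            registerForPreviewingWithDelegate(self, sourceView: view)
        }
    }

    // The peek: return the preview view controller, or nil to show nothing.
    func previewingContext(previewingContext: UIViewControllerPreviewing,
        viewControllerForLocation location: CGPoint) -> UIViewController? {
        guard let indexPath = tableView.indexPathForRowAtPoint(location)
            else { return nil }
        return DetailViewController(indexPath: indexPath)  // hypothetical initializer
    }

    // The pop: commit to the destination view controller.
    func previewingContext(previewingContext: UIViewControllerPreviewing,
        commitViewController viewControllerToCommit: UIViewController) {
        showViewController(viewControllerToCommit, sender: self)
    }
}
```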
Force Properties (14:54)
Force properties are the most mysterious and interesting part of the new 3D Touch API. If you go to Apple’s documentation, there’s a lot about peek and pop and home screen quick actions, but just a few lines about force properties, with no word on what to use them for or what they can do. They can do a lot! Force properties give you direct access to the force information on a 3D Touch device.
So far Apple has not used this much in their own apps; the Notes app is one of the few examples. You can draw thinner or thicker strokes based on the pressure of the touch.
I think the fact that there’s no design direction from Apple and few examples means this is an open territory.
I’ve created a view that shows the force information with an orange circle. One thing you notice is the granularity of changes in force. It even works with multitouch.
The API on this is old school: it’s iOS 2.0-era API. By that I mean the system provides you no gesture recognizers or UI controls for dealing with force. All you get are new properties on UITouch, plus trait collections that tell you whether the device is 3D Touch capable at all.
So if you want to interact with this force information, you’ve got to go down and interact with touchesBegan: and touchesMoved:, or with the sendEvent: method on UIWindow.
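Reading the force is then just a matter of inspecting the touches; a sketch:

```swift
class ForceCircleView: UIView {
    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
        // force is in the same units as maximumPossibleForce, so the
        // fraction runs from 0.0 to 1.0 on 3D Touch hardware.
        guard let touch = touches.first
            where traitCollection.forceTouchCapability == .Available
            else { return }
        let fraction = touch.force / touch.maximumPossibleForce
        alpha = 0.25 + 0.75 * fraction   // e.g., fade in with pressure
    }
}
```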
If you want to start using 3D Touch to recognize force everywhere, you can build your own gesture recognizer. Mine represents a gesture I’m calling a squeeze: a hard press on a 3D Touch device. It has a threshold of 0.5, meaning 50% of the maximum possible force. You attach it to a view, and if you press with more than 50% of the maximum possible force, the gesture recognizer triggers.
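Here’s a sketch of such a recognizer. SqueezeGestureRecognizer is my own name, not a UIKit class:

```swift
import UIKit.UIGestureRecognizerSubclass  // needed to set `state` in a subclass

class SqueezeGestureRecognizer: UIGestureRecognizer {
    // Fraction of maximumPossibleForce that counts as a squeeze.
    var threshold: CGFloat = 0.5

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent) {
        guard let touch = touches.first
            where touch.maximumPossibleForce > 0 else { return }
        if touch.force / touch.maximumPossibleForce >= threshold {
            state = .Recognized   // fire the discrete gesture
        }
    }

    override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent) {
        if state == .Possible { state = .Failed }
    }
}

// Attach it to a view like any other recognizer:
// view.addGestureRecognizer(
//     SqueezeGestureRecognizer(target: self, action: "handleSqueeze:"))
```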
Measurement Accuracy (18:53)
We’re getting this information on touch force, but what does it mean? How accurate is it really?
Using that component I made, I gathered force information by mashing my finger around for 60 seconds. I logged the data out through iTunes file sharing and analyzed it. The first thing you notice is that the raw force values are in a range from zero to six and two-thirds.
How granular is this? What’s the smallest difference you can observe, the resolution? The reported force resolution is very precise: on the order of 0.2% of the total range of force it reports. If you look at the fractions you get back and assume those floating-point numbers are actually representations of fractions, we’re seeing about 600 possible values. It’s much, much more precise than the information you get from the UITouch radius.
There’s a lot you can do with 600 possible values. That’s more than the possible values going left to right on a slider across the screen. That’s super high resolution.
But of course there’s a difference between precision and accuracy. Maybe it’s giving us these very precise numbers but they’re all nonsense. Is it just noise? How accurate is it? Asking this question began my quest for a touch pedestal.
The Quest for a Touch Pedestal (20:50)
The UIKit API only gives you this force information when you’re also getting capacitive touch information. It’s all piggybacking on top of the API that was designed around UITouch.
Even though in theory you should be able to detect the force of an inert object, if it’s nonconductive and doesn’t trigger capacitive touch, the API won’t give you that. It will only tell you the force when there’s a capacitive touch to associate with it.
If you want to measure a force accurately, you need a way to apply force to the phone while also triggering the capacitive touch. Basically, you need something like a rigid, weightless finger to activate touch, which you can put your weight on top of. This is what I mean by a touch pedestal. You don’t want to put it on top of your finger because your finger is going to mash around and it’s not stable.
Many things do not work as touch pedestals, as I learned through trial and error: coins, metal foil (at least not consistently), bottles or droplets of water, wet sponges, and cheese. Oddly enough, figs work, but aren’t practical to put weight on top of.
Korean Sausage and Hot Dogs (23:01)
Then I remembered the story about Korean sausages: in South Korea it gets extremely cold but people still want to operate their touch devices, so they go to vending machines and buy sausage snacks. And then they use these sausages as styluses to operate their phones without taking off their gloves.
So I bought some hot dogs, and I found hot dogs do work. It’s just like a finger. I was thinking I’d make a hot-dog-based touch pedestal and then take my measurements. But annoyingly, a little slice of hot dog doesn’t work as a touch pedestal, and pushing on the hot dog with an inert object doesn’t work either.
How Capacitive Touch Works (24:14)
At this point in my quest for a touch pedestal, I was getting frustrated, and I came to realize there was a question I didn’t really have an answer to: how does capacitive touch actually work? I’d been building apps on these devices for four or five years, and I realized I didn’t know.
If you want to be very confused, just Google for it: you’ll find at least a page of incorrect or strategically ambiguous explanations. The New York Times will tell you it depends on the ‘conductive power’ of the thing touching the screen. They don’t say conductivity, because it doesn’t depend on the conductivity of the thing touching the screen; you can activate capacitive touch with a piece of paper between your finger and the screen. What matters is the capacitance of the object, which can be mediated just by the electric field.
But the bottom line is that triggering capacitive touch depends not just on the type of material touching the screen, but also on the physical shape of the object. That’s why a bit of hot dog with a wire in it does not trigger capacitive touch on its own, but does trigger it when you’re touching the wire: now you’re adding to the capacitance of the object.
Now I’m thinking I’ve got the essence of it: I need to be electrically connected to the object myself. You can see this with a drop of water: a drop of water with a wire in it works when I’m touching the wire, but not when I’m not. This is great, because now I have a way to trigger capacitive touch without putting any weight on the device at all. I’m on my way to my touch pedestal.
Mapping CGFloats to Newtons of Force (26:11)
Next, I take an old oatmeal box and saw off the carton. I take the lid, glue the plastic spool from a spool of thread into it, drill a hole, and run a copper wire through the base of the spool. Then I carefully put a drop of water on my iPhone, turn the lid over, put it on top of the iPhone, and connect the wire to a reservoir of water. Now I’ve got it. This is a touch pedestal: I’m triggering capacitive touch, but in a very controlled way. The pedestal itself weighs only about 24 grams. Now I can go to town and start taking data.
In the slides is a picture of me measuring a canister full of coffee grounds, which comes out at 3.65 CGFloats of force. Of course, that’s not what we want; we want to calibrate this against real units. So bit by bit I add five grams of coffee grounds at a time and take measurements in my lab notebook. Once I gather it all together, I can see it actually makes sense: it’s a nice straight line, like you’d hope it would be. Now I have a mapping of CGFloats to Newtons of force. And it’s really quite sensitive, able to register as little weight as four playing cards.
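As a sketch, the conversion ends up being a simple linear function. The slope below is a placeholder, not the measured value from my notebook:

```swift
// Placeholder coefficient: the real value comes from regressing known
// weights against the force values the API reports.
let gramsPerForceUnit: CGFloat = 100.0   // hypothetical slope

func newtons(fromTouchForce force: CGFloat) -> CGFloat {
    let grams = gramsPerForceUnit * force
    return grams / 1000.0 * 9.81   // grams-force to Newtons
}
```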
Possibilities for Force Properties (29:17)
What do we do with force properties? I think what’s really exciting is all of the user experiences they make possible. We have the opportunity to do all kinds of great things: potential applications present themselves in drawing, music (imagine a wah-wah pedal app), improving text selection, and even wacky easter eggs (a view that’s pushed hard enough could shatter into a million pieces).
I’m sure there’s a lot of cool things possible, and that’s why I think 3D Touch is the most interesting thing to happen in mobile for a while.
Is This Just Right-Click? (30:46)
Some people say, “Well, isn’t this just right-click?” But with 3D Touch you’ve got 600 degrees of variation at every point on the screen, a lot more than you get with a right click.
Resources (31:03)
Apple’s ‘Adopting 3D Touch on iPhone’ documentation and sample code are really good, and will help you quickly get the hang of home screen quick actions, peek, and pop.
My TouchVisualizer repo also has useful sample code with some of the examples I demonstrated.
Q&A (31:27)
Q: There was a story on medium.com a couple of days ago about some people who also found the scale was linear. The thing that worked best for them was a teaspoon. When they submitted it to the App Store it was rejected, because Apple didn’t want people putting too much weight on their phones, and they thought the spoon looked a little too much like heroin paraphernalia.
Alexis: Someone pointed out the Medium post to me a couple of days ago, but I haven’t seen it yet. I tried the spoon and got that to work, too. The problem with the spoon is that it isn’t a flat surface, so you can’t load it up to half a kilogram. I think a stable pedestal is best; you really want the force to be going straight down through a point.
Q: I program games for kids and developed a visual synthesizer with a Wacom tablet. It’s pressure-sensitive, and I tried the app out with a lot of kids. One kid was pressing harder and harder to the point where he was pounding on the tablet and broke the stylus. So maybe there should be some kind of indication when the touches exceed a certain threshold?
Alexis: That’s interesting, and it touches on one of the things I noticed, which is that the range of force values revealed to you by the API is smaller than the total range of force the device actually detects. So it appears they’re probably using that as a safety valve, to stop people from making apps that require you to jump on your phone to trigger them.
Q: Have you measured this data on more than one iPhone? I’m wondering if they’re all calibrated the same.
Alexis: No, I haven’t, but that’s an obvious avenue worth exploring. Another thing would be to check how consistent the data is depending on whether you’re applying the force in the middle of the screen or on the edge. If you’re near the edge of the screen, it’s going to have a harder time bending down because it’s rooted to the frame. The other question here is service life. How many times can you push down on your phone and have it pop back up exactly as much as it did before? If you keep using force touch a lot, does it become less accurate? I’d imagine probably so.