Almost three months have already passed since WWDC ’14, and the new iPhone will show up next week. We’ve all been hard at work the whole summer, following up on the announcements Apple made at WWDC. Very cool new stuff as well as major updates to existing apps are coming, and we’re all very excited to release them later this year and show the world what we’ve been doing lately. Stay tuned!
By the way, here are our selfies with Tim (featuring @benoitsan and @bunuelcubosoto):
This is very unusual for Apple, and we were all lucky to be in the right place at the right moment ;-). Plus, as a bonus track, the “Making of” of Benoit’s selfie, featuring a live Tim Cook:
I won’t review all the WWDC stuff (I haven’t even had a chance to dive into half of it yet), but I’ll give some thoughts and feelings about Swift and Metal, then I’ll try to guess where this is going.
Swift. It’s great to see Apple move forward. Apple updated Obj-C quite a few times with very cool features (first there was fast enumeration, then blocks, ARC, etc.) but could not go much further without breaking things, and so they made a new programming language. A reboot. Nobody saw that coming, yet, in hindsight, it’s very much in line with the way Apple has been innovating since the end of the ’90s: breaking with the past when it becomes a burden to their evolution. Such changes are generally dangerous moves that most companies avoid (but not Apple), and when it works, it’s amazing. It’s definitely about the people behind it too: the right person in the right place, often a single individual or a very small team, makes the difference. Swift is no exception.
I’ve been playing with Swift for a couple of weeks, implementing a real feature (a CloudKit backend) in a new application we’re currently developing. I’m confining Swift to just that feature for now, as I expect the language to change a lot (it has already changed since June; each new Xcode release has broken the code). So far, my feelings are mixed, but I’m still enthusiastic about it. It allows amazing things, but in its current state it is also disappointing in some respects. It feels a bit like C++/STL with a much smarter, more concise, and more elegant syntax. It’s great and very powerful, but you also get lots of responsibilities. Interaction with Cocoa classes is not that great at this time: casts are often needed (bridging is not transparent and has hidden performance costs), and there’s also a mismatch of language features with Obj-C; some are even lacking (global constants?).
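To illustrate the kind of casting the Cocoa bridge requires, here is a hypothetical snippet (all names are made up, and the syntax is current Swift, which has evolved since this was written):

```swift
import Foundation

// An NSArray coming back from a Cocoa API is untyped,
// so Swift needs an explicit downcast to work with it:
let cocoaNames: NSArray = ["Benoit", "Cédric"]

// Bridging to a typed Swift array converts the elements,
// which is the hidden performance cost mentioned above:
if let names = cocoaNames as? [String] {
    print(names.joined(separator: ", "))
}

// Going the other way also bridges: a Swift String becomes an NSString.
let swiftString = "WWDC"
let cocoaString = swiftString as NSString
print(cocoaString.length)
```

None of these casts would be needed if the collections and strings stayed on one side of the bridge; it’s crossing back and forth that costs.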
Code can be smaller (though not always), but it can also be (very) hard to read. var, let, optionals, and implicitly unwrapped optionals are a lot of new concepts to think about in everyday code, and they lack the conceptual simplicity of Objective-C. Optionals are one feature I don’t like much at this time, as I find they get in the way, slowing me down and adding cognitive load even to simple code. They’re probably not bad; I just wish they were as transparent as ARC is. Something like: “2015: boom, optionals are automatically managed by the compiler, like retain/release under ARC.” In my opinion, Swift needs to evolve to be a real improvement over Objective-C, that is, to make you more productive and keep simple things simple, and my bet is that it will change a lot in the next versions. I remain optimistic about it, as it tackles long-standing Obj-C problems (lack of namespaces, header files, lack of generics and operators for algorithm writers, etc.), and it probably just needs feedback to evolve into a great language for everyday app developers.
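As a minimal sketch of those concepts (the names are made up), here is how much ceremony a simple value access picks up compared to an Obj-C message send, where nil would just silently propagate:

```swift
// let vs. var: immutable vs. mutable bindings.
let maximumAttempts = 3
var currentAttempt = 1
currentAttempt += 1

// An optional: the value may be absent, and the type system makes you say so.
var middleName: String? = nil
middleName = "Danger"

// Reading it requires an explicit unwrap step:
if let name = middleName {
    print("Middle name: \(name)")
}

// An implicitly unwrapped optional skips the unwrap at each use,
// but traps at runtime if it is nil when accessed:
let title: String! = "Engineer"
print(title.count)
```

Each of these forms is individually reasonable; it’s having to choose between all of them, line by line, that adds the cognitive load.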
Metal is another example of how Apple innovates. Why not simply use OpenGL ES? It’s good, it’s proven, it’s an industry standard, and it’s used by so many people. Well, it’s not the best you can do with existing hardware. In the past few years on the desktop, OpenGL has been playing catch-up with Microsoft’s DirectX, where most innovations arrived first. OpenGL was often late to the new-hardware-features party, so much so that its own future was even compromised at some point (modernisation of its API vs. maintaining compatibility for existing apps). If you think about it for a second, why would Apple wait for important changes when they can do it all by themselves? Moreover, why would they keep the same graphics foundation as their main competitor if they can do better? (I noticed Google has a compute API now, but haven’t reviewed it yet.) They have the platform, the software, the hardware, the developers, and a very good understanding of what’s needed to move forward. It’s all in their hands. And it’s actually the way the other game device makers (Sony, Microsoft, and Nintendo) have been doing it for the past 20 years, after all.
Macs are another story, though. Macs still depend on third-party GPU hardware, so we’ll have to see what the next steps are. A single graphics/compute API would probably make a lot of sense over time (like DirectX across game consoles, phones, and computers). Don’t you feel like OpenGL has been lagging on Macs lately?
I was recently chatting about ARM Macs on Twitter. Jean-Louis Gassée (@gassee) thinks they are coming. In addition to Gassée’s very valid points, there might be additional hints in the graphics APIs. Indeed, if you look at Metal, its APIs are very low-level, and their scope is to address both graphics and general computation needs, just like the combination of OpenGL and OpenCL. Why would Apple want to maintain two sets of graphics/compute APIs with similar scopes for iOS and OS X? Metal, in its current form, requires memory shared between the CPU and GPU, which is obviously the case for iOS devices, but not for the Mac, which still makes use of discrete GPUs. The problem with switching away from Intel (and AMD/nVidia) to unified-memory systems only is maintaining performance. Apple already does this for integrated Intel graphics chips (sometimes the only graphics chip in a machine), but still needs discrete GPUs for high-end Macs.
The A7 chip (iPhone 5s, iPad Air) demonstrated pretty amazing performance while being targeted at phones and tablets. Apple could add more cores, raise frequencies, and probably develop its own GPU. Would it match Intel and discrete-chip speed? Apple had that strategy before, in less favorable times. Back then, to grow as a computer maker, they needed more speed per watt and Windows compatibility, hence the Intel switch. Apple’s DNA is surely pushing them back to their roots now, and they have matured enough to handle this all by themselves. It makes sense both business-wise (an entire experience made by Apple, forking away from the competition, lower prices) and product-wise (great battery life, a single API, etc.) if they can close that performance gap, probably starting with entry-level MacBooks and the Mac mini / iMacs.
But there’s a risk too, and that’s why it’s not an easy choice for Apple: if they do it, Intel’s innovations in chip-making technology, as well as AMD’s and nVidia’s expertise in GPUs, will mostly be beyond their reach.
Exciting times ahead!