Hi Kris, tell us a bit about your background and what you do at Starling.
I started as an ad designer in the late 90s for one of Slovakia’s national newspapers. In the early 2000s, I did some PC assembly and network administration and support for clients, both on-site and remotely. The experience of supporting networks across the country, and some of the difficulties I faced while doing that, eventually led me to set up my own software startup. I wanted to change and substantially improve the user experience of how we, at the time, deployed Windows on a larger scale. I think we succeeded; we made a tedious manual process fully automated and hassle-free. As the founder of a small shop, I had to do everything from UI concepts to marketing and legal. The only thing I was not capable of doing was programming, which felt quite frustrating as I really wanted to partake in the creation of the product itself. That’s when I decided it was time to fill this gap, and I started to learn C#.
When the iPhone came out, I realised the desktop paradigm was pretty much dead and there was a mobile revolution in progress. For years I was quite frustrated with Microsoft’s approach to software design and quality. I’ve always had strong opinions on what constitutes a good design and user experience, and switching to Apple’s iOS seemed like the logical and inevitable step. I wanted to write my own apps and deliver really good UX to customers. To this day, this is the most fulfilling part of my daily life as a software engineer.
Moving to London, I had a number of programming jobs where I progressively improved my programming skills. Eventually I joined Starling where, besides my core competency of programming, I am able (and expected) to influence how the product looks and works.
When it comes to Starling, the only thing customers see is the app, and there is no fallback medium or channel to use, so good UX is of critical importance. In some ways, I consider design superior to programming, even though programming is a fascinating world of its own, with its never-ending architectural considerations and sometimes strange abstractions.
My ultimate aim is to make the design team happy by delivering exactly what they asked for. Not doing so is often a source of tension between development and design teams, and it diminishes the value designers create by meticulously obsessing over every pixel. My experience helps me better interpret their specifications and expectations. I also try to give them detailed feedback about technological options and platform limitations so they better understand what is possible. Last but not least, I interact with my fellow iOS dev colleagues, and I’d like to take this opportunity to thank them for tolerating my occasional rants about UX and implicitly unwrapped optionals.
You’ve experienced the iPhone X first-hand — talk us through some of the major changes that stand out to you.
Let’s start with the most obvious change: the screen. One can immediately see the difference, as it is not rectangular. Besides the new shape, it now uses an OLED panel, so it can finally show a perfect black. It is a true 3x Super Retina HD display, which means that, unlike the Plus-designated iPhones, it does not downscale the image from the internal 3x retina resolution to a smaller Full HD before the device renders it on screen; everything is super sharp. It can also display a wider spectrum of colours, reaching higher saturation.
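The downscaling difference can be shown with a few lines of arithmetic, using the point sizes Apple publishes for each model (a Plus-sized iPhone lays out its UI on a 414×736-point canvas, the iPhone X on 375×812 points). This is purely illustrative Swift, not code from any real app:

```swift
// Plus iPhones: UI laid out on a 414x736-point canvas, rendered
// internally at 3x, then downsampled to the physical 1080x1920 panel.
let plusRenderedWidth  = 414 * 3   // 1242
let plusRenderedHeight = 736 * 3   // 2208
let plusPanelWidth  = 1080
let plusPanelHeight = 1920

// iPhone X: a 375x812-point canvas rendered at 3x matches the
// 1125x2436 panel exactly, so no downscaling pass is needed.
let xRenderedWidth  = 375 * 3      // 1125
let xRenderedHeight = 812 * 3      // 2436
let xPanelWidth  = 1125
let xPanelHeight = 2436

print(plusRenderedWidth != plusPanelWidth)  // true: the Plus downsamples
print(xRenderedWidth == xPanelWidth)        // true: the X is pixel-exact
```

That final resampling step on the Plus models is what the "true 3x" claim refers to: on the iPhone X, one rendered pixel maps to exactly one physical pixel.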
The experience was, in a way, shocking. It felt like I was not even looking at a screen (i.e. images under glass); instead, what I saw was the raw, physical material itself in its native colour.
The home button (and Touch ID with it) has been removed, so now all the interactions are done with gestures, which will take some getting used to. Most probably not more than 1-2 weeks though.
Another important change is the CPU. It now has a separate dedicated section called the Neural Engine. The job of this chip is to analyse the signal coming from the TrueDepth camera and illumination system, housed in the controversial notch at the top of the phone. Some people might remember Microsoft’s Kinect; Apple bought the original company behind Kinect, and this is basically a much improved, miniaturised version of it in a phone.
The Neural Engine analyses the signals coming from the TrueDepth camera to create a 3D map of your face, which is then used for an authentication mechanism called Face ID. Running the neural network algorithms directly on the device, on very fast dedicated hardware, means nothing has to be sent to the cloud. This helps maintain your privacy while keeping authentication very fast, independent of network connection speeds, which vary greatly on mobile. If it’s you, iOS will simply unlock itself. But we’ll talk about Face ID a bit later in more detail.
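From an app developer’s point of view, all of that hardware is hidden behind Apple’s LocalAuthentication framework: the app never sees the face data, it only asks the system to authenticate and gets back a yes or no. A minimal sketch (the reason string, function name, and error handling are illustrative; this needs a real device or simulator to run):

```swift
import LocalAuthentication

// Gate a feature behind Face ID (or Touch ID on older devices).
// The biometric match itself happens entirely on-device; the app
// only receives a success/failure result.
func authenticateUser(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // First check whether biometric authentication is available at all.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)
        return
    }

    // iOS presents the Face ID prompt and performs the match.
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, _ in
        DispatchQueue.main.async {
            completion(success)
        }
    }
}
```

Note that the same call transparently uses Touch ID on devices that have a home button, so apps did not need Face ID-specific code when the iPhone X shipped.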
The last thing, kind of an honorary mention, is the glass back. Besides being an obvious aesthetic change, it allows electromagnetic fields to reach the induction coil inside the phone, which is the basis of wireless charging.