Day three was all about Kotlin, which was announced as an officially supported language for Android during the keynote. Hadi Hariri from JetBrains gave a Kotlin 101 session in the amphitheatre demonstrating how extension functions and default parameters result in concise, easily readable code. He was followed by JetBrains’ Lead Language Designer Andrey Breslav, who outlined the long-term vision for Kotlin, including Kotlin/Native, a cross-platform tool that allows functionality to be shared across different platforms.
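To give a flavour of what Hadi demonstrated, here is a minimal sketch of an extension function and a default parameter value — the function names are my own, not from the session:

```kotlin
// An extension function adds behaviour to String without subclassing it.
fun String.toTitleCase(): String =
    split(" ").joinToString(" ") { word ->
        word.replaceFirstChar { it.uppercaseChar() }
    }

// A default parameter value removes the need for a second overload.
fun greet(name: String, greeting: String = "Hello") = "$greeting, $name!"

fun main() {
    println("google io keynote".toTitleCase()) // Google Io Keynote
    println(greet("Hadi"))                     // Hello, Hadi!
    println(greet("Andrey", "Welcome"))        // Welcome, Andrey!
}
```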
For something a bit different I attended the session on Understanding Colour with Android’s graphics team lead Romain Guy. Because different devices use different colour spaces such as sRGB, Adobe RGB and P3, colours are inconsistent across devices. For example, #FFEE11 in sRGB is not the same as #FFEE11 in Adobe RGB; instead the equivalent colour in Adobe RGB might be #EEDD22. Android O has new APIs that allow us to detect the colour space and translate accordingly.
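A rough sketch of what that translation might look like with Android O’s `ColorSpace` API — the colour values here are illustrative, and this only runs on API 26+:

```kotlin
import android.graphics.ColorSpace

// Sketch, assuming Android O (API 26) and its ColorSpace API.
val srgb = ColorSpace.get(ColorSpace.Named.SRGB)
val adobeRgb = ColorSpace.get(ColorSpace.Named.ADOBE_RGB)

// Connect the two spaces, then transform an sRGB colour into its
// Adobe RGB equivalent. 0xFF, 0xEE, 0x11 scaled to the 0..1 range.
val connector = ColorSpace.connect(srgb, adobeRgb)
val adobeValues = connector.transform(1.0f, 0.933f, 0.067f)
```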
Up next was optimising your apps for large-screen devices. Given the steady growth of Chromebooks, I’m pretty excited about the idea of Android apps also functioning as desktop apps. Most of the advice was good — support both landscape and portrait, resizable windows, and keyboard and mouse input — but the number one item on the list was to support only API 24+. This is laughable, as API 24+ comprises only 7% of the Android user base.
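The resizable-windows advice mostly comes down to a manifest opt-in. A sketch of the relevant config fragment, with a made-up activity name:

```xml
<!-- Sketch: opt in to resizing and multi-window so the app
     behaves like a desktop app on Chromebooks. -->
<activity
    android:name=".MainActivity"
    android:resizeableActivity="true" />
```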
We did get a nice surprise at the end of the session: a voucher for 75% off the new Samsung Chromebook Pro when it comes out next month. Google is clearly keen for us to test and optimise our apps for large screens.
The talk on location and sensors re-emphasised things we already knew: background location polling is restricted to “a couple of times per hour”, with the possibility of batched updates to make up for the decreased frequency. The other strategies to reduce the impact of location on battery life are to defer completely to Wi-Fi location while the user stays connected to the same access point, and “smart” geofencing, which achieves a 10x power improvement by accepting a latency of around two minutes.
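Batched updates can be sketched with the Play services fused location provider — `fusedLocationClient` and `locationCallback` below are assumed placeholders, and the intervals are my own illustration of “a couple of times per hour”:

```kotlin
// Sketch, assuming the Play services FusedLocationProviderClient.
val request = LocationRequest.create().apply {
    priority = LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY
    interval = 30 * 60 * 1000L    // roughly two updates per hour
    maxWaitTime = 60 * 60 * 1000L // let updates batch up before delivery
}
fusedLocationClient.requestLocationUpdates(
    request, locationCallback, Looper.getMainLooper()
)
```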
Other cool location improvements include better recognition of when a user transitions between driving, walking, cycling and so on, and the ability to now detect the difference between car and train travel. They can even tell when you stand up or sit down! These activity transition APIs are a result of Google’s vast neural networks and machine learning.
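Consuming those detections might look roughly like this with the existing activity recognition API — `context` and `pendingIntent` are assumed placeholders, and the labels are mine:

```kotlin
// Sketch, assuming the Play services ActivityRecognitionClient.
val client = ActivityRecognition.getClient(context)
client.requestActivityUpdates(10_000L, pendingIntent)

// In the component receiving the PendingIntent:
val result = ActivityRecognitionResult.extractResult(intent)
val label = when (result?.mostProbableActivity?.type) {
    DetectedActivity.IN_VEHICLE -> "driving"
    DetectedActivity.ON_BICYCLE -> "cycling"
    DetectedActivity.WALKING    -> "walking"
    else                        -> "unknown"
}
```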
Android O makes big changes to notifications, introducing a hierarchy that dictates the order and prominence of different types of notification: major ongoing, person-to-person, general, and “by the way” (BTW). It also gives users highly granular controls to decide exactly which types of notifications they receive.
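Those per-type user controls are built on notification channels. A sketch, assuming Android O’s `NotificationChannel` API — the channel id and name are made up:

```kotlin
// Sketch, assuming Android O (API 26). Each channel becomes a
// user-controllable category in the system notification settings.
val channel = NotificationChannel(
    "person_to_person",              // hypothetical channel id
    "Messages from people",          // user-visible name
    NotificationManager.IMPORTANCE_HIGH
)
val manager = context.getSystemService(NotificationManager::class.java)
manager.createNotificationChannel(channel)
```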
The most highly anticipated talk of the day came from open source hero Jake Wharton and Christina Lee from Pinterest, titled Life is great and everything will be ok, Kotlin is here. Jake did a deep dive into what he finds most useful about the Kotlin language, with lots of code examples and an emphasis on property delegates. Christina then gave a really useful talk on how to convince our teams and stakeholders to adopt Kotlin (hint: be enthusiastic and empathetic).
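For anyone who hasn’t met property delegates, here is a small sketch of two from the standard library, `lazy` and `Delegates.observable` — the `Talk` class is my own example, not from Jake’s slides:

```kotlin
import kotlin.properties.Delegates

class Talk(val title: String) {
    // lazy: the slug is computed on first access, then cached.
    val slug: String by lazy { title.lowercase().replace(" ", "-") }

    // observable: the handler fires on every assignment to the property.
    var attendees: Int by Delegates.observable(0) { _, old, new ->
        println("attendees went from $old to $new")
    }
}

fun main() {
    val talk = Talk("Kotlin Is Here")
    println(talk.slug)   // kotlin-is-here
    talk.attendees = 250 // prints: attendees went from 0 to 250
}
```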
I was fortunate to have a chat with Jake the previous day — I spotted him in a session and slid into the spare seat next to him — and thanked him for all the open source libraries he provides through his work with Square (Retrofit, OkHttp, Picasso, etc.). He was pretty humble and said there were lots of people that contributed, not just him. He’s right: Open source projects are community driven and my goal for this year is to contribute to those libraries in some way, even if it’s just minor bug fixes or documentation updates — it all helps.
It’s been an amazing experience at Google I/O, but the real work starts now to figure out how we can put some of these new features to work.
Check out the Google I/O 2017 YouTube channel to watch any of the talks from this year’s conference.
Tags: Android, Android app development, Google IO 2017, Kotlin, Machine learning
This post was written by Luke Simpson