Android unit testing – it’s not that hard people!

Unit testing under Android seems to be a big topic of discussion. Each week brings new blog posts, screencasts, tweets and presentations telling people how to handle the “difficulties” of unit testing under Android.  Searching the Android Weekly newsletter for “testing” gives 6 pages of results!

I’m going to go out on a limb and make some bold claims, starting with this:

People are over complicating testing

There is no reason testing needs to be so difficult.  You shouldn’t need dependency injection frameworks, mocking toolkits and UI automation APIs.  You shouldn’t need much more than AndroidJUnitRunner + InstrumentationRegistry and even that is probably overkill.

At its core, a unit test needs to do three things (sketched below):

  1. Set up the pre-conditions
  2. Send in input
  3. Assert the output is correct
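
As a minimal sketch of those three steps (using plain JUnit 4 and a hypothetical Calculator class standing in for whatever you actually want to test):

// Calculator.java – a plain object with no Android dependencies, trivially testable.
public class Calculator {
    public int add(int a, int b) {
        return a + b;
    }
}

// CalculatorTest.java – runs on the JVM as an ordinary JUnit test.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {
    @Test
    public void addsTwoNumbers() {
        Calculator calculator = new Calculator();  // 1. Set up the pre-conditions
        int result = calculator.add(2, 3);         // 2. Send in input
        assertEquals(5, result);                   // 3. Assert the output is correct
    }
}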

If you are having a hard time doing these 3 things, the problem isn’t that you need a mocking toolkit or UI automation.  My second bold claim is:

If an app is hard to test, the problem is architectural.

The reason people struggle to unit test under Android is they put too much code in places where:

  1. It’s hard to set up the pre-conditions – e.g. inside an Activity or Service, which means the app needs to be running.
  2. You can’t easily send input – e.g. the user needs to type something or click somewhere.
  3. It’s hard to get at the output – e.g. the UI changes or an animation plays.

The solution to this is not mock objects or UI automation.  The solution is to refactor the code.  Extract out the complex logic, and put it where it’s isolated, reusable and testable.

For example, let’s say we have a UI with some complex validation rules.  Certain combinations of fields need to be filled out.  There are dependencies between the fields.  The rules change based on the date and what’s stored in the device DB.  The easiest place to put the validation code is in a click listener on the submit button.  However, there we won’t be able to test it: we don’t have control over the pre-conditions, sending input is hard, and so is asserting the output.

It also happens that we are adding more responsibilities to our Activity – suddenly it displays the UI AND implements validation – a god object in the making.  The validation code is not isolated from the rest of the app.  It’s also not reusable.  Maintaining and reasoning about the code is going to be harder.

Our issue with testing has thrown light on deeper architectural issues.

A good solution is to push the validation logic deeper into the app, away from the UI.  Perhaps there is a domain object, or a POJO, the validation can live on?  Do we have a Presenter in an MVP architecture?  A service layer the validation code could be added to?  Ultimately though, it doesn’t need to be much more complicated than an object with a method:


List<ValidationError> validateOurComplexForm(String inputOne, int inputTwo, Date currentDate, boolean someValueFromTheDB)

Suddenly the pre-conditions are easy: just construct our object.  The inputs are simple: the method params.  Asserting the output is straightforward: check the returned list of errors.
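
As a rough sketch of what that might look like (FormValidator, ValidationError and the specific rules below are hypothetical examples, not code from a real project), the extracted logic is just a plain class, and the test is an ordinary JUnit test that runs on the JVM:

// ValidationError.java – a trivial value object describing a single failure.
public class ValidationError {
    public final String message;

    public ValidationError(String message) {
        this.message = message;
    }
}

// FormValidator.java – all the rules live here, with no Activity or Context in sight.
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class FormValidator {
    public List<ValidationError> validateOurComplexForm(String inputOne, int inputTwo,
            Date currentDate, boolean someValueFromTheDB) {
        List<ValidationError> errors = new ArrayList<ValidationError>();
        if (inputOne == null || inputOne.isEmpty()) {
            errors.add(new ValidationError("inputOne is required"));
        }
        if (someValueFromTheDB && inputTwo <= 0) {
            errors.add(new ValidationError("inputTwo must be positive"));
        }
        // ... the rest of the rules ...
        return errors;
    }
}

// FormValidatorTest.java – pre-conditions, input and output are all under our control.
import static org.junit.Assert.assertEquals;
import java.util.Date;
import java.util.List;
import org.junit.Test;

public class FormValidatorTest {
    @Test
    public void emptyInputOneIsRejected() {
        FormValidator validator = new FormValidator();                       // pre-conditions
        List<ValidationError> errors =
                validator.validateOurComplexForm("", 5, new Date(), false);  // input
        assertEquals(1, errors.size());                                      // output
        assertEquals("inputOne is required", errors.get(0).message);
    }
}

Nothing here needs an emulator, instrumentation, a mocking framework or UI automation – just construct, call and assert.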

Finally, by refactoring to make our testing easy, there have been some happy side effects: our code is now much easier to read and reason about.  We have a better separation of concerns and the Activity is not growing into a god class.  The validation code is reusable by other parts of our app.  Rather than going down a rabbit hole of complexity by adding libraries to help with testing, everything is instead simplified.

Further reading and acknowledgements

The Philosophical Hacker has a great series of articles on testing Android apps, diving into these issues in much more detail.  I don’t necessarily agree with all his conclusions (mock objects and dependency injection!) but the early articles analysing the cause of the problem (Introduction, Part 1 and Part 2) are great.

Ken Scambler’s blog To Kill a Mockingtest has been quite influential in how I think about testing and architecture.

If you would like to know more about the application architecture I use and how testing fits into it I have a presentation on the subject.

Lastly, thanks to Chiu-Ki Chan for the stimulating discussion on Twitter that prompted me to write this!

Building the TensorFlow Android example app on Mac OS

The past year has been a really interesting time for AI. There have been a number of breakthroughs, with AI techniques leading to things like improved image recognition, better sentence understanding and conversational assistants finally finding their way into commercial products.

One interesting development is Google’s release of TensorFlow – their library for building AI systems. TensorFlow contains Python and C++ components to make it easy to implement AI techniques like neural nets and run them across a wide range of hardware.

The TensorFlow codebase includes a fun Android project which runs the Inception5h model, using it to recognise whatever the phone’s camera sees.

Sample app detecting bananas

If you want to try the app out, I’ve hosted an APK here.

The Inception5h model is trained using the ImageNet data and can recognise a list of 1000 different objects. Google also provide versions of the Inception5h model which are suitable for training with your own image data – so, with a very fast computer, a good training data set and a lot of patience, you could train it to identify whatever you like: insects, different types of sneakers, Pokémon cards, etc.

Building the TensorFlow Android example app on Mac OS

Unfortunately building the example Android app is not a straightforward process. TensorFlow uses a build system called Bazel and has a number of other dependencies that the typical Android developer does not have installed. To build the TensorFlow Android example app, you need to build the complete TensorFlow system from source – it’s not available as a library you can just drop into an Android project. The app itself is also built using Bazel and not the standard Android build tools.

These instructions assume that you already have a working Android development environment setup.

These instructions are valid as of June 2016.

Install Homebrew on Mac OS X

If you don’t have Homebrew installed, the first step is to install it:

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Use brew to install bazel and swig

Once you have Homebrew, you can use it to install the Bazel build system and SWIG, a tool used for generating language wrappers:

$ brew install bazel swig

Install Python dependencies

A number of Python dependencies are also needed. These can be installed using the easy_install tool:

$ sudo easy_install -U six
$ sudo easy_install -U numpy
$ sudo easy_install wheel

Clone the TensorFlow git repo

Now the moment you have all been waiting for! It’s time to get tensorflow:

$ git clone https://github.com/tensorflow/tensorflow

Configure TensorFlow Build

The first step is to configure TensorFlow by running ./configure in the TensorFlow root dir:

$ ./configure

I just said no to all the questions! They mainly relate to using a GPU to train models – something that we don’t need to do if we want to use a pre-existing model.

Once configure is complete, you need to edit the WORKSPACE file in the TensorFlow root dir to set up your Android SDK and NDK paths:

# Uncomment and update the paths in these entries to build the Android demo.
android_sdk_repository(
    name = "androidsdk",
    api_level = 23,
    build_tools_version = "23.0.1",
    # Replace with path to Android SDK on your system
    path = "/Users/luke/android-sdk/",
)

android_ndk_repository(
    name = "androidndk",
    path = "/Users/luke/android-ndk/android-ndk-r10e/",
    api_level = 21,
)

One thing to note is that I am using NDK r10e. This is NOT the latest version of the NDK. There is currently an open bug in TensorFlow which causes the build to fail with the message:
no such package '@androidndk//': Could not read RELEASE.TXT in Android NDK
It seems that the TensorFlow build system looks for RELEASE.TXT to detect the Android NDK – a file which is no longer present in newer versions of the NDK.

Download inception5h model

Everything should now be set up to build and run TensorFlow. However, for the Android example app we also need the Inception5h model, which is not checked into the TensorFlow repo:

$ wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip -O /tmp/inception5h.zip
$ unzip /tmp/inception5h.zip -d tensorflow/examples/android/assets/

Build the example

We are finally ready to build the Android example app. From the root TensorFlow directory run:

$ bazel build //tensorflow/examples/android:tensorflow_demo

This will build TensorFlow and the Android example app which uses it. If the build completes successfully, the TensorFlow directory will contain a bazel-bin/tensorflow/examples/android directory, which contains, amongst other things, an APK file suitable for installing on your device (e.g. via adb install).

Recognise things!

The app is fun to play with, but you might find some limitations in its abilities:

The sample app recognising my face as a bowtie

The list of things it can recognise is in tensorflow/examples/android/assets/imagenet_comp_graph_label_strings.txt. You will notice that there are a lot of things you’re not likely to find in your house or office – hundreds of different breeds of dogs, aircraft carriers, the space shuttle and airships. It also doesn’t have the ability to recognise human faces, so if you point it at a person it’s most likely to detect a ‘bowtie’. Apparently our eyes and nose form roughly the same shape. I suggest pointing it at coffee cups, various bits of fruit and wall clocks – all of which it recognises well.

Remote presentation to Mobile Refresh Wellington

Last week GDG Wellington and CocoaHeads Wellington ran a joint event called Mobile Refresh. The idea was that with Google I/O and WWDC both over, it was time to gather the community for a short conference. By all accounts the event was a great success, with 162 people attending.

I was lucky enough to be invited to present on the Australian War Memorial Visitor Audio Experience project. Unfortunately travelling to Wellington proved to be unfeasible, so instead I presented remotely via Google Hangouts.


Presenting remotely was a nerve-wracking experience! At the start of the session we opened with a video chat, so I could see the audience. However, once the slides were up, all I could see was Keynote. I had no idea if things were working at their end, if the audience was listening or even if there was anyone there! I had this idea in my head that perhaps everyone had decided to go out for beer, and I was just a laptop in a cupboard, presenting to an empty wall. Towards the end of my presentation I played a video of the app and touch wall working together. Once the video finished I was relieved to hear applause coming through my headphones – it turns out there were still people there, and yes, they were paying attention!

I think that presenting remotely provides a great opportunity to both conference organisers and presenters. As a presenter you get the chance to connect with people outside of your local community. As a conference organiser you can tap into a global pool of talent and pull in presenters with a really wide range of backgrounds. However, presenting remotely is very different from being in the room, and it is something that’s going to need practice to get right. With a bit of investment though, you can get a really great payoff!

Melbourne Geek Night Presentation Slides

Last night I presented the Australian War Memorial Visitor Audio Experience to the “Melbourne Geek Night”.


Thanks @zinzibianca for the great photo!

The presentation seemed to go well, and they were a fantastic audience – lots of interesting questions.  There were also two other very interesting presentations, one by @simonlawry (simonlawry.com) on human-centred design and one by @jfriedlaender (redguava.com.au) about the way his company works.  Joel’s presentation can be viewed on YouTube.  It was really great to listen to Joel talk, as it brought together a whole bunch of trends I have seen across the tech industry into a cohesive whole.

For those that are interested, two versions of my slides are available:

  1. The presentation slides – AWMVAE Presentation individual widscreen 2.pdf
  2. The presentation slides along with my presenter notes – AWMVAE Presentation individual widescreen 2 with presenter notes.pdf

Australian War Memorial Presentation Tonight

Tonight I will be giving a presentation to the Melbourne Mobile meetup on the Australian War Memorial Visitor Audio Experience project.  It was a really interesting project to be involved with and I hope it can provide some inspiration for other developers out there.  I’m going to go into detail on the indoor positioning and the audio engine I built along with Art Processors.  I’m also going to be talking about a neat method for device communication we invented using the Nexus 5 camera!

The meetup starts at 6:30 at the York Butter Factory in Melbourne.

A great tip for structuring large projects


When developing large Android projects, one annoyance is that the layouts for all screens need to go into the single /layout directory, all images into the various /drawable-xxx directories, etc.  For a big app this leads to resource directories cluttered with files.  For example, the current app I’m working on has 44 XML files in the /layout directory alone.  This makes it very hard to find things.

The Google Developer Experts blog has a great tip on using Gradle with multiple resource folders:  https://medium.com/google-developer-experts/android-project-structure-alternative-way-29ce766682f0#.k5h2lx5n6

The key part is using multiple source sets in your build.gradle:

sourceSets {
    main {
        res.srcDirs = [
                'src/main/res-main',
                'src/main/res-screen/about',
                'src/main/res-screen/chat',
                'src/main/res-screen/event-detail',
                'src/main/res-screen/event-list',
                'src/main/res-screen/home',
                'src/main/res-screen/login',
        ]
    }
}

For larger projects this seems like a great way of doing things, and I will definitely be making use of it in the future.

Two important Android N links


Today Google released the first Android N developer preview – taking quite a few of us by surprise!  Previous dev previews have arrived later in the year.

One very interesting change is that you can now receive preview images via OTA updates: https://www.google.com/android/beta?u=0.  This is sure to be a great convenience for developers who want to test their apps under the preview.

Secondly, as always, CommonsWare has a great write-up on the changes in this dev preview.  With past versions of Android, CommonsWare has provided some of the best dev-focused summaries of the developer previews.

Screencast – A simple, scalable app architecture with Android Annotations

Two years ago at YOW! Connected 2014 I gave a presentation on the architecture I have been using to develop Android applications, titled “A simple, scalable app architecture with Android Annotations”.  Last night I got to repeat the presentation at our local Google Developers Group meeting.

Over the past few years there has been a lot of discussion amongst the Android community about software architecture.  The conversation has definitely moved along from where it was in 2014.  However, despite a number of alternatives being proposed, such as Clean Architecture and MVP, I find the architecture I presented still holds up very well.

 

A screencast of my presentation can be viewed on YouTube.

The slides alone are available here: Application Archetecture with Android Annotations 2, and a version with speaker notes is also available here: Application Archetecture with Android Annotations 2.key

WWDC keynote wrapup

As is usual around WWDC, an endless amount of ink has been spilled criticising, praising and analysing the various announcements. As an Android developer it’s only natural for me to view what has been announced through an Android prism. There are two bits of commentary which I thought summed things up from the Android point of view pretty well. Firstly, Chris Lacy, the developer of Action Launcher and Link Bubble, posted this very good summary to Google+. One nice point I think he makes is:

It’s easy (and mostly accurate) to point at a great many of the features announced and say “Android had them first and Apple are playing catchup”. This misses the point on a few levels. Firstly, Apple’s M.O. is to be the best, not necessarily the first. Also, as of the day iOS 8 releases publicly, it doesn’t really matter that Android users have had feature X for Y years previously. iOS devices now have these features, and their users are going to be delighted.

Geek.com seemed to have its finger on the pulse of the bigger strategic direction of what Apple is trying to do in its war against Google:

Smartphone fanboys will spend the next few days arguing about whether or not Apple did anything interesting yesterday. Third party keyboards? Welcome to 2011, Apple. Calling from my laptop or tablet? Do you even AirDroid, bro? How about Google Voice? The conversation isn’t all that interesting, and usually ends in slinging insults or moving goalposts, and that’s because the larger point is being missed. Apple isn’t just taking the best parts of Google and sewing it into iOS. Let’s be honest, each of the smartphone OS designers have been taking cues from one another for years now, and for the most part that is a good thing for everyone. What Apple did yesterday was a lot more deliberate, and a lot more targeted. Apple took to the stage with a single goal in mind, to categorically replace the need for Google in your life.