Developing Android Libraries

At some point as a developer, you’ll consider writing a library. Maybe you’ve invented a cool new way to perform a specific task and want to share it with the world, or perhaps you simply want to reuse your code in an elegant way. Whatever the motivation, writing a library is tough work. You’ll face questions about the build system, the publishing process, Android vs. Java compatibility, and more. In this talk from Øredev 2015, Emanuele Zattin shows you best practices for writing libraries in both Java and C/C++. He discusses API design, CI techniques, and performance considerations to give you the right tools for the job.


This is an update to a previous talk, with all new slides and some updates to the content. We’re leaving the old one up for posterity, but now’s a great chance to get up to date on writing Android libraries if you missed the first version.


Introduction (0:00)

My name is Emanuele Zattin and I work at Realm. Most developers will end up writing a library at some point, but why? What’s the point of a library?

Why would you write a library? (0:31)

  • Modularity

The first reason is modularity. You want to be able to split your code into smaller units, and that brings many advantages. Smaller units are easier to think about and they’re easier to test. In general, they tend to have a higher quality in the end.

  • Reusability

A consequence of this is reusability. Once you have smaller units, you can actually use them in several places in your app or even across several of your apps.

  • Shareability

Shareability is also a consequence of reusability. If it’s useful for you, it can be useful for somebody else. Just share it, and people will give you feedback, the quality will improve, and you’ll get your name out there.

Why would you write an Android library? (1:45)

  • UI

If you do anything related to UI, you will have to write an Android library. You will have to include resources, assets, and all these kinds of things, and there’s no way to pack those nicely into a JAR.

  • Looper/Handler

If you depend on anything in the Android framework, in particular Looper and Handler, you’re looking at an Android library.

  • Sensors

If you use sensors in any way, then again there’s no way to package your library as a JAR.

  • Native code

This one is less obvious, but the JAR format doesn’t actually have a standard way to package native files. You also don’t want to put a Windows binary inside your Android app.

There are a lot more reasons as well, but if you depend on anything that’s very Android-related, of course you have to write an Android library. If you’re lucky enough not to depend on any of these, then go ahead and make your library a JAR, and make it available not only to Android developers, but to Java developers as well.

Challenge 1: How to start an Android library project? (3:04)

That’s an easy question to answer: Android Studio should do it for you. You can create a new Application project, add an Application module, and add a Library module, but… of course you cannot create a new Library project directly.

A quick workaround is to create a new Application project, add a new Library module, then remove the Application module, and you’re done. That’s a quick hack.

Another way to do this is through command line:

android create lib-project -t 1 -k org.øredev.awesomelib -p . -g -v 1.3.0

The android command comes with many sub-commands and options that allow you to do all kinds of things. This command will create a new Library project, and lets you specify that you want to use Gradle as opposed to Ant, which version of the Android plugin you want to use, which target to build against, and so on.
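Whichever route you take, what you end up with is a module whose build.gradle applies the library plugin instead of the application plugin. Here is a minimal sketch (the version numbers are just examples):

apply plugin: 'com.android.library'

android {
	compileSdkVersion 22
	buildToolsVersion "22.0.1"

	defaultConfig {
		minSdkVersion 15
		targetSdkVersion 22
		versionCode 1
		versionName "1.0"
	}
}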

Challenge 2: API Design (4:34)

So you’re developing a library; now you’re officially an API designer! That can get complicated, because it brings up so many discussions, and you run the risk of stalling because of these discussions about what’s the right thing to do. You have to define what a good API is. There’s no single answer to all your questions, but there are some guidelines. (I take no credit for the following tips, by the way; they come from Joshua Bloch.)

  • Easy to learn

This seems natural.

  • Easy to use, even without documentation

This means that your method names, class names, and argument names all have to make sense and be consistent. They shouldn’t deceive your user in any way.

  • Hard to misuse

Once you publish your library, you have no idea how it’s going to be used. This is hard to address beforehand, but you can sometimes see patterns while you’re coding and say, “Okay, people might use this this way; I’ll just document that you have to use it this way.” However, some people just don’t read documentation, so try to enforce it if you can.

  • Easy to read and maintain code that uses it

You’re writing the library because you want to solve a problem. Don’t create new ones.

  • Sufficiently powerful to satisfy requirements

You have to know what your requirements are, and your library has to be powerful enough to fulfill them. Choose your requirements wisely.

  • Easy to extend

This is not as important, but it’s a nice-to-have. If people want to add some functionality, they shouldn’t necessarily need to fiddle with your library; they can just extend it with something else.

  • Appropriate to audience

You cannot make everybody happy. You are trying to solve a problem and you’re developing a library, which is your opinion on how to solve the problem. Don’t try to please everybody. Just do what you think is right and be consistent about it.

As I mentioned, all of these tips come from Joshua Bloch’s Effective Java (2nd Edition). Buy that book and keep it by your side all the time. It will save your life, but more importantly, it will help you win some discussions with your colleagues. He also has a great and very relevant presentation on YouTube called “How to Design a Good API and Why it Matters”.

An Example: We have to choose how to save some information. Which way would you go, A or B?

A:

person.setName("Emanuele");
person.save();

or B:

db.beginTransaction();
person.setName("Emanuele");
db.commitTransaction();

Do you just have a save method, or do you make transactions explicit? A is more compact, of course: it’s just a save method, nice and short. A also feels familiar; it’s just a save method, what can possibly go wrong? You know exactly what it does.

B is more expressive. There’s no magic going on behind the scenes; you know there’s a transaction going on. The save method probably has to open a transaction internally anyway, so you have to choose whether you want to expose that abstraction to your user or not. With B, you can also save more than one object in the same transaction. If you only have a save method, you have to call it for every single object, and transactions are not cheap: both opening and committing a transaction can be quite expensive operations, so if you can batch some operations, that’s a good thing. B also allows you to work with zero copy. A save method means that the information first has to be stored in memory somewhere and then copied, so if you want to provide some zero-copy functionality, that has to be considered as well.
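As a quick sketch of the batching point, using the hypothetical API from option B, a whole batch of objects can share a single transaction:

// One transaction for the whole batch instead of one per object (hypothetical API)
db.beginTransaction();
for (Person person : people) {
	person.setName("Emanuele");
	// ...update more fields...
}
db.commitTransaction();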

Challenge 3: Testing (9:56)

Testing is super important for applications, but it’s even more important for libraries. You have no idea how people are going to use it, so you have to test with all kinds of parameters and combinations.

One solution is to write unit tests. Writing a unit test for a library is exactly the same as writing one for an application, but as a suggestion, go for JUnit 4 over JUnit 3. That’s because JUnit 4 allows for parameterized tests. It’s possible with JUnit 3 too, but it’s pretty cumbersome and you have to write a lot of boilerplate code. Prefer JUnit 4 if you can use it.
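To give an idea of what that looks like, here is a minimal JUnit 4 parameterized test. The class and values are made up, but the structure is the standard @RunWith(Parameterized.class) pattern:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// One test method, many input/expected combinations
@RunWith(Parameterized.class)
public class MaxTest {

	@Parameters
	public static Collection<Object[]> data() {
		return Arrays.asList(new Object[][] {
				{ 1, 2, 2 },
				{ 5, 3, 5 },
				{ -1, -7, -1 },
		});
	}

	private final int a;
	private final int b;
	private final int expected;

	public MaxTest(int a, int b, int expected) {
		this.a = a;
		this.b = b;
		this.expected = expected;
	}

	@Test
	public void maxOfTwoNumbers() {
		assertEquals(expected, Math.max(a, b));
	}
}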

Another solution is to automate as much as you can. Testing is tedious, you forget to do it before committing, and people get mad about it. Then a bug slips in, and you only find out when the unit tests finally run, so try to automate as much as possible.

I’m a bit of a CI freak, and a Jenkins freak in particular. Another suggestion is to let Jenkins become your best friend. Let it automate everything it can for you. I have nothing against TravisCI or CircleCI, but when you try to get more advanced with more personalized workflows, I think Jenkins is the best option. Jenkins offers more than a thousand plugins right now, so it can be quite intimidating when you first start. Here’s my quick top five of Jenkins plugins to install:

  • Job Config History

This one is very important. You can give everybody permission to fiddle with the Jenkins server, and that can be scary if you cannot track everything and roll back. This plugin lets you see all the states of the configuration: if somebody messes something up, you can point fingers and roll back.

  • Git

This one is a no-brainer. Of course you have to install the Git plugin.

  • Android Emulator

This one has a deceptive name, because it does a lot more than just manage the emulator. It allows you to run monkey tests and presents the results nicely in the end. It’s a very nice plugin by Christopher Orr.

  • Matrix Job

This allows you to define axes of parameters. As app developers know, screen resolution is a big factor, so you can have one axis that tests the different screen resolutions, another axis for the API level it’s running on, and so on. It takes some time, but you get great coverage of all the possibilities.

  • JUnit

With this, you’ll be able to track what’s going on and see the history. You can see the graph of unit tests always going up, of course.

The last two mentioned here actually come pre-loaded with Jenkins, but don’t forget to use them in your jobs.

A final solution to the testing problem is to write some sample apps. That might be a bit weird to think about, but there are some good reasons. First of all, they act as additional integration tests. You have your unit tests that exercise the single methods and so on, but the sample application tests the library as a whole. This also gives you an opportunity to showcase how to use your library: your users will be able to use the sample app as a reference for best practices on how to use the library.

Of course, it also helps you to validate your principles. You started your library for a reason, and you wanted to solve a problem in a specific way. If you write a sample app and try to use the library, you may realize you didn’t do as great a job as you thought. It’s always a good idea to validate what your propositions were.

Challenge 4: JAR or AAR? (15:07)

What kind of artifact should you generate? Even if you only depend on the Android framework, you could, in theory, just produce a JAR. Even if you have native code, there are ways to include it in a JAR, and the Gradle plugin seems to do a good job with that.

How do you decide what to do? It all boils down to a question: Do you want to support Eclipse?

The solution is to use AAR. It’s what Google offers now, and it’s very flexible. It allows you to do ABI splits very easily, which means that if you use native code in your library, your users will be able to create smaller APKs for the different CPU architectures. It has a bunch of good advantages.
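For example, a consumer of your AAR can enable ABI splits in their app’s build.gradle. This is standard Android Gradle plugin configuration; here it is as a sketch:

android {
	splits {
		abi {
			enable true
			reset()
			include 'armeabi-v7a', 'x86' // one APK per listed ABI
			universalApk false           // skip the fat APK containing all ABIs
		}
	}
}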

Challenge 5: Where to publish? (16:31)

You wrote your library, it’s amazing, and everybody’s happy. Now it’s time to make it public. Where to publish it? If you’ve never tried to publish something to Maven Central, trust me: it’s super painful.

The solution is to use Bintray. It’s a free service for open source libraries, and it comes with a nice UI so you can track how the downloads are going. It’s really easy to use. Bintray is actually the repository behind JCenter, which is a superset of Maven Central: if anything is in Maven Central, it’s also in JCenter.

But when you publish a JAR or an AAR to a repo, you have to create a couple of extra artifacts. The first one is a JAR for the sources, and this is the way to do it in Gradle. You just create a new task of type Jar, and it’s a one-liner: you just point it to the source directory:

task androidSourcesJar(type: Jar) {
	from android.sourceSets.main.java.srcDirs
}
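If you want this sources JAR to actually be uploaded together with the AAR, one common approach (a sketch, not from the talk) is to give it a sources classifier and add it to the archives configuration:

androidSourcesJar {
	classifier = 'sources' // so repositories recognize it as the sources artifact
}

artifacts {
	archives androidSourcesJar
}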

JavaDoc is a little more complicated, but not much. Here we go through all the variants; normally that would be just release and debug, but you can have more if you like. For each of those, you create a task of type Javadoc and specify the parameters. The only odd part is the ext.androidJar bit, where you get the boot classpath of the Android project and add it at the end of the classpath, so whatever you reference from the Android framework won’t appear as broken links. You also want to exclude BuildConfig and R.java; nobody is interested in those:

android.libraryVariants.all { variant ->
	task("javadoc${variant.name.capitalize()}", type: Javadoc) {
		description "Generated Javadoc for $variant.name."
		group 'Docs'
		source = variant.javaCompile.source
		ext.androidJar = files(project.android.getBootClasspath())
		classpath = files(variant.javaCompile.classpath.files) +
					ext.androidJar
		exclude '**/BuildConfig.java'
		exclude '**/R.java'
	}
}
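The tasks above only generate the Javadoc HTML. If you also want a -javadoc JAR to upload, a task along these lines can package each variant’s output (a sketch reusing the task names from above):

android.libraryVariants.all { variant ->
	def javadocTask = "javadoc${variant.name.capitalize()}"
	task("${javadocTask}Jar", type: Jar, dependsOn: javadocTask) {
		classifier = 'javadoc' // mark it as the javadoc artifact
		from tasks[javadocTask].destinationDir
	}
}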

Bintray also provides a Gradle plugin, which is pretty nice, but it’s a bit cumbersome to use, especially if you’re not super familiar with Gradle. My suggestion is to start doing releases manually using the website. That will soon become annoying and tedious, and then you will be motivated enough to learn Gradle and to use this plugin.
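For reference, here is a minimal sketch of what the Bintray plugin configuration tends to look like. The repo name, package name, and plugin version are just examples; check the plugin’s documentation for the current syntax:

buildscript {
	repositories { jcenter() }
	dependencies {
		classpath 'com.jfrog.bintray.gradle:gradle-bintray-plugin:1.4'
	}
}

apply plugin: 'com.jfrog.bintray'

bintray {
	user = System.getenv('BINTRAY_USER')
	key = System.getenv('BINTRAY_KEY')
	configurations = ['archives'] // upload whatever the archives configuration produces
	pkg {
		repo = 'maven'
		name = 'awesomelib'
		licenses = ['Apache-2.0']
		vcsUrl = 'https://github.com/example/awesomelib.git'
	}
}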

Challenge 6: Reflection is slow (19:27)

In the Android world, reflection is slow. Painfully slow, actually. Java proper seems to have solved the problem, but on Android it’s still quite an issue. Sometimes reflection seems necessary to make life easier for your user; instead, you can generate some code for them, so they still get to write less boilerplate. The way to do this is through annotation processing.

Annotation processing comes with some pros and cons. One pro is that it allows you to write new Java files. Given some annotations, you can find out which class, field, or method an annotation has been applied to, make decisions based on that, and generate new code. It happens at compile time, so the penalty is paid when you compile, not at runtime when it really matters for the end user.

However, one limitation is that it doesn’t allow you to modify existing code; you can only add new Java files. Also, the API is not exactly friendly, and the documentation is a bit fiddly. It’s not super easy to use, but there are quite a few libraries using it right now, so you can get inspiration from them.
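To give an idea of what writing new files looks like, here is a minimal sketch of an annotation processor. The annotation name, package, and generated class are all hypothetical, and a real processor would report errors through the Messager instead of swallowing them:

import java.io.IOException;
import java.io.Writer;
import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.JavaFileObject;

// Handles a hypothetical @AwesomeObject annotation and writes one new class per annotated type
@SupportedAnnotationTypes("org.example.AwesomeObject")
@SupportedSourceVersion(SourceVersion.RELEASE_7)
public class AwesomeProcessor extends AbstractProcessor {

	@Override
	public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
		for (TypeElement annotation : annotations) {
			for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
				String name = element.getSimpleName().toString();
				try {
					// Generate a brand new source file; existing code cannot be touched here
					JavaFileObject file = processingEnv.getFiler()
							.createSourceFile("org.example." + name + "Proxy");
					Writer writer = file.openWriter();
					writer.write("package org.example;\n\n"
							+ "public class " + name + "Proxy extends " + name + " {\n"
							+ "    // generated accessors go here\n"
							+ "}\n");
					writer.close();
				} catch (IOException e) {
					// In a real processor, report this through processingEnv.getMessager()
				}
			}
		}
		return true;
	}
}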

Some very cool libraries use it: Dagger, of course, but also Butter Knife from Jake Wharton, AutoValue/AutoParcel, and we use it at Realm as well.

The problem here is that the Android framework does not provide the package you need to do all this: javax.annotation.processing is not included. So what do you do?

You have to create two Java sub-projects: one for the annotations and one for the annotation processor. You want them separate because you will need the annotations both from the annotation processor and from your library itself; a rough sketch of the resulting layout follows below. We also have to change the way we generate our JavaDoc, because now we also want the documentation for the annotations, and they live in another module.
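A minimal settings.gradle for that layout might look like this (the module names are just an example):

// settings.gradle, with hypothetical module names
include ':library'                // the Android library itself (packaged as an AAR)
include ':annotations'            // plain Java module containing the annotations
include ':annotations-processor'  // plain Java module containing the annotation processor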

Luckily, updating the JavaDoc is just a one-liner in Gradle: you add the source folder of your annotations module. Be careful, the missing = is not a typo; it actually matters. source = is an assignment, so it replaces the value, whereas source without the equals sign is a method call, which adds to it. Be careful about that:

android.libraryVariants.all { variant ->
	task("javadoc${variant.name.capitalize()}", type: Javadoc) {
		description "Generated Javadoc for $variant.name."
		group 'Docs'
		source = variant.javaCompile.source
		source "../annotations/src/main/java" // <-- Remember this!
		ext.androidJar = files(project.android.getBootClasspath())
		classpath = files(variant.javaCompile.classpath.files) +
					ext.androidJar
		exclude '**/BuildConfig.java'
		exclude '**/R.java'
	}
}

Challenge 7: Sometimes annotation processing is not enough (23:43)

It’s a challenge because annotation processing cannot modify existing code, it can only create new code, and sometimes you really need to change some existing code. The solution to that is bytecode weaving, or bytecode manipulation.

The advantage of this is that it allows you to modify existing code. The API is also much easier compared to annotation processing; it’s more similar to Java reflection, so it’s something you’re familiar with.

However, you really need to know what you’re doing. You’re going behind the compiler’s back and changing bytecode, so if you mess something up, there’s no safety net. You also have to be careful because it might look weird in the debugger: if you modify some code and add instructions, and somebody then steps through that code, the debugger will step through lines that don’t match the source they’re looking at. It’s a bit weird, so be aware of that.

My suggestion is to try to do as much as possible with annotation processing, and in the end just change those two or three places with the bytecode manipulation, in a way that doesn’t look weird in the debugger.

There are several tools that allow you to do this, such as ASM, AspectJ, and Javassist. There’s a lot of work revolving around Javassist and Android. The repo for this doesn’t contain any code; it’s just a list of projects that use dependency injection, as a showcase.

As an example, annotation processing is sometimes used to generate proxy classes: classes that pretend to be the class the user defined, but override some behavior. If you do that, it means you’re overriding methods, and that comes with some limitations. You cannot use public fields, because you cannot normally intercept access to fields, so you cannot proxy those. Also, since you’re overriding methods without any way to know exactly what those methods do, you need to enforce some rules about their naming, especially for getters and setters. Furthermore, no custom methods are allowed, because you don’t know what they do; you might override one, but then you’d have nothing to put in its place. A consequence of this is that you cannot implement interfaces, because you cannot add custom methods.

However, you still want to allow stuff like this:

class Person {
	public String name;
	public int age;

	public String toString() {
		return name + " is " + age + " years old";
	}
}

You have some public fields here, and you have a toString. You want to be able to do things like this, but if you have a proxy class, this is not possible. How, then?

Here is where a combination of annotation processing and bytecode weaving comes in. The first step is to add custom accessors in your proxy class. It’s a PersonProxy that extends Person, and you create these strangely named accessors:

class PersonProxy extends Person {
	public String name$Getter() {
		// data access
	}

	public void name$Setter(String value) {
		// data access
	}

	// Rest of the accessors...
}

The first step of the bytecode weaving is then to add the same getters and setters to the actual class. You’re adding methods, and all they do is whatever a normal getter and setter would do:

class Person {
	public String name;

	public String toString() {
		return "The name is " + name;
	}

	public String name$Getter() {
		return this.name;
	}
}

Then, the second step will be to do this:

class Person {
	public String name;

	public String toString() {
		return "The name is " + name$Getter();
	}

	public String name$Getter() {
		return this.name;
	}
}

You substitute every access to the fields you’re interested in with the accessor methods you generated. That means that if a user now calls toString(), it will actually call name$Getter(), which can be intercepted by the proxy, which will do all the data access you want.
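To make that second step concrete, here is a rough sketch of how the field access could be rewritten with Javassist. The class name and output directory are made up, and a real build would run this from a Gradle task rather than a main method:

import javassist.CannotCompileException;
import javassist.ClassPool;
import javassist.CtClass;
import javassist.expr.ExprEditor;
import javassist.expr.FieldAccess;

public class AccessorWeaver {
	public static void main(String[] args) throws Exception {
		ClassPool pool = ClassPool.getDefault();
		CtClass personClass = pool.get("org.example.Person");

		// Rewrite every read of the "name" field into a call to the generated accessor
		personClass.instrument(new ExprEditor() {
			@Override
			public void edit(FieldAccess access) throws CannotCompileException {
				if (access.isReader() && access.getFieldName().equals("name")) {
					// $_ is the result of the expression, $0 the object being accessed
					access.replace("$_ = $0.name$Getter();");
				}
			}
		});

		// Write the modified .class file back to disk
		personClass.writeFile("build/woven-classes");
	}
}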

Challenge 8: Native Code (29:13)

Sometimes you need native code, and when you do, you may be in trouble. The bad news is that the Android Gradle plugin doesn’t currently let you use the NDK. It comes up with a big warning saying, “Hey, this is not supported, and we’re working on it!” It is true, they are working on it: there’s an experimental plugin for it, and it seems to be working pretty well.

To use it, you add these dependencies. The differences are the “experimental” part and the version (that’s the latest version I know of, as of two days ago), and also the “model” part in the plugin name:

buildscript {
	repositories {
		jcenter()
	}
	dependencies {
		classpath 'com.android.tools.build:gradle-experimental:0.2.0'
	}
}

apply plugin: 'com.android.model.application'

The DSL is also slightly different. Now everything is enclosed in a model closure, and you have to use properties instead of methods, so you have to remember the equals sign. There are a bunch of rules, but it’s not too difficult to convert a Gradle file to the new syntax:

model { // <-- This!
	android {
		compileSdkVersion = 22 // Now it's a property, not a method
		buildToolsVersion = "22.0.1" // Same here and the rest of the example

		defaultConfig.with { // Use the with method
			applicationId = "com.example.user.myapplication"
			minSdkVersion.apiLevel = 15 // Use the apiLevel property
			targetSdkVersion.apiLevel = 22 // Same here
			versionCode = 1
			versionName = "1.0"
		}
	}
}
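The native side then goes into an android.ndk block inside the same model closure. A minimal sketch (the module name is just an example, and the exact properties changed between versions of the experimental plugin):

model {
	android.ndk {
		moduleName = "awesomelib" // name of the compiled .so; sources live in src/main/jni by default
	}
}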

Take a look here for more information. It seems to be working pretty well!


Emanuele Zattin

Emanuele is a Java, CI and tooling specialist at Realm. Previously, during the good ol’ golden years at Nokia, he helped the whole company unify under one CI system (Jenkins) and switch their version control system to Git. He's an active member and speaker within the Jenkins community where he contributes to the core and maintains several plugins.