ProGuard, a code optimization and obfuscation tool provided as part of the Android SDK, can be a double-edged sword: it presents bootstrapping challenges, but when applied correctly it provides tremendous benefits! At Crashlytics we’ve spent a lot of time leveraging the power of ProGuard to develop lightweight libraries that help app developers ship awesome products; in particular, we use these four features in our day-to-day development.

Shrinking

As your codebase grows and becomes more full-featured, it’s important to keep the size of your binary in mind. Reducing the size of the APK can be extremely advantageous, since large binaries are much less likely to be installed in poor network conditions or on older, less powerful devices.

It’s been well publicized among the developer community that a single Dalvik executable (DEX) file is limited to 65,536 (64K) method references. With this restriction in mind, ProGuard can help provide a buffer as you consider which measures to take to reduce your code size. Removing unused code, which almost certainly exists in a project approaching the 64K limit, lets your development team keep working on features unimpeded by technical limitations while refactoring or external class loading is considered.

Even though automatic shrinking is advantageous, simply identifying unused code so you can remove it yourself is also good practice. Add the -printusage option to proguard.cfg, your configuration file, and ProGuard will write out a list of the unused code for proper maintenance and cleanup.

# This is a configuration file for ProGuard.
# http://proguard.sourceforge.net/index.html#manual/usage.html

-dontusemixedcaseclassnames
-dontskipnonpubliclibraryclasses
-verbose

-printseeds seeds.txt
-printusage unused.txt
-printmapping mapping.txt

Obfuscating

With tools readily available to extract the contents of APKs, deodex them, and read the class files, it’s important to obfuscate in order to protect the proprietary aspects of your codebase. ProGuard generates a mapping file that lets you map stack traces from obfuscated code back to the actual methods.

Original code:

package com.example.app;

public class Data
{
   public final static int RESULT_ERROR = -1;
   public final static int RESULT_UNKNOWN = 0;
   public final static int RESULT_SUCCESS = 1;

   private final int mId;
   private final int mResult;
   private final String mMessage;

   public Data(int id, int result, String message) {
      mId = id;
      mResult = result;
      mMessage = message;
   }

   public int getId() {
      return mId;
   }

   public int getResult() {
      return mResult;
   }

   public String getMessage() {
      return mMessage;
   }
}

Code obfuscated by ProGuard:

package com.example.app;

public class a
{
   private final int a;
   private final int b;
   private final String c;

   public a(int paramInt1, int paramInt2, String paramString)
   {
      this.a = paramInt1;
      this.b = paramInt2;
      this.c = paramString;
   }

   public int a()
   {
      return this.a;
   }

   public int b()
   {
      return this.b;
   }

   public String c()
   {
      return this.c;
   }
}

By automatically collecting the mapping files on each build, Crashlytics streamlines the deobfuscation of your code and intelligently prioritizes stack traces to make your debugging process effortless.

Repackaging

Repackaging allows ProGuard to take external JARs and class files and move them into a single container with a common Java package location:

# com.example.networking contains the networking level
# com.example.database contains the persistence level
# repackage low level services into common package for simplicity
-repackageclasses "com.example.internal"

# com.example.public contains public interfaces
# ignore these in repackaging
-keep public class com.example.public.* {
   public *;
}

For those of you building libraries, repackaging is extremely helpful if you want to expose a simple interface to third-party developers while keeping a maintainable, well-structured project hierarchy in your source repository. It can also be useful for organizing lower-level packages behind well-defined interfaces!
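
To make this concrete, here is a minimal, hypothetical Java sketch (the class and package names below are ours, not part of the configuration above): a single public facade class is preserved by a -keep rule, while the lower-level classes it delegates to can be repackaged and renamed by ProGuard without affecting callers.

package com.example.publicapi;

// Hypothetical public facade kept by a -keep rule; this is all that
// third-party developers see. ("publicapi" stands in for the post's
// com.example.public, since "public" is a reserved word in Java.)
public class ExampleClient {

    // Stand-ins for classes that would normally live in packages like
    // com.example.networking and com.example.database; ProGuard can
    // repackage and rename them because nothing outside the library
    // references them by name.
    static class NetworkLayer {
        String fetchSettings() { return "{}"; }
    }

    static class DatabaseLayer {
        void persist(String json) { /* write to local storage */ }
    }

    private final NetworkLayer network = new NetworkLayer();
    private final DatabaseLayer database = new DatabaseLayer();

    // The only method exposed to apps that consume the library.
    public void refreshSettings() {
        database.persist(network.fetchSettings());
    }
}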

Optimizing

Optimization works on the compiled classes, applying many small optimizations based on the Java version. By default, the proguard-android.txt that ships with the Android tools has optimizations turned off, but proguard-android-optimize.txt provides the presets if you need them.

# Optimizations: If you don't want to optimize, use the
# proguard-android.txt configuration file instead of this one, which
# turns off the optimization flags.  Adding optimization introduces
# certain risks, since for example not all optimizations performed by
# ProGuard works on all versions of Dalvik.  The following flags turn
# off various optimizations known to have issues, but the list may not
# be complete or up to date. (The "arithmetic" optimization can be
# used if you are only targeting Android 2.0 or later.)  Make sure you
# test thoroughly if you go this route.

-optimizations !code/simplification/arithmetic,!code/simplification/cast,!field/*,!class/merging/*
-optimizationpasses 5
-allowaccessmodification
-dontpreverify

Optimizations can provide performance improvements for language operations. However, there are known incompatibilities with various Dalvik versions, so we encourage a thorough review of your codebase and target devices before enabling them.

Beyond leveraging these four core features of ProGuard, we crafted several strategies for those of you looking to build lightweight apps/libraries and optimize your interaction with ProGuard.

Improving Build Times

Adding ProGuard to the build process can slow down build times, so it’s important to minimize the amount of code ProGuard needs to examine. This is vital when considering third-party libraries, like Crashlytics, that have already been processed by ProGuard: running them through ProGuard again is just a waste of CPU, and it makes your builds much slower!

We thought it would be valuable to estimate the improvement in build times when preprocessed, third party libraries are ignored in ProGuard. Using the Crashlytics library as an example, we conducted numerous runs with internal test apps across various sizes. We found that build times improved by up to 5% when the Crashlytics package is ignored. But that’s just one library that is already ultra-lightweight. Imagine the build time improvements for apps leveraging additional libraries — it can be tremendous.

To avoid processing a library that has already been preprocessed, simply add the following to proguard.cfg:

-libraryjars libs
-keep class com.crashlytics.** { *; }

Since obfuscation is usually done for security, there may be no reason to obfuscate an open source library. Following a similar pattern to the one above reduces processing further and ultimately improves build time. The Android Support Library is a great example:

-libraryjars libs
-keep class android.support.v4.app.** { *; }
-keep interface android.support.v4.app.** { *; }

Reflection

Using reflection in Android is highly discouraged for many well-known reasons, including performance and the instability of changing APIs; however, it can be quite useful for unit testing. Common uses include changing the scope of methods to set test data or mock objects. If you’re using ProGuard to obfuscate during development builds, it’s important to understand that when a method or class name is changed, string representations of that name are not. If your tests look up methods or classes by name and run on a device against a build that has been processed by ProGuard, they will fail with “method not found” exceptions.
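
Here is a minimal sketch of that failure mode (the class and method names are ours, purely for illustration): the string passed to reflection still refers to the original name, so the lookup breaks once ProGuard renames the member, unless a -keep rule preserves it.

import java.lang.reflect.Method;

public class ReflectionExample {

    // Hypothetical class under test; ProGuard may rename setTestData to something like "a".
    static class Session {
        private String data = "production";
        void setTestData(String value) { data = value; }
        String getData() { return data; }
    }

    public static void main(String[] args) throws Exception {
        Session session = new Session();

        // The string "setTestData" is NOT rewritten by ProGuard, so on an
        // obfuscated build this lookup throws NoSuchMethodException unless
        // the method is covered by a -keep rule.
        Method setter = Session.class.getDeclaredMethod("setTestData", String.class);
        setter.setAccessible(true);
        setter.invoke(session, "mocked");

        System.out.println(session.getData()); // prints "mocked" on an unobfuscated build
    }
}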

Library Development

Developing libraries that are processed with ProGuard introduces additional complexity, both when they are distributed and when the app developer runs ProGuard again. When code is obfuscated twice, tracking down bugs is much more challenging because two mapping.txt files would be required to de-obfuscate a stack trace. To avoid having your library processed by ProGuard a second time, be sure to follow the steps in the section above on improving build times!

For those of you building libraries, you may have encountered further challenges with ProGuard, because any sufficiently complex project is likely to need custom ProGuard rules. We recommend avoiding the need for custom rules where possible, since the library can break if a different set of rules is applied on top of yours. If custom rules are required, make sure that developers using your library include them in their own configuration file. This will ensure compatibility between your library and the app!

# Custom Rules
-keep class com.example.mylibrary.** { *; }

Ever since Crashlytics was born, we’ve made it our mission to make developers’ lives easy. We hope that these strategies will help you build the next groundbreaking Android app/library and perhaps in the process, make a dent in the universe ;)


At Crashlytics, we’re constantly exploring ways to help developers build the most stable apps. With this in mind, we recently began researching common reasons why Android apps crash. We were especially curious to see any trends in crashes originating from the Android Support Library, given that it is one of the most widely used libraries in Android applications.

We found that about 4% of the 100 million crashes we analyzed were related to the Support Libraries. Digging deeper, our research showed that the overwhelming majority of those crashes are caused by a small handful of recurring, preventable errors. Based on this analysis, we’ve identified commonly overlooked best practices for using the Support Libraries and three key ways to increase stability.

1. AsyncTasks and Configuration Changes

AsyncTasks are used to perform background operations and, optionally, to update the UI on completion. Using AsyncTasks while handling configuration changes is a common source of bugs. If a fragment is detached from its activity while your AsyncTask is running and you then attempt to access that activity, your application will crash with a call stack that looks like this:

java.lang.IllegalStateException: Fragment MyFragment not attached to Activity
 at android.support.v4.app.Fragment.getResources(Fragment.java:551)
 at android.support.v4.app.Fragment.getString(Fragment.java:573)

In the above stack trace, the fragment is relying on a valid activity to access the application’s resources. One way to prevent this crash from happening is to retain the AsyncTask across configuration changes.

This can be done using a RetainedFragment that executes the AsyncTask and notifies listeners about the status of the AsyncTask operations. For more information, see the FragmentRetainInstance.java sample.
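
Here is a minimal sketch of that approach, with hypothetical names rather than the sample’s: a fragment marked with setRetainInstance(true) owns the AsyncTask, survives the configuration change, and reports back through a listener that is refreshed in onAttach().

import android.app.Activity;
import android.os.AsyncTask;
import android.os.Bundle;
import android.support.v4.app.Fragment;

// Hypothetical retained fragment that owns the AsyncTask across rotations.
public class RetainedTaskFragment extends Fragment {

    public interface Listener {
        void onTaskFinished(String result);
    }

    private Listener listener;

    @Override
    public void onAttach(Activity activity) {
        super.onAttach(activity);
        listener = (Listener) activity; // re-attached after every configuration change
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setRetainInstance(true); // keep this fragment instance across config changes
        new AsyncTask<Void, Void, String>() {
            @Override
            protected String doInBackground(Void... params) {
                return loadData(); // long-running work
            }

            @Override
            protected void onPostExecute(String result) {
                if (listener != null) {
                    listener.onTaskFinished(result); // current activity, never a stale one
                }
            }
        }.execute();
    }

    @Override
    public void onDetach() {
        super.onDetach();
        listener = null; // never touch a detached activity
    }

    private String loadData() {
        return "result";
    }
}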

2. Safely Performing Fragment Transactions

Fragment transactions are used to add, remove, or replace fragments in an activity. Most of the time, fragment transactions are performed in the activity’s onCreate() method or in response to a user interaction. However, we’ve seen cases where fragment transactions were committed when resuming an activity. When this happens, your application may crash with the following:

java.lang.IllegalStateException: Can not perform this action after onSaveInstanceState
 at android.support.v4.app.FragmentManagerImpl.checkStateLoss(FragmentManager.java:1327)
 at android.support.v4.app.FragmentManagerImpl.enqueueAction(FragmentManager:1338)
 at android.support.v4.app.BackStackRecord.commitInternal(BackStackRecord.java:595)
 at android.support.v4.app.BackStackRecord.commit(BackStackRecord:574)
 at android.support.v4.app.DialogFragment.show(DialogFragment:127)

Whenever a FragmentActivity is placed in the background, its FragmentManagerImpl’s mStateSaved flag is set to true. This flag is used to check whether there could be state loss: if it is true when a transaction is committed, the IllegalStateException above is thrown. To prevent state loss, fragment transactions cannot be committed after onSaveInstanceState() has been called. The crash can occur because, in some cases, onResume() is called before the flag is set back to false as the state is restored.

To prevent this kind of crash, avoid committing fragment transactions in the activity’s onResume() method. Instead, use onResumeFragments(), which is the recommended place to interact with fragments once they are back in a valid state.
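
For example (the dialog fragment below is hypothetical), moving the commit from onResume() into FragmentActivity’s onResumeFragments() ensures the fragment state has already been restored before the transaction is committed:

import android.support.v4.app.DialogFragment;
import android.support.v4.app.FragmentActivity;

public class MainActivity extends FragmentActivity {

    private boolean shouldShowDialog = true;

    // Don't commit transactions in onResume(): it can race the restoration of saved state.

    @Override
    protected void onResumeFragments() {
        super.onResumeFragments();
        // By this point the FragmentManager's state has been restored,
        // so committing a transaction will not throw IllegalStateException.
        if (shouldShowDialog) {
            new MyDialogFragment().show(getSupportFragmentManager(), "dialog");
            shouldShowDialog = false;
        }
    }

    // Hypothetical dialog fragment used for illustration.
    public static class MyDialogFragment extends DialogFragment {
    }
}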

3. Managing the Cursor Lifecycle

A CursorAdapter makes it easy to bind data from a Cursor to a ListView object. However, if the cursor becomes invalid and we attempt to update the UI, the following crash occurs:

java.lang.IllegalStateException: this should only be called when the cursor is valid
 at android.support.v4.widget.CursorAdapter.getView(CursorAdapter.java:245)
 at android.widget.HeaderViewListAdapter.getView(HeaderViewListAdapter.java:253)

This exception is thrown if the CursorAdapter’s mDataValid field is set to false, which happens when:

- the cursor is set to null

- a requery operation on the cursor failed

- onInvalidated() is called on the data

One reason this may occur is if you’re using both CursorLoader and startManagingCursor() to manage your cursor. startManagingCursor() has been deprecated in favor of CursorLoader. If you are working with fragments, be sure to use CursorLoader to manage the cursor lifecycle and remove all references to startManagingCursor() and stopManagingCursor().
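
Here is a minimal sketch of letting a CursorLoader own the cursor lifecycle (the content URI and column names are hypothetical): the loader delivers a fresh cursor to the adapter via swapCursor() and clears it when the loader resets, so the adapter never touches an invalid cursor.

import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;
import android.support.v4.app.ListFragment;
import android.support.v4.app.LoaderManager;
import android.support.v4.content.CursorLoader;
import android.support.v4.content.Loader;
import android.support.v4.widget.SimpleCursorAdapter;

public class ItemListFragment extends ListFragment
        implements LoaderManager.LoaderCallbacks<Cursor> {

    // Hypothetical content URI and columns, for illustration only.
    private static final Uri CONTENT_URI = Uri.parse("content://com.example.provider/items");
    private static final String[] PROJECTION = { "_id", "title" };

    private SimpleCursorAdapter adapter;

    @Override
    public void onActivityCreated(Bundle savedInstanceState) {
        super.onActivityCreated(savedInstanceState);
        // Start with a null cursor; the loader supplies a valid one when ready.
        adapter = new SimpleCursorAdapter(getActivity(),
                android.R.layout.simple_list_item_1, null,
                new String[] { "title" }, new int[] { android.R.id.text1 }, 0);
        setListAdapter(adapter);
        getLoaderManager().initLoader(0, null, this);
    }

    @Override
    public Loader<Cursor> onCreateLoader(int id, Bundle args) {
        return new CursorLoader(getActivity(), CONTENT_URI, PROJECTION, null, null, null);
    }

    @Override
    public void onLoadFinished(Loader<Cursor> loader, Cursor data) {
        adapter.swapCursor(data); // adapter now holds a valid, managed cursor
    }

    @Override
    public void onLoaderReset(Loader<Cursor> loader) {
        adapter.swapCursor(null); // cursor is about to be closed; stop using it
    }
}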

Summary

By implementing these three guidelines, the chances of the Support Library throwing a fatal exception will be greatly diminished. Fewer crashes lead to happier customers, better ratings, and a more successful app!

Crashlytics for Android reports uncaught exceptions thrown by the Support Library or anywhere else in your app. Add our Android SDK to your app and see what other crashes you’ve been missing!

 



Crashlytics Plugin for Gradle

(Note: Xavier Ducrohet, tech lead for the Android SDK at Google, helped with portions of this post. Thanks, Xavier!)

Since our launch of Crashlytics for Android, it’s been our mission to leverage our infrastructure, along with the tools you use every day, to make developing apps as easy as possible. We’re always on the lookout for the best ways to integrate with your existing workflow.

When Google announced at I/O 2013 that they would be backing Gradle as a build system for Android development, we embarked on a ground-up approach to integrate Gradle into our supported build systems.

Introducing Gradle

Gradle was introduced as a new build system to provide a more efficient and powerful way to build mobile apps. Backed by the Groovy language, Gradle is flexible, enabling you to specify not just what a build does but how it does it. The Android Gradle plugin comes with built-in support for creating multiple flavors and build types for your project. However, this seemingly simple support adds complexity to the actual build process under the hood.

Building for Simplicity

It’s not uncommon for a build script to specify pre-processing work (e.g., generating data) and post-processing work (e.g., pushing files to a staging machine). With Ant and Maven, our build tools automatically manage ProGuard mapping files so that we can tell you the exact line of code that causes your app to crash.

We set out to make Gradle just as easy to use. Instead of having to explicitly add Crashlytics tasks for every combination of build type and flavor, wouldn’t it be much easier if you could just write “apply ‘preprocessor’” and be done with it?

We’ve got you covered. We made sure that using the Crashlytics plugin for Gradle was as easy as possible for our users!

Getting Started

We built an example plugin to demonstrate automatic task insertion.  To begin, we first defined a plugin in Groovy code:

package com.crashlytics.examples.gradle

import org.gradle.api.Project
import org.gradle.api.Plugin

class PreprocessorPlugin implements Plugin {
    void apply(Project project) {

Since this plugin depends on the Gradle Android plugin, we added:

project.configure(project) {
    if(it.hasProperty("android")) {

The Android Gradle plugin has individual build tasks for each flavor and build type (e.g., the compileDebug task), and these do not exist at the time the project is being configured. The tasks are added dynamically, so we listened for them as they were added in order to make our task a dependency.

tasks.whenTaskAdded { theTask ->

Update: Xavier Ducrohet recommends using the following, which iterates over the list of variants (i.e., build type and flavor combinations) and simplifies the process of getting the path to an AndroidManifest.xml:

android.applicationVariants.all { variant -> ...

We then added a task to this project. As an example of how to add a task, here’s how to print the size of the AndroidManifest.xml (for the release build of a project with no flavors):

if("compileRelease".toString().equals(theTask.name.toString())) {
    def yourTaskName = "helloManifestRelease"
    project.task(yourTaskName) << {
        description = 'Outputs the manifest file size'
        def manifest = new File("build/manifests/release/AndroidManifest.xml")
        logger.warn("Hello World! Manifest Size: " + manifest.length())
    }
    theTask.dependsOn(yourTaskName)
    def processTask = "processReleaseResources"
    project.(yourTaskName.toString()).dependsOn(processTask)
}}}}}}

Supporting Multiple Flavors  

The creation of a single pre-processing task for a specific flavor and build type is simple. However, we wanted to add a task for each flavor and build type (note: Xavier’s update above also solves this). Instead of using only one build type and flavor, we iterated over all of them, as seen in the snippet below:

// Returns an empty list if the plugin only has the default flavor.
// But we still need something to iterate over, so let's make an empty flavor.
def projectFlavorNames = project.("android").productFlavors.collect { it.name }
projectFlavorNames = projectFlavorNames.size() != 0 ? projectFlavorNames : [""]
project.("android").buildTypes.all { build ->
    def buildName = build.name
    // . . .

What if you need to make a pre-processing task to read your Android manifest before compilation? Here’s how we did it:

First, we referenced the correct path to the Android manifest file (e.g., “build/manifests/FlavorName/BuildName/AndroidManifest.xml”). Gradle creates one manifest for each combination of flavors and builds — that means that each combination has a different AndroidManifest in a different location:

def flavorPath
for (flavorName in projectFlavorNames) {
    if (!"".equals(flavorName)) {
        flavorPath = "${flavorName}/${buildName}"
    } else {
        // If we are working with the empty flavor, there's no second folder.
        flavorPath = "${buildName}"
    }
    def manifestPath = "build/manifests/${flavorPath}/AndroidManifest.xml"

Next, we identified the names of all of the compilation tasks; there is one compilation task for each flavor and build type combination.

The Android Gradle plugin uses camelCase notation for its tasks (e.g., compileFlavorNameBuildName):

def taskAffix
if (!"".equals(flavorName)) {
    taskAffix = "${flavorName.capitalize()}${buildName.capitalize()}"
} else {
    // If we are working with the empty flavor, there's no second affix.
    taskAffix = "${buildName.capitalize()}"
}
def compileTask = "compile${taskAffix}".toString()

Finally, we made all of these compilation tasks dependent on a new pre-processing task. Once we determined the name of a specific flavor and build type, we hooked onto it (like we did before).

if(compileTask.equals(theTask.name.toString())) {
    def yourTaskName = "helloManifest${taskAffix}"
    project.task(yourTaskName) << {
        description = 'Outputs the manifest file size'
        def manifest = new File(manifestPath)
        logger.warn("Hello World! Manifest Size: " + manifest.length())
    }
    theTask.dependsOn(yourTaskName)
    def processTask = "process${taskAffix}Resources"
    project.(yourTaskName.toString()).dependsOn(processTask)
}}}}}}}}

Implementing the Gradle Plugin

That’s not all — if you’re looking to create your own Gradle processor plugin, head on over to GitHub and use our example to get started! This example also includes a few tests to verify that you’ve correctly built your plugin.

The team is really proud of the functionality provided through our Gradle plugin and how easy it is for developers to use. We’re excited to see how Gradle evolves and are continuing to make improvements to ensure that Crashlytics fits seamlessly into your workflow.

We’re already hard at work on even more functionality. Stay tuned for what’s next!


Since our launch one year ago, Crashlytics has set the bar for the most informative crash reports on mobile. Above and beyond stack traces, RAM usage, and disk utilization, we’ve sought to provide all the critical data-points that developers need to pinpoint and fix issues – device orientation, battery state, even whether the device was being held up to the ear! And we’re never satisfied.

A treasure trove of data lies in an app’s logs, and there’s no better way to debug a problem than by knowing exactly what happened leading up to the critical moment. Capturing logging data has been our number-one customer request for months, and also our number-one concern. We care deeply about security and end-user privacy: collecting logging data opens the door to substantial risks. Still, we wanted to start down the road to building a Splunk for mobile.

I’m excited to announce that after focusing our R&D efforts, we think we’ve cracked it, and I wanted to share some details on our approach.

Privacy, Performance

The easiest way to deliver logging would be to capture and redirect all output from NSLog(), but this is also the easiest way to infringe on user privacy. Many apps don’t take the care they should in scrubbing log lines of personally identifiable information: names, email addresses, even passwords often appear in URLs or internal settings that commonly get logged. Sending this data, even encrypted over SSL, would be dangerous and in breach of most privacy policies.

Instead, we’ve chosen to introduce completely distinct logging functionality called CLSLog(), so it’s explicit what data will be collected and transmitted with Crashlytics reports.

We also took the opportunity to make some performance improvements – in our benchmarks, CLSLog() is 10X faster than NSLog() under the same conditions. Using CLSLog() could not be easier – it’s a drop-in replacement:

OBJC_EXTERN void CLSLog(NSString *format, ...); // Log messages to be sent with crash reports

NSLog(@"Detected Higgs Boson with mass %f!!", [boson mass]);
CLSLog(@"Detected Higgs Boson with mass %f!!", [boson mass]);

Options, Options, Options

Of course, in many cases you might want your log messages to also output to the system log, or show up in Xcode’s console. For those cases, we’ve also provided CLSNSLog(), which records the output and then passes it along to NSLog():

OBJC_EXTERN void CLSNSLog(NSString *format, ...); // Log messages to be sent with crash reports as well as to NSLog()

But what if you could have both? In development builds, it would be ideal for everything to pass through to Xcode’s console so debugging is as easy as possible. In release builds, though, that’s nothing but overhead: it would be great to take advantage of the blinding speed of our 100% in-memory implementation of CLSLog().

We’ve got you covered:

/**
*
* The CLS_LOG macro provides an easy way to gather more information in your log messages that are
* sent with your crash data. CLS_LOG prepends your custom log message with the function name and
* line number where the macro was used. If your app was built with the DEBUG preprocessor macro
* defined, CLS_LOG uses the CLSNSLog function which forwards your log message to NSLog and CLSLog.
* If the DEBUG preprocessor macro is not defined, CLS_LOG uses CLSLog only, for a ~10X speed-up.
*
* Example output:
* -[AppDelegate login:] line 134 $ login start
*
**/
#ifdef DEBUG
#define CLS_LOG(__FORMAT__, ...) CLSNSLog((@"%s line %d $ " __FORMAT__), __PRETTY_FUNCTION__, __LINE__, ##__VA_ARGS__)
#else
#define CLS_LOG(__FORMAT__, ...) CLSLog((@"%s line %d $ " __FORMAT__), __PRETTY_FUNCTION__, __LINE__, ##__VA_ARGS__)
#endif

In Debug builds, CLS_LOG() will pass through to NSLog(), but in Release builds it will be as fast as possible:

CLS_LOG(@"Higgs-Boson detected! Bailing out... %@", attributesDict);

Network Efficient

We’ve designed our custom logging functionality from the ground up to respect your end users’ network connections and your app’s performance. Since its implementation is entirely in-process, it’s blazingly fast, with no IPC overhead. It also accepts as much data as you choose to throw at it: CLSLog() maintains an auto-scrolling 64KB buffer of your log data, which is more than enough to record what happened in the moments leading up to a crash without exploding your app’s memory requirements or your end users’ cellular data plans. Believe it or not, it’s even more memory-efficient than it sounds: our advanced architecture doesn’t even require holding all 64KB in RAM!

That’s Not All…

Viewing logging information is a whole other story. Rather than explain it, I’d encourage you to head over to our SDK Overview and see for yourself! We’re hard at work on additional SDK functionality and have much more to talk about in the coming weeks – stay tuned!

 



TL;DR: 31 lines of Rack middleware leverage Redis for highly-performant and flexible response caching.

As Crashlytics has scaled, we’ve always been on the lookout for ways to drastically reduce the load on our systems. We recently brought production Redis servers online for some basic analytics tracking and we’ve been extremely pleased with their performance and stability. This weekend, it was time to give them something a bit more load-intensive to chew on.

The vast majority – roughly 90% – of inbound traffic to our servers is destined for the same place. Our client-side SDK, embedded in apps on hundreds of millions of devices worldwide, periodically loads configuration settings that power many of our advanced features. These settings vary by app and app version, but are otherwise identical across devices – a prime candidate for caching.

There are countless built-in and third-party techniques for Rails caching, but we sought something simple that could leverage the infrastructure we already had. Wouldn’t it be great if we could specify a cache duration in any Rails action and it would “just work”?

cache_response_for 10.minutes

Rack Middleware to the Rescue

One of the most powerful features of Rack-based Rails is middleware: functionality you can inject into the request-processing logic to adjust how requests are handled. This lets us check Redis for a cached response or fall through to the standard Rails action.

class RackRedisCache
  def initialize(rails)
    @rails = rails
  end

  def call(env)
    cache_key = "rack::redis-cache::#{env['ORIGINAL_FULLPATH']}"

    data = REDIS.hgetall(cache_key)
    if data['status'] && data['body']
      Rails.logger.info "Completed #{data['status'].to_i} from Redis cache"
      [data['status'].to_i, JSON.parse(data['headers']), [data['body']]]
    else
      @rails.call(env).tap do |response|
        response_status, response_headers, response_body = *response
        response_cache_duration = response_headers.delete('Rack-Cache-Response-For').to_i

        if response_cache_duration > 0
          REDIS.hmset(cache_key,
            'status', response_status,
            'headers', response_headers.to_json,
            'body', response_body.body
          )

          REDIS.expire(cache_key, response_cache_duration)
          Rails.logger.info "Cached response to Redis for #{response_cache_duration} seconds."
        end
      end
    end
  end
end

A response in Rails consists of 3 components – the HTTP status, HTTP headers, and of course, the response body. For clarity, we store these under separate keys within a Hash in Redis, JSON-encoding the headers to convert them into a string.

If the cache key is not present, the middleware falls through to calling the action and then checks an internal header value to determine whether the action wants its response cached. The final critical line leverages Redis’ key expiration functionality to ensure the cache is only valid for a given amount of time. It couldn’t get much simpler.

Implementing our DSL

To tie it all together, the ApplicationController needs a simple implementation of cache_response_for that sets the header appropriately:

def cache_response_for(duration)
  headers['Rack-Cache-Response-For'] = duration
end

Boom. It was really that easy.

Impact?

This implementation took us only about an hour to develop and deploy, and the effects were immediate. Only 4% of these requests now fall through to Rails, CPU usage on our API servers has plummeted, and total queries to our MongoDB cluster are down 78%. An hour well spent. Our Redis cluster also doesn’t sweat its increased responsibility: its CPU usage is up just marginally!

Join Us!

Interested in working on these and other high-scale challenges? We’re hiring! Give us a shout at jobs@crashlytics.com. You can stay up to date with all our progress on Twitter and Facebook.

 



Building Backbone.js apps for scale

We had a blast at last night’s Backbone.js MeetUp – it’s great to see such a thriving community here in Boston and to share a few of the insights we’ve had at Crashlytics about building scalable applications with Backbone.js. The slides from our talk are up on SlideShare for viewing.

We’re looking forward to the next MeetUp and continuing to work with the Boston Backbone community – if you have any feedback or want to get in touch, leave a comment below!

Check out the slides here.

