Learn from app leaders: How Doodle redesigned their app using Fabric & Firebase

By Todd Burner, Developer Advocate


In this new series, we feature customers who have used our platform in innovative ways. For this installment, we chatted with the app team at Doodle, who used the Fabric and Firebase platforms together to redesign their app to be more user-centric. If you want to participate in this series, please email support@fabric.io.

Recently, we sat down with Alexander Thiele, a senior Android engineer at Doodle, a company that helps you find the best date and time to meet with other people. As early adopters of Fabric’s Crashlytics and Firebase Remote Config, his team knows our platforms well. The focus of our conversation was on how they redesigned their mobile app using analytics and crash data from their Fabric and Firebase dashboards.



Q. How did you approach the redesign?

“The redesign is a complete overhaul of our app. We started by updating our onboarding flow to help people understand the best ways to use Doodle. We wanted to show users how they can poll each other to quickly find the best meeting time. We divided the redesign into three phases: first improving stability with Fabric’s Crashlytics, then A/B testing our poll creation feature with Firebase Remote Config, and finally measuring the results of our tests and production rollout by monitoring our app metrics in Fabric and Firebase analytics.”


Phase 1: Finding tricky crashes

The team at Doodle wanted to understand how stability was impacting their app quality. That’s why they first focused on getting their crash-free user rate as close to 100% as possible.

Fabric’s Crashlytics helped them track crashes and prioritize them so they could improve their crash-free user rate. One feature they found particularly useful was the ability to attach custom logs and keys to crash reports.

Q. How did Fabric’s Crashlytics help you improve your app stability? 

“Crashlytics saved us a ton of time by surfacing crashes and helping us pinpoint their cause. I remember one really rare crash which we couldn't find the source of ourselves. We also don't have many crashes so we were really eager to find it. We then started to log everything that could be related to this crash, like page visits and the current internal database size. We also recorded those instances when our database couldn’t find something. After a few releases with custom logs, we found the bug. It happened in a really rare case where the user went to specific screens and used some specific features. Without custom logs, we wouldn’t have been able to find this bug.”
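On Android, that kind of instrumentation takes only a few lines with the Fabric Crashlytics SDK. A minimal sketch, with illustrative log and key names (Doodle’s actual keys aren’t public):

```kotlin
import com.crashlytics.android.Crashlytics

// Illustrative names only - not Doodle's actual instrumentation.
fun recordDebugContext(screenName: String, dbSize: Int) {
    // Breadcrumb log: attached to the next crash report for this session.
    Crashlytics.log("visited screen: $screenName")
    // Custom key: the most recent value per key is attached to crash reports.
    Crashlytics.setInt("db_size", dbSize)
}
```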

The team at Doodle also logs all exceptions that they catch in code to Crashlytics as non-fatals. This gives them more insight into what’s going on in the app. By taking advantage of these Crashlytics features, Doodle has been able to move faster and ship new features to production with less anxiety.
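Reporting a caught exception as a non-fatal is similarly small. A hedged sketch, assuming a hypothetical sync operation:

```kotlin
import com.crashlytics.android.Crashlytics
import java.io.IOException

// Report a caught exception so it appears in the Crashlytics dashboard
// as a non-fatal issue, without crashing the app.
fun syncSafely(sync: () -> Unit) {
    try {
        sync()
    } catch (e: IOException) {
        Crashlytics.logException(e)  // recorded as a non-fatal
    }
}
```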


Phase 2: User-centric design

The second phase of the app redesign focused on updating the user experience and design. The goal of this phase was to refresh the look and feel of the app and introduce streamlined flows so users could accomplish their tasks faster (and with fewer screens/steps).

Q. What types of UI changes did you make in the redesign?

“The changes we made during this stage included everything from changing the color palette to introducing new screens and adding new app functionality. By monitoring the 7-day retention metrics in Fabric and Firebase, we saw that some new users didn’t understand the concept of Doodle immediately - so they didn’t return to our app. That’s why we changed the whole onboarding process to make Doodle easier to understand and use from the first time it’s installed.”


Q. How did you test your changes?

“We used Firebase Remote Config. We tested our user onboarding and the flow users go through when creating a poll. We tried four different flows, serving each through Remote Config. In the end, the data showed that one flow resulted in more polls being created than the others. Our key performance indicator for the A/B test was the number of polls created by users, and we tracked this KPI with Google Analytics for Firebase.”
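Here’s a minimal sketch of this kind of A/B setup, assuming a hypothetical Remote Config parameter (poll_creation_flow) and Analytics event (poll_created) - not Doodle’s actual names:

```kotlin
import android.content.Context
import android.os.Bundle
import com.google.firebase.analytics.FirebaseAnalytics
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Hypothetical parameter name: pick which poll creation flow to show.
fun applyPollFlowExperiment(showFlow: (String) -> Unit) {
    val remoteConfig = FirebaseRemoteConfig.getInstance()
    remoteConfig.setDefaults(mapOf("poll_creation_flow" to "flow_a"))
    remoteConfig.fetch(3600L).addOnCompleteListener { task ->
        if (task.isSuccessful) remoteConfig.activateFetched()
        // One of the flow variants under test.
        showFlow(remoteConfig.getString("poll_creation_flow"))
    }
}

// The KPI: log an event every time a poll is created, tagged with the variant.
fun logPollCreated(context: Context, variant: String) {
    val params = Bundle().apply { putString("flow_variant", variant) }
    FirebaseAnalytics.getInstance(context).logEvent("poll_created", params)
}
```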

Q. Did you use Remote Config for other things? 

“We also used Remote Config to test feature switches. For example, a few months ago, we implemented banner ads on our scheduling screen and enabled them through Remote Config. We noticed that these ads didn’t perform well, so we easily turned them off with Remote Config. Then, we tried inserting native ads into a few other places in our app. Through Remote Config, we were able to discover the right placement for ads without disrupting our users or requiring them to update the app to see the changes.”
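A feature switch like the ad toggle can be a single boolean parameter; the flag name below is hypothetical:

```kotlin
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Hypothetical flag name: a boolean parameter acts as a server-side kill
// switch, so ads can be turned off without shipping an app update.
fun shouldShowScheduleAds(): Boolean =
    FirebaseRemoteConfig.getInstance().getBoolean("show_schedule_banner_ads")
```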

By tracking crashes and non-fatals with Fabric and deploying changes with Firebase Remote Config, the team at Doodle didn’t have to depend on app store release cycles to understand their users and update their app accordingly. They could see user behavior change in real time and adjust their app before problems arose.


Phase 3: Measuring and going forward with Firebase and Fabric

The team at Doodle plans to keep using both the Fabric and Firebase platforms to monitor and improve their app - and to keep their metrics on display at every stage!

Q. Now that the redesign is live, what dashboards do you find yourself using the most?

“Our most important metrics are how many polls a user creates and how many people participate in a poll. We monitor these in the Fabric events dashboard and in Google Analytics for Firebase by logging events.

“I’m also a big fan of the new TV Mode for Fabric. We have a big conference room, and we put our dashboard up on the TV during launches so the whole team can see how we’re doing. The new Crashlytics dashboard looks nice too, especially the device and OS filtering. We keep an eye on most of the dashboards daily.”
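The event logging Thiele mentions maps to Fabric’s Answers custom events; the event and attribute names in this sketch are hypothetical:

```kotlin
import com.crashlytics.android.answers.Answers
import com.crashlytics.android.answers.CustomEvent

// Hypothetical event and attribute names. Logging one custom event per KPI
// makes it show up in the Fabric events dashboard mentioned above.
fun logPollParticipation(participantCount: Int) {
    Answers.getInstance().logCustom(
        CustomEvent("Poll Participation")
            .putCustomAttribute("participants", participantCount))
}
```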

Q. What Fabric and Firebase features do you plan to adopt next?

“Over the next few weeks, we have plans to adopt Firebase Dynamic Links and to set up more in-depth Fabric custom events. By using Dynamic Links, we’ll be able to make it even easier to share polls. For example, our users will be able to invite other people to participate in polls via SMS and deep link right to the relevant app screen (even if the people they are inviting to the poll haven’t installed the app yet). We’ll track more events, like content views, to understand where our users find value in our app.”
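For readers curious what that might look like, here’s a sketch of building a shareable short link with the Firebase Dynamic Links API; the domain, URL pattern, and SMS helper are hypothetical:

```kotlin
import android.net.Uri
import com.google.firebase.dynamiclinks.FirebaseDynamicLinks
import com.google.firebase.dynamiclinks.ShortDynamicLink

// Sketch only - the link domain, URL format, and sendSms helper are
// hypothetical, not Doodle's actual setup.
fun sharePoll(pollId: String, sendSms: (Uri?) -> Unit) {
    FirebaseDynamicLinks.getInstance().createDynamicLink()
        .setLink(Uri.parse("https://doodle.com/poll/$pollId"))
        .setDynamicLinkDomain("example.app.goo.gl")
        .buildShortDynamicLink()
        .addOnSuccessListener { result: ShortDynamicLink ->
            // The short link deep-links into the app, even pre-install.
            sendSms(result.shortLink)
        }
}
```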

Q. What advice do you have for other app teams who are considering redesigning their app?

“Two things: test ideas constantly and put your app users first. By combining Crashlytics’ real-time crash reporting with the ability to deploy remote changes to a subset of users through Firebase Remote Config, you can learn how valuable a new feature is, identify potential issues, and take action immediately.”

Q. Can you share some results of the redesign?

“This redesign greatly improved our in-app poll creation process so users could create polls faster and more easily. We measured the success of this redesign by looking at our daily active users (DAUs) in Fabric and our retention numbers in Firebase/Fabric, which have risen beyond our expectations!”

How to monitor your app retention in Fabric

By Shobhit Chugh, Product Manager


A common misconception in the mobile world is that the number of app downloads is the strongest indicator of success. But what if you have a ton of users, yet they rarely interact with your app? What if people download your app and then churn the next day? Looking at your total number of app users or installs in isolation doesn’t paint an accurate picture of your app’s health - you also need to pay attention to retention. Retention helps you understand how often people return to your app. It’s important to measure retention because if your hard-earned users aren’t sticking around and regularly engaging with your app, you cannot build a sustainable mobile business.

In this blog post, we’ll show you how to track your retention over time through Fabric’s new retention page (which is part of our new dashboard).

 

Measuring retention from three angles

To give you a holistic view of how strong your app retention is, we focus on three things: active users, activity segments, and new user retention.

Let’s review how each angle helps you better understand retention.

1. Fluctuations in active users

The first sign of how well you’re retaining users is the number of active users you have on a daily, weekly, and monthly basis - and if these numbers are trending up or down over time.

When you navigate to the retention page, you’ll see these metrics in the top two graphs:

  • Daily active users (DAUs - how many people have had at least one session with your app today)

  • Weekly active users (WAUs - how many people have had at least one session with your app in the last seven days)

  • Monthly active users (MAUs - how many people have had at least one session with your app in the last 30 days)

The pulsating DAUs graph gives you real-time insight into how many people have used your app so far today, compared to this time last week. The second graph provides an additional lens by highlighting changes in weekly and monthly active users.

Steady, consistent growth in active users is a good signal that your retention is strong.

 

2. Changes in activity segments

The middle section of the retention page is centered around activity segments. Based on session data, activity segments group users into buckets, ranging from inactive users (people who have not launched your app in more than a week) all the way to high activity users (people who have used your app almost every single day in the past seven days).

Activity segments provide a deeper look at your retention by revealing how engaged your active users are, how many are at risk of abandoning your app, and how people flow from one segment to another.

This graph can tell you a few interesting things about retention. First off, look at how people are transitioning between states. For example, if you see a large and healthy flow of users moving from “low activity” to “medium or high activity”, their engagement level is changing in a positive way - meaning that retention is improving.

Another interesting thing to monitor is the correlation between the number of new users and the growth in each segment. For instance, if you’re earning thousands of new users every week, but you’re only seeing a corresponding bump in the low activity segment - this means your new users are not deeply committed to your app. In this case, consider improving your onboarding flow to showcase the value of your app to new users.

Pro Tip: Move the slider at the bottom of this graph to see your activity segments at different times during the last 30 days. You can also use this slider to compare how active your users are during the weekday versus the weekend.
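Fabric hasn’t published the exact thresholds behind these segments, but conceptually the bucketing works something like this sketch:

```kotlin
// Conceptual sketch only - Fabric's actual segment thresholds aren't public.
fun activitySegment(daysSinceLastSession: Int, activeDaysInLastWeek: Int): String =
    when {
        daysSinceLastSession > 7  -> "inactive"        // no launch in over a week
        activeDaysInLastWeek >= 6 -> "high activity"   // used almost every day
        activeDaysInLastWeek >= 3 -> "medium activity"
        else                      -> "low activity"
    }
```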

 

3. New user retention rate

Finally, the last graph on the retention page shows you what percent of new users are continuing to interact with your app after one day, seven days, and thirty days. This graph helps you see whether or not new users are still active after their first session at key time intervals. For instance, the day one metric means that X% of people who installed and used your app for the first time yesterday also used it today.

The higher these percentages are, the stronger your app retention is, because it means that a large share of new users are turning into loyal, habitual users. If we notice any irregularities (i.e. an unusual increase or decrease in your new user retention), we’ll flag it so you can dig into what happened on that day.
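To make the arithmetic concrete, here’s a minimal sketch (not Fabric’s implementation) of computing a day-N retention rate from install and session dates:

```kotlin
import java.time.LocalDate

// Minimal sketch, not Fabric's implementation: day-N retention for a cohort
// is the share of users who installed on cohortDay and had a session exactly
// N days later.
fun dayNRetention(
    installs: Map<String, LocalDate>,       // userId -> install date
    sessions: Map<String, Set<LocalDate>>,  // userId -> days with >= 1 session
    cohortDay: LocalDate,
    n: Long
): Double {
    val cohort = installs.filterValues { it == cohortDay }.keys
    if (cohort.isEmpty()) return 0.0
    val targetDay = cohortDay.plusDays(n)
    val retained = cohort.count { sessions[it]?.contains(targetDay) == true }
    return 100.0 * retained / cohort.size
}
```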

 

From understanding retention to improving it

Fabric’s new retention page helps you measure retention from three different angles: active users (how many people are using my app?), activity segments (how engaged are my users?), and new user retention rate (how often do new users come back to my app?).

Armed with this insight, you’ll develop a baseline understanding of your retention, be able to recognize when it becomes a problem, and act quickly to combat churn. 

If you’re already a Fabric customer, click here to check out your retention page.

If you’re not currently a Fabric customer, get started by signing up and installing Crashlytics.

Migrating to Druid: how we improved the accuracy of our stability metrics

By Max Lord, Software Engineer

Stability metrics are one of the most critical parts of Crashlytics because they show you which issues are having the biggest impact on your apps. We know that you rely on this data to prioritize your time and make key decisions about what to fix, so our job is to ensure these metrics are as accurate as possible.  

In an effort to strengthen the reliability of these numbers, we spent the last few months overhauling the system that gathers and calculates the stability metrics that power Crashlytics. Now, all of our stability metrics are served out of a system built on Druid. With the migration complete, we wanted to step back, reflect on how it went, and share some lessons learned with the rest of the engineering community.

Why migrate?

In the very early days of Crashlytics, we simply wrote every crash report we received to a Mongo database. Once we were processing thousands of crashes per second, that database couldn't keep up. We developed a bespoke system based on Apache Storm and Cassandra that served everyone well for the next few years. This system pre-computed all of the metrics that it would ever need to serve, which meant that end-user requests were always very fast. However, its primary disadvantage was that it was cumbersome for us to develop new features, such as new filtering dimensions. Additionally, we occasionally used sampling and estimation techniques to handle the flood of events from our larger customers, but these estimation techniques didn't always work perfectly for everyone.

We wanted to improve the accuracy of metrics for all of our customers, and introduce a richer set of features on our dashboard.  However, we were approaching the limits of what we could build with our current architecture.  Any solution we invented would be restricted to pre-computing metrics and subject to sampling and estimation. This was our cue to explore other options.

Discovering Druid

We learned that the analytics start-up Metamarkets had found itself in a similar position, and the solution it open-sourced, Druid, looked like a good fit for us as well. Druid belongs to the column-store family of OLAP databases, purpose-built to efficiently aggregate metrics from a large number of data points. Unlike most other analytics-oriented databases, Druid is optimized for very low latency queries. This characteristic makes it ideally suited for serving data to an exploratory, customer-facing dashboard.

We were doubtful that any column store could compete with the speed of serving pre-computed metrics from Cassandra, but our experimentation demonstrated that Druid's performance is phenomenal. After spending a bit of time tweaking our schema and cluster configuration, we were easily able to achieve latencies comparable to (and sometimes even better than!) our prior system.  We were satisfied that this technology would unlock an immense amount of flexibility and scale, so our next challenge was to swap it in without destabilizing the dashboard for our existing customers.

Migrating safely

As with all major migrations, we had to come up with a plan to keep the firehose of crash reports flowing while still serving all of our existing dashboard requests. We didn’t want errors or discrepancies to impact our customers, so we enlisted a tool by GitHub called Scientist. With Scientist, we were able to run all of the metrics requests that support our dashboard through Druid, issuing the exact same query to both the old system and the new system and comparing the results. We expected to see a few discrepancies, but we were excited to find that when there were differences, Druid generally produced more accurate results. This gave us confidence that Druid would provide the functionality we needed, but we still had to scale it up to support all of our dashboard traffic.

To insulate our customers from potential failures while we tuned the new system to support all of our traffic, we implemented a library called Trial. This gave us an automatic fallback to the old system. After running this for a few weeks, we were able to gradually scale up and cut over all of our traffic to the new system.
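Scientist is a Ruby library, and Trial’s API isn’t shown here; the sketch below just illustrates the dual-run, compare, and fall-back pattern in generic form:

```kotlin
// Generic sketch of the pattern (not Scientist's or Trial's actual API):
// run both paths, compare the results, and always return the trusted old one.
fun <T> runExperiment(control: () -> T, candidate: () -> T): T {
    val controlResult = control()   // old Cassandra-backed path
    runCatching(candidate)          // new Druid-backed path
        .onSuccess { candidateResult ->
            if (candidateResult != controlResult) {
                // println is a stand-in for real discrepancy logging.
                println("discrepancy: $controlResult vs $candidateResult")
            }
        }
        .onFailure { println("candidate failed: $it") }  // fallback: swallow errors
    return controlResult
}
```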

How we use Druid for Crashlytics

On busy days, Crashlytics can receive well over a billion crash reports from mobile devices all over the world. Our crash processing pipeline processes most crashes within seconds, and developers love that they can see those events on their dashboards in very close to real time.

To keep the added processing time to a minimum, we make extensive use of Druid's real-time ingestion capabilities. Our pipeline publishes every processed crash event to a Kafka cluster, which facilitates fanout to a number of other systems in Fabric that consume crash events. We use a Heron topology to stream events to Druid through a library called Tranquility. The part of the Druid cluster called the "indexing service" receives each event and can immediately service queries over that data. This path enables us to serve an accurate, minute-by-minute picture of events for each app for the last few hours.
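A simplified sketch of the fan-out step; the topic name, payload, and broker address below are hypothetical:

```kotlin
import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord

// Hypothetical topic, payload, and broker address: each processed crash event
// is published to Kafka, and the Druid ingestion topology is one of several
// consumers downstream.
fun crashEventProducer(): KafkaProducer<String, String> {
    val props = Properties().apply {
        put("bootstrap.servers", "kafka:9092")  // placeholder address
        put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    }
    return KafkaProducer(props)
}

fun publishCrashEvent(producer: KafkaProducer<String, String>, appId: String, eventJson: String) {
    // Keyed by app so events for one app land in the same partition.
    producer.send(ProducerRecord("processed-crashes", appId, eventJson))
}
```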

However, calculating metrics over weeks or months of data requires a different approach. To accomplish this, Druid periodically moves data from its indexing service to another part of the cluster made up of "historical" nodes. Historical nodes store immutable chunks of highly compressed, indexed data - called "segments" in Druid parlance - and are optimized to service and cache queries against them. In our cluster, we move data to the historical nodes every six hours, so a full day of data spans four segments. Druid knows how to combine data from both types of nodes, so a query for a week of data may scan 27 of these segments plus the very latest one currently being built in the indexing service.

The results

Our Druid-based system now allows us to ingest 100% of the events we receive, so we are happy to report that we are no longer sampling crash data from any of our customers. The result is more accurate metrics that you can trust to triage stability issues, no matter how widely installed your app is.

While nothing is more important to us than working to ensure you have the most reliable information possible, we also strive to iterate and improve the Crashlytics experience. In addition to helping us improve accuracy, Druid has unlocked an unprecedented degree of flexibility and richness in what we can show you about the stability issues impacting your users. Since the migration, you may have noticed a steady stream of design tweaks, new features, and performance enhancements on our dashboard. For example, here are a few heavily-requested features that we’ve recently rolled out:  

  • You can now view issues across multiple versions of your app at the same time
  • You can view individual issue metrics for any time range
  • You can now filter your issues by device model and operating system

This is just the beginning. We're looking forward to what else we can build to help developers ship stable apps to their customers.

P.S. We're building a mobile platform to help teams create bold new app experiences. Want to join us? Check out our open positions!

Get Crashlytics