
How mobile app teams use data effectively – Part III

New acquisitions in the mobile content space are announced weekly, and while the announcements assure the public and investors that the combined data is an asset that will return significant value, it’s not always clear what that entails.

In our previous post, we covered what mobile companies are doing with acquired datasets to gain strategic advantages, and how they go about integrating that data. Today we’re going to look at how teams successfully leverage the data for their specific goals.

Prerequisite: Getting buy-in.

While data utilization can provide upsides across many disciplines, the vision and the buy-in need to come from all teams.

First, big data is expensive and hard. If the outcome isn’t valued at the outset, a project demanding that much time and capital will likely be shelved before getting off the ground. We’ve seen multiple instances where a measurement project gets halted because a developer’s explorations into a KPI dashboard produce a forecasted $30K monthly price tag. No reasonable business will allow costs like this to balloon without tying them to a plan that increases the bottom line.

Second, the data needs to be trusted. A popular data science maxim is “garbage in, garbage out.” If the data isn’t trustworthy, there’s no chance the outputs will be useful. The number of product and marketing teams operating with completely different datasets is astounding. It’s very hard to set a unified priority when half the team doesn’t believe in the measurement method.

Next, it needs to make people’s jobs easier and actually help them achieve goals. We’ve learned that to be beneficial, data has to be easily available, understandable, and actionable. Applied correctly, data is a tool that makes teams’ lives easier, helps them achieve their goals, and drives KPIs forward. Applied incorrectly, it becomes a burden and will be ignored.

Management: Setting the direction

Core to managing is being able to measure and monitor the health of the business. As a company grows to multiple products, divisions, and studios, the metrics and measurements grow in number and become harder to interpret. The datasets can provide clarity or confusion, depending on the trust and ease of access.

What we see working:

Justification for deploying capital: data can help predict the effective profit and cost centers of the business. Where should you invest? Where should you pull back? For small businesses, this can be at the app, campaign, or feature level. For larger organizations, this can be at the studio, property, or team level.

Benchmarking and setting priorities: When you run a single app, it might be tough to understand where your largest opportunity lies, so you’ll ask around for standard CPMs or CPIs. But having a portfolio of comparisons can help you set priorities that take advantage of a property’s idiosyncrasies. Does an app or studio’s high retention rate mean you can extend the window on measuring return on ad spend? Does a particularly low CPI mean you should increase budget, or does a low retention rate mean you should investigate customer retention?

What we see not working:

Inaccessibility: if the data isn’t readily accessible, management won’t wait around for it before making a decision.

Conflicting datasets: If management can’t get a reliable, repeatable number from their business, they won’t know which figure to trust. The number of times I’ve heard different teams use different platforms to measure success leads me to believe this is a pervasive problem.

Needing in-depth, industry-specific understanding to make a decision: I remember years ago having a conversation with the CEO of a game company in which I strongly suggested he invest technical resources to combat revenue lost to ad-serving discrepancies. I was not successful. If an executive needs to understand the dynamics of ad serving to make a call on deploying resources, the pitch is not likely to succeed. I should’ve kept it simple: ad serving is broken; you stand to increase revenue 30% by investing in a fix.


Marketing: Getting more users

In the mobile app space, marketers are responsible for increasing the number of users downloading the app. Their job is to pay less for new users than those users will make for the business. As such, the bulk of their data needs comes down to two things: a source of truth for measurement that is as accurate as possible, and a method for predicting the outcomes of their efforts that is as accurate as realistically achievable.

What we see working:

Prioritizing budgets: Virtually every marketing department does some version of this exercise: measure the results of the marketing budget, then adjust prices and spend according to what is working. Ideally, this means measuring the profit (ROAS) on a campaign and scaling spend to maximize return, but for most companies it isn’t quite so simple. Companies struggling for accurate LTV predictions may use early measurement (day-7 ROAS), and companies struggling for measurement at all may rely on proxies (day-1 actions).
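As a minimal sketch of this exercise (the campaign names, spend figures, and target threshold below are all hypothetical), ranking campaigns by day-7 ROAS might look like:

```python
# Minimal sketch: prioritize budgets by day-7 ROAS.
# All campaign names and figures are hypothetical.

campaigns = [
    {"name": "facebook_lookalike", "spend": 12_000, "d7_revenue": 9_600},
    {"name": "unity_video",        "spend": 8_000,  "d7_revenue": 3_200},
    {"name": "applovin_playable",  "spend": 5_000,  "d7_revenue": 6_500},
]

TARGET_D7_ROAS = 0.50  # assumed: a campaign recouping 50% of spend by day 7 is on track

for c in campaigns:
    roas = c["d7_revenue"] / c["spend"]
    action = "scale up" if roas >= TARGET_D7_ROAS else "cut or rework"
    print(f"{c['name']}: day-7 ROAS {roas:.0%} -> {action}")
```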

Getting signals early: The faster you can adjust a campaign, the less money you’ll waste and the more margin you’ll accrue. A large game publisher once shared with me that it took them an average of 14 days to take action on an unprofitable campaign. This isn’t hard to imagine: if your success metric is day-10 ROAS, you may need to wait 4 additional days to accrue enough users to make an informed decision. However, this timeline could be shortened by tracking day-1 KPIs that help predict users’ performance.
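One sketch of such an early signal, assuming you’ve fit a multiplier between day-1 and day-10 ROAS from past campaigns (the multiplier and thresholds here are hypothetical):

```python
# Sketch: project day-10 ROAS from day-1 revenue using a historically
# fitted multiplier, and flag campaigns days before the full window closes.

HISTORICAL_D1_TO_D10_MULTIPLIER = 4.0  # hypothetical; fit your own from past data
TARGET_D10_ROAS = 1.0                  # break even by day 10

def flag_campaign(spend: float, d1_revenue: float) -> str:
    projected = (d1_revenue / spend) * HISTORICAL_D1_TO_D10_MULTIPLIER
    if projected < 0.5 * TARGET_D10_ROAS:
        return "pause"   # deeply underwater: act now instead of waiting two weeks
    if projected < TARGET_D10_ROAS:
        return "watch"   # borderline: recheck as more data accrues
    return "scale"

print(flag_campaign(spend=2_000, d1_revenue=300))  # -> "watch"
```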

Making decisions with incomplete data: In the words of one of our clients, “first, get it directionally correct.” Since it’s difficult, if not impossible, to get 100% accuracy with measurements (and predictions will never be perfect), it’s often better to use indicators in the absence of actuals. For example, when trying to estimate user value with unreliable user-level IAP data, relying on an “intent to buy” event might give you enough direction to make a decision.
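For instance, a directional ranking built on a hypothetical “store_opened” intent event might be as simple as this sketch:

```python
# Sketch: rank campaigns by intent-to-buy rate when purchase revenue is
# unreliable. The event name and tracking data are hypothetical.

from collections import Counter

events = [  # (campaign, event) pairs from your analytics export
    ("video_a", "store_opened"), ("video_a", "level_complete"),
    ("video_b", "store_opened"), ("video_b", "store_opened"),
]
installs = {"video_a": 100, "video_b": 80}

intents = Counter(campaign for campaign, event in events if event == "store_opened")
for campaign, n in installs.items():
    print(f"{campaign}: intent-to-buy rate {intents[campaign] / n:.1%}")
```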

What’s not working:

Ignoring important metrics: We regularly read about industry leaders using teams of data scientists to create wonderfully accurate LTV models and automated campaign adjustments. But for every one company that boasts about its UA process, there are at least ten still trying to figure out scalable UA. While everyone needs to start somewhere, the excuse of “we’re not sure of the ROI on this channel” will significantly hinder your ability to scale profit.

Broad-swath metrics: The reality is that different channels bring very different users. Preload installs perform very differently from highly targeted similar-app users. More than once I’ve heard of a campaign that targeted lower CPIs, only to find the new users were coming in over a VPN and yielded LTVs at one-tenth the usual average.

Product & Monetization: Getting more out of the users

Product and monetization teams are responsible for giving the user the best experience and, in return, earning the maximum revenue from each user of the app. This is achieved through changes and iterations to the app that focus on increasing user engagement, maximizing retention, and increasing the LTV of each user.

What we see working:

Enablement to uncover insights: Time and time again we see product teams trying to drive change without the ability to fully understand current user behavior. How can you reduce user churn without understanding where users are leaving the app? How can you prioritize the next feature without understanding which features are being used most frequently? How do you price a subscription without knowing the value of an ad-supported user? What’s the best way to drive an increase in IAP conversion if you don’t understand the user’s buying behavior?

Guessing and checking: Product teams usually have a guess about how users will react to a change or a new feature, but you’ll never be sure until you measure. Product teams that create AB tests and then measure the impact are able to drive positive change. How could you be expected to accurately guess the formula for optimal puzzle complexity without validation?
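A minimal sketch of the “checking” half, using a standard two-proportion z-test on hypothetical conversion counts:

```python
# Sketch: did variant B really beat variant A? Two-proportion z-test.
from math import sqrt, erf

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_b - p_a, p_value

lift, p = ab_test(conv_a=120, n_a=4_000, conv_b=156, n_b=4_000)
print(f"lift: {lift:+.2%}, p-value: {p:.3f}")  # act only if the lift is real
```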

Finding and focusing on metrics that matter: Revenue, LTV, and ARPU are golden metrics, but they’re not always available early enough to help. We’ve seen successful product teams use custom KPIs to monitor the health of their apps and users. As an example, one ad-supported app monitors user revenue by app version by ad unit to ensure they catch ad SDK problems early. Another, a dating app, uses customer account sign-ins per install to ensure nothing in the user journey is broken by new app releases. Each app and team finds the KPIs it needs to monitor app health.
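As an illustrative sketch (the version numbers, ad units, and revenue figures are hypothetical), that first KPI, revenue per user by app version by ad unit, could be monitored like this:

```python
# Sketch: catch ad SDK regressions by comparing revenue per user
# across app versions, broken down by ad unit. Figures are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "app_version": ["3.1", "3.1", "3.2", "3.2"],
    "ad_unit":     ["banner", "interstitial", "banner", "interstitial"],
    "users":       [10_000, 10_000, 8_000, 8_000],
    "ad_revenue":  [1_200.0, 3_000.0, 980.0, 1_400.0],
})

kpi = (df.assign(rev_per_user=df["ad_revenue"] / df["users"])
         .pivot(index="ad_unit", columns="app_version", values="rev_per_user"))
change = (kpi["3.2"] - kpi["3.1"]) / kpi["3.1"]
print(change[change < -0.20])  # flag ad units down more than 20% on the new version
```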

What’s not working:

Living in a fire drill: I was talking to a (prospective) customer who manages weather apps and he said “the most important thing in our user journey is a customer put their location into the app – otherwise they wouldn’t actually get the weather.” Using his data, I showed him that 40% of his users were NOT sharing a location. He responded: “that’s the single biggest problem we have as a company. I’m going to address that as soon as we can figure out why the app keeps crashing.” A product team that’s constantly reacting to bigger challenges isn’t going to be able to effectively plan for the future. A winning lottery ticket is useless to someone in a gunfight.

Not having the ability to drive change: Another challenge we see is product teams that can’t actually make changes once they identify problems or growth opportunities. If the development team is backed up for six months, what chance will a product team have of implementing learnings from an AB test?

Live Ops: Dynamic user experiences

Simply put, live ops is the practice of dynamically changing app behavior without a new release. Most commonly it’s used for ongoing tweaks and for creating dynamic content for users. In the past, live ops was reserved for product organizations with headcounts in the triple digits. Not so anymore. The increasing availability of remote configuration tools means live ops is becoming a widely used strategy for driving engagement and profit on smaller teams.

What we see working:

Start small: it can be daunting to think about making your app a dynamic experience; keeping a static app going can be tough enough. So start small: dynamically delay when a user sees their first ad, then move up to an option that changes how often users see ads. Before you know it, you’ll have dynamic controls for more sophisticated changes at the user level.
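A minimal remote-config sketch for that first step, assuming a simple JSON config served from your own endpoint (the URL and keys here are hypothetical) with safe client-side defaults:

```python
# Sketch: fetch ad-timing parameters at launch; fall back to shipped defaults.
import json
import urllib.request

DEFAULTS = {"first_ad_delay_seconds": 60, "ad_interval_seconds": 180}

def fetch_config(url: str = "https://example.com/app-config.json") -> dict:
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            remote = json.load(resp)
        return {**DEFAULTS, **remote}   # remote values override defaults
    except (OSError, ValueError):
        return DEFAULTS                 # fail safe: no network, no surprises

config = fetch_config()
print("show first ad after", config["first_ad_delay_seconds"], "seconds")
```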

Aim for big changes: ask anyone who’s had experience AB testing: the most likely outcome is no outcome. Getting confidence in test outcomes requires significant changes in user behavior. Think less “red vs. blue button” and more “$.99 vs. $1.99.”
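A back-of-the-envelope sketch of why big changes matter, using the standard sample-size approximation for a two-proportion test (the baseline rate and lifts below are hypothetical):

```python
# Sketch: approximate users needed per arm at 5% significance, 80% power.
def sample_size_per_arm(p_base: float, lift: float) -> int:
    z_alpha, z_beta = 1.96, 0.84
    p_new = p_base * (1 + lift)
    p_avg = (p_base + p_new) / 2
    n = ((z_alpha + z_beta) ** 2 * 2 * p_avg * (1 - p_avg)) / (p_new - p_base) ** 2
    return int(n) + 1

print(sample_size_per_arm(0.05, 0.02))  # 2% lift on a 5% base: ~750K users per arm
print(sample_size_per_arm(0.05, 0.20))  # 20% lift: ~8K users per arm
```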

Investigate, validate, and protect: investigate a hypothesis, test the change, then go back to protect the revenue and optimize the user’s experience. A great example is Visual Blasters, who discovered that giving away a paid feature for free actually increased revenue and retention.

What’s not working:

Overly complicated ambitions: quite simply, an overly complicated plan for instituting a dynamic experience is a recipe for never actually getting it done. We’ve worked with customers who’ve been saying “we’re going to get to it in the next sprint” for years.

Bad data: I can’t tell you how many AB tests we’ve seen run on corrupted or incorrect data, only for the outcomes to be deemed highly suspect and ultimately ignored. If you’re not confident in your data, you won’t be confident in the outcome of the changes you make.

Incremental wastes of time: We see AB tests run for months and still fail to identify an optimal outcome with confidence. These are large apps with millions of users; isn’t there something better to test than a change that probably won’t matter?


In closing

Data is the key to success in the management, growth, and iteration of mobile apps. And while data can be daunting, expensive, and complicated, it’s a clear prerequisite for team enablement. The strategies above will help you put your data to work in service of a successful growth strategy.