2024-01-10 · 9 min read · Engineering Manager

What Mobile Engineers Actually Use Analytics For

After years of building mobile apps, I’ve found that the most valuable analytics use cases are often the least obvious at first glance. The events that matter most are usually not the ones your analytics SDK gives you out of the box. They’re the ones that help you understand silent failures, quantify technical tradeoffs, and recover data your backend model never thought it would need.

That’s the engineering perspective on mobile analytics: not just “what converted?” but “what actually happened on the device?”, “how big is this issue really?”, and “can this data unblock a product decision that otherwise looks impossible?”

Debugging What Doesn’t Look Like a Bug

Some of the nastiest mobile bugs never show up in crash logs or error monitoring. They’re silent failures that only analytics can catch.

The Overeager Universal Link

We had users getting stuck during login on mobile web. The flow should have been simple: user browses mWeb, taps “login,” completes auth in Safari, continues browsing. Except some users never completed.

No crashes. No server errors. The login page worked fine in testing.

The problem? Our Universal Links configuration was too broad. Users who had the app installed were getting hijacked mid-login. iOS saw the auth URL, matched it against our association file, and threw them into the native app. But the app didn’t know what to do with a login callback meant for mWeb. Silent failure.

We only found it because we tracked every Universal Link opening:

class DeeplinkHandler {
    func handleUniversalLink(_ url: URL) {
        Analytics.track(
            "deeplink_opened", 
            [
                "path": url.path,
                "supported": isUrlSupported(url)
                ...
            ]
        )
    }

    private func isUrlSupported(_ url: URL) -> Bool {
        let supportedPaths = [
            "/cart",
            "/checkout",
            "/payment",
            "/payment/result",
            "/product",
            "/category",
            ...
        ]
        return supportedPaths.contains {
            url.path == $0 || url.path.hasPrefix($0 + "/")
        }
    }
}

When we queried the data, we found ~4,000 users per month hitting deeplinks with supported: false. These were login flows that the app was intercepting but shouldn’t have been. The fix was excluding auth paths from the Universal Links association file.
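The association-file side of that fix can be sketched like this. This is an illustrative apple-app-site-association snippet, not our actual file: the team ID, bundle ID, and paths are placeholders, and the components/exclude syntax requires iOS 13+:

```json
{
  "applinks": {
    "details": [
      {
        "appIDs": ["TEAMID.com.example.app"],
        "components": [
          { "/": "/login/*", "exclude": true, "comment": "Keep auth flows in the browser" },
          { "/": "/cart/*" },
          { "/": "/product/*" }
        ]
      }
    ]
  }
}
```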

Without that event, we’d never have known. The bug wouldn’t show up anywhere else.

API Errors at Scale

Your backend team isn’t checking every log line. They can’t. But you can track client-side API failures and surface patterns they’d never see.

func handleAPIResponse<T>(
    _ response: Result<T, APIError>, endpoint: String
) {
    switch response {
    case .failure(let error):
        analytics.track(
            "api_error", 
            [
                "endpoint": ...,
                "error_type": ...,
                "error_code": ...,
                "device_model": ...,
                "os_version": ...,
                "app_version": ...,
                "network_type": ...
            ]
        )
    case .success:
        // ...
    }
}

Real example: Sign In With Apple started failing for a subset of users. Backend logs showed nothing unusual. But client-side tracking revealed the failures clustered on specific iOS versions after an Apple security update changed token validation behavior. Scale matters. When you can show “this affects 2,300 users per day” instead of “we got a bug report”, prioritization conversations go differently.
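Surfacing that kind of cluster in the warehouse is a one-query job. A sketch in BigQuery-style SQL, assuming the same analytics.events table queried later in this post; the column names and the endpoint value are assumptions about your event schema:

```sql
SELECT os_version, app_version, error_code,
       COUNT(DISTINCT user_id) AS affected_users
FROM analytics.events
WHERE event_name = 'api_error'
  AND endpoint = '/auth/apple'
GROUP BY os_version, app_version, error_code
ORDER BY affected_users DESC
```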

Of course, logging these failures in a dedicated observability or error-monitoring tool (Sentry, for example) is the natural default. But analytics becomes especially useful when you need to connect those failures with business context: which funnel step they broke, whether they hit first-time users or power users, which campaigns or entry points were affected, or whether the error killed conversion entirely. That kind of correlation is often much easier when the failure event lives next to the rest of your product events.

Device-Specific Issues

Mobile fragmentation is real, even on iOS. Analytics answers questions that would otherwise require guesswork:

  • “Should we drop iOS 15 support?” -> Check what percentage of active users are still on it.
  • “How many users are on older devices that don’t support our new features?” -> Real number, not assumption.
  • “This issue only happens on iPad. How big is the impact?” -> Exact user count.

// Track device context with every session
Analytics.setUserProperties([
    "device_model": UIDevice.current.model,
    "device_identifier": deviceIdentifier(),
    "os_version": UIDevice.current.systemVersion,
    "is_low_power_mode": ProcessInfo.processInfo.isLowPowerModeEnabled,
    "preferred_language": Locale.preferredLanguages.first ?? "unknown",
    "app_version": Bundle.main.appVersion
])
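With those properties attached, the "drop iOS 15?" question becomes a single query. A sketch in BigQuery-style SQL; the table mirrors the event schema used elsewhere in this post, and the event_date column is an assumption:

```sql
SELECT
  CASE WHEN os_version LIKE '15.%' THEN 'iOS 15'
       ELSE 'iOS 16+' END AS ios_bucket,
  COUNT(DISTINCT user_id) AS active_users
FROM analytics.events
WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
GROUP BY ios_bucket
```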

Validating Design Decisions

Designers have opinions. Data has answers. But you need to track the right things.

UI Attractiveness vs. Actual Conversion

A promotional banner looks great in Figma. But does it work? Track both impression and interaction:

class PromoBannerView: UIView {
    private var hasTrackedImpression = false

    // Rough visibility proxy: in a window, not hidden, not fully transparent
    private var viewActuallyVisible: Bool {
        window != nil && !isHidden && alpha > 0
    }

    override func didMoveToWindow() {
        super.didMoveToWindow()
        if viewActuallyVisible && !hasTrackedImpression {
            analytics.track(
                "promo_banner_viewed", 
                [
                    "banner_id": ...,
                    "position": ...,
                    "variant": ...
                ]
            )
            hasTrackedImpression = true
        }
    }
    
    @objc func onTap() {
        analytics.track(
            "promo_banner_clicked", 
            [
                "banner_id": ...,
                "position": ...,
                "variant": ...,
                "time_to_click_ms": ...
            ]
        )
    }
}

Conversion rate = clicks / views. Simple, but reveals things like:

  • Banner in position 3 has 2x higher CTR than position 1 (users scroll past the first one)
  • Image-based banner converts worse than text-based (users treat it as an ad and ignore it)
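With both events in place, CTR per position is one query away (BigQuery syntax, assuming the warehouse schema used in the rest of this post):

```sql
SELECT position,
       COUNTIF(event_name = 'promo_banner_clicked') AS clicks,
       COUNTIF(event_name = 'promo_banner_viewed') AS views,
       SAFE_DIVIDE(
         COUNTIF(event_name = 'promo_banner_clicked'),
         COUNTIF(event_name = 'promo_banner_viewed')
       ) AS ctr
FROM analytics.events
WHERE event_name IN ('promo_banner_viewed', 'promo_banner_clicked')
GROUP BY position
ORDER BY position
```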

Technical Decision Making

“Should we rebuild this in native or keep the WebView?” Without data, everyone has an opinion. With data, it’s just a decision.

WebView vs. Native: Actual Performance

We had a WebView-based checkout flow that “felt slow.” Some engineers wanted to rewrite it in native. Others said it was fine.

Instead of debating, we measured. We tracked Core Web Vitals (LCP, FCP) from the WebView and compared against native screen load times for similar complexity screens.
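Capturing paint timings from a WKWebView can be sketched like this. This is a simplified illustration, not our production code: it reports FCP (which WebKit exposes via paint performance entries), and the message-handler and event names are assumptions:

```swift
import WebKit

final class WebVitalsReporter: NSObject, WKScriptMessageHandler {
    static let handlerName = "webvitals"

    // Injected at document start; posts FCP back to native once it fires
    static let script = WKUserScript(
        source: """
        new PerformanceObserver((list) => {
            for (const entry of list.getEntries()) {
                if (entry.name === 'first-contentful-paint') {
                    window.webkit.messageHandlers.webvitals.postMessage({
                        metric: 'FCP',
                        value_ms: entry.startTime
                    });
                }
            }
        }).observe({ type: 'paint', buffered: true });
        """,
        injectionTime: .atDocumentStart,
        forMainFrameOnly: true
    )

    func userContentController(
        _ userContentController: WKUserContentController,
        didReceive message: WKScriptMessage
    ) {
        guard let body = message.body as? [String: Any] else { return }
        Analytics.track("webview_paint_timing", [
            "metric": body["metric"] ?? "unknown",
            "value_ms": body["value_ms"] ?? -1
        ])
    }
}
```

Wire it up by adding the script via the web view configuration's userContentController and registering the reporter with add(_:name:).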

The data showed WebView LCP at 4.8s vs native at 400ms. The decision was obvious.

But here’s the thing: on another screen, the difference was 1.2s vs 0.9s. Not worth a rewrite. Without measurement, we might have spent weeks rebuilding something that didn’t need it.

Monitoring Critical Paths

Some features can’t fail quietly. Push notifications, payments, core user flows — you need to know immediately when something breaks.

final class PushNotificationHandler {
    // Receipt timestamps keyed by notification_id, used to
    // compute time_to_open_ms when the push is later opened
    private var receivedNotifications: [String: Date] = [:]
    
    func didReceiveRemoteNotification(
        _ userInfo: [AnyHashable: Any], 
        applicationState: UIApplication.State
    ) {
        Analytics.track(
            "push_received", [
                "notification_id": ...,
                "type": ...,
                "app_state": ...
            ]
        )
    }
    
    func didOpenNotification(_ userInfo: [AnyHashable: Any]) {        
        Analytics.track(
            "push_opened", 
            [
                "notification_id": ...,
                "type": ...,
                "time_to_open_ms": ...
            ]
        )
    }
}

Set up alerts on these metrics. If push open rate drops 50% overnight, you want to know before the marketing team asks why their campaign flopped.
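A daily open-rate series to alert on could look like this (BigQuery syntax; the table and event_date column are assumptions, as elsewhere in this post):

```sql
SELECT event_date,
       SAFE_DIVIDE(
         COUNTIF(event_name = 'push_opened'),
         COUNTIF(event_name = 'push_received')
       ) AS open_rate
FROM analytics.events
WHERE event_name IN ('push_received', 'push_opened')
GROUP BY event_date
ORDER BY event_date
```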

Analytics as a Data Source

Sometimes analytics data becomes the only source of truth for building actual features.

This is very much a Product Engineering domain. You stop thinking only in terms of your narrow mobile surface area and start asking a broader question: if the backend data model makes a feature impossible today, can we still unblock it by seeding the missing information from mobile and web analytics? If your instrumentation was done well enough beforehand, sometimes the answer is yes.

For example: we needed to run a promotion offering free delivery to a specific user segment. The catch: the segment was defined by behavior the backend never tracked. We needed users who had posted offers exclusively from mobile, never from web.

Backend had no idea which platform each offer came from. That information only existed in analytics events, where we (thankfully) tracked offer_posted with a platform parameter.

The solution was exporting analytics data to seed the backend:

SELECT user_id
FROM analytics.events
WHERE event_name = 'offer_posted'
GROUP BY user_id
HAVING SUM(CASE WHEN platform = 'web' THEN 1 ELSE 0 END) = 0
   AND SUM(CASE WHEN platform = 'ios' THEN 1 ELSE 0 END) > 0

This query gave us the eligible users. We exported it, loaded it into the promotion service, and the feature worked.

Not ideal architecture, but sometimes analytics is the only place where certain data lives. When that happens, it’s often better to use it deliberately than to conclude the feature is impossible. Good analytics instrumentation gives you one more escape hatch when the primary model is incomplete.

Why Your Team Needs This Skill

In most organizations, analytics lives with a dedicated team. You file a ticket, wait days or weeks, and get a dashboard that answers yesterday’s question.

That doesn’t work for engineering problems. When you’re debugging a production issue or evaluating a technical approach, you need answers in hours, not weeks.

The practical reality:

  • Analysts aren’t embedded in your squad
  • Their backlog has 47 items ahead of yours
  • By the time you get the data, you’ve already shipped (or not shipped) based on gut feeling

What works better:

  • Engineers who can write basic SQL queries against your analytics warehouse (now much easier with AI-assisted coding tools)
  • Predefined events that capture enough context to be useful later
  • Direct BigQuery or Looker access for the mobile team

This doesn’t replace analysts. They’re still essential for complex analysis, experiment design, and business metrics. But for engineering decisions, being able to query data yourself changes everything.

Track More Than You Think You Need

Track aggressively, even if the data seems useless today. I’ve lost count of how many times we needed data that didn’t exist because “we didn’t think we’d need it.” Storage is cheap. You can always delete later.

Baseline tracking for any mobile app:

  • Screen views with timing
  • All API errors with full context
  • Deeplink opens with source
  • Push notification lifecycle
  • WebView loads with performance metrics
  • App lifecycle events (foreground, background, terminate)

Format doesn’t matter as much as existence. Messy data you can query beats clean data you don’t have.
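As one sketch of the cheap end of that list, the app lifecycle events can be covered in a few lines (Analytics.track is the wrapper used throughout this post; event names are illustrative):

```swift
import UIKit

// Registers for lifecycle notifications once and emits a
// tracking event for each transition
final class LifecycleTracker {
    init() {
        let center = NotificationCenter.default
        center.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil, queue: .main
        ) { _ in Analytics.track("app_foregrounded", [:]) }
        center.addObserver(
            forName: UIApplication.didEnterBackgroundNotification,
            object: nil, queue: .main
        ) { _ in Analytics.track("app_backgrounded", [:]) }
        center.addObserver(
            forName: UIApplication.willTerminateNotification,
            object: nil, queue: .main
        ) { _ in Analytics.track("app_terminated", [:]) }
    }
}
```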

The Tooling Stack

What actually works:

Built-in dashboards:

  • App Store / Google Play for store-level metrics and crash reports

Analytics platforms:

  • Firebase Analytics / Mixpanel / Amplitude / PostHog for event tracking
  • Choose based on your company’s existing stack, not features

Custom analysis:

  • BigQuery or similar warehouse for raw event access
  • Looker Studio (formerly Data Studio) for shareable dashboards
  • SQL skills on your team

The specific tools matter less than having the full pipeline: events → warehouse → queryable.

Summary

Mobile analytics absolutely can help prove ROI, and in some organizations that will be one of its main jobs. But from a mobile product engineer’s perspective, its day-to-day value is often different: making better engineering decisions, exposing bugs that traditional monitoring misses, and giving you data that can unblock product work when the core systems don’t have the right shape yet.

Teams that build this capability shorten the feedback loop. They prioritize faster, argue less about technical tradeoffs, and solve real problems instead of the ones that merely feel important.