2024-01-10 · 8 min read · Engineering Manager

What Mobile Engineers Actually Use Analytics For

Most discussions about mobile analytics focus on funnels, retention curves, and A/B test results. That’s the product manager’s view. From an engineering perspective, analytics solves a completely different set of problems.

Over the years building mobile apps, I’ve found that the most valuable analytics use cases have little to do with business metrics. They’re about finding bugs that don’t throw errors, making technical decisions with actual data, and knowing when something breaks before users complain.

Debugging What Doesn’t Look Like a Bug

Some of the nastiest mobile bugs never show up in crash logs or error monitoring. They’re silent failures that only analytics can catch.

The Overeager Universal Link

We had users getting stuck during login on mobile web. The flow should have been simple: user browses mWeb, taps “login,” completes auth in Safari, continues browsing. Except some users never completed.

No crashes. No server errors. The login page worked fine in testing.

The problem? Our Universal Links configuration was too broad. Users who had the app installed were getting hijacked mid-login. iOS saw the auth URL, matched it against our association file, and threw them into the native app. But the app didn’t know what to do with a login callback meant for mWeb. Silent failure.

We only found it because we tracked every Universal Link opening:

class DeeplinkHandler {
    func handleUniversalLink(_ url: URL) {
        Analytics.track(
            "deeplink_opened",
            [
                "path": url.path,
                "supported": isUrlSupported(url)
                // ...
            ]
        )
    }

    private func isUrlSupported(_ url: URL) -> Bool {
        let supportedPaths = [
            "/cart",
            "/checkout",
            "/payment",
            "/payment/result",
            "/product",
            "/category"
            // ...
        ]
        return supportedPaths.contains {
            url.path == $0 || url.path.hasPrefix($0 + "/")
        }
    }
}

When we queried the data, we found ~4,000 users per month hitting deeplinks with supported: false. These were login flows that the app was intercepting but shouldn’t have been. The fix was excluding auth paths from the Universal Links association file.
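For reference, the association-file side of that fix uses the modern AASA `components` format, where `exclude: true` tells iOS not to route matching URLs into the app. A sketch (the app ID and auth paths here are illustrative, not our real ones):

```json
{
  "applinks": {
    "details": [
      {
        "appIDs": ["TEAMID.com.example.app"],
        "components": [
          { "/": "/login/*", "exclude": true },
          { "/": "/auth/*", "exclude": true },
          { "/": "/*" }
        ]
      }
    ]
  }
}
```

Exclusion rules are evaluated in order, so the auth paths must come before the catch-all entry.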

Without that event, we’d never have known. The bug wouldn’t show up anywhere else.

API Errors at Scale

Your backend team isn’t checking every log line. They can’t. But you can track client-side API failures and surface patterns they’d never see.

func handleAPIResponse<T>(
    _ response: Result<T, APIError>, endpoint: String
) {
    switch response {
    case .failure(let error):
        analytics.track(
            "api_error",
            [
                "endpoint": endpoint,
                "error_type": String(describing: error)
                // ...plus error_code, device_model, os_version,
                // app_version, network_type
            ]
        )
    case .success:
        // ...
        break
    }
}

Real example: Sign In With Apple started failing for a subset of users. Backend logs showed nothing unusual. But client-side tracking revealed the failures clustered on specific iOS versions after an Apple security update changed token validation behavior.

Scale matters. When you can show “this affects 2,300 users per day” instead of “we got a bug report,” prioritization conversations go differently.

Device-Specific Issues

Mobile fragmentation is real, even on iOS. Analytics answers questions that would otherwise require guesswork:

  • “Should we drop iOS 15 support?” -> Check what percentage of active users are still on it.
  • “How many users are on older devices that don’t support our new features?” -> Real number, not assumption.
  • “This issue only happens on iPad. How big is the impact?” -> Exact user count.

// Track device context with every session
Analytics.setUserProperties([
    "device_model": UIDevice.current.model,
    "device_identifier": deviceIdentifier(),
    "os_version": UIDevice.current.systemVersion,
    "is_low_power_mode": ProcessInfo.processInfo.isLowPowerModeEnabled,
    "preferred_language": Locale.preferredLanguages.first ?? "unknown",
    "app_version": Bundle.main.appVersion
])
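The snippet above assumes two helpers that aren’t shown. A minimal sketch of what they might look like — `deviceIdentifier()` returns the raw hardware model (e.g. “iPhone14,5”), which is far more useful for device breakdowns than `UIDevice.current.model` (“iPhone”):

```swift
import Foundation

// Hypothetical helper assumed above: raw hardware model string via uname.
func deviceIdentifier() -> String {
    var systemInfo = utsname()
    uname(&systemInfo)
    return withUnsafeBytes(of: &systemInfo.machine) { buffer in
        String(decoding: buffer.prefix(while: { $0 != 0 }), as: UTF8.self)
    }
}

extension Bundle {
    // Marketing version from Info.plist, e.g. "3.14.0".
    var appVersion: String {
        infoDictionary?["CFBundleShortVersionString"] as? String ?? "unknown"
    }
}
```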

Validating Design Decisions

Designers have opinions. Data has answers. But you need to track the right things.

UI Attractiveness vs. Actual Conversion

A promotional banner looks great in Figma. But does it work? Track both impression and interaction:

class PromoBannerView: UIView {
    private var hasTrackedImpression = false
    private var shownAt = Date()
    var bannerId = ""
    var position = 0
    var variant = ""

    override func didMoveToWindow() {
        super.didMoveToWindow()
        // Attached to a window and not yet counted; a stricter check
        // could also verify on-screen visibility.
        if window != nil, !hasTrackedImpression {
            analytics.track(
                "promo_banner_viewed",
                [
                    "banner_id": bannerId,
                    "position": position,
                    "variant": variant
                ]
            )
            hasTrackedImpression = true
            shownAt = Date()
        }
    }

    @objc func onTap() {
        analytics.track(
            "promo_banner_clicked",
            [
                "banner_id": bannerId,
                "position": position,
                "variant": variant,
                "time_to_click_ms": Int(Date().timeIntervalSince(shownAt) * 1000)
            ]
        )
    }
}

Conversion rate = clicks / views. Simple, but reveals things like:

  • Banner in position 3 has 2x higher CTR than position 1 (users scroll past the first one)
  • Image-based banner converts worse than text-based (users treat it as an ad and ignore it)
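The arithmetic behind those comparisons is trivial, but worth keeping in one place. A minimal sketch (the numbers and names are illustrative):

```swift
// CTR per banner position, computed from promo_banner_viewed /
// promo_banner_clicked counts. Values here are illustrative only.
struct BannerStats {
    let position: Int
    let views: Int
    let clicks: Int

    var ctr: Double {
        views == 0 ? 0 : Double(clicks) / Double(views)
    }
}

let positions = [
    BannerStats(position: 1, views: 10_000, clicks: 120),
    BannerStats(position: 3, views: 6_000, clicks: 150),
]
let best = positions.max { $0.ctr < $1.ctr }
```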

Technical Decision Making

“Should we rebuild this in native or keep the WebView?” Without data, everyone has an opinion. With data, it’s just a decision.

WebView vs. Native: Actual Performance

We had a WebView-based checkout flow that “felt slow.” Some engineers wanted to rewrite it in native. Others said it was fine.

Instead of debating, we measured. We tracked Core Web Vitals (LCP, FCP) from the WebView and compared against native screen load times for similar complexity screens.

The data showed WebView LCP at 4.8s vs native at 400ms. The decision was obvious.

But here’s the thing: on another screen, the difference was 1.2s vs 0.9s. Not worth a rewrite. Without measurement, we might have spent weeks rebuilding something that didn’t need it.
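Measuring LCP from a WKWebView isn’t built in; one approach is to inject a JavaScript PerformanceObserver and forward the result to native through a script message handler. A sketch, assuming the Analytics facade used elsewhere in this post — the handler name “perf” is arbitrary:

```swift
import WebKit

// Sketch: capture LCP inside the WebView and report it natively.
final class WebViewPerfTracker: NSObject, WKScriptMessageHandler {
    static let observerJS = """
    new PerformanceObserver(list => {
        const entries = list.getEntries();
        const last = entries[entries.length - 1];
        window.webkit.messageHandlers.perf.postMessage({ lcp_ms: last.startTime });
    }).observe({ type: 'largest-contentful-paint', buffered: true });
    """

    func userContentController(_ userContentController: WKUserContentController,
                               didReceive message: WKScriptMessage) {
        if let body = message.body as? [String: Any] {
            Analytics.track("webview_lcp", body)
        }
    }
}

// Wiring it up:
// let config = WKWebViewConfiguration()
// config.userContentController.add(WebViewPerfTracker(), name: "perf")
// config.userContentController.addUserScript(WKUserScript(
//     source: WebViewPerfTracker.observerJS,
//     injectionTime: .atDocumentStart, forMainFrameOnly: true))
```

The same channel works for FCP or any other metric the page can observe.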

Monitoring Critical Paths

Some features can’t fail quietly. Push notifications, payments, core user flows — you need to know immediately when something breaks.

final class PushNotificationHandler {
    private var receivedNotifications: [String: Date] = [:]

    func didReceiveRemoteNotification(
        _ userInfo: [AnyHashable: Any],
        applicationState: UIApplication.State
    ) {
        // Payload keys depend on your push format.
        let id = userInfo["notification_id"] as? String ?? "unknown"
        receivedNotifications[id] = Date()
        Analytics.track(
            "push_received",
            [
                "notification_id": id,
                "type": userInfo["type"] as? String ?? "unknown",
                "app_state": applicationState == .active ? "active" : "background"
            ]
        )
    }

    func didOpenNotification(_ userInfo: [AnyHashable: Any]) {
        let id = userInfo["notification_id"] as? String ?? "unknown"
        let received = receivedNotifications[id]
        Analytics.track(
            "push_opened",
            [
                "notification_id": id,
                "type": userInfo["type"] as? String ?? "unknown",
                "time_to_open_ms": received.map { Int(Date().timeIntervalSince($0) * 1000) } ?? -1
            ]
        )
    }
}

Set up alerts on these metrics. If push open rate drops 50% overnight, you want to know before the marketing team asks why their campaign flopped.
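What such an alert check can look like, as a sketch — assuming you can pull daily `push_received` / `push_opened` counts from your warehouse (the type and threshold are illustrative):

```swift
// Hypothetical alert: flag when today's push open rate falls to half
// (or less) of the trailing baseline, built from the events above.
struct PushOpenRateAlert {
    let dropThreshold = 0.5

    func shouldAlert(baselineRates: [Double],
                     todayReceived: Int,
                     todayOpened: Int) -> Bool {
        guard todayReceived > 0, !baselineRates.isEmpty else { return false }
        let baseline = baselineRates.reduce(0, +) / Double(baselineRates.count)
        let today = Double(todayOpened) / Double(todayReceived)
        return today < baseline * dropThreshold
    }
}
```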

Analytics as a Data Source

Sometimes analytics data becomes the only source of truth for building actual features.

We needed to run a promotion: free delivery for a specific user segment. The catch: the segment was defined by behavior that the backend never tracked. We needed users who had posted offers exclusively from mobile, never from web.

The backend had no idea which platform each offer came from. That information only existed in analytics events, where we (thankfully) tracked offer_posted with a platform parameter.

The solution was exporting analytics data to seed the backend:

SELECT user_id
FROM analytics.events
WHERE event_name = 'offer_posted'
GROUP BY user_id
HAVING COUNT(CASE WHEN platform = 'web' THEN 1 END) = 0
   AND COUNT(CASE WHEN platform = 'ios' THEN 1 END) > 0

This query gave us the eligible users. We exported it, loaded into the promotion service, and the feature worked.

Not ideal architecture, but sometimes analytics is the only place where certain data lives. When that happens, it’s better to use it than to say “we can’t build this.”

Why Your Team Needs This Skill

In most organizations, analytics lives with a dedicated team. You file a ticket, wait days or weeks, and get a dashboard that answers yesterday’s question.

That doesn’t work for engineering problems. When you’re debugging a production issue or evaluating a technical approach, you need answers in hours, not weeks.

The practical reality:

  • Analysts aren’t embedded in your squad
  • Their backlog has 47 items ahead of yours
  • By the time you get the data, you’ve already shipped (or not shipped) based on gut feeling

What works better:

  • Engineers who can write basic SQL queries against your analytics warehouse (now much easier with AI-assisted engineering tools)
  • Predefined events that capture enough context to be useful later
  • Direct BigQuery or Looker access for the mobile team

This doesn’t replace analysts. They’re still essential for complex analysis, experiment design, and business metrics. But for engineering decisions, being able to query data yourself changes everything.

Track More Than You Think You Need

Track aggressively, even if the data seems useless today. I’ve lost count of how many times we needed data that didn’t exist because “we didn’t think we’d need it.” Storage is cheap. You can always delete later.

Baseline tracking for any mobile app:

  • Screen views with timing
  • All API errors with full context
  • Deeplink opens with source
  • Push notification lifecycle
  • WebView loads with performance metrics
  • App lifecycle events (foreground, background, terminate)

Format doesn’t matter as much as existence. Messy data you can query beats clean data you don’t have.
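The lifecycle item on that list is the cheapest to wire up. A minimal sketch, assuming the Analytics facade used throughout this post:

```swift
import UIKit

// Tracks the baseline app lifecycle events via NotificationCenter.
// Event names are illustrative; keep them consistent with your schema.
final class LifecycleTracker {
    private var observers: [NSObjectProtocol] = []

    init() {
        let center = NotificationCenter.default
        let events: [(Notification.Name, String)] = [
            (UIApplication.didBecomeActiveNotification, "app_foreground"),
            (UIApplication.didEnterBackgroundNotification, "app_background"),
            (UIApplication.willTerminateNotification, "app_terminate"),
        ]
        for (name, event) in events {
            observers.append(center.addObserver(
                forName: name, object: nil, queue: .main
            ) { _ in
                Analytics.track(event, [:])
            })
        }
    }

    deinit {
        observers.forEach(NotificationCenter.default.removeObserver)
    }
}
```

Instantiate it once at app launch and keep a strong reference, e.g. in the app delegate.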

The Tooling Stack

What actually works:

Built-in dashboards:

  • App Store / Google Play for store-level metrics and crash reports

Analytics platforms:

  • Firebase Analytics / Mixpanel / Amplitude for event tracking
  • Choose based on your company’s existing stack, not features

Custom analysis:

  • BigQuery or similar warehouse for raw event access
  • Looker Studio (formerly Data Studio) for shareable dashboards
  • SQL skills on your team

The specific tools matter less than having the full pipeline: events → warehouse → queryable.

Summary

Mobile analytics isn’t about proving ROI to stakeholders. It’s about making engineering decisions with data instead of opinions, finding bugs that hide from traditional monitoring, and knowing your app’s actual behavior in the wild.

The teams that build this capability ship better products. Not because they’re smarter, but because they see what’s actually happening.