Hire an Android development team that owns the Play Console release train, the OEM device matrix, and the foldable roadmap


Most apps that earn an Android-only pod look the same on the analytics dashboard: Android sits somewhere between 88 and 99 percent of sessions; the install base skews toward LATAM, India, Africa, parts of Eastern Europe, or an OEM partnership where the app ships pre-installed; the lowest-end device producing five percent of sessions is a 4 GB RAM Samsung A-series or a 2 GB Android Go handset; FCM push delivery on Xiaomi MIUI sits somewhere below 80 percent; and nobody on the team owns the OEM-skin parity log. The right shape for that program is not a generalist mobile pod that splits attention between two stores; it is a single-platform Android pod that owns the full surface area, end to end.

Siblings Software has staffed Android programs for US, Canadian, European, and Latin-American buyers since 2014. The default Android pod is six seats: an Android tech lead, two to three senior Kotlin and Jetpack Compose engineers (one of whom carries the platform-depth work — foreground services, R8, baseline profiles, BLE, Play Integrity), a dedicated QA automation engineer running a real OEM device lab, and a part-time mobile DevOps engineer on Fastlane supply, Play App Signing, and the staged-rollout policy. If you only need one or two senior engineers inside an existing Android squad, the Android staff augmentation route is faster and lighter. If iOS and Android are both meaningful, the mobile pod that runs both stores together is the right page; if a Windows or back-office surface is part of the same product, the broader app development team umbrella is the right page.

This page is for buyers who already know the answer is single-platform Android. The Android app development service is the higher-level service page; the Kotlin development team is for the buyers who want a Kotlin pod across Android, backend, and multiplatform. This page is narrower than both: it is the Android product pod.

Reviewed by Javier Uanini, Founder and CEO, Siblings Software — ten-plus years staffing mobile and Android engagements across LATAM consumer fintech, OEM pre-install programmes, and B2B managed-tablet fleets. Last reviewed 14 May 2026.

Composition of a single-platform Android development pod: an Android tech lead, two senior Kotlin and Jetpack Compose engineers, a flex second-screen specialist for Wear OS / Auto / Foldables / TV when in scope, a dedicated QA automation engineer running an OEM device lab covering Pixel, Samsung One UI, Xiaomi MIUI and HyperOS, Oppo ColorOS, Vivo Funtouch, Motorola and Realme, and a part-time mobile DevOps engineer on Fastlane supply and Play App Signing. Pricing aligned to our dedicated-team brackets, USD 22K to 36K per month for the steady-state shape.

Talk to a delivery lead

What an Android pod actually owns — and what it deliberately does not

An Android pod is narrower than the dual-platform mobile pod and deeper than a single-stack Kotlin team. Narrower because it does not run the iOS surface, the App Store Connect organisation, the TestFlight programme, or a coordinated release train across two stores. Deeper because it refuses to treat OEM-skin variance, Play Console policy churn, and Android-form-factor sprawl as someone else's problem.

The ownership covers eight artefacts that get cited in every kickoff.

The product surface — Compose-driven UI, Material 3, dark mode, dynamic colour, accessibility (TalkBack, large text, high-contrast), and the empty-state inventory the merchandising team will draw from without filing a one-off ticket.

The native depth — foreground services classified correctly for Android 14 and 15, Doze and App Standby strategies, BLE pairing on Android 12+ permission models, the Play Integrity verdict integrated into your fraud-decision pipeline, biometric unlock through BiometricPrompt with Class 3 fallbacks, secure storage on EncryptedSharedPreferences, and the small but unforgiving differences between AOSP behavior and OEM-skin behavior.

The Play Console release train — one Fastlane supply lane, one signing setup with Play App Signing, one CI matrix that runs unit + Espresso + macrobenchmark on every PR, one staged-rollout policy at 5 / 10 / 25 / 50 / 100 percent, a kill-switch feature flag wired before the release branch is cut, and an in-app update prompt ready for the critical hotfix path.

The OEM device matrix — cut from your real-user analytics, refreshed once a quarter, including a real Android Go handset because the engineering team almost never has one.

The store-rejection runbook — the document the on-call engineer reads when the Play Console review email lands.

The crash and performance budgets — crash-free sessions above 99.5 percent, ANR rate under 0.47 percent, p95 cold-start under 2.0 seconds on the lowest-end device producing five percent of sessions.

The OEM-skin parity log — FCM delivery, push priority, foreground-service kill behavior, and notification-grouping differences across MIUI, HyperOS, ColorOS, Funtouch, EMUI, and stock OEM-skinned Motorola.

And the backend contract — versioned, schema-migrated, deeplink-aware. The pod does not own the backend itself; it owns the contract with whoever does.

The standards we treat as load-bearing on every engagement: the Android developer documentation, platform guides, and behavior-change calendars are the contract every Android submission is judged against; the Jetpack Compose documentation is the source of truth for the UI layer the pod ships; and the Google Play Console release management documentation is the contract the staged-rollout policy is built against. Internal wikis are not allowed to disagree with these without a written exception.

If you do not see those eight artefacts produced inside the first three sprints, the engagement is shipping screens, not running an Android program. Most Android outsourcing pods buy you the first artefact and call it a team. The bill arrives, a screen ships, and three months later the rejection lands at 5pm on a Friday and nobody owns the resubmission.

When Android-only beats the dual-platform mobile pod — an opinionated take

The Android-only pod is not a universal answer; the dual-platform mobile pod is right for most consumer apps in the US and Western Europe. The Android pod wins inside a narrow band: when the install base genuinely is Android, when the program has Android-specific surface area that an iOS engineer cannot honestly help with, or when the OEM partnership and the regulatory context make iOS irrelevant.

Android-first installed base

Latin America averages around 85 percent Android share by sessions; India, Africa, much of South-East Asia, and parts of Eastern Europe sit higher. If you sell a consumer fintech in Mexico, a ride-hailing app in São Paulo, a food-delivery app in Lagos, or a microcredit app in Bogotá, the iOS surface is a marketing afterthought and the engineering depth your customers need is on Android. Hiring a dual-platform pod for a 96-percent-Android product is paying for an iOS engineer to re-implement the screens nobody is opening.

OEM partnership or pre-install programme

The product ships pre-installed on a manufacturer's catalogue (Xiaomi GetApps, Samsung Galaxy Store, Motorola, Vivo, OEM telecom bundles), the integration calendar is dictated by the OEM, certification requires the OEM-specific build, and the surface is a deeply skinned Android. The pod that wins this work has lived through MIUI / HyperOS quirks, Samsung Knox, and the device-launch certification process. iOS is not in the conversation.

B2B Android-only fleet

Commercial drivers carrying corporate-issued Android handsets, retail POS staff on managed Android tablets, field-service technicians on rugged Android (Zebra, Honeywell, Datalogic), healthcare clinicians on Samsung tablet programmes, kiosk operators on Android-based hardware, automotive integrators on Android Automotive OS. The procurement team chose Android once; the pod should match that choice rather than maintain an iOS surface that nobody on payroll uses.

Foldables, tablets, Auto, TV, embedded Android

The product earns a Foldable-aware adaptive layout, a large-screen tablet experience, an Android Auto template, an Android TV / Google TV surface, or an Android XR experiment. Each of these is platform-specific by design; iOS is genuinely irrelevant, and a generalist mobile pod will under-resource the work because half the team has no native context for it. The Android pod treats each of these as a sized roadmap item with a written brief, not a "we could just" line in a sprint planning meeting.

The honest summary: roughly two out of three Android-only pods we sign sit in one of those four bands. The other third are buyers who tried a dual-platform pod, watched the iOS budget consume the deep Android work, and came back asking for a single-platform team that actually finishes the foreground-service rewrite. We will be honest with you on the discovery call about which side of the line your roadmap actually sits on; we will not pitch you an Android-only pod when the dual-platform pod is the right shape.

Pod composition — six seats sized for the Android program

Composition is opinionated. The seats below cover roughly nine in ten of the Android programs we sign on this page. They are also the seats vendors cut first to win the deal. We refuse to remove them; removing them is how the next Play Console rejection arrives.

Android tech lead

Owns the target SDK roadmap aligned to the Play deadline calendar, the rejection runbook, the crash and ANR budget gate, the architecture decision records, the OEM-skin parity log, and the hard authority to refuse a "small" feature that would burn the modernisation calendar. Reads the Play Console release notes and the AOSP behavior-change log so the team does not learn about a foreground-service tightening from a rejection email. Writes the modernisation memo in week one and updates it when a new platform version earns a change.

Two to three Android engineers (Kotlin + Compose, native depth)

The default split is two senior engineers carrying Kotlin, Coroutines, Flow, Compose, the data layer (Room, DataStore), and the parent app surface; one of the two leans into platform depth (foreground services, Doze, BLE, Play Integrity, R8 + baseline profiles). When the roadmap earns a second-screen surface (Wear OS, Foldable, Android Auto, Android TV, ChromeOS, Android XR), a third Android engineer with the matching specialism joins the pod — under a written brief, not on a hunch.

Dedicated QA automation engineer with OEM device lab

Owns the device matrix, the release-blocking smoke suite, the regression suite on the device lab, the macrobenchmark on the lowest-end Android Go handset, the accessibility audit (TalkBack, large text, high-contrast, Switch Access where relevant), the synthetic Push Probe that calls FCM end-to-end across MIUI / HyperOS / ColorOS / Funtouch every day, and the device-lab refresh once a quarter from the analytics. Without this seat, store rejections and one-star reviews on Xiaomi handsets are the QA process.

Part-time mobile DevOps / release engineer

Two to three days a week. Owns Fastlane supply lanes, Gradle and Bitrise (or hosted and self-hosted runners on GitHub Actions), Play upload keys, Play App Signing enrollment, managed publishing, the staged-rollout policy at 5 / 10 / 25 / 50 / 100 percent, the kill-switch feature flag, the in-app update prompt, the runbook for a Play Console policy escalation, and the deliverability dashboards the marketing team reads before a campaign. Carved out as a dedicated fractional seat rather than collapsed into the tech lead, because the tech lead's calendar gets eaten by target SDK calls when the release train hiccups.

When your stack leans heavier on a specific Kotlin specialism (KMP shared modules, server-side Kotlin via Ktor, Spring on the JVM), we blend bench engineers from our Kotlin development team into the pod. The Android pod stays product-shaped; the Kotlin team is for the buyers who want a Kotlin discipline across multiple surfaces.

Who hires an Android-only pod

The buyer profiles below cover roughly nine in ten conversations on this page. If you recognise yourself in one, the next call is usually about the OEM device matrix, the target SDK calendar, and the Play Integrity rollout, not about CVs.

LATAM, India, Africa, and APAC consumer Android teams

Fintech wallets in Mexico and Brazil, ride-hailing and food-delivery apps from Bogotá to Nairobi, microcredit and savings apps for unbanked customers, healthcare apps reaching rural users on a 4 GB RAM phone, gig-labour platforms whose entire workforce is on Android. The buyer wants engineers who understand that the lowest-end device producing 5 percent of sessions is not a Pixel and that MIUI is a real OEM-skin variance, not a configuration toggle.

OEM partners shipping pre-installed apps

Telco bundles, manufacturer-curated catalogues (GetApps, Galaxy Store, Vivo App Store, Oppo App Market), automotive integrators bundling onboard apps, kiosk operators with custom Android builds, smart-home device makers shipping a companion. The pod that wins this work has been through Knox certification, a Samsung pre-load review, or an Xiaomi GMS-region launch, and reads the OEM-specific submission policies the same way it reads the Play Console.

Retail, hospitality, and field-service operators on managed Android

POS staff on Samsung Galaxy Tab Active, hotel front-desk operators on Lenovo tablets, warehouse staff on Zebra rugged handhelds, hospital clinicians on Samsung A-series with MDM-locked profiles, restaurant kitchens on Android-based KDS hardware. The procurement team chose Android years ago and is not switching. The pod respects the device estate the operator already runs and ships against MDM (Android Enterprise, Knox, Workspace ONE) constraints.

Automotive, embedded, and form-factor specialists

Android Automotive OS integrators, vehicle-companion apps, fleet-tracking dashboards inside in-cab tablets, Android TV / Google TV channel apps, Android XR experimenters. The buyer wants senior Android engineers who have shipped an Android Auto template review, an Android TV channel certification, or a window-size-classes Foldable redesign, not engineers reading the documentation for the first time during sprint one.

Real hiring scenarios we handle every quarter

Eight scenarios cover most of the Android pods we sign on this page. The shape of the pod, the first sprint goal, and the headline number we agree to ship against differ by scenario, not by stack.

Rescue an unstable Android app

Crash-free sessions are below 98.5 percent, ANR rate is over 1 percent on Samsung A-series handsets, the Play Store rating has slipped under 3.5, the last five releases all needed a hotfix in the first 48 hours, and the previous vendor handed back a Gradle build that only one laptop in the office can produce. The first three sprints produce a crash budget plan, a Play Console release pipeline the pod owns, an Espresso + macrobenchmark regression suite on the device lab, and a written rejection runbook. Feature work resumes only after the bleed stops.

Modernise from Java to Kotlin and Compose

The codebase is mostly Java with the older XML view system, the test harness is Robolectric-only, the dependency injection is Dagger 2 written in 2018, and the Compose adoption is zero. Sprint zero produces the modernisation memo — an honest assessment of the modules to migrate first, the modules to leave on XML for now, and the order in which Hilt, Compose, and Navigation Component move in. Migration runs as a percentage of every sprint rather than a one-shot rewrite that would freeze the feature roadmap. Most Java-to-Kotlin programs land between four and nine sprints depending on the codebase size.

Hit a Play target SDK deadline

Play has published the next target API level cutoff, your app is two major versions behind, and the marketing team did not budget for the work. The pod produces the deprecation register, the breaking-behavior register (foreground service classification, photo picker, partial media access, background location, exact alarms, package visibility), the regression suite on real devices already running the new platform, and the order in which feature work pauses. We have hit Play target SDK deadlines on engagements that started six weeks before the cutoff.
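The deprecation and breaking-behavior registers described in this scenario stay useful only if they are queryable against the target SDK being migrated to. A minimal sketch of that register as data, with illustrative field names and example entries (not a real schema the pod ships):

```kotlin
// Hypothetical model of a breaking-behavior register entry. Field names
// and sample entries are illustrative, not a prescribed format.
data class BehaviorChange(
    val api: String,            // the platform behavior that changes
    val activatesAtSdk: Int,    // target SDK level that turns it on
    val touchesOurCode: Boolean,
    val owner: String           // who closes the item before the cutoff
)

// The changes that block a migration to a given target SDK: everything
// that activates at or below the new level and touches the codebase.
fun blockingChanges(register: List<BehaviorChange>, targetSdk: Int): List<BehaviorChange> =
    register.filter { it.activatesAtSdk <= targetSdk && it.touchesOurCode }
```

In practice the register lives next to the code so the quarterly review can diff it, and `blockingChanges` is what gates the "feature work pauses here" decision.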

Ship a Foldable or large-screen tablet experience

The product is going onto Galaxy Z Fold, Pixel Fold, the Pixel Tablet, or a fleet of B2B Samsung tablets, and the existing layout was built phone-first. The pod adopts window size classes, the canonical layouts (list-detail, supporting pane, feed), an adaptive navigation rail, screenshot-test baselines, and a written quality bar (Play's large-screen quality tier expectations). Foldables additionally earn a postured-state handler — the half-folded camera handoff, the dual-pane email reading mode — that is sized inside a one-page brief.

Wear OS, Android Auto, or Android XR companion

The roadmap earns a companion. The pod requires the one-page brief: surface, MVP feature, parent-app fallback when the surface is offline or unpaired, launch-week marketing message, owner. Wear OS companions are built in Compose for Wear with Health Services and paired through the Wearable Data Layer; Android Auto follows the driver-distraction templates without improvisation; Android XR is treated as an experiment with its own sprint zero. We will refuse to start the work without the brief signed.

Performance fix on Android Go

The lowest-end device producing 5 percent of sessions is a 2 GB RAM Android Go handset; cold start is 4-to-6 seconds; the APK is over 80 MB; R8 has not been re-tuned since 2022; baseline profiles are absent. The pod ships R8 retuning, baseline-profile generation against the parent screen, dynamic feature modules, image-format migration to AVIF or WebP, layout flattening, and macrobenchmark regression on a real Go handset gating every PR. Cold start drops below 2.0 seconds; install size halves; the one-star reviews stop arriving from low-tier OEMs.
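The "macrobenchmark regression on a real Go handset gating every PR" step comes down to a p95 check against the cold-start budget. A sketch of that gate as plain decision logic, with the 2.0-second budget from this page and all names illustrative (the real numbers come out of a Jetpack Macrobenchmark run, not this function):

```kotlin
// Hypothetical cold-start gate. Samples would come from a macrobenchmark
// run on the Android Go reference handset; names are illustrative.
data class StartupRun(val deviceTier: String, val coldStartMs: List<Long>)

// p95 over a list of timings: sort, then take the value at the
// ceil(n * 0.95)-th position (1-based), clamped to the list bounds.
fun p95(samples: List<Long>): Long {
    require(samples.isNotEmpty()) { "need at least one sample" }
    val sorted = samples.sorted()
    val index = ((sorted.size * 95 + 99) / 100 - 1).coerceIn(0, sorted.size - 1)
    return sorted[index]
}

// The PR merges only if p95 cold start stays under the budget.
fun coldStartGatePasses(run: StartupRun, budgetMs: Long = 2_000): Boolean =
    p95(run.coldStartMs) < budgetMs
```

The point of expressing the gate this way is that it is deterministic: the same benchmark output always produces the same merge decision, so nobody argues the budget on a Friday release call.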

Play Store policy compliance and App Bundle migration

The app is shipped as an APK on Play, dynamic feature modules are not in use, the data-safety form is two years old, the permissions declaration drifted, the Personal Loan policy or Family Policy or Health Apps policy applies and the team has not re-read it since the last update. The pod migrates to the Android App Bundle, regenerates the data-safety form from a YAML source-of-truth that lives in the repo, redrafts the permissions declaration, and walks the policy-specific requirements in writing before the next submission.

Play Integrity attestation rollout for fraud

The fraud team needs a stronger device-trust signal for high-value flows (lending decisions, account creation under regulatory scrutiny, payment authorisation). The pod gates only the high-value action, integrates the Integrity verdict into the existing fraud-decision pipeline alongside behavioural signals, and instruments a side channel for legitimate users on rooted phones, custom ROMs, or Huawei devices without GMS. Done well, false-positive lockouts stay below 0.1 percent of attestation calls without weakening the fraud decision.

How a Siblings Android pod actually ships — the Play Console release train

Android teams that ship reliably share one boring trait: the release pipeline runs whether or not the urgency dial is at maximum. Internal builds go to Play internal testing every Monday without anyone touching Android Studio. Closed testing cuts every two weeks for 50 to 200 testers. Open testing is reserved for the riskiest releases. Production submission lands inside an agreed release window. Staged rollout halts and rolls back if the crash-free or ANR gate breaks. The on-call rotation knows what the rejection runbook says before it has to read it.

Play Console release pipeline owned by a Siblings Android pod. Feature branch commit feeds a CI build that runs lint, unit and Espresso tests, macrobenchmarks and R8 shrink, then a Fastlane supply lane signs an Android App Bundle with Play App Signing and uploads it. The bundle moves through Play internal testing weekly, into closed testing every two weeks for 50 to 200 testers, into open testing when the release is risky, and finally into production with a staged rollout at 5, 10, 25, 50 and 100 percent. Crash-free sessions and ANR rate gates pause the next stage; a kill-switch feature flag and a rollback build sit on the same release branch from the day the branch is cut, and an in-app update prompt is wired for the critical hotfix path.

Release-train cadence

Internal testing weekly — promoted automatically from CI, no manual touch points. Closed testing every two weeks (50 to 200 testers, real Android handsets, Espresso + macrobenchmark green on the device lab before promotion). Open testing for the risky releases only (Play store-listing experiments, target SDK migrations, large refactors). Production submission inside the agreed release window with managed publishing on. Staged rollout at 5 / 10 / 25 / 50 / 100 percent, halt-and-rollback if crash-free or ANR drops, kill-switch flag and rollback build on the same release branch from day one.
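The halt-and-rollback rule in that cadence is deliberately mechanical. A sketch of it as pure decision logic, using the crash-free and ANR budgets from this page; the names are illustrative stand-ins, not any real Play Console API:

```kotlin
// Hypothetical staged-rollout gate. Thresholds mirror the budgets on
// this page; in production the vitals come from Play Console reporting.
enum class RolloutAction { PROMOTE, HOLD, ROLLBACK }

data class ReleaseVitals(
    val crashFreeSessionsPct: Double, // e.g. 99.62
    val anrRatePct: Double            // e.g. 0.31
)

val rolloutStages = listOf(5, 10, 25, 50, 100)

fun nextRolloutAction(
    currentStagePct: Int,
    vitals: ReleaseVitals,
    crashFreeFloor: Double = 99.5,
    anrCeiling: Double = 0.47
): RolloutAction = when {
    // A broken gate at any stage means rollback, not a pause.
    vitals.crashFreeSessionsPct < crashFreeFloor -> RolloutAction.ROLLBACK
    vitals.anrRatePct > anrCeiling -> RolloutAction.ROLLBACK
    currentStagePct >= rolloutStages.last() -> RolloutAction.HOLD // fully out
    else -> RolloutAction.PROMOTE
}
```

Writing the gate down as code rather than judgment is what lets the on-call engineer roll back at 2am without a meeting.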

Crash, ANR, and on-call

Crash-free sessions above 99.5 percent on the latest two releases is the default budget. ANR rate under 0.47 percent (the Play vitals threshold) on every supported handset. p95 cold start under 2.0 seconds on the lowest-end Android Go device producing 5 percent of sessions. The pod runs a 24/5 on-call rotation through major release windows and a written paging policy the rest of the time. We refuse to be paged on code we did not write or a release pipeline we do not own.

The device matrix sits behind the release train. The QA automation engineer cuts it from your real-user analytics, refreshes it quarterly, and runs the regression on real OEM hardware before any release leaves closed testing.

Android device-matrix coverage map split into five tiers and three test environments. Reference Pixels carry stock Android behavior; Samsung One UI carries the largest combined OEM share with Galaxy S and Galaxy A handsets plus foldables when in scope; the volume row carries Xiaomi MIUI and HyperOS, Oppo ColorOS, Vivo Funtouch, Motorola, and Realme handsets that dominate Latin America, India, Africa, and Southeast Asia, including the bottom-tier 4 GB RAM device the install base actually carries; an Android Go reference handset catches the cold-start regression nobody on the team owns a real device for; the form-factor row carries foldables, large-screen tablets, Wear OS, Android Auto and Android TV when the product earns those surfaces. Three columns split coverage between CI emulators, an in-house device lab, and a managed cloud lab on BrowserStack or Sauce Labs.
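The quarterly matrix cut itself is a simple greedy selection: keep adding the highest-share models from real-user analytics until cumulative session coverage hits a target, then force-include the reference handsets regardless of share. A sketch under those assumptions, with all device names and the 90 percent target purely illustrative:

```kotlin
// Hypothetical device-matrix cut from session analytics. Coverage target
// and reference-device names are illustrative.
data class DeviceShare(val model: String, val sessionPct: Double)

fun cutDeviceMatrix(
    analytics: List<DeviceShare>,
    coverageTargetPct: Double = 90.0,
    alwaysInclude: Set<String> = setOf("Pixel 8", "Android Go reference")
): List<String> {
    val picked = linkedSetOf<String>()
    var covered = 0.0
    // Greedy: highest session share first, until the target is covered.
    for (device in analytics.sortedByDescending { it.sessionPct }) {
        if (covered >= coverageTargetPct) break
        picked += device.model
        covered += device.sessionPct
    }
    // Reference handsets ride along regardless of their share.
    picked += alwaysInclude
    return picked.toList()
}
```

The force-include set is the part prior vendors skip: the Pixel carries stock-AOSP behavior and the Go handset carries the cold-start regression, and neither earns its place on session share alone.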

Store-rejection runbook

The runbook covers Play's most-cited grounds: Background Location, Permissions Declaration, Foreground Service classification on Android 14 and 15, Personal Loan policy, Health Apps policy, Family Policy, Repetitive Content, Deceptive Behavior. On a rejection, the pod reads reviewer notes inside the first business hour, drafts the response same-day, ships the second build off a pre-built release branch, and resubmits inside 24 hours when the policy fix is small enough. When the rejection points at a real policy issue we will not paper over (Personal Loan disclosures, data-safety honesty, deceptive UX), we will say so on the same call.

Internal testing track and store-listing tests

Internal testing is the boring weekly rhythm; Play store-listing experiments are the loop that earns conversion. The pod owns the full loop: hypothesis, asset production, custom store-listing draft, test launch, weekly readout, written stop criterion sitting in the same git repo as the code, and the keep-or-kill decision recorded next to the data. The most expensive store-listing tests we have inherited from prior vendors were the ones with no written stop criterion.

Onboarding (3 weeks to first deploy)

Discovery 3–5 days (read-only repo walk, analytics audit, OEM device-matrix proposal, written modernisation memo, target SDK calendar). Team assembly 5–10 days (paired technical sessions; tech-lead candidates run a live Play Console rejection review on an anonymised sample, not a coding kata). Sprint zero (week 2–3): CI access, Play Console roles, Play App Signing enrollment, internal testing track wired, Fastlane supply lanes merged. Sprint one (week 3–4): first quick wins land — cold-start regression closed, weekly internal testing building, data-safety form drafted from YAML.

Target SDK and modernisation reviews

The pod runs a quarterly target SDK and modernisation review that reads the platform owner's published deprecation calendar (not the marketing schedule). The review produces an honest list of upcoming behavior changes that touch your code and a budget allocation as a constant percentage of every sprint. The buyers we have inherited from a prior vendor were the buyers who treated modernisation as optional.

What 2025–2026 changed for Android programs

An Android pod that signed in early 2024 and stayed asleep through the rest of the year would be six months behind today. The platform changed across at least five fronts that touch every program we run; if a vendor pitching you an Android team cannot describe each one without checking notes, they are pitching the team they had eighteen months ago. Below is the calendar we read every week and the buyer's-side translation we put in writing during sprint zero.

Play target API level 35 and the August 31, 2025 cutoff

New apps and updates submitted to Play after 31 August 2025 must target API level 35 (Android 15). Pre-existing apps that miss the deadline drop out of Play discovery for new installs on devices running newer Android versions, then become unavailable to new users entirely after the grace window. The behavior changes that broke the most Android apps we audited in late 2025: edge-to-edge layouts forced on by default (the status bar and gesture-nav bar now overlap content unless you opt in to insets), elastic over-scroll on lists, the 16 KB page-size requirement on native libraries shipping with NDK r28+, and the new foreground-service-type enforcement. The pod treats the calendar as the regulatory deadline it is and refuses to ship a feature sprint that does not first close out the deprecation register.

Play Integrity Standard tier and verdict caching

Throughout 2024 and into 2025 Google rolled the Play Integrity API into Standard and Classic request tiers, with an explicit per-app daily quota on Classic requests and a strong push toward Standard. The right pattern in 2026: Standard verdicts on cheap, frequent calls (session start, refresh) and Classic verdicts cached for the high-value action (loan decision, payment authorisation, withdrawal). Apps that ship Classic on every screen burn the quota by the third week and lock real users out at peak. The pod owns the cache policy, the side-channel for legitimate users on rooted phones / Huawei devices without GMS, and the integration into the existing fraud-decision pipeline so a single Integrity miss does not lock a real user out.
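The cache policy the pod owns can be sketched as a small TTL wrapper: routine calls always take the Standard path, and the high-value action reuses a Classic verdict while it is still fresh. This is an illustrative stand-in, not the real Play Integrity client surface, and the 15-minute TTL is an assumed example:

```kotlin
// Hypothetical verdict-routing sketch. Class and method names are
// illustrative; the real integration goes through the Play Integrity API
// client and your fraud-decision pipeline.
import java.time.Duration
import java.time.Instant

enum class VerdictTier { STANDARD, CLASSIC }

data class CachedVerdict(val tier: VerdictTier, val fetchedAt: Instant)

class VerdictCache(private val classicTtl: Duration = Duration.ofMinutes(15)) {
    private var classic: CachedVerdict? = null

    /** Cheap, frequent calls (session start, refresh) always use Standard. */
    fun tierForRoutineCall(): VerdictTier = VerdictTier.STANDARD

    /**
     * High-value actions reuse a fresh cached Classic verdict instead of
     * burning the per-app daily Classic quota on every screen.
     */
    fun classicNeedsRefresh(now: Instant): Boolean {
        val cached = classic ?: return true
        return Duration.between(cached.fetchedAt, now) > classicTtl
    }

    fun storeClassic(now: Instant) {
        classic = CachedVerdict(VerdictTier.CLASSIC, now)
    }
}
```

The TTL is a product decision, not a technical one: long enough that the loan flow does not re-attest mid-session, short enough that a device compromised after attestation does not coast on a stale verdict.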

Jetpack Compose adaptive layouts went GA

Compose Material 3 Adaptive shipped its 1.0 line in 2024 and stabilised through 2025, which finally retired the home-grown window-size-class plumbing every Foldable-aware codebase carried. The canonical layouts — ListDetailPaneScaffold, SupportingPaneScaffold, NavigationSuiteScaffold — now cover the dual-pane foldable, the large-screen tablet, and the navigation-rail desktop variant out of the box. Apps that still ship a hand-rolled adaptive harness in 2026 are paying maintenance interest on a problem the framework solved. Migration is a sprint of focused work per surface, not a quarter-long rewrite.

Predictive back, edge-to-edge, and the polished back gesture

Predictive back (the gesture that previews the destination during a back-swipe before the user commits) became the default on Android 15. Apps that intercept back without registering a callback in the Activity get a janky flash; apps with custom navigation that did not opt into the new API see broken animations on Pixels and Samsung One UI. The pod owns the BackHandler audit on every screen the user can navigate away from, the predictive-back animation for the first-class destinations, and the QA matrix to confirm the flow on Galaxy Z Fold and Pixel 8 Pro before any production rollout.

AI coding assistants now sit inside almost every Android repo

Of the Android engagements we ran in late 2025 and 2026, almost every one shipped with at least one AI coding assistant (Cursor, Claude Code, GitHub Copilot, or Android Studio's own Gemini integration) sitting inside the pull-request loop. The skill the pod actually screens for has shifted from raw output to judgment under AI assistance: rejecting an AI-generated diff that introduces a Compose recomposition leak, writing the macrobenchmark that proves the suggested R8 keep rule actually preserves the call site, refusing the suggestion to widen a permission scope because Gemini did not know about your data-safety form. The pod brings written rules for AI-assisted code review into sprint zero rather than discovering them on the first noisy PR.

Android XR, Wear OS 5, and the form-factor calendar

Android XR launched in late 2024 as a Compose-first surface; the developer preview is on the table for buyers building media, fitness, or productivity surfaces. Wear OS 5 (Android 14 base) shipped in the second half of 2024 with Health Services improvements that broke a handful of older companion apps. We treat each new form factor as a sized roadmap item with a written brief, a launch-week marketing message, and a parent-app fallback for when the surface is offline or unpaired. We will refuse to start the work without the brief signed; that refusal has saved more programs than any single technology decision in the last eighteen months.

For the calendar we keep open every week: Android 15 behavior changes, the Play target API requirement, and the Play Integrity API documentation. Internal wikis are not allowed to disagree with these without a written exception.

Engagement models and what an Android pod costs

Pricing for a dedicated Android pod is monthly and predictable. The numbers below sit inside the broader dedicated development team bracket (USD 12K–60K/month). The Android pod sits in the lower-middle of that band — a touch below the dual-platform mobile pod (which carries an iOS specialist and an extra QA seat for the second store) and substantially below two parallel single-platform pods.

Lean Android pod

USD 16K–24K / month

Four seats. Android tech lead, two Kotlin and Compose engineers, shared QA automation engineer, fractional mobile DevOps. Fits a stable Android org adding a target SDK migration, a Foldable layout pass, or a single-quarter modernisation. Initial 3-month commitment, then month-to-month. 2-week satisfaction guarantee on every seat.

Steady-state Android pod

USD 22K–36K / month

Six seats. Android tech lead, two senior Kotlin / Compose engineers (one carrying platform depth), one second-screen specialist when the roadmap earns it, dedicated QA automation engineer with the OEM device lab, part-time mobile DevOps. The default shape and the one most of our Android pods live in. Owns the Play Console release train, the rejection runbook, the device matrix, and a 24/5 on-call rotation through major release windows.

Heavy Android pod

USD 36K–48K / month

Seven to eight seats. Adds a second platform-depth engineer (for fintech with Play Integrity + biometric flows, IoT companion with BLE depth, automotive with Auto + Automotive OS), a designer or fractional mobile architect, and an on-call rotation that covers a managed pre-install programme. Used where the native depth on Android is genuinely a full-time roadmap.

A 2-week satisfaction guarantee covers every seat. Scaling down requires 30 days' notice; scaling up takes one to two weeks per role. Project-based engagements (a 13-to-16-week Android launch program, a target SDK migration, a Compose modernisation) typically run between USD 30K and USD 95K depending on scope. For a single specialist embedded inside your existing rituals, the Android staff augmentation route runs USD 4K–9K/month per engineer.

How this compares to in-house, freelancers, agencies, and the dual-platform mobile pod

The conversation below is the one we have on most Android-only discovery calls. None of these are wrong shapes universally. Each one is right inside its band and bad outside it.

Hiring senior Android engineers in-house

Wins on long-term retention, deep domain ownership, and lower run-rate cost beyond year two. Loses on time-to-first-deploy (six to nine months in most US and Western European metros for a senior Kotlin hire with Compose, foreground-service, and Play Integrity experience), on the OEM device lab and Fastlane infrastructure that comes free with a pod, and on the bench depth that absorbs an unexpected resignation in week eight without slipping a target SDK deadline. Most of our pods sit alongside a small in-house Android team that owns the long arc; we own the device lab and the rejection runbook.

Freelance Android crew assembled from marketplaces

Wins on hourly rate and on the speed of the first contract signature. Loses on the eight artefacts an Android pod actually owns: nobody on a freelance crew owns the Play Console release pipeline, the rejection runbook, the OEM device matrix, the Play App Signing enrollment, or the staged-rollout policy. The screens ship; the program does not. We have inherited two engagements where the freelance crew shipped twelve sprints of feature work and the Play upload key still lived on a single contractor's personal laptop.

Single-vendor Android agency

Wins on speed of project setup and on the polished pitch deck. Loses when the agency is a generalist mobile shop that happens to take Android work (most are; the rejection runbook is shallow because the senior partner is iOS-leaning), when the engagement is structured as a fixed-bid project rather than a long-running pod, and when the OEM-skin parity log is a Confluence page nobody updated since 2022. Pick an agency for a fixed-bid launch; pick a pod for the program after launch.

Dual-platform mobile pod (iOS + Android together)

Wins when iOS and Android together are roughly nine in ten of how customers reach the product on consumer surfaces, web is light, and the buyer wants one cadence on both stores. Loses when Android is genuinely 90 percent or more of sessions, when the deep work is Android-specific (Play Integrity, foreground services, OEM-skin push, Foldables, Wear OS, Android Auto), or when an OEM partnership makes iOS irrelevant. Paying the iOS specialist seat for a 96-percent-Android product is a tax on the deep Android work the program actually needs.

Mini case study — Larkspur Microcredit, Android Go performance and Play Integrity rebuild

A Mexico-headquartered microcredit app where the install base was 96 percent Android and iOS was a marketing afterthought.

Larkspur Microcredit is a Mexico City-headquartered consumer microcredit and savings app serving roughly 1.2 million active borrowers across Mexico, Colombia, and Peru. Personal-loan tickets average USD 180; repayment cycles are biweekly; the borrower base skews toward unbanked or underbanked customers on lower-end Android handsets. By the engineering team's own analytics, 96 percent of monthly sessions came from Android: Samsung A-series and Motorola G handsets dominated, Xiaomi Redmi sat at about 18 percent of sessions, and an Android Go reference handset (Samsung A04, 2 GB RAM) accounted for nearly 7 percent on its own. iOS was 2.4 percent and dropping; the iOS surface had not shipped a meaningful release in eleven months.

The Android app had drifted hard. Cold start on the Samsung A04 was around 5.8 seconds against a written budget of 2.0 seconds. APK size had grown to 84 MB after years of dependency creep, R8 had not been retuned since 2022, baseline profiles were absent, and dynamic feature modules were not in use. Crash-free sessions sat at 98.1 percent and ANR rate at 1.3 percent on Xiaomi handsets where MIUI's aggressive battery optimisation was killing the foreground service that ran the bureau-pull during repayment scoring. FCM push delivery on Xiaomi MIUI sat at 76 percent against a target above 95 percent, which the marketing team was already paying for in lost reactivation. Play had flagged a Personal Loan policy review for the next submission cycle, the data-safety form was 19 months old, the target SDK was two majors behind the Play deadline, and the fraud team needed Play Integrity attestation on the loan-decision flow before the next regulatory audit.

A six-seat Siblings Android pod was placed alongside the internal team of two engineers and one product owner: an Android tech lead, two senior Kotlin and Compose engineers (one carrying R8 + baseline-profile + dynamic-feature-module work, one carrying foreground services + Play Integrity + biometric depth), a flex second-screen specialist who joined for sprints six through nine for a Wear OS notification companion, a dedicated QA automation engineer running a real OEM device lab (Samsung A04, A14, A24, Xiaomi Redmi Note 12, Oppo A78, Vivo Y56, Motorola G54), and a part-time mobile DevOps engineer at 2.5 days a week on Fastlane supply, Play App Signing, and the staged-rollout policy. Charter: stop the bleed in the first three sprints, hit the Play target SDK cutoff, ship Play Integrity to the loan-decision flow without locking out legitimate users, and repair MIUI / ColorOS push delivery before the marketing team launched the Q3 reactivation campaign.

Sprint zero produced the rejection runbook, the OEM device-matrix proposal cut from real-user analytics, the data-safety YAML source-of-truth, the target SDK calendar, and a Fastlane supply lane that took the release pipeline off the only laptop in the office that could ship a build. Sprints one through three retired the deprecations that the Play target SDK cutoff required, classified foreground services correctly for Android 14, migrated to Photo Picker for camera intake during loan verification, and rebuilt the consent flow for the Personal Loan policy disclosures. Sprints four through seven ran the performance program on Android Go: R8 retuning, baseline profile generation against the parent screen, dynamic feature modules splitting the savings flow and the loan flow into on-demand modules, AVIF image migration on the marketing screens, and a layout flattening pass that cut the parent inflation cost. Sprints eight through eleven shipped the Play Integrity rebuild against the loan-decision pipeline, instrumented a side channel for users on rooted phones and Huawei devices without GMS, and rolled out the OEM-aware in-app battery-whitelist guidance flow that lifted MIUI push delivery. Sprints twelve through fourteen shipped the Wear OS repayment-reminder companion, the Foldable layout pass for the Galaxy Z Flip share of the install base, and a Custom Store Listing experiment that lifted Play install-to-first-application conversion.

Headline numbers across the first fourteen sprints. Cold start on the Samsung A04 5.8s → 2.1s. APK size 84 MB → 28 MB after R8 retuning + dynamic feature modules + AVIF migration. Crash-free sessions 98.1 percent → 99.7 percent. ANR rate 1.3 percent → 0.31 percent on Xiaomi handsets after the foreground-service classification fix. Play Integrity false-positive lockout rate on the loan-decision flow held below 0.03 percent of attestation calls without weakening the fraud decision. FCM push delivery on Xiaomi MIUI 76 percent → 96 percent after the in-app battery-whitelist guidance and the high-priority-FCM reservation policy. Play Store rating 3.4 → 4.5 over the eight weeks after the first stable release. Play target SDK cutoff hit nineteen days before the deadline. Custom Store Listing experiment lifted install-to-first-application conversion by a measured percentage on the marketing campaign window. Engagement cost ~USD 26K/month for the six-seat pod across the fourteen sprints; the internal team stayed and shipped against the ride-along feature roadmap throughout.

What we'd do differently next time: spend two extra discovery days writing the OEM-aware battery-whitelist flow before the Play Integrity rebuild lands. The MIUI delivery problem and the Play Integrity rollout were treated as separate workstreams in the original plan; in production, a user who refused the battery-whitelist prompt would also fail Integrity attestation more often because the device-trust scoring picked up the background-restriction signal. A combined OEM-aware onboarding flow would have caught that interaction in the first sprint instead of in week ten.

Engagement at a glance

  • Industry: LATAM consumer microcredit / savings, regulated
  • Surfaces: Android phone (96% of sessions), Wear OS companion mid-engagement
  • Stack: Kotlin, Jetpack Compose, Hilt, Coroutines, Flow, Room, WorkManager, Play Integrity, FCM, R8, baseline profiles, dynamic feature modules, Fastlane supply
  • Pod shape: 6 seats — tech lead, 2 senior Kotlin engineers, second-screen specialist (sprints 6–9), QA + OEM device lab, fractional mobile DevOps
  • Duration: 14 sprints (28 weeks)
  • Cold start (Samsung A04): 5.8s → 2.1s
  • APK size: 84 MB → 28 MB
  • Crash-free Android: 98.1% → 99.7%
  • MIUI push delivery: 76% → 96%
  • Play target SDK: hit 19 days before deadline
  • Engagement cost: ~USD 26K/month

For published numbers on a different LATAM commerce engagement, read the Bari wholesale portal case study.

Realistic use cases the pod ships against

The use cases below show up on roughly four out of five Android programs we sign. Each one earns its own engineering budget because each one is the kind of work that fails silently on launch day if the pod has not lived through it before.

Offline-first sync for spotty connectivity

The borrower is on the bus with patchy 3G; the technician is in a basement Faraday cage; the driver is on a stretch of highway between cell towers. Offline-first on Android is not a flag on a Retrofit call; it is a queue with conflict resolution backed by Room, a write-ahead log persisted across process restarts, an exponential backoff strategy aware of WorkManager constraints, and a UI that clearly distinguishes local state from server-confirmed state. Every Android pod we ship has shipped at least one offline-first surface, because the alternative is a screen that lies to the user about whether their input was saved.
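A minimal sketch of the retry-delay and queue-state pieces, in plain Kotlin so they stay unit-testable off-device. The constants and type names are illustrative, not a prescription; WorkManager's documented minimum backoff is 10 seconds, which is where the base delay comes from.

```kotlin
import kotlin.math.min

// Illustrative constants: WorkManager's documented minimum backoff is 10s;
// the 5-minute cap is an assumption, tune it to your connectivity profile.
val BASE_DELAY_MS = 10_000L
val MAX_DELAY_MS = 5 * 60_000L

// Exponential backoff with a cap, mirroring the delay applied between
// retry attempts for a queued offline write.
fun retryDelayMs(attempt: Int): Long =
    min(BASE_DELAY_MS shl attempt.coerceAtMost(16), MAX_DELAY_MS)

// Each queued write carries its own sync state, so the UI can distinguish
// local-only input from server-confirmed state instead of lying to the user.
enum class SyncState { PENDING, IN_FLIGHT, CONFIRMED, FAILED }
data class QueuedWrite(val id: Long, val payload: String, val state: SyncState)
```

On-device, the queue rows live in Room and the retry is scheduled through WorkManager with network constraints; keeping the delay math and the state enum pure makes both testable in plain JVM unit tests.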

Foreground services with Doze and App Standby discipline

Android 14 tightened foreground-service classification; Android 15 tightened it again. A foreground service that runs a bureau pull during loan scoring, a fleet-tracking ping during a delivery, or a step-counting service for a fitness app must declare an exact use-case (dataSync, mediaPlayback, location, connectedDevice, mediaProjection, camera, microphone, phoneCall, remoteMessaging, shortService, systemExempted) on the manifest, register the service correctly, and survive Doze, App Standby, and OEM-skin background restrictions. The pod owns the classification matrix and the regression on real OEM handsets that enforce different battery rules.
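As a concrete sketch of the classification in practice, the manifest declaration for a dataSync foreground service on Android 14+ might look like this (the service class name is illustrative):

```xml
<!-- Sketch: Android 14+ requires both the type on the <service> element
     and the type-specific permission. Names are illustrative. -->
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE_DATA_SYNC" />

<service
    android:name=".sync.BureauPullService"
    android:foregroundServiceType="dataSync"
    android:exported="false" />
```

The runtime call to startForeground must then pass the matching type, or Android 14+ throws at service start; that mismatch is exactly what the real-handset regression catches.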

BLE scanning at scale and geofencing

Bluetooth scanning on Android changed materially with Android 12 (BLUETOOTH_SCAN replacing the location-permission entanglement) and continues to drift on OEM skins. Fleet trackers, retail beacons, hospital asset-tracking apps, micromobility unlocking flows, and OBD-II dongle programmes all run into the same trio of issues: scan quotas under battery saver, GATT timeouts on Samsung, location-permission UX that legitimately confuses users. The pod owns the state machine, the background reconnection strategy, the GATT timeout handling, and the in-app rationale flow that keeps the legitimate user permitting the scan.
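One way the pod keeps that state machine honest is to model it in plain Kotlin, free of Android dependencies, so the transition table is testable off-device. The states, events, and retry policy below are illustrative:

```kotlin
// Illustrative BLE connection state machine. On-device, events come from
// ScanCallback and BluetoothGattCallback; here they are plain values so the
// transitions can be asserted in JVM unit tests.
sealed interface BleState
object Idle : BleState
data class Scanning(val attempt: Int) : BleState
data class Connecting(val attempt: Int) : BleState
object Connected : BleState
data class BackingOff(val attempt: Int) : BleState

sealed interface BleEvent
object StartScan : BleEvent
object DeviceFound : BleEvent
object GattConnected : BleEvent
object GattTimeout : BleEvent   // e.g. a GATT timeout seen on some Samsung handsets
object LinkLost : BleEvent
object BackoffElapsed : BleEvent

fun transition(state: BleState, event: BleEvent): BleState = when {
    state is Idle && event is StartScan -> Scanning(attempt = 0)
    state is Scanning && event is DeviceFound -> Connecting(state.attempt)
    state is Connecting && event is GattConnected -> Connected
    state is Connecting && event is GattTimeout -> BackingOff(state.attempt + 1)
    state is BackingOff && event is BackoffElapsed -> Scanning(state.attempt)
    state is Connected && event is LinkLost -> Scanning(attempt = 0)
    else -> state  // unhandled events leave the state unchanged
}
```

The attempt counter feeds the backoff delay; the scan-quota and battery-saver rules then decide whether a given Scanning state is allowed to start a hardware scan at all.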

Play Integrity, payment apps, and PCI-adjacent flows

Payment apps and lending apps live with Play policies that other categories do not see (Personal Loan, Financial Services, Health Apps), with PCI scope that the team must respect on the Android client side, with a fraud team that needs a strong device-trust signal, and with the OEM-skin variance that determines whether the foreground service that pulls a payment-bureau decision actually completes. The pod treats Play Integrity as a fraud signal, not a launch gate, and it treats the Play policy text as the contract the engineering team reads before the marketing team designs a campaign.
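A sketch of the "signal, not gate" pattern: the Integrity verdict joins the other fraud signals instead of deciding alone, so a single miss raises friction rather than locking the user out. Field names and thresholds are illustrative, not any client's real pipeline:

```kotlin
// Illustrative fraud-decision join. meetsDeviceIntegrity stands in for the
// device-integrity field of a Play Integrity verdict; the other signals and
// all thresholds are assumptions for the sketch.
data class FraudSignals(
    val meetsDeviceIntegrity: Boolean,
    val behaviourScore: Double,   // 0.0 (clean) .. 1.0 (fraudulent)
    val deviceReputation: Double  // 0.0 (bad) .. 1.0 (trusted)
)

enum class Decision { APPROVE, STEP_UP, MANUAL_REVIEW }

fun decide(s: FraudSignals): Decision = when {
    // An Integrity miss plus bad behaviour goes to a human, not a hard block.
    !s.meetsDeviceIntegrity && s.behaviourScore > 0.7 -> Decision.MANUAL_REVIEW
    // An Integrity miss alone only raises friction (step-up verification).
    !s.meetsDeviceIntegrity -> Decision.STEP_UP
    s.behaviourScore > 0.9 && s.deviceReputation < 0.2 -> Decision.MANUAL_REVIEW
    else -> Decision.APPROVE
}
```

The side channel for rooted phones and GMS-less devices is then just a support path hanging off STEP_UP and MANUAL_REVIEW, not a separate code path in the client.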

In-app updates, App Bundles, and dynamic feature modules

Forced updates land harder on Android than on iOS because the user has no implicit "you must update to keep using" gate. The pod ships in-app updates wired correctly for the critical hotfix path (immediate update, flexible update, or none, deliberately chosen per release), splits the install footprint into dynamic feature modules where the analytics show low usage on a feature, and migrates the app to the Android App Bundle so Play can deliver the right ABI / density / language split. Done well, install size halves and the cold-start budget on Android Go finally fits.
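The per-release choice of update mode can be made explicit rather than tribal. A hedged sketch: the enum mirrors the immediate/flexible modes of Play's in-app updates API plus "none", and the release fields are illustrative inputs we assume a release checklist would carry:

```kotlin
// Illustrative per-release update-mode decision. On-device, IMMEDIATE and
// FLEXIBLE map to the corresponding AppUpdateType flows of the Play in-app
// updates API; the Release fields are assumptions for the sketch.
enum class UpdateMode { IMMEDIATE, FLEXIBLE, NONE }

data class Release(
    val fixesCriticalSecurityIssue: Boolean,
    val fixesReleaseBlockingCrash: Boolean, // crash-free dipped below the gate
    val staleVersionsBehind: Int            // versions between user and current
)

fun updateModeFor(r: Release): UpdateMode = when {
    r.fixesCriticalSecurityIssue -> UpdateMode.IMMEDIATE
    r.fixesReleaseBlockingCrash -> UpdateMode.IMMEDIATE
    r.staleVersionsBehind >= 3 -> UpdateMode.FLEXIBLE
    else -> UpdateMode.NONE
}
```

Writing the decision down per release is the point: "immediate" is a blocking interruption the user pays for, so it has to be earned, not defaulted.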

Push at scale, deeplinks for marketing

Push at scale on Android is FCM, plus the OEM-skin layer the documentation pretends does not exist. High-priority FCM reserved for messages that earn it, foreground-service classification for the campaigns that need a wake, OEM-aware battery-whitelist guidance for MIUI / HyperOS / ColorOS / Funtouch / EMUI / Origin OS, deferred-deeplink reliability through Firebase Dynamic Links or App Links + a backend reconciliation, Play store-listing experiments wired to the same campaign UTM scheme. The pod owns the deliverability dashboard the marketing team reads before the campaign, not after.
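The high-priority reservation shows up concretely in the FCM HTTP v1 payload. A campaign message that has earned high priority might look like this (token, channel, and data values are illustrative):

```json
{
  "message": {
    "token": "<device-token>",
    "android": {
      "priority": "HIGH",
      "notification": {
        "title": "Repayment due Friday",
        "body": "Tap to review your schedule.",
        "channel_id": "repayment_reminders"
      }
    },
    "data": { "campaign": "q3_reactivation" }
  }
}
```

Routine sync stays on "NORMAL" priority; OEM skins that observe an app spamming HIGH start down-ranking everything it sends, which is how deliverability quietly dies.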

Risks specific to Android programs and how the pod mitigates them

The risks below are the ones that take a healthy Android program off the rails. None of them are unusual; all of them are predictable. The pod ships with a written mitigation for each, agreed in sprint zero, not invented during the next incident.

OEM-skin variance and FCM kill behavior

MIUI, HyperOS, ColorOS, Funtouch, EMUI, Origin OS, and (to a lesser extent) One UI ship aggressive battery-optimisation defaults that kill foreground services and silence FCM notifications. Mitigation: the OEM-skin parity log, the in-app battery-whitelist guidance flow tailored per OEM, the high-priority FCM reservation policy, the daily Push Probe synthetic check from a real device per OEM, the foreground-service classification audit on every release. We have lifted MIUI delivery from 76 percent to 96 percent on real engagements with this exact pattern.
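A sketch of the OEM-detection piece of that guidance flow, written as a pure function over the manufacturer string (on-device this would come from Build.MANUFACTURER) so the mapping stays testable off-device. The brand-to-skin table is illustrative and has to be maintained against the real install base:

```kotlin
// Illustrative mapping from manufacturer string to the OEM skin whose
// battery-whitelist screen the in-app guidance flow should walk the user to.
enum class OemSkin { MIUI_HYPEROS, COLOROS, FUNTOUCH_ORIGIN, EMUI, ONE_UI, STOCK }

fun skinFor(manufacturer: String): OemSkin =
    when (manufacturer.lowercase()) {
        "xiaomi", "redmi", "poco" -> OemSkin.MIUI_HYPEROS
        "oppo", "realme", "oneplus" -> OemSkin.COLOROS
        "vivo", "iqoo" -> OemSkin.FUNTOUCH_ORIGIN
        "huawei", "honor" -> OemSkin.EMUI
        "samsung" -> OemSkin.ONE_UI
        else -> OemSkin.STOCK  // default: no OEM-specific guidance shown
    }
```

Each skin then maps to a short, screenshot-backed walkthrough of that OEM's battery-whitelist screen; the daily Push Probe verifies the walkthrough still matches what the OEM actually ships.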

Target SDK compliance windows and Play deadlines

Play target API levels move every year; missing the deadline hides the app from new users on devices running newer Android versions. Mitigation: the target SDK calendar maintained in source from week one, the deprecation register and the breaking-behavior register updated every quarter, modernisation budget allocated as a constant percentage of every sprint rather than as a one-off project, and regression on real handsets running the new platform version before the cutoff.

Play Store policy churn (Personal Loan, Health, Family)

The Personal Loan policy keeps tightening; the Health Apps policy keeps tightening; the Family Policy keeps tightening. A previously-compliant app fails the next review without warning. Mitigation: the data-safety form authored as YAML in source, a Play policy review on the calendar at every minor release, written category-specific disclosures kept current, and a refusal to ship a build the engineering team cannot defend on the policy text.

Foreground-service classification tightening on Android 14 / 15

Foreground services that worked on Android 13 fail to start on Android 14 and 15 if the use-case classification on the manifest is incorrect. Mitigation: the foreground-service classification matrix authored in source, regression tests that explicitly start each FGS on a real handset already running the target version, and a written exit criterion when a service can no longer justify foreground status (ShortService, JobScheduler, WorkManager, or none of the above).

R8 and ProGuard breakage on a release branch

R8 retuning surfaces serialization breakage and reflection-based crashes that did not exist in debug builds. Mitigation: a release-shaped CI build that runs R8 on every PR (not only on production tags), keep rules updated alongside the dependency that needs them, and a mandatory smoke test on a real device against the release build before the staged rollout starts. We refuse to ship a release build that has only been smoke-tested in debug.
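One way to make the release-shaped CI build concrete, as a sketch in the Gradle Kotlin DSL. The build-type name and the debug-key signing choice are our assumptions, not a prescription:

```kotlin
// Sketch (Gradle Kotlin DSL, module build file): a "ciRelease" build type
// that runs R8 with the production keep rules on every PR, signed with the
// debug key so CI can install it on lab devices.
android {
    buildTypes {
        create("ciRelease") {
            initWith(getByName("release"))
            isMinifyEnabled = true
            isShrinkResources = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
            // Debug signing lets CI and lab devices install the build;
            // production signing stays in the release lane only.
            signingConfig = signingConfigs.getByName("debug")
            matchingFallbacks += "release"
        }
    }
}
```

The point of the extra build type is that serialization and reflection breakage shows up on the PR that introduced it, not on the production tag three weeks later.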

Large-screen quality bar and Foldable scope creep

"We could just add a Foldable layout" is the second-most-expensive sentence in Android (behind "we could just add a Wear OS app"). Mitigation: a one-page brief required for any Foldable, large-screen tablet, Wear OS, Auto, TV, or XR surface, a tech-lead refusal until the brief is signed, a launch-week marketing message agreed before any code is cut, and a written quality bar against Play's large-screen quality tier expectations. Foldables are scheduled inside a release window, not inside a roadmap quarter.

Performance regression on low-end OEMs

The team tests on Pixel 8 and a recent Galaxy S; the lowest-end device producing five percent of sessions is a 2 GB RAM Android Go handset. Mitigation: the device matrix carries the lowest-end Go device, the cold-start budget is measured on that device on every release, the macrobenchmark runs against a real handset on every PR that touches the parent screen, and the regression is release-blocking. Without it, the one-star reviews from low-tier OEMs become the QA process.

Jetpack and Compose dependency drift

Jetpack BOM versions, Compose compiler / runtime alignment, AGP / Kotlin / KSP version interlock, Hilt + Compose Hilt navigation versions — all of these drift if no one is watching. Mitigation: a quarterly dependency review that produces a written upgrade plan, a fixed budget for keeping the parent app within one minor of the current Compose BOM, and a rule that a sprint may not start if the Gradle build is broken on the release branch.
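The BOM alignment described above can be sketched in a module build file; the BOM version shown is illustrative and is exactly the single value the quarterly review bumps:

```kotlin
// Sketch (build.gradle.kts): pin Compose through the BOM so compiler and
// runtime artifacts stay aligned; the BOM coordinate is the one quarterly
// review point. Version shown is illustrative.
dependencies {
    val composeBom = platform("androidx.compose:compose-bom:2024.09.00")
    implementation(composeBom)
    androidTestImplementation(composeBom)

    // No versions on individual Compose artifacts; the BOM resolves them.
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.material3:material3")
}
```

AGP, Kotlin, and KSP still interlock outside the BOM, which is why the quarterly review produces a written upgrade plan rather than a one-line bump.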

OUR STANDARDS

Android programs that ship reliably are boring on the inside.

Definition-of-Done for every release branch: green CI on the device lab matrix, the macrobenchmark green on the lowest-end Android Go handset, the data-safety form regenerated from YAML, a written reviewer-notes file, a kill-switch flag in place, a rollback build sitting on the same branch, the in-app update prompt wired for the hotfix path, and the rejection runbook entry updated for any new policy ground touched in the release.

Crash-free sessions above 99.5 percent on the latest two releases. ANR rate under 0.47 percent on Play vitals. Cold-start under 2.0 seconds on the lowest-end Android Go handset that produces 5 percent of sessions. p95 deeplink resolution under 250ms. FCM push delivery above 95 percent on every supported OEM measured by Push Probe. Accessibility audit (TalkBack, large text, high-contrast) green on every release. None of these are aspirational; they are the gates the staged rollout pauses on.
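The gates read naturally as a single release-blocking check. A sketch in Kotlin, with the thresholds taken from the text and the metric plumbing assumed:

```kotlin
// Illustrative staged-rollout gate. Thresholds match the stated standards;
// how the metrics are collected (Play vitals, Push Probe, macrobenchmark)
// is assumed and lives outside this function.
data class ReleaseVitals(
    val crashFreeSessionsPct: Double,
    val anrRatePct: Double,
    val coldStartGoSeconds: Double,   // lowest-end Android Go handset
    val p95DeeplinkMs: Long,
    val minOemPushDeliveryPct: Double // worst supported OEM, per Push Probe
)

fun rolloutMayProceed(v: ReleaseVitals): Boolean =
    v.crashFreeSessionsPct >= 99.5 &&
    v.anrRatePct < 0.47 &&
    v.coldStartGoSeconds < 2.0 &&
    v.p95DeeplinkMs < 250 &&
    v.minOemPushDeliveryPct > 95.0
```

Any false answer pauses the staged rollout at its current percentage; widening the rollout requires the vitals to go green, not a judgment call in a stand-up.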

Internal-link directory: the parent app development team umbrella covers Windows and the cross-platform routing decision. The dual-platform mobile pod is the right page when iOS and Android are both meaningful. The Android-stack relative Kotlin development team is for buyers who want a Kotlin discipline across multiple surfaces. The Android staff augmentation route places individuals; the app development outsourcing service and the Android app development service are the service-level pages above this team page. Browse the case studies for shipped programs; the Bari wholesale portal case study is the closest LATAM published numbers.

Talk to a delivery lead

Buyer questions we get every week, answered honestly.

Frequently asked questions

These are the Android-pod-specific questions, not the generic ones.

When is a single-platform Android pod the right call instead of the dual-platform mobile pod?

When Android is the product, not half of it. That means an installed base where Android is 88 percent or more of sessions (most of LATAM, India, Africa, and parts of Eastern Europe and South-East Asia), an OEM partnership where the app ships pre-installed on a specific manufacturer's catalogue, an Android-first B2B fleet (commercial drivers, field-service technicians, retail point-of-sale, healthcare on a managed Android tablet programme), or an in-vehicle, kiosk, or wearable surface where iOS is genuinely irrelevant. The dual-platform mobile pod is right when iOS and Android together are roughly nine in ten of how customers reach the product on consumer surfaces. We will not push you toward an Android-only pod if your roadmap actually has both stores; we will say so on the discovery call.

Do you work across Jetpack Compose, the XML view system, and legacy Java code?

Yes, and most engagements need all three. Jetpack Compose is the default for any new screen we ship; the older XML view system carries the screens the previous vendor built that are not worth rewriting yet; the Java codebase is the part we modernise on a calendar that respects production stability instead of a one-shot rewrite. The tech lead writes the modernisation plan in week one and updates it every quarter. We refuse to start a Java-to-Kotlin rewrite in sprint one of an engagement on a production app; the right move is observability first, regression tests second, incremental migration third, and a written exit criterion before the migration is declared done.

Who owns the Play Console release pipeline and the upload keys?

The pod owns release. Play upload keys, Play App Signing enrollment, Play Console roles, the data-safety form authored as YAML in source, the permissions declaration, the target SDK roadmap aligned to the published Play deadlines, the staged-rollout policy at five, ten, twenty-five, fifty, and one hundred percent, the kill-switch feature flag, the in-app update prompt for the critical hotfix path, and the rejection runbook all live in the pod's source tree from sprint zero. Account ownership stays with you; our engineers sit inside your developer teams as members, never as a vendor that holds release infrastructure hostage.

Why is push delivery broken on Xiaomi, and what does the pod actually do about it?

OEM-skin push delivery is the single most-reported reliability issue we inherit on Android programs, and the fix is not in the FCM token. MIUI, HyperOS, ColorOS, Funtouch, Origin OS, and EMUI ship with aggressive battery-optimisation defaults that kill background services and silence notifications unless the user manually whitelists the app. The pod owns: an in-app guidance flow that detects the OEM and walks the user through the battery-whitelist screen on first launch and after a major OS update, foreground services classified for Android 14 and 15 with an exact use-case declaration on the manifest, high-priority FCM reserved for the messages that earn it (so the OEM does not down-rank routine sync), Push Probes that run from a real device per OEM at least once a day, and a deliverability dashboard the marketing team reads before they invest in a campaign. We have lifted MIUI delivery from 76 percent to 96 percent on real engagements with this exact pattern.

How does the pod handle the annual Play target SDK migration?

Target SDK updates are calendar work, not feature work. The pod treats them like the regulatory deadlines they actually are. Sprint zero on a target SDK migration produces the deprecation register (every API the new target removes or restricts that touches your code), the breaking-behavior register (foreground service classification, photo picker, partial media access, permission scopes that change between Android 13, 14, and 15), the testing matrix on real devices that already run the new platform version, and the order in which feature work pauses. Sprints one through three retire the deprecations in the order published by the platform owner. Feature work continues alongside on the modules that the new target does not touch. Most target SDK migrations cost four to six sprints depending on how much of the codebase is on Java and how aggressive the previous vendor was with deprecated APIs.

Can the pod ship Wear OS, Foldable, Android Auto, or XR surfaces alongside the phone app?

Yes, but with discipline. The pod requires a one-page written brief before any second-screen work starts: what surface, what minimum viable feature, what the parent app does when the surface is offline or the user has not paired it, what the launch-week marketing message is, and which engineer on the pod owns the surface. Foldable layouts and large-screen tablets reuse the parent codebase with adaptive layouts (window size classes, canonical layouts) and a screenshot-test baseline against regression. Wear OS companions are usually built in Compose for Wear with Health Services and paired to the parent app over the Wearable Data Layer. Android Auto follows the driver-distraction templates we will not improvise around. Android XR is treated as an experiment with its own sprint zero and a written exit criterion.

How does the pod roll out Play Integrity without locking out legitimate users?

Carefully. Play Integrity is the right surface for fraud-sensitive flows (lending, payments, betting, ticketing, account creation under regulatory scrutiny), but it is the wrong surface to gate the entire app on. The pod follows a three-step pattern: gate only the high-value action with Play Integrity, not the launch screen; integrate the verdict into the existing fraud-decision pipeline alongside device signals and behavioural signals so a single Integrity miss does not lock a real user out; and instrument a side channel for the small number of users on devices that legitimately fail attestation (rooted phones, custom ROMs, certain Huawei devices without GMS) so the support team can resolve them. We have shipped Play Integrity into a fraud pipeline where false-positive lockouts dropped from 4.2 percent to 0.03 percent of attestation calls without weakening the fraud decision.

Can we start with staff augmentation and convert to a full pod later?

Yes, and it is a common path. The engagement starts with one or two senior Kotlin and Compose engineers through our Android staff augmentation route. The engineers learn your repo, your domain, and your release rituals; once the work warrants it we add the Android tech lead, the QA automation engineer, the part-time mobile DevOps seat, and the release-pipeline ownership to convert the engagement into a dedicated Android pod. The conversion adds roles, not new faces, so the engineers you already trust keep shipping while we tighten the program around them. The conversation that triggers it is usually 'we are tired of being paged when MIUI changes how it kills foreground services', not 'we want more headcount'.

If you're interested in hiring developers for this capability in Argentina, visit the Argentina version of this page.

CONTACT US

Tell us about the Android program. We'll tell you the shape that fits.