
EDITORIALS
Mobile OS compatibility testing explained

Your app needs to work on more than just the phone in your pocket. That’s why you need mobile OS compatibility testing – to check your app works as it should on both iOS and Android devices.
Some people use Android, others use iOS. Mobile OS compatibility testing makes sure your app works exactly as intended on both operating systems. Your app might work perfectly on your own iPhone 15 Pro. It launches fast, looks sharp, and everything responds instantly. Then a user on a three-year-old Samsung Galaxy A12 downloads it, waits 30 seconds for it to load, taps a button that doesn't respond, and deletes it. You've just lost a customer because you only tested on the phone in your pocket.
This is why mobile-specific OS compatibility testing exists. The challenge is that "the devices people use" means thousands of different combinations of operating systems, manufacturers, and hardware. You can't test them all. You may be wondering which devices actually matter, how to test on phones you don't own, and what typically breaks across different operating systems. Keep scrolling to find out.
Fragmentation means one operating system exists in dozens of different versions across thousands of devices. Android runs on over 24,000 different device models according to OpenSignal's 2015 report, and that number has only grown since. iOS runs on roughly 30. For testers, that gap is the heart of the problem.
Google releases Android as open-source software. Samsung takes it, modifies it, adds their own interface, and ships it on hundreds of device models across different price points. Xiaomi does the same thing. So does Oppo, Vivo, OnePlus, Motorola, and dozens of other manufacturers. They all start with Android but end up with something that looks and behaves differently.
Same code, different results. Your layout looks perfect on a Pixel but breaks on a Samsung because of how Samsung changed the default fonts. Your app crashes on a Xiaomi phone because of how Xiaomi kills background apps. You fixed a bug on one Android device only to discover it still happens on three others.
iOS is simpler but not simple. Apple controls both the hardware and software, so you're dealing with fewer variables. But you still have multiple iOS versions running across phones, tablets, and varying screen sizes. And Apple releases major iOS updates every September that can break things in unexpected ways.
Global market share sits at roughly 72% Android and 27% iOS. That iOS share is a far more predictable testing scenario than the Android share, because it is spread across a few dozen devices rather than thousands.
Mobile OS compatibility issues show up differently than they do on desktop. These are some very real, common issues that happen between different mobile operating systems – all proving why mobile OS testing is necessary:
Screen layouts collapse on smaller devices – Your carefully designed interface assumes a 6.5" screen. Half your Android users have 5.5" screens or smaller. Text overlaps, buttons disappear off the edge, the interface becomes unusable.
Touch targets are too small – What feels fine with a mouse cursor doesn't work with a finger. Apple's Human Interface Guidelines recommend a 44x44-point minimum for touch targets; Android's Material guidelines recommend 48x48 dp (density-independent pixels). Buttons smaller than this get mistapped or missed entirely (see the Compose sketch below this list).
Permissions work completely differently – iOS asks for location permission with a system dialog you can customize but not control. Android lets you request permissions whenever you want. Some Android manufacturers add extra permission layers on top. Your carefully timed permission request works on iOS and confuses Android users.
The back button exists on Android but not iOS – Android users expect the system back button (or back gesture) to work everywhere. iOS users expect a back button in the top left corner or an edge swipe. Your navigation needs to handle both.
Apps get killed in the background differently – iOS is aggressive about closing background apps to save battery. Some Android manufacturers (particularly Chinese brands) are even more aggressive. If your app doesn't properly save its state, users open it and lose their work (a state-saving sketch also follows this list).
Keyboards cover input fields – The on-screen keyboard appears and hides your login form. Users can't see what they're typing. This happens because operating systems handle keyboard appearance differently and your layout doesn't account for it.
Gesture conflicts – You added a swipe-from-edge gesture to open a menu. Android's system back gesture uses the same swipe. Now your menu and the back button fight each other.
These problems are predictable once you know they exist, and most of them show up within minutes of testing on an actual device that isn't the one you develop on.
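To make the touch-target point concrete, here is a minimal Jetpack Compose sketch in Kotlin that pads a small control out to Android's 48x48 dp recommendation (44x44 points is the iOS equivalent). The composable name and glyph are placeholders, not code from any real app:

```kotlin
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.sizeIn
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

// A small dismiss control. The visible glyph is tiny, but the clickable
// Box is padded out to the 48x48dp minimum Android recommends
// (Apple's guideline is 44x44 points for the same reason).
@Composable
fun DismissControl(onDismiss: () -> Unit) {
    Box(
        modifier = Modifier
            .sizeIn(minWidth = 48.dp, minHeight = 48.dp) // never shrink below the guideline
            .clickable(onClick = onDismiss),
        contentAlignment = Alignment.Center
    ) {
        Text("✕")
    }
}
```

The same idea applies on iOS: keep the visible element as small as the design wants, but never let the hit area shrink below the platform guideline.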
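And for the background-kill item: on Android, the usual defence is to keep anything the user typed in saved instance state, so the process can be killed and restored without losing work. A minimal Compose sketch, with placeholder names:

```kotlin
import androidx.compose.material3.Text
import androidx.compose.material3.TextField
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.saveable.rememberSaveable
import androidx.compose.runtime.setValue

// Draft text survives both configuration changes and the process being
// killed in the background, because rememberSaveable writes the value
// into the saved instance state that Android restores on relaunch.
@Composable
fun CommentDraft() {
    var draft by rememberSaveable { mutableStateOf("") }
    TextField(
        value = draft,
        onValueChange = { draft = it },
        label = { Text("Your comment") }
    )
}
```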
You're not going to test all 24,000+ Android device models. The question is: which ones matter enough to test?
Start with your analytics – Check which devices your users generally have. Google Analytics, Firebase, or your app store dashboard will show you the breakdown. If 60% of your users are on Samsung devices, test on Samsung devices. If barely anyone uses tablets, don't prioritize tablet testing yet.
Test the extremes – Pick one high-end flagship phone (newest Samsung Galaxy or Pixel) and one budget device from 2-3 years ago (Samsung A-series, older Xiaomi). The flagship shows you what your app can do. The budget phone shows you what breaks when resources are limited. Most devices fall between these extremes.
Cover the main Android manufacturers – Samsung, Xiaomi, and Google make the most popular phones. Test on at least one device from each. They modify Android differently, which means your app behaves differently.
Test the iOS versions people actually use – Check Apple's data on iOS version adoption. Usually about 90% of users are on the current or previous iOS version within a few months of release. Test on the current version and one version back. Don't worry about iOS 12 unless your analytics show meaningful usage.
Include one tablet if tablets matter – iPad and Android tablets represent maybe 10-15% of users for most apps. If you know tablets are important for your use case, include one in testing. Otherwise, test phones first.
This approach covers 80-90% of your user base without needing dozens of devices. Testing everything is impossible and unnecessary, but testing nothing means shipping broken software. Test strategically.
Once you know which devices matter, you need access to them. You have three practical options:
### Cloud-based device testing platforms
Services like BrowserStack, Sauce Labs, or AWS Device Farm give you remote access to real physical devices. You control the phone through your browser, run through your test scenarios, and see exactly how it behaves. This is a practical way to check behavior across many device combinations without buying hardware. The downside is slightly laggy interaction and occasional connection issues, but for catching broken layouts and basic problems, cloud platforms work well.
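Most of these platforms also expose their devices to automation, typically through Appium, so a scripted smoke test can be pointed at many device/OS combinations instead of clicking through each one by hand. Here's a minimal Kotlin sketch using the Appium Java client – the hub URL, credentials, build URL, device name, and element id are all placeholders, and each vendor documents its own endpoint and capability names:

```kotlin
import io.appium.java_client.AppiumBy
import io.appium.java_client.android.AndroidDriver
import io.appium.java_client.android.options.UiAutomator2Options
import java.net.URL

fun main() {
    // Ask the device cloud for a specific real device + OS version combination.
    val options = UiAutomator2Options()
        .setDeviceName("Samsung Galaxy A12")                    // placeholder device
        .setPlatformVersion("11.0")                             // placeholder OS version
        .setApp("https://example.com/builds/app-release.apk")   // placeholder build URL

    // Placeholder endpoint: BrowserStack, Sauce Labs, and AWS Device Farm each
    // publish their own hub URL and authentication scheme.
    val driver = AndroidDriver(
        URL("https://USERNAME:ACCESS_KEY@hub.example-device-cloud.com/wd/hub"),
        options
    )

    try {
        // A tiny smoke check: the login button is present and tappable.
        driver.findElement(AppiumBy.accessibilityId("login_button")).click()
    } finally {
        driver.quit()
    }
}
```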
### Emulators and simulators
Android Studio includes Android emulators. Xcode includes iOS simulators. Both are free and run on your development machine. They're useful for quick checks during development but shouldn't be your only testing method. Emulators don't show real device performance, touch behavior feels different, and hardware-specific problems don't appear. Use them for rapid iteration, not for final validation.
### Physical devices
Buy or borrow a small collection of actual phones. This matters when you need to test performance, gestures, or anything that requires realistic device interaction. You don't need 50 devices. A set of five to seven carefully chosen devices covering your main user segments works fine: one high-end Android, one budget Android, one Samsung specifically, one iOS device, maybe one tablet. Keep them on your desk and use them regularly.
Most teams combine approaches – emulators during development for quick checks, cloud platforms for broader coverage, and a few physical devices for the final thorough testing pass before release. You don't need to pick one method – just pick the right one for each testing phase.
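One way to make that combination cheap is to write checks as instrumented UI tests, so the exact same test runs on an emulator during development and on a cloud or physical device later. A minimal sketch using Compose UI testing, with a stub screen standing in for a real one:

```kotlin
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.ui.test.assertIsDisplayed
import androidx.compose.ui.test.junit4.createComposeRule
import androidx.compose.ui.test.onNodeWithText
import androidx.compose.ui.test.performClick
import org.junit.Rule
import org.junit.Test

class LoginSmokeTest {

    @get:Rule
    val composeRule = createComposeRule()

    @Test
    fun loginButton_isVisibleAndTappable() {
        // In a real app you would launch one of your own screens here;
        // this stub stands in for it so the sketch is self-contained.
        composeRule.setContent {
            Button(onClick = { /* no-op for the sketch */ }) { Text("Log in") }
        }

        // The same assertions run on an emulator, a cloud device, or a phone
        // on your desk – only the device you point Gradle/adb at changes.
        composeRule.onNodeWithText("Log in").assertIsDisplayed()
        composeRule.onNodeWithText("Log in").performClick()
    }
}
```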
Test when it matters, not constantly. Here's when it matters:
Before submitting to app stores – Always. Run through your app on the key device combinations before you submit to the App Store or Google Play. Find the broken layouts, permission issues, and crashes before reviewers or users do.
When iOS or Android release major updates – Apple ships iOS updates in September. Android updates roll out gradually across manufacturers throughout the year. When a new version releases, test your app on it. Updates change how things work and can break features that were working fine before.
After changing navigation or gestures – If you modified how users move through your app or added gesture controls, test on both iOS and Android devices. Navigation and gestures work differently enough between iOS and Android that they break more often than other features.
When adding permissions – Every time you request a new permission (camera, location, notifications), test the permission flow on multiple devices. iOS and Android handle this completely differently, and manufacturers add extra variations on top (a minimal Android sketch of this flow follows the list).
When user feedback clusters around specific devices – If you're getting crash reports or complaints that all mention Samsung devices or older iOS versions, that's your signal. Do focused testing on those specific setups.
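For the permissions point above, the Android half of that flow typically looks like the sketch below, built on the standard ActivityResult API; the camera permission, activity name, and message text are placeholders. It's worth stepping through this on a stock Pixel, a Samsung, and a Xiaomi device, because the system dialogs and any manufacturer additions differ.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import android.widget.Toast
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class CameraActivity : AppCompatActivity() {

    // Register the launcher once; the callback runs after the user answers
    // the system dialog. Manufacturers can add their own layers on top,
    // which is exactly why this flow needs testing on several devices.
    private val requestCamera =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startCamera() else showCameraRationale()
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_GRANTED
        ) {
            startCamera()
        } else {
            requestCamera.launch(Manifest.permission.CAMERA)
        }
    }

    private fun startCamera() { /* placeholder: open the camera screen */ }

    private fun showCameraRationale() {
        Toast.makeText(this, "Camera access is needed for this feature", Toast.LENGTH_LONG).show()
    }
}
```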
Between these moments, you don't need constant cross-device testing. Focus your compatibility testing when you're making changes that touch the operating system or when you're about to ship. The rest of the time, develop on whatever device you have and trust your regular development testing.
The biggest mistake is testing only on flagship devices your team owns. Your developers have iPhone 15 Pros. Your designer has a Pixel 8. You test on those because they're convenient. Then you ship and discover your app barely runs on the budget Android phones that represent 40% of your user base.
Budget devices expose memory issues, performance problems, and rendering bugs that expensive phones mask with better hardware. If you only test on high-end devices, you're only testing for users who can afford them.
Include at least one budget device in your testing. It doesn't need to be the absolute cheapest phone available – just something from the mid-to-low range like a two-year-old Samsung A-series, a Motorola Moto G, or a budget Xiaomi device. Run your app on it and see how it performs when memory is limited and the processor is slower. That's the experience many of your users are getting.
Mobile OS compatibility testing sounds overwhelming until you break it down. Check your analytics to see which devices your users have. Pick five devices that represent most of your user base – probably a mix of Android models across price points and a recent iOS device. Test your app on those devices before major releases and when the operating systems update.
That's it. You're not trying to achieve perfect coverage across every device ever made. You're trying to make sure your app works for the people actually using it. Start with the devices that matter, catch the obvious problems, and expand your testing from there as needed.
Want straightforward testing advice? Subscribe to get practical tips on mobile testing, exploratory testing, and keeping quality high straight to your inbox.
