
EDITORIALS
How to Get Started With Cross-Browser Testing

Cross-browser testing is all about making sure everything works as intended for anyone on your site – not just users who happen to be on the same browser your dev team prefers.
Your website might work perfectly on your MacBook Pro running Chrome. But what about your users on Windows 11 with Edge? Or the ones still clinging to an iPad from 2018 running Safari? Or the person accessing your web app through Firefox on a Linux machine with an older screen resolution? Even slight differences in browser versions can break layouts, slow down features, or stop buttons from working.
That's where cross-browser testing comes in. Here we’ll break down how it works, why it’s useful, and some tricks of the testing trade that make it easier.
Cross-browser testing is exactly what it says on the tin – making sure your website or web app works properly across different browsers. It’s about catching layout glitches, broken features, or weird behavior for users who aren’t on your team’s preferred setup.
You’ll usually check:
Different browsers – Chrome, Safari, Firefox, Edge
Different browser versions – older releases often still have significant user bases
Different operating systems – Windows, macOS, Linux, iOS, Android
Different devices and screen sizes – desktop, tablet, and mobile
Your users don't all use the same setup as your dev team. They're using different browsers, devices, and operating systems. If your checkout process works flawlessly in Chrome but breaks in Safari, you've just lost customers who happen to use iPhones.
Some cross-browser issues are sneaky – a button that’s nudged out of place, a dropdown that refuses to show up. And your users won’t tell you. They’ll just assume your site is broken and leave.
This happens because every browser has its own way of reading your code. Chrome, Safari, and Firefox all speak slightly different languages, so even a perfectly crafted layout can look messy if a browser decides to do its own thing.
The goal isn't perfection across every possible combination (that's impossible and unnecessary). It's about ensuring your product works well enough on the configurations your actual users rely on.
The reason browsers behave differently comes down to how they're built. Each browser vendor makes decisions about which web standards to implement, when to implement them, and how strictly to follow the specs. Chrome might rush to support a new CSS feature while Safari takes a more cautious approach. Firefox might interpret a JavaScript API slightly differently than Edge. These aren't bugs; they're differences in priorities and timelines.
For you as a tester or developer, this means the same code produces different results. A feature that works perfectly in your primary browser might be completely broken elsewhere. Here are some issues that appear regularly when you test across browsers:
CSS rendering differences – Flexbox, Grid, and newer CSS features work differently across browsers. Your perfectly aligned layout in Chrome could be a jumbled mess in older Safari versions.
JavaScript compatibility – Modern JavaScript features aren't universally supported. Arrow functions, async/await, and newer APIs work great in current browsers but break in older versions. Safari on iOS is particularly notorious for lagging behind on JavaScript features (see the feature-detection sketch after this list).
Form validation – HTML5 form validation displays differently across browsers. Required fields, email validation, and date pickers look and behave completely differently in Chrome vs. Safari vs. Firefox.
Font rendering – The same font can look noticeably different across browsers and operating systems. Text that's perfectly readable in Chrome on Windows might be too thin in Safari on Mac.
Video and media – Autoplay policies, codec support, and video controls vary wildly. A video that autoplays fine in Chrome might be blocked in Safari. Format support differs too.
Scrolling behavior – Scrolling acts differently across browsers. Elements that should stick to the top of the page might jump around on mobile Safari. Animations that trigger when you scroll might fire at different times. On iPhones, when you scroll past the top or bottom of a page, it keeps going and then bounces back – this rubber-band effect can cause visual glitches that you won't see in desktop browsers.
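Many of these issues can be guarded against with feature detection – testing for the capability itself rather than sniffing the browser name. Here's a minimal TypeScript sketch; the specific features checked and the fallback class name are just illustrative:

```ts
// Feature detection: ask the browser what it supports, don't guess from
// its user-agent string.

// CSS: CSS.supports() reports whether a property/value pair is understood.
const hasGrid =
  typeof CSS !== 'undefined' && CSS.supports('display', 'grid');

if (!hasGrid) {
  // Hypothetical hook: your stylesheet switches to a simpler fallback
  // layout when this class is present.
  document.documentElement.classList.add('no-css-grid');
}

// JavaScript APIs: check for the global before relying on it.
if ('IntersectionObserver' in window) {
  // Safe to lazy-load images with IntersectionObserver here.
} else {
  // Fall back to eager loading, or ship a small polyfill.
}
```

Feature detection won't fix rendering quirks like font smoothing or rubber-band scrolling, but it can turn "completely broken elsewhere" into a graceful fallback.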
Testing across browsers doesn’t need to be a big, process-driven ordeal. Here’s a straightforward way to start:
Start with data
Check your analytics to see what browsers, devices, and operating systems your users actually use. Don't waste time testing configurations that represent 0.1% of your traffic. Focus on the combinations that matter – usually the top 3-5 browsers and the major device types your audience uses.
As of 2025, Chrome dominates with around 65% of global browser market share, followed by Safari at roughly 20%, Edge at 5%, and Firefox at 3%. But these numbers shift dramatically by region and device type. Safari dominates on mobile in the US and UK. Chrome leads on desktop everywhere. Your analytics will show what matters for your specific audience.
Get a testing process going
Once you know which browsers to test, the process is straightforward. Open your site in the first browser – say, Chrome. Work through your critical flows: log in, fill out a form, complete a checkout, navigate the main features. Note anything that breaks, looks wrong, or feels slow. Take screenshots of visual issues. Document broken functionality.
Then repeat in your next browser – Safari, Firefox, Edge, whatever your list includes. Compare what you see. Does the form validation work the same way? Does the layout hold up? Are there JavaScript errors in the console?
You're looking for: broken functionality (things that don't work at all), layout shifts (elements in the wrong place), visual glitches (missing images, weird fonts, misaligned buttons), and performance issues (slow loading, janky animations). A simple checklist or spreadsheet works fine for tracking what you've tested and what broke where.
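If a spreadsheet feels too loose, even a tiny data shape keeps results comparable between browsers. Here's a minimal sketch in TypeScript – the environments, flow names, and notes are made up for illustration:

```ts
// One record per browser/device pass; one result per critical flow.
type Result = 'pass' | 'fail' | 'not-tested';

interface BrowserPass {
  environment: string;            // e.g. "Safari 17 / macOS 14"
  flows: Record<string, Result>;  // keyed by flow name
  notes: string[];                // screenshots, console errors, layout bugs
}

const passes: BrowserPass[] = [
  {
    environment: 'Chrome 126 / Windows 11',
    flows: { login: 'pass', checkout: 'pass', search: 'pass' },
    notes: [],
  },
  {
    environment: 'Safari 17 / macOS 14',
    flows: { login: 'pass', checkout: 'fail', search: 'not-tested' },
    notes: ['Checkout button unresponsive; TypeError in console'],
  },
];

// Quick summary: which flows failed where?
for (const pass of passes) {
  const failed = Object.entries(pass.flows)
    .filter(([, result]) => result === 'fail')
    .map(([flow]) => flow);
  if (failed.length > 0) {
    console.log(`${pass.environment}: failed ${failed.join(', ')}`);
  }
}
```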
Test early and often
Don't wait until the end of development to discover your layout breaks on mobile Safari. Test on different browsers and devices throughout the development cycle. Catching cross-browser issues early is far cheaper than fixing them later.
Use real devices where possible
Browser dev tools and emulators are useful for quick checks, but they're not perfect substitutes for real hardware. If mobile users make up a significant share of your traffic, test on actual phones and tablets. Borrow devices from colleagues, set up a device lab, or use cloud-based testing services that provide access to real devices.
Cloud platforms like BrowserStack and LambdaTest give you access to real browsers and devices without needing to maintain your own lab. Useful if you need to test on devices you don't own, but nothing beats having actual hardware in front of you for critical testing.
Prioritize by impact
Not all cross-browser issues are equal. A broken checkout flow is critical. A slightly off font rendering is probably fine. Focus on functionality first, then visual consistency where it matters most.
Document what you're testing
Keep track of which browser/device/OS combinations you've tested. This doesn't need to be complicated – a spreadsheet works. So does duplicating your test plan and renaming it for each browser. Or use something like Testpad where you can create variations of your checklists in seconds and see everything at a glance.

What to test
Don't try to test everything. Focus on the features and flows that matter most to the user experience:
Critical user flows – Login, signup, checkout, payment processing. If these break, you lose customers immediately. Test them thoroughly across all your target browsers (a sketch of automating one such flow follows this list).
Forms – Input validation, dropdown menus, date pickers, file uploads, error messages. Forms behave wildly differently across browsers. Test every form on your site.
Navigation – Menus, dropdowns, mobile hamburger menus, search functionality. Make sure users can actually find what they're looking for on every browser.
Interactive elements – Buttons, modals, tooltips, accordions, carousels. Anything users click or interact with needs to work consistently.
Media – Images loading, video playback, audio controls. Check that your media displays and functions correctly everywhere.
Layout at different screen sizes – Your site on a 27-inch monitor vs. a phone vs. a tablet. Responsive design breaks in surprising ways across different browsers.
Security features – HTTPS connections, certificate handling, how browsers store passwords and sensitive data. Browsers handle security differently, so what works securely in Chrome, for example, might create vulnerabilities in older browsers.
Third-party integrations – Payment processors, chat widgets, analytics, social media embeds. These often work in Chrome but fail elsewhere.
Start with your most critical paths and expand from there. A broken login is worse than a slightly misaligned footer.
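When you do decide to automate the most critical of these flows, a tool like Playwright (one option among several) can replay the same script in Chromium, Firefox, and WebKit. A minimal sketch – the URL, selectors, and credentials are placeholders for your own app:

```ts
import { test, expect } from '@playwright/test';

// This test runs once per browser listed in the Playwright config
// (a config sketch appears in the automation section below).
test('login flow works', async ({ page }) => {
  await page.goto('https://example.com/login');    // placeholder URL
  await page.fill('#email', 'user@example.com');   // hypothetical selectors
  await page.fill('#password', 'not-a-real-password');
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL(/dashboard/);       // adjust to your app
});
```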
Common cross-browser testing mistakes
Even teams that know they need to test across browsers often make predictable mistakes. These aren't about lack of effort – they're about testing the wrong things or making incorrect assumptions about how users actually access your site. Here's what to avoid:
Testing only on the latest versions – many users don't update immediately (or at all). Test on older versions that still have significant user bases.
Assuming mobile = one experience – iOS Safari behaves differently than Chrome on Android. Test both, especially if mobile traffic is significant.
Ignoring actual network conditions – test on slower connections, not just your office's gigabit fiber. Performance issues often only show up on real-world networks.
Over-testing obscure configurations – yes, someone somewhere uses Internet Explorer 6, but unless that's actually in your analytics, don't spend time on it.
Manual vs. automated cross-browser testing
We always recommend you start with manual testing as it's the fastest, most cost-effective way to spot issues. Open your site in different browsers and actually use it. Click through your critical paths and watch for what breaks, what looks wrong, and what feels off. This is how you catch the issues that matter – visual problems, layout shifts, and usability quirks that no automated tool will flag.
Automation can help with the repetitive stuff – especially regression testing across multiple browsers after you make changes. Tools exist to run tests across different browser/OS combinations automatically. This is valuable for catching regressions quickly when you make changes to your codebase. Just remember, automation has its limits. It won't catch subtle visual issues or usability problems that only humans notice. Manual testing across key configurations remains essential, especially for new features or significant changes.
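To make that concrete, here's what the browser matrix looks like in a Playwright config – a minimal sketch, and just one example of this kind of tooling. Each test in the suite runs once per project listed:

```ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    // Emulated mobile Safari – handy for quick checks, but no substitute
    // for a real iPhone, as noted earlier.
    { name: 'mobile-safari', use: { ...devices['iPhone 13'] } },
  ],
});
```

Run it with `npx playwright test` and the same suite executes across all four projects – exactly the repetitive regression work automation is good at.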
Cross-browser testing doesn't require complex processes or expensive tools. Start simple: test on the browsers and devices your users actually use. Create a checklist (we happen to know a pretty good checklist-style testing tool…) of the critical paths through your website. Work through them on different browser/device combinations. Document what passed and what needs fixing.
The goal is ensuring your users can actually use your website or web app, regardless of their browser or device. Focus on that, skip the rest, and you'll be fine.
Want more practical advice sent straight to your inbox? Subscribe to get straightforward tips on all things testing.
