POSITIVE vs NEGATIVE Testing in QA Automation (With Examples for UI and APIs)
If you’re a QA engineer or SDET doing test automation and you’re only writing positive tests, you’re leaving most bugs and security issues untouched.
In this guide, we’ll walk through negative testing in QA, with real examples for UI automation and API testing (login, auth, status codes, edge cases).
─────────────────────────────────────────
✅ 𝐏𝐨𝐬𝐢𝐭𝐢𝐯𝐞 𝐓𝐞𝐬𝐭𝐢𝐧𝐠: 𝐃𝐨𝐞𝐬 𝐢𝐭 𝐰𝐨𝐫𝐤 𝐰𝐡𝐞𝐧 𝐞𝐯𝐞𝐫𝐲𝐭𝐡𝐢𝐧𝐠 𝐠𝐨𝐞𝐬 𝐑𝐈𝐆𝐇𝐓?
❌️ 𝐍𝐞𝐠𝐚𝐭𝐢𝐯𝐞 𝐓𝐞𝐬𝐭𝐢𝐧𝐠: 𝐃𝐨𝐞𝐬 𝐢𝐭 𝐛𝐫𝐞𝐚𝐤 𝐠𝐫𝐚𝐜𝐞𝐟𝐮𝐥𝐥𝐲 𝐰𝐡𝐞𝐧 𝐞𝐯𝐞𝐫𝐲𝐭𝐡𝐢𝐧𝐠 𝐠𝐨𝐞𝐬 𝐖𝐑𝐎𝐍𝐆?
Let me give you a real-world example. Imagine you're testing a login page.
𝐏𝐨𝐬𝐢𝐭𝐢𝐯𝐞 𝐓𝐞𝐬𝐭:
  • Enter valid username and password. Click login.
  • You should get in.
𝐍𝐞𝐠𝐚𝐭𝐢𝐯𝐞 𝐓𝐞𝐬𝐭:
  • Enter wrong password. Click login.
  • You should see an error message (not crash, not let you in, not expose security info).
That's it.
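In pytest style, that pair of tests might look like the sketch below. `login` here is a made-up stand-in for the system under test (in a real suite it would be a page object or an API client), and the exact error message is invented for illustration:

```python
# Hypothetical login function standing in for the system under test.
USERS = {"alice": "s3cret!"}

def login(username, password):
    """Return (success, message) the way a login flow might."""
    if USERS.get(username) == password:
        return True, "Welcome"
    # Same generic message for unknown user and wrong password:
    # never reveal which one was wrong.
    return False, "Invalid username or password"

def test_login_positive():
    ok, _ = login("alice", "s3cret!")
    assert ok

def test_login_negative_wrong_password():
    ok, message = login("alice", "WRONG")
    assert not ok
    # The negative test also checks that no security info leaks.
    assert message == "Invalid username or password"
```

Notice the negative test asserts two things: you're kept out, AND the error message doesn't reveal whether the username or the password was wrong.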
Positive tests check the happy path. Negative tests check all the ways users can mess things up, intentionally or accidentally.
𝐖𝐇𝐘 𝐓𝐇𝐈𝐒 𝐌𝐀𝐓𝐓𝐄𝐑𝐒
Here's the truth: Most automation engineers only write positive tests.
They test that the login works.
They test that creating a user works.
They test that the checkout flow works.
𝐁𝐮𝐭 𝐭𝐡𝐞𝐲 𝐟𝐨𝐫𝐠𝐞𝐭 𝐭𝐨 𝐭𝐞𝐬𝐭 𝐰𝐡𝐚𝐭 𝐡𝐚𝐩𝐩𝐞𝐧𝐬 𝐰𝐡𝐞𝐧 𝐭𝐡𝐢𝐧𝐠𝐬 𝐃𝐎𝐍’𝐓 𝐰𝐨𝐫𝐤.
  • That's exactly where most bugs live.
  • That's where security holes hide.
  • That's where your application crashes in production.
┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
📝 Want to stand out in interviews? Talk about negative testing.
Mention that you don't just verify the happy path; you verify error handling, edge cases, and security boundaries. That alone will put you ahead of 70% of candidates.
┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
𝐍𝐄𝐆𝐀𝐓𝐈𝐕𝐄 𝐓𝐄𝐒𝐓𝐒 𝐅𝐎𝐑 𝐔𝐈 𝐀𝐔𝐓𝐎𝐌𝐀𝐓𝐈𝐎𝐍
For automated UI tests, negative testing means testing user mistakes and edge cases. Think about all the ways a real user could mess things up.
𝐄𝐱𝐚𝐦𝐩𝐥𝐞𝐬 𝐲𝐨𝐮 𝐬𝐡𝐨𝐮𝐥𝐝 𝐛𝐞 𝐭𝐞𝐬𝐭𝐢𝐧𝐠:
⟩ Submitting forms with missing required fields. Does the error message show? Is it clear? Does the form stay on the page?
⟩ Entering invalid data formats. Put letters in a phone number field. Put a fake email format. Does the validation work?
⟩ Clicking buttons multiple times rapidly. Does it submit twice? Does it crash?
⟩ Trying to access pages you shouldn't have access to. Can a regular user type in the URL to an admin page and get in?
⟩ Testing session timeouts. What happens if you leave the app open for 2 hours and then try to do something?
⟩ File upload tests. Upload a file that's too large. Upload the wrong file type. Upload nothing at all.
⛓️‍💥 The goal is simple: make sure your app doesn't break, doesn't expose errors to users, and handles mistakes gracefully.
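The "invalid data formats" cases work well as data-driven checks. The validators and rules below are invented for illustration; in a real UI suite you'd type these same values into the form with Selenium or Playwright and assert on the validation message instead:

```python
import re

# Illustrative validators; your app's real rules will differ.
def is_valid_email(value):
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))

def is_valid_phone(value):
    return bool(re.fullmatch(r"\+?\d{7,15}", value))

# Tables of bad inputs a real user could actually type.
BAD_EMAILS = ["", "plainaddress", "user@", "@domain.com", "a b@c.com"]
BAD_PHONES = ["", "letters", "123", "12 34 56", "++123456789"]

def test_invalid_emails_rejected():
    for value in BAD_EMAILS:
        assert not is_valid_email(value), value

def test_invalid_phones_rejected():
    for value in BAD_PHONES:
        assert not is_valid_phone(value), value
```

Keeping the bad inputs in a table makes it cheap to add every new weird value a bug report teaches you about.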
𝐍𝐄𝐆𝐀𝐓𝐈𝐕𝐄 𝐓𝐄𝐒𝐓𝐒 𝐅𝐎𝐑 𝐀𝐏𝐈 𝐀𝐔𝐓𝐎𝐌𝐀𝐓𝐈𝐎𝐍 (𝐓𝐡𝐢𝐬 𝐈𝐬 𝐖𝐡𝐞𝐫𝐞 𝐘𝐨𝐮 𝐆𝐨 𝐀𝐋𝐋 𝐈𝐍)
Here's where negative testing becomes absolutely critical.
And honestly?
This is where most QA Automation Engineers should be spending their time.
Why?
Because most security vulnerabilities, data leaks, and critical bugs live in the API layer.
Your UI might look pretty, but if your API lets unauthorized users access data or doesn't validate input properly, you have a serious problem.
𝐓𝐡𝐢𝐬 𝐢𝐬 𝐧𝐨𝐭 𝐨𝐩𝐭𝐢𝐨𝐧𝐚𝐥. 𝐘𝐨𝐮 𝐧𝐞𝐞𝐝 𝐓𝐎𝐍𝐒 𝐨𝐟 𝐧𝐞𝐠𝐚𝐭𝐢𝐯𝐞 𝐭𝐞𝐬𝐭𝐬 𝐟𝐨𝐫 𝐀𝐏𝐈𝐬.
🔎 Test every single HTTP error code your API should return:
𝟒𝟎𝟎 𝐁𝐚𝐝 𝐑𝐞𝐪𝐮𝐞𝐬𝐭:
⟩ Send malformed JSON. Send invalid data types. Send requests with missing required parameters. Your API should reject these, not crash.
𝟒𝟎𝟏 𝐔𝐧𝐚𝐮𝐭𝐡𝐨𝐫𝐢𝐳𝐞𝐝:
⟩ Try to access protected endpoints without an authentication token. Try with an expired token. Try with a token from a deleted user. Make sure you get kicked out.
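A minimal sketch of those 401 checks against an in-memory stand-in for the auth layer. All names and token values here are hypothetical; in a real suite the calls would go through your HTTP client:

```python
import time

# In-memory token store standing in for your auth service (illustrative).
TOKENS = {
    "tok-alice": {"user": "alice", "expires": time.time() + 3600, "deleted": False},
    "tok-old":   {"user": "bob",   "expires": time.time() - 1,    "deleted": False},
    "tok-ghost": {"user": "carol", "expires": time.time() + 3600, "deleted": True},
}

def get_profile(token):
    """Return (status, body) the way a protected endpoint might."""
    info = TOKENS.get(token)
    if info is None or info["deleted"] or info["expires"] < time.time():
        return 401, {"error": "unauthorized"}
    return 200, {"user": info["user"]}

# Every flavor of bad token should yield 401 -- never 200, never a crash.
for bad in [None, "garbage", "tok-old", "tok-ghost"]:
    status, _ = get_profile(bad)
    assert status == 401, bad

assert get_profile("tok-alice")[0] == 200
```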
𝟒𝟎𝟑 𝐅𝐨𝐫𝐛𝐢𝐝𝐝𝐞𝐧:
⟩ This is huge for security. Log in as User A and try to access User B's data. Try to delete someone else's project. Try to modify records you don't own. Your API should say "No, you can't do that" with a 403.
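The same idea for 403, sketched with a toy in-memory project store (everything here is illustrative, not a real framework API). The key assertion is not just the status code but that the data didn't change:

```python
# In-memory projects table (illustrative); IDs are guessable on purpose.
PROJECTS = {1: {"owner": "userA", "name": "Alpha"},
            2: {"owner": "userB", "name": "Beta"}}

def delete_project(acting_user, project_id):
    """Return an HTTP-style status code the way a DELETE endpoint might."""
    project = PROJECTS.get(project_id)
    if project is None:
        return 404                    # unknown ID
    if project["owner"] != acting_user:
        return 403                    # authenticated, but not the owner
    del PROJECTS[project_id]
    return 204

# User A guesses User B's project ID: must be blocked, not deleted.
assert delete_project("userA", 2) == 403
assert 2 in PROJECTS                  # the resource is still intact

# The real owner can still delete it.
assert delete_project("userB", 2) == 204
assert 2 not in PROJECTS
```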
𝟒𝟎𝟒 𝐍𝐨𝐭 𝐅𝐨𝐮𝐧𝐝:
⟩ Request resources that don't exist. Try to get user ID 9999999 when it doesn't exist. Try to access deleted items. Make sure you get proper 404s, not crashes or weird null responses.
𝟒𝟐𝟐 𝐔𝐧𝐩𝐫𝐨𝐜𝐞𝐬𝐬𝐚𝐛𝐥𝐞 𝐄𝐧𝐭𝐢𝐭𝐲:
⟩ Send valid JSON but with data that doesn't make sense. Negative age. Invalid email formats. Dates in the past when they should be in the future.
𝟒𝟐𝟗 𝐓𝐨𝐨 𝐌𝐚𝐧𝐲 𝐑𝐞𝐪𝐮𝐞𝐬𝐭𝐬:
⟩ If you have rate limiting, test it. Send 1000 requests in one second. Does your API block you properly?
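Rate limiting is easy to sketch as logic, even though in practice you'd fire real requests at the API. A toy fixed-window limiter, assuming a limit of 5 requests per second (the limit and window are made-up numbers):

```python
from collections import deque
import time

class RateLimiter:
    """Illustrative sliding-window limiter: at most `limit` requests per `window` seconds."""
    def __init__(self, limit=5, window=1.0):
        self.limit, self.window = limit, window
        self.hits = deque()

    def handle(self, now=None):
        """Return 200 if the request is allowed, 429 if throttled."""
        now = time.monotonic() if now is None else now
        while self.hits and now - self.hits[0] >= self.window:
            self.hits.popleft()       # drop hits outside the window
        if len(self.hits) >= self.limit:
            return 429
        self.hits.append(now)
        return 200

limiter = RateLimiter(limit=5, window=1.0)
# Hammer it inside one "second": the first 5 pass, the rest are throttled.
statuses = [limiter.handle(now=0.0) for _ in range(8)]
assert statuses[:5] == [200] * 5
assert statuses[5:] == [429] * 3
```

The negative test asserts both sides: requests over the limit get 429, and requests after the window resets get 200 again.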
𝟓𝟎𝟎 𝐈𝐧𝐭𝐞𝐫𝐧𝐚𝐥 𝐒𝐞𝐫𝐯𝐞𝐫 𝐄𝐫𝐫𝐨𝐫:
⟩ You should NEVER get 500s, even from bad input. If a negative test triggers a 500, that's a bug: the API should have rejected the request gracefully with a 400-level error instead.
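Pulling the 400 / 404 / 422 distinctions together in one toy handler. The endpoint and field rules are invented; the point is that each flavor of bad input maps to a specific 4xx, and nothing escapes as a 500:

```python
import json

# In-memory users table standing in for the real API (illustrative).
USERS = {1: {"email": "a@example.com", "age": 30}}

def update_user(user_id, raw_body):
    """Return (status, body); bad input must never surface as a 500."""
    try:
        data = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400, {"error": "malformed JSON"}    # 400: can't even parse it
    if user_id not in USERS:
        return 404, {"error": "user not found"}    # 404: no such resource
    age = data.get("age")
    if not isinstance(age, int) or age < 0:
        # 422: valid JSON, but the data makes no sense.
        return 422, {"error": "age must be a non-negative integer"}
    USERS[user_id]["age"] = age
    return 200, USERS[user_id]

assert update_user(1, "{not json")[0] == 400
assert update_user(999, '{"age": 30}')[0] == 404
assert update_user(1, '{"age": -5}')[0] == 422
assert update_user(1, '{"age": 31}')[0] == 200
```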
𝐇𝐞𝐫𝐞 𝐚𝐫𝐞 𝐜𝐫𝐢𝐭𝐢𝐜𝐚𝐥 𝐧𝐞𝐠𝐚𝐭𝐢𝐯𝐞 𝐭𝐞𝐬𝐭 𝐬𝐜𝐞𝐧𝐚𝐫𝐢𝐨𝐬 𝐟𝐨𝐫 𝐀𝐏𝐈𝐬:
📌 Authorization bypass attempts:
User A tries to access User B's projects by guessing the project ID in the URL.
This should return 403, not the data.
📌 Missing authentication tokens:
Hit every protected endpoint without sending a token.
All should return 401.
📌 Invalid authentication tokens:
Send garbage tokens, expired tokens, tokens from deleted users.
All should return 401.
📌 Cross-user data manipulation:
User tries to update or delete another user's resources.
Should be blocked with 403.
📌 Invalid HTTP methods:
Try to POST to a GET-only endpoint.
Try to DELETE something that shouldn't be deletable.
📌 Oversized payloads:
Send massive JSON bodies. Send 10MB of data to an endpoint expecting 1KB.
Does it handle it or crash?
📌 Special characters and encoding:
Send Unicode, emojis, null bytes, special characters.
Does your API handle them properly?
📌 Business logic violations:
Try to transfer more money than you have.
Try to book the same appointment twice.
Try to create duplicate records where uniqueness is required.
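The money-transfer case as a runnable sketch. `Account` is a toy class, not any real API; the negative test checks both that the overdraft is rejected AND that no state changed:

```python
class Account:
    """Toy account used to sketch a business-rule negative test (illustrative)."""
    def __init__(self, balance):
        self.balance = balance

    def transfer_to(self, other, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > self.balance:
            raise ValueError("insufficient funds")   # the rule under test
        self.balance -= amount
        other.balance += amount

a, b = Account(100), Account(0)

# Negative test: the overdraft must fail and leave both balances untouched.
try:
    a.transfer_to(b, 150)
    assert False, "overdraft was allowed"
except ValueError:
    pass
assert (a.balance, b.balance) == (100, 0)

# Positive test for contrast: a legal transfer goes through.
a.transfer_to(b, 40)
assert (a.balance, b.balance) == (60, 40)
```

Asserting that state is unchanged after the rejection is the part most suites forget; a 4xx response with a half-applied side effect is still a bug.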
𝐓𝐡𝐞 𝐁𝐨𝐭𝐭𝐨𝐦 𝐋𝐢𝐧𝐞
Understanding what negative tests are, why they matter, and actually implementing them significantly improves the quality of a product.
It also shows that you, the person who implemented them, are a knowledgeable QA engineer who understands how the entire system works.
Anyone can write a test that checks if login works.
A real QA engineer tests what happens when login DOESN'T work:
⊹They test security boundaries.
⊹They test error handling.
⊹They think like an attacker, a confused user, and a system under stress.
𝐈𝐟 𝐲𝐨𝐮 𝐨𝐧𝐥𝐲 𝐭𝐚𝐤𝐞 𝐨𝐧𝐞 𝐭𝐡𝐢𝐧𝐠 𝐟𝐫𝐨𝐦 𝐭𝐡𝐢𝐬 𝐩𝐨𝐬𝐭, 𝐭𝐚𝐤𝐞 𝐭𝐡𝐢𝐬:
┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
Start adding negative tests to your automation suite TODAY.
And when you're in your next interview and they ask about your testing strategy, make sure you mention negative testing.
Talk about how you test authentication failures, authorization bypasses, invalid inputs, and error handling.
┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈
That's what separates junior automation engineers from senior ones.
𝐏.𝐒. Go look at your current test suite right now. Count how many tests are positive versus negative. If the ratio is 90-10, you have work to do. Aim for at least 40-50% negative test coverage, especially on your APIs. Your future self will thank you when you catch that security bug before production.
𝐏.𝐏.𝐒. 🚩 𝐈𝐟 𝐲𝐨𝐮 𝐡𝐚𝐯𝐞𝐧’𝐭 𝐰𝐚𝐭𝐜𝐡𝐞𝐝 𝐢𝐭 𝐲𝐞𝐭, 𝐲𝐨𝐮𝐫 𝐧𝐞𝐱𝐭 𝐬𝐭𝐞𝐩 𝐢𝐬 𝐭𝐡𝐞 𝐅𝐑𝐄𝐄 𝟑-𝐩𝐚𝐫𝐭 “𝐌𝐚𝐧𝐮𝐚𝐥 𝐐𝐀 → 𝐒𝐃𝐄𝐓” 𝐰𝐨𝐫𝐤𝐬𝐡𝐨𝐩, 𝐚 𝐬𝐡𝐨𝐫𝐭 𝐦𝐢𝐧𝐢-𝐜𝐨𝐮𝐫𝐬𝐞 𝐭𝐡𝐚𝐭 𝐠𝐢𝐯𝐞𝐬 𝐲𝐨𝐮 𝐭𝐡𝐞 𝐟𝐮𝐥𝐥 𝐫𝐨𝐚𝐝𝐦𝐚𝐩 𝐭𝐨 𝐛𝐞𝐜𝐨𝐦𝐢𝐧𝐠 𝐚 𝐦𝐢𝐝-𝐥𝐞𝐯𝐞𝐥 𝐒𝐃𝐄𝐓 𝐚𝐧𝐝 𝐩𝐚𝐬𝐬𝐢𝐧𝐠 𝐢𝐧𝐭𝐞𝐫𝐯𝐢𝐞𝐰𝐬.
Matviy Cherniavski