Two Test Frameworks. Same AI Agent. Same Prompt. Same Task.
Add a login test.
Framework 1 result:
- Test passes on first run.
- Credentials pulled from the fixture.
- File placed in the right folder.
- Naming follows the existing convention.
- Page object used. No raw selectors.
Framework 2 result:
- Test technically runs.
- Credentials hardcoded directly in the test.
- New file dropped in the root directory.
- Named `test_new.py`.
- Raw selectors everywhere. No page object in sight.
The test in Framework 1 looks as if it was written by an actual engineer.
The test in Framework 2 technically works... but it is still a complete mess.
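The contrast above can be sketched in a few lines. This is an illustrative pytest example, not the actual generated code: `LoginPage`, `FakeDriver`, the `credentials` fixture, and all selectors are assumptions standing in for a real Selenium/Playwright setup.

```python
import pytest


class FakeDriver:
    """Stand-in for a real browser driver so the sketch runs anywhere."""

    def __init__(self):
        self.current_url = "/login"

    def fill(self, selector, value):
        pass  # a real driver would type into the matched element

    def click(self, selector):
        self.current_url = "/dashboard"  # pretend login always succeeds


class LoginPage:
    """Page object: selectors live here, not in the tests."""

    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.fill(self.USERNAME, username)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


@pytest.fixture
def credentials():
    # In a real repo this would read config or a secrets store.
    return {"username": "qa_user", "password": "not-in-source"}


@pytest.fixture
def driver():
    return FakeDriver()


# Framework 1 style: data from a fixture, interaction via the page object.
def test_login(driver, credentials):
    LoginPage(driver).login(credentials["username"], credentials["password"])
    assert driver.current_url == "/dashboard"


# Framework 2 style: hardcoded credentials and raw selectors inline.
def test_new(driver):
    driver.fill("#username", "admin")      # credentials checked into source
    driver.fill("#password", "hunter2")
    driver.click("#submit")
    assert driver.current_url == "/dashboard"
```

Both tests pass. Only one of them survives a selector change, a credentials rotation, or a code review.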
────────────────────────────────────────
Here Is Why
The AI Agent does not decide what good tests look like.
It reads what already exists in your repo and continues the pattern.
Framework 1 had fixture files, page objects, consistent naming, and a clear folder structure. The agent read all of that. Matched against it. Wrote a test that fits right in.
Framework 2 had hardcoded values, raw selectors, no structure, and copy-pasted setup code in every file. The agent read that too. It drew one conclusion: this is the standard here. And it continued exactly that pattern.
────────────────────────────────────────
The Uncomfortable Truth
Before you hand your repo to an agent, ask yourself one question:
"Would I be comfortable showing this code to senior engineers?"
If yes, start using AI coding agents.
If no, fix the framework issues first, then bring in AI. Because it won't fix bad test automation code. It will scale it.
────────────────────────────────────────
Want to learn how to use AI coding agents in test automation? Check out this live workshop.