Busted out of jail and it cost $6.90
$6.90 and 250 calls to Gemini 3.1 Pro to port over 80% of poor man's memory from a Claude Code plugin to opencode. I already talked about how my second Max 20 plan got banned; this isn't just what has happened since then, but also what happens moving forward:

1. I still have my first Max 20 plan (for personal use), the one that initially kicked off this project. I still need it to finish the 800+ probe test for v1.5 of the paper, since I can't switch models for agents, evaluators, scorers, and judges midway.

2. I had Model Ark's Coding Plan Pro for the odd emergency or for busting out of token jail (side-loaded into Claude Code), and it continues to run the odd routine task. (Now that I think about it, I should have used those models to run the Telegram orchestration and maintenance bits, which would have avoided the ban in the first place.) kimi-k2.5 was my go-to for the orchestrator and dola-seed-2.0-pro was my go-to for light coding tasks.

3. With 2% left before weekly token jail on Claude Code, I bit the bullet and installed opencode. I tried the free minimax2.5 model as well as BytePlus kimi and dola-seed: great for light coding and conversations, not great for heavy coding tasks.

4. I switched to Gemini 3.1 Pro Preview and found it suited to the complex task ahead: refactoring Claude Code plugins for opencode.

Through all the switching post-ban, it has been the same memory folder and files. The agent retained its knowledge (somewhat; retention ran deeper with some models, pretty basic with others). Then we got to porting one skill at a time, optimising some of them for opencode. 80% done as I write this, at $6 in Vertex AI costs. Making the switch was a blessing: we were multi-session at the start of the build, then cross-app (cowork and code), and recent events forced us to accelerate cross-harness / cross-platform / cross-model compatibility.
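What makes the "same memory folder and files" trick work across harnesses is that the store is just plain files on disk, which any tool can inject at session start. A minimal sketch of that idea (the directory layout and function name here are hypothetical, not the actual plugin's):

```python
from pathlib import Path

def load_memory(memory_dir: str = "memory") -> str:
    """Concatenate every markdown memory file into one context block.

    Because memory lives as plain files, any harness (Claude Code,
    opencode, a Telegram bot) can read the same folder and inject the
    same knowledge at the start of a session.
    """
    parts = []
    for path in sorted(Path(memory_dir).glob("*.md")):
        parts.append(f"## {path.stem}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

Swapping models or harnesses then only changes who reads the folder, not what's in it.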
We were single-user before (one memory implementation => one user) and were testing multi-user (single channel) over Telegram when we got banned. Thanks to recent events, we're bumping up the timeline for multi-user, multi-channel memory, because that's how institutional knowledge works.
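One way to picture the jump from single-user to multi-user, multi-channel memory: instead of one flat store, entries get keyed by (user, channel), with a shared "institutional" tier that every user sees regardless of channel. This is a toy schema of my own for illustration, not the plugin's actual design:

```python
from collections import defaultdict

class MemoryStore:
    """Toy multi-user, multi-channel memory: per-(user, channel) notes
    plus a shared institutional tier visible to everyone."""

    def __init__(self):
        self.personal = defaultdict(list)   # (user, channel) -> notes
        self.institutional = []             # shared across users/channels

    def remember(self, user: str, channel: str, note: str, shared: bool = False):
        if shared:
            self.institutional.append(note)
        else:
            self.personal[(user, channel)].append(note)

    def recall(self, user: str, channel: str) -> list[str]:
        # Institutional knowledge surfaces for everyone; personal notes
        # stay scoped to the (user, channel) pair they were written in.
        return self.institutional + self.personal[(user, channel)]
```

The single-user case is just the degenerate version with one key; the institutional tier is what turns private agent memory into something a team actually shares.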