Show Your Work: APA-Grade Integrity for AI-Assisted Criminology and Practitioner Analysis
Yesterday, during our Coffee Hour chat, a few of us had an interesting conversation about the use of Artificial Intelligence (AI) in both academic and intelligence fields. AI improves analysis in academia and intelligence work fastest when you treat it like a very capable junior analyst: useful, tireless, occasionally brilliant, and absolutely capable of confidently saying something wrong.

The best practice is to make your AI usage auditable, not mysterious. That means documenting what you asked, what you fed it, what it produced, and what you did next. In academic work, that looks like keeping a “prompt trail” the same way you keep notes for a literature review: prompts, model/tool used, dates, key outputs, and which outputs were accepted, rejected, or revised (and why). In intelligence-style analysis, it’s basically tradecraft: a chain of custody for reasoning. If someone can’t reconstruct how you got from question → evidence → judgment, your conclusion is fragile no matter how slick it sounds.

So, when you “stack” or “layer” data with prompts, you’re not just iterating; you’re building a transparent workflow: a first prompt to define the question and terms, the next to surface hypotheses, the next to map what evidence would confirm or falsify each hypothesis, the next to test against sourced material, and only then a prompt to draft a judgment with confidence levels and stated assumptions. The goal isn’t to show off prompts; it’s to show your reasoning wasn’t a vibes-based séance.

Where people go off the rails is letting AI become the author of conclusions instead of the engine for structured thinking. A clean approach is: constrain the model, separate tasks, and force friction. Constrain it by telling it exactly what it may use (your notes, specific documents, a bounded dataset) and what it may not do (invent citations, infer facts not in evidence, fill gaps with “common knowledge”). Separate tasks by using different prompts for different cognitive moves (summarize, compare, extract claims, find contradictions, generate counterarguments, identify missing data, propose collection requirements) rather than one mega-prompt that produces a polished fairy tale. And force friction by routinely asking the model to argue against your emerging conclusion, list disconfirming evidence, and identify the assumptions that, if wrong, would flip the assessment. In academic settings, that friction shows up as better literature synthesis and cleaner argumentation; in professional settings, it shows up as fewer analytic faceplants when reality shows up holding a baseball bat.

To make those three moves concrete, a few rough sketches follow: a prompt-trail log, a layered prompt workflow, and a constraint-plus-friction scaffold.
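First, the prompt trail. Here is a minimal sketch in Python of an append-only log, assuming a JSONL file as the storage format; every field name, and the example model string, is my own illustrative choice, not a mandated standard from APA or anywhere else.

```python
# A minimal sketch of a "prompt trail": an append-only JSONL file recording
# what was asked, which tool answered, what came back, and what the analyst
# decided to do with it. All field names are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class PromptRecord:
    prompt: str           # exactly what was asked
    model: str            # model/tool used
    output_summary: str   # key output, condensed for the record
    decision: str         # "accepted" | "rejected" | "revised"
    rationale: str        # why that decision was made
    sources_provided: list = field(default_factory=list)  # what the model was allowed to use
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_record(record: PromptRecord, path: Path = Path("prompt_trail.jsonl")) -> None:
    """Append one record; JSONL keeps the trail readable and diff-friendly."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example entry: a rejected output, with the reason preserved for the audit.
log_record(PromptRecord(
    prompt="Summarize the three documents below without adding outside facts.",
    model="gpt-4o",  # hypothetical; record whatever tool you actually used
    output_summary="Summary cited a statute not present in the documents.",
    decision="rejected",
    rationale="Invented citation; violates the no-outside-facts constraint.",
    sources_provided=["doc_a.pdf", "doc_b.pdf", "doc_c.pdf"],
))
```

A spreadsheet or notebook works just as well; the point is that every accept, reject, and revise decision lands somewhere a reviewer can reconstruct later.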
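Second, the layered workflow. This sketch shows one possible decomposition into separate cognitive moves, assuming a placeholder ask() function wired to whatever model you actually use; the stage names and prompt wording are mine, not a fixed methodology.

```python
# A sketch of "stacking" prompts as separate analytic moves rather than one
# mega-prompt. Each stage's output feeds the next, so the chain of reasoning
# is visible. `ask` is a stub; the prompts are illustrative assumptions.
def ask(prompt: str) -> str:
    raise NotImplementedError("Wire this to your model/tool of choice.")

def layered_analysis(question: str, evidence: str) -> dict:
    results = {}
    # 1. Define the question and key terms before anything else.
    results["definition"] = ask(
        f"Restate this question precisely and define its key terms: {question}"
    )
    # 2. Surface competing hypotheses, not a single answer.
    results["hypotheses"] = ask(
        f"Given this question:\n{results['definition']}\n"
        "List plausible competing hypotheses. Do not rank them yet."
    )
    # 3. Map what evidence would confirm or falsify each hypothesis.
    results["evidence_map"] = ask(
        "For each hypothesis below, state what evidence would confirm it "
        f"and what would falsify it:\n{results['hypotheses']}"
    )
    # 4. Test against sourced material only; forbid outside knowledge.
    results["test"] = ask(
        "Using ONLY the material below, assess each hypothesis against its "
        f"evidence map. Flag gaps explicitly.\n\nMaterial:\n{evidence}\n\n"
        f"Evidence map:\n{results['evidence_map']}"
    )
    # 5. Only now draft a judgment, with confidence levels and assumptions.
    results["judgment"] = ask(
        "Draft an analytic judgment from the assessment below. State a "
        "confidence level and list the assumptions that, if wrong, would "
        f"change it:\n{results['test']}"
    )
    return results
```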
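Third, constraining the model and forcing friction can live in a couple of reusable scaffolds. Again, this is a sketch under my own assumptions; the allowed/forbidden lists should follow your actual materials and rules of engagement.

```python
# A sketch of "constrain the model" and "force friction" as reusable prompt
# scaffolds. The wording is illustrative; adapt it to your own sources.
CONSTRAINTS = (
    "You may use ONLY the documents provided in this conversation. "
    "You may NOT invent citations, infer facts not in evidence, or fill "
    "gaps with general knowledge. If the documents do not answer something, "
    "say so explicitly."
)

def friction_prompt(draft_conclusion: str) -> str:
    """Build a red-team prompt that argues against an emerging conclusion."""
    return (
        f"{CONSTRAINTS}\n\n"
        f"My working conclusion is: {draft_conclusion}\n\n"
        "1. Make the strongest case AGAINST this conclusion.\n"
        "2. List any disconfirming evidence in the documents.\n"
        "3. Identify the assumptions that, if wrong, would flip the assessment."
    )
```

Keeping these as saved templates rather than ad hoc typing means the same constraint text lands in every session, which is exactly the repeatability the audit trail needs.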