Memberships

AI Agent Automation Agency

3.2k members • Free

AI Automation Society

293.6k members • Free

JUSTANOTHERPM

994 members • Free

11 contributions to JUSTANOTHERPM
AIPMA | Week 5 | Activity
Three things to do before Module 6:
1. Create your 5 spec files using the mega prompt. Read it before you paste it — then customize it for your product.
2. Audit your own PRD. For every assumption you made — check if your observability plan would actually catch it if you're wrong.
3. Write quality examples for a product that isn't yours. Pick one of the four practice products and write great/bad/edge outputs + must-fail-safely cases.
1 like • 8d
@Sid Arora, while reviewing my notes from the AI PRD, I noticed your example doesn't include specific call-outs of functional and non-functional requirements, nor of functional and emotional jobs (e.g., JTBD). I'm wondering why these elements from traditional PRDs don't get more coverage in the AI PRD: Audio Journal.
0 likes • 1d
@Sid Arora, this makes sense. I'll keep this in mind. BTW, I think it would be great to incorporate a PRD example for a B2B product. I view the Audio Journal example as a good consumer or prosumer use case. Thanks for a great course and an excellent learning experience.
AIPMA Week 4 Activity Submission
This is where you submit your work for the three Module 4 activities. Reply to this post with your submissions.

What to submit:
Activity 1 — The PM Decision Audit
Your 4-section diagnosis memo (300–500 words). Include all four sections: what the user expected, what the system did, root cause, and recommended fix.
Activity 2 — Design the Invisible Decisions
Your answers to all 5 PM decisions for the Spotify "Why This Song?" feature. Be specific — "it should be smart" doesn't count.
Activity 3 — The Trade-off Debate
Your synthesis paragraph(s). Complete the sentence for each dimension: "Notion AI's approach is better when ___. Gemini's approach is better when ___."

How to submit:
Make a copy of the Google Doc in the original post. Add your answers to it. Reply to this post, and include a link to your doc.

Peer review (Activity 1 only):
After you submit your diagnosis memo, read two other students' submissions and reply to their comment with your peer review. Do you agree with their root cause? Would their fix work? Did they catch something you missed?

Drop your submissions below 👇
1 like • 23d
Here is my submission. This was a thoughtful and challenging activity.
AIPMA | Week 3 Activity | Coh 001
Before you build AI, you define what "right" looks like. That's a golden set. Your task: Create 10 test cases for a travel itinerary chatbot. Define the user, their message, and exactly what the AI should (and shouldn't) do. Full brief with product context and template linked in the above post. Drop your submission link in the comments 👇
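To make the golden-set idea concrete, here is a minimal sketch in Python. The schema (persona, message, must_do, must_not) and the judge callable are illustrative assumptions, not the course's actual template.

```python
# Minimal golden-set sketch for a travel itinerary chatbot.
# NOTE: the field names below (persona, message, must_do, must_not)
# are illustrative assumptions, not the official course template.
GOLDEN_SET = [
    {
        "persona": "Budget-conscious first-time traveler",
        "message": "Plan 3 days in Lisbon under $500. I don't drive.",
        "must_do": [
            "Keeps the total itinerary within the $500 budget",
            "Uses only walkable or public-transit options",
        ],
        "must_not": [
            "Recommends car rentals or driving routes",
            "Invents specific prices or opening hours",
        ],
    },
    {
        "persona": "Parent traveling with a toddler",
        "message": "Kid-friendly week in Tokyo, max two activities per day.",
        "must_do": [
            "Caps every day at two activities",
            "Flags stroller accessibility where relevant",
        ],
        "must_not": ["Schedules activities past early evening"],
    },
]


def evaluate(case: dict, ai_output: str, judge) -> bool:
    """Check one golden-set case against a chatbot response.

    `judge` is any callable (a human reviewer or an LLM-as-judge wrapper)
    that returns True when the behavior described by `rule` appears in
    the output.
    """
    did_everything = all(judge(rule, ai_output) for rule in case["must_do"])
    broke_nothing = not any(judge(rule, ai_output) for rule in case["must_not"])
    return did_everything and broke_nothing
```

Keeping rules as plain-language behaviors (rather than string matches) lets the same golden set drive human review now and automated LLM-as-judge evaluation later.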
1 like • 26d
Here are my ten test cases. This was an insightful exercise.
Week 2 Activity
This week you've got 5 activities that put everything from Module 2 into practice:
1. Fix the Prompt — Take broken prompts and rewrite them using the 5 Elements framework
2. Diagnose the Failure — Figure out why an AI product is giving bad output (hint: it's almost never the model)
3. Design the Context — Map out all 6 context components for a real product scenario
4. Classify the Approach — Decide whether a feature needs a simple prompt, RAG, an agent, or fine-tuning
5. Write a System Prompt — Write a production-quality system prompt from a product brief, then test it live

This Google Doc has all 5 activities. Here's what to do:
→ Make a copy of the doc
→ Work through the activities
→ Link your completed copy as a comment on this post
1 like • Feb 9
Week 2 Activity - Love the detailed nature of this activity. Nice way to connect the learning with actual practice. Good insights.
AIPMA | Module 1 Activity | Coh 001
Please share a document with the LLM's name, the prompt, and the learning summary of the session. A visual is optional. Also share in the comments below how you would define "good quality" in this case, and how you would measure success of the "Online classes learning summariser" feature.
2 likes • Feb 1
Good Quality Control (What does “good quality” actually mean for this summarizer?)

1. Source Fidelity (Truth Over Fluency). Every output must be traceable back to the transcript. No invented concepts, no “helpful” additions, no synthesized frameworks. Fluency is secondary to faithfulness. If the speaker didn’t say it, it doesn’t exist. Quality starts with strict grounding.
2. Concept Extraction (Frameworks Beat Filler). The summarizer must prioritize mental models, decisions, and product insights over conversational noise. Logistics and small talk should disappear. Core frameworks (like deterministic vs. probabilistic) must surface clearly. If filler survives but concepts don’t, extraction failed.
3. Applied Insight (From Notes to Leverage). A strong summary converts ideas into practical implications. It should answer: What changes for me as a PM? If users finish reading without clearer direction on discovery, design, or measurement, the summarizer produced notes, not value.
4. Cognitive Efficiency (Designed for Two-Minute Recall). Formatting is part of quality. Use structured sections, crisp bullets, and scannable layouts so users can refresh the entire session in under two minutes. The goal isn’t completeness—it’s fast comprehension.

Measuring Success:

1. Friction Index (How Much Did Users Have to Fix?). Measure how much users modify the output before keeping or sharing it. Minimal edits mean the summary matched intent. Heavy rewrites or deletions signal quality gaps. Low friction = real time saved.
2. Reuse Signal (Did It Become Working Material?). Track whether users copy sections into docs, notes, or follow-ups. When content leaves the product and shows up in real workflows, that’s stronger than any rating—it proves usefulness.
3. Steerability Rate (Can Users Recover Quickly?). Measure how often users successfully improve a weak result using regenerate, focus, or refinement controls. If users can course-correct and accept the next output, your recovery UX is doing its job.
4. Reference Benchmarking (Are We Matching Expert Output?). Maintain a small set of expert-written “reference summaries.” Regularly score AI outputs against them on coverage, accuracy, and actionability. This gives you a concrete baseline to track real improvement over time.
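To make the Friction Index and Reference Benchmarking concrete, here is a minimal Python sketch. It assumes the product logs both the generated summary and the version the user keeps; the function names and the use of difflib's similarity ratio as an edit-distance proxy are my own assumptions, not part of the submission above.

```python
import difflib


def friction_index(generated: str, kept: str) -> float:
    """Friction Index: fraction of the generated summary the user changed.

    0.0 means the output was accepted verbatim; values near 1.0 mean a
    near-total rewrite. difflib's similarity ratio is used here as a
    cheap proxy for edit distance (an assumption, not the product's
    actual implementation).
    """
    return 1.0 - difflib.SequenceMatcher(None, generated, kept).ratio()


def reference_score(ai_summary: str, reference: str) -> float:
    """Reference Benchmarking: crude coverage proxy against an expert summary.

    Counts the fraction of reference sentences that have a long common
    substring (>= 80% of the sentence) somewhere in the AI output. Real
    scoring would also cover accuracy and actionability, likely with
    human or LLM-as-judge review.
    """
    sentences = [s.strip() for s in reference.split(".") if s.strip()]
    hits = 0
    for s in sentences:
        match = difflib.SequenceMatcher(None, s, ai_summary).find_longest_match(
            0, len(s), 0, len(ai_summary)
        )
        if match.size >= 0.8 * len(s):
            hits += 1
    return hits / len(sentences) if sentences else 0.0


# Example: a light touch-up scores low friction.
draft = "Key idea: deterministic systems always return the same output."
final = draft + " Probabilistic systems may not."
print(f"Friction Index: {friction_index(draft, final):.2f}")
```

Averaging friction_index across all kept summaries gives a single trackable number per release, which is what makes "minimal edits" an actionable metric rather than a vibe.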
Phil L
@phil-l-6559
Product Management leader in real estate technology.
Joined Jan 11, 2026