UX Happy Hour IRL (NYC) is happening in 16 hours
Sneak peek of my new portfolio site (WIP)
I'm in the process of rebuilding my portfolio using AI-powered design and coding workflows. I've completed the homepage and wanted to release it early as a build-in-public project: https://lab.yatongwang.com.

### Tools I Use

- **Design:** Stitch with Google, Figma, Pencil
- **Primary IDE:** Cursor
- **Version Control:** GitHub
- **Deployment:** Netlify (CI/CD from `main` branch)
- **Coding & Engineering Guidance:** Claude and ChatGPT (for minor coding problems, Git/GitHub best practices, repo architecture, version control workflows, etc.)

Feel free to let me know what you think!
Non-Violent, Clean Communication for Agentic AI
Was just playing around with some thoughts in ChatGPT.

====

A practical guide to building agents that collaborate without burning energy

Agentic AI systems fail for the same reasons human teams do: over-context, unclear boundaries, moralized directives, and hidden agendas. The fix isn’t more intelligence—it’s clean interfaces. Here’s how to apply Non-Violent, Clean Communication (NVCC) to agentic AI so emergence can happen without stalls, loops, or energy loss.

=== The Core Principle ===

Give each agent enough context to act—no more, no less. Too little → failure. Too much → paralysis.

Clean communication is not cold. It’s non-violent because it avoids coercion, overload, and implicit control.

=== The 4 Rules of NVCC for Agents ===

1. Separate Observation from Interpretation

Bad: “Agent A is failing to prioritize correctly.”
Clean: “Agent A returned output X after input Y in 3.2s.”

Agents should receive facts, not judgments. Interpretation creates hidden pressure and cascading corrections.

2. State the Need, Not the Narrative

Bad: “We need better results because the system looks unreliable.”
Clean: “Goal: reduce error rate from 12% to <5% on task Z.”

Narratives add noise. Needs create direction.

3. Make Requests, Not Commands

Bad: “Fix this immediately and coordinate with all other agents.”
Clean: “Attempt solution A. Do not consult other agents unless confidence <0.6.”

Requests preserve autonomy. Autonomy enables emergence.

4. Explicitly Bound Responsibility

Bad: “Handle the issue end-to-end.”
Clean: “Your scope ends at generating options. Do not execute.”

Unbounded responsibility causes agents (and humans) to overreach, loop, or stall.

=== Why This Works ===

Clean interfaces prevent:
- Recursive awareness (“What are the other agents thinking?”)
- Moral load (“I must fix everything.”)
- Energy leakage (over-coordination)

They enable:
- Faster alignment
- Faster detection of non-alignment
- Emergent solutions no one pre-designed
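The four rules above can be sketched as a message format. This is a minimal, hypothetical illustration in Python (the class and field names are mine, not from any real agent framework): observations carry facts rather than judgments, the need is a measurable goal, the request includes an explicit escalation threshold, and the scope is bounded.

```python
from dataclasses import dataclass

@dataclass
class CleanRequest:
    # Rule 1: observation is a raw fact, not a judgment.
    observation: str                # e.g. "Agent A returned output X after input Y in 3.2s"
    # Rule 2: the need is a measurable goal, not a narrative.
    goal: str                       # e.g. "reduce error rate from 12% to <5% on task Z"
    # Rule 3: a request the agent may act on autonomously.
    request: str                    # e.g. "attempt solution A"
    consult_threshold: float = 0.6  # consult other agents only below this confidence
    # Rule 4: responsibility is explicitly bounded.
    scope: str = "generate options; do not execute"

def should_consult_others(confidence: float, msg: CleanRequest) -> bool:
    """Rule 3 in action: autonomy is preserved unless confidence drops."""
    return confidence < msg.consult_threshold

msg = CleanRequest(
    observation="Agent A returned output X after input Y in 3.2s",
    goal="reduce error rate from 12% to <5% on task Z",
    request="attempt solution A",
)
print(should_consult_others(0.8, msg))  # high confidence: act alone -> False
print(should_consult_others(0.5, msg))  # low confidence: escalate -> True
```

The point of the structure is that coordination cost is an explicit, tunable field rather than an implicit social pressure: an agent never has to guess whether it should loop in the others.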
Airborne w/AI
While airborne on my way to my holiday destination, instead of watching Home Alone, I used the time to explore a festive idea as a design exercise. I'm sharing the result below: nothing fancy, just a joyful little experiment. I hope you are all enjoying your holiday season!

https://sparkle-path-plan.lovable.app
How to talk about AI - Interview Cheat Sheet
In the support call, @Danny Setiawan mentioned the following video, "Why Andrej Karpathy Feels 'Behind'": https://www.youtube.com/watch?v=fyHnGHxGuhI. I realized it has a very strong framework for how we can talk and think about AI.

I put the video transcript into GPT and played around with it: I asked it to summarize the concepts and to apply them to the UX role. I also did some quick-fire rounds on AI interview questions.

The Cheatsheet: https://docs.google.com/document/d/1hTPZPahta5v6lz0dLz-CWMdTV4nth9fol_xaiqhz81c/edit?usp=sharing

Full Convo w/GPT: https://chatgpt.com/share/695e7327-6bfc-8002-a4fa-c8fff57c187f
UX Support Group
skool.com/ux-support-group-6932
The Accelerator for Future-Ready UX Leaders