Activity

[Contribution activity calendar]

Owned by Toivo

Cursor Skool

95 members • Free

Learn how to use Cursor - The #1 community for Cursor users

Rule Of 300 Club

9 members • Free

A club for everyone doing the Rule of 300

Memberships

Vibe Coders Club

297 members • Free

MicroSaaS.org

434 members • Free

AI MicroSaaS Launchpad

1.1k members • Free

9toSaaS: Clone & Collect

62 members • Free

The No B.S. SaaS

137 members • Free

SaaS Life

186 members • Free

Valokuvaus

13 members • Free

Upscale (Free)

24.9k members • Free

Skool Add-ons

683 members • Free

3 contributions to University of Code
Built a memory system for chatbots that mimics how humans remember and forget
Sharing a quick breakdown of something I recently built that dramatically improved long-session performance in LLM-based agents.

The challenge: Most chatbots forget everything between sessions — or worse, overload memory with raw logs that aren't meaningful.

My solution: a structured memory system that's adaptive, contextual, and forgets what it should. Here's the architecture:

1. Memory Points - Each user message is turned into a memory point with:
- A content summary
- A dynamic importance score
- A category (like "personal info" or "casual")
- A timestamp
- A link to the user ID

2. Memory Decay - Memories fade over time, but not equally:
- Critical info (like names or settings) decays slowly
- Small talk fades fast
I use exponential decay with category-based rates.

3. Long-Term Summarization - When a memory fades below a threshold, the LLM generates a long-term summary (e.g., "User likes dark mode"). These are stored as part of the user's evolving profile.

4. Contextual Retrieval & Conflict Handling - At inference time, the bot pulls in recent memory points + long-term memory. If there are conflicts, it resolves them based on score, recency, and category.

Why it matters: It creates conversations that feel personalized and consistent, without storing everything. It's lean, adaptive, and avoids token overload.

If anyone else here is building AI agents or tools around personalization/memory — happy to trade notes or dive deeper!
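To make the decay and retrieval steps concrete, here's a minimal Python sketch of how such scoring could look. The category names, decay rates, and helpers (MemoryPoint, decayed_score, retrieve) are illustrative assumptions, not the author's actual implementation.

```python
import math
import time
from dataclasses import dataclass, field

# Illustrative category-based decay rates (per day); the post doesn't give
# actual values. Critical info decays slowly, small talk fades fast.
DECAY_RATES = {
    "personal info": 0.01,
    "casual": 0.5,
}
DEFAULT_RATE = 0.1

@dataclass
class MemoryPoint:
    user_id: str
    summary: str
    category: str
    importance: float  # dynamic importance score assigned at write time
    created_at: float = field(default_factory=time.time)

    def decayed_score(self, now: float | None = None) -> float:
        """Exponential decay with a category-based rate."""
        now = time.time() if now is None else now
        age_days = (now - self.created_at) / 86_400
        rate = DECAY_RATES.get(self.category, DEFAULT_RATE)
        return self.importance * math.exp(-rate * age_days)

def retrieve(points: list[MemoryPoint], threshold: float = 0.2, k: int = 5) -> list[MemoryPoint]:
    """Return the top-k memories still above the threshold; anything that has
    fallen below it would be handed to the LLM for long-term summarization."""
    live = [p for p in points if p.decayed_score() >= threshold]
    return sorted(live, key=lambda p: p.decayed_score(), reverse=True)[:k]
```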
0 likes • Jun 26
@Anmol Raj Okay yeah that makes sense, in that context mem0 is... I suppose you'd need to fight with mem0 quite a bit to get it to work, and add invalidation scheduling, warmness or something, either as labels attached to the memory or as an outside process... Yeah, neither is good, I can see it's just better to roll your own.

Overall sounds very cool! How did you do the decay in practice? Did you compute it during query or pre-compute + update it somehow? How was the retrieval for the LLM? My understanding is that it can be hard to get right.

Right now I'm learning Cursor and what software engineering looks like now that we have LLMs for code generation. Looking forward to trying out Gemini CLI. I'm working on a client project at work and did a new version of a P2P airsoft marketplace as a hobby project. Nothing technologically interesting in either, but I'm just baffled at how fast software development is now, especially if the LLM knows the problem domain and the tools well. The development speed on the P2P marketplace was... I don't even know how many X. Cursor did in like 5 minutes features that would've taken me say 1-2 hours. And I had 3 Agents running simultaneously.
1 like • Jun 27
@Anmol Raj Okay yeah that makes sense. Yeah, sounds very cool! Thank you for sharing this.

Yeah, Cursor is similar to v0. I think about Cursor as GitHub Copilot on steroids. v0 is great for figuring out the frontend, and then the actual application I'd build in Cursor.

My experience has been very positive using LLMs for coding. Especially the flagship models (o3, Sonnet 4, Gemini 2.5 Pro) are quite capable for programming. Obviously this depends on your project and stack, but yeah, they are getting quite good. I don't have a video or anything to share that would show it, unfortunately. I suppose I could record one.
Claude 4 models released!
Anthropic just released the next generation of Claude models: Claude Opus 4 and Claude Sonnet 4!

The models achieve SOTA performance on SWE-bench, with a score of 72.5% for Opus and 72.7% for Sonnet.

Along with the release, the models get a bunch of new abilities and features:
- Extended thinking with tool use (beta): Both models can use tools—like web search—during extended thinking, so they can alternate between reasoning and tool use to provide way better responses
- Parallel tool execution: Both models can now use tools in parallel
- Improved prompt adherence: The models now follow instructions more precisely than previous models
- Better memory capabilities: The new models are better at extracting and saving key facts to maintain continuity and build tacit knowledge over time

The pricing remains consistent with previous Opus and Sonnet models: Opus 4 at $15/$75 per million tokens (input/output) and Sonnet 4 at $3/$15. Both models are available on the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI.

The Anthropic API also received some updates that should help build more powerful AI agents:
- Code execution tool
- MCP connector
- Files API
- Prompt caching for up to one hour

Lastly, Claude Code is now generally available, and it supports background tasks via GitHub Actions and native integrations with VS Code and JetBrains. The models are already available in AI IDEs such as Cursor and GitHub Copilot.

Read the full announcement post here: https://www.anthropic.com/news/claude-4
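For anyone who wants to try the new models right away, here's a minimal sketch of calling Claude Sonnet 4 with extended thinking through the Anthropic Python SDK. The model ID and the thinking budget below are assumptions based on the announcement; check Anthropic's docs for current values.

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID for Claude Sonnet 4
    max_tokens=2048,
    # Extended thinking: the model reasons in "thinking" blocks before answering.
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[
        {"role": "user", "content": "Summarize the Claude 4 release in one sentence."}
    ],
)

# The response contains thinking blocks and text blocks; print only the text.
for block in response.content:
    if block.type == "text":
        print(block.text)
```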
Moral dilemma - The inside story
I picked up this piece: what would happen if you put your features on a 2-year cycle nowadays 🤔 would you survive with the best software being developed every day? Obviously it's a big no, but here is my take: "Moving fast and addressing the problem wouldn't put you at risk." Also, setting a time frame to merge code I feel is a best practice. Not really sure how long that should be, but 7 to 14 days for me is best - I happen to change my mind and remove some features 😉 Either way, @Raymond Adeniyi @Digitl-Alchemyst Steven-Watkins, what are your views?
2 likes • May 8
I think this heavily depends on the project and team. For example, trunk-based development is great for small teams that want to move fast. Merges are super small and bad merge conflicts are rare. Releases can be continuous, or you can pick specific moments for a release (let's say separate prod and dev branches in Git).

Side note: Shopify ships a new version 50(!) times per day, which is pretty impressive! They have some interesting blog posts on this: https://shopify.engineering/automatic-deployment-at-shopify and https://shopify.engineering/software-release-culture-shopify

On the other hand, software that has major consequences if it fails should, I think, be shipped way slower. Things like airplane control software and so on, where a failure could lead to people being injured.
Toivo Mattila
Level 1 • 1 point to level up
@toivo-mattila-6413
Software Engineer, MSc. Now using Cursor to build better software faster

Active 2h ago
Joined May 8, 2025
Lahti, Finland