Activity

[Contribution activity heatmap, Jan–Nov]

Memberships

Automation Incubator™ • 42.8k members • Free
AI Alchemists • 310 members • Free
Early AI-dopters • 770 members • $59/month
AI Automations by Jack • 1.4k members • $77/month
AI Automation Society • 202.4k members • Free
AI Workshop Lite • 12.5k members • Free
Builder’s Console Log 🛠️ • 1.2k members • Free
Tech Snack University • 12.9k members • Free
AI Accelerator • 16.1k members • Free

73 contributions to Ai Titus
Google Competing with N8N?
Very interesting, worth checking out... https://workspace.google.com/blog/product-announcements/introducing-google-workspace-studio-agents-for-everyday-work
1 like • 2h
@Isaac Tut 💯
🎅🏻 Advent of Agents 2025
25 days to master AI Agents with Gemini 3, Google ADK, and production templates. Daily tutorials with copy-paste code. Start here: read the Introduction to Agents white paper, 100% free. 🙌🏻
1 like • 2d
@Rey Bond My pleasure.
1 like • 2d
@Frank van Bokhorst 💪🏻🦾
Next Big Leap in LLM/AI...
Worth reading and keeping an eye on...

Introducing Nested Learning: A new ML paradigm for continual learning

We introduce Nested Learning, a new approach to machine learning that views models as a set of smaller, nested optimization problems, each with its own internal workflow, in order to mitigate or even completely avoid the issue of “catastrophic forgetting”, where learning new tasks sacrifices proficiency on old tasks.

The last decade has seen incredible progress in machine learning (ML), primarily driven by powerful neural network architectures and the algorithms used to train them. However, despite the success of large language models (LLMs), a few fundamental challenges persist, especially around continual learning, the ability for a model to actively acquire new knowledge and skills over time without forgetting old ones.

When it comes to continual learning and self-improvement, the human brain is the gold standard. It adapts through neuroplasticity — the remarkable capacity to change its structure in response to new experiences, memories, and learning. Without this ability, a person is limited to immediate context (like anterograde amnesia). We see a similar limitation in current LLMs: their knowledge is confined to either the immediate context of their input window or the static information that they learn during pre-training.

The simple approach, continually updating a model's parameters with new data, often leads to “catastrophic forgetting” (CF), where learning new tasks sacrifices proficiency on old tasks. Researchers traditionally combat CF through architectural tweaks or better optimization rules. However, for too long, we have treated the model's architecture (the network structure) and the optimization algorithm (the training rule) as two separate things, which prevents us from achieving a truly unified, efficient learning system.
3 likes • 3d
@Titus Blair 🤔
2 likes • 14d
@Frank van Bokhorst Most definitely.
1 like • 5d
@Enoch Adebisi 👀
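The announcement quoted above centers on catastrophic forgetting: naively continuing to train a network on a new task erodes what it learned on earlier ones. The sketch below is a minimal, illustrative reproduction of that failure mode only, not Google's Nested Learning code; the synthetic tasks, the tiny NumPy MLP, and the training loop are all assumptions made just to demonstrate the effect.

```python
# Toy illustration only -- NOT Google's Nested Learning implementation.
# A tiny NumPy MLP is trained on synthetic task A, then naively trained on a
# conflicting task B; accuracy on task A collapses ("catastrophic forgetting").
import numpy as np

rng = np.random.default_rng(0)

def make_task(shift):
    """Synthetic 2D binary classification: label = [x0 + shift * x1 > 0]."""
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(float)
    return X, y

def init_params():
    return {
        "W1": rng.normal(scale=0.5, size=(2, 16)), "b1": np.zeros(16),
        "W2": rng.normal(scale=0.5, size=(16, 1)), "b2": np.zeros(1),
    }

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])                            # hidden layer
    prob = 1.0 / (1.0 + np.exp(-(h @ p["W2"] + p["b2"])[:, 0]))   # sigmoid output
    return h, prob

def train(p, X, y, lr=0.5, epochs=500):
    """Full-batch gradient descent on binary cross-entropy; updates p in place."""
    for _ in range(epochs):
        h, prob = forward(p, X)
        err = (prob - y)[:, None] / len(y)           # dLoss/dlogits
        dh = (err @ p["W2"].T) * (1.0 - h ** 2)      # backprop through tanh
        p["W2"] -= lr * h.T @ err
        p["b2"] -= lr * err.sum(axis=0)
        p["W1"] -= lr * X.T @ dh
        p["b1"] -= lr * dh.sum(axis=0)

def accuracy(p, X, y):
    return ((forward(p, X)[1] > 0.5) == y).mean()

Xa, ya = make_task(shift=+2.0)   # task A
Xb, yb = make_task(shift=-2.0)   # task B: conflicting decision boundary

params = init_params()
train(params, Xa, ya)
print("task A accuracy after training on A:", accuracy(params, Xa, ya))
train(params, Xb, yb)            # naive continual update, no mitigation
print("task A accuracy after training on B:", accuracy(params, Xa, ya))
print("task B accuracy:", accuracy(params, Xb, yb))
```

Trained sequentially like this, the model's task A accuracy drops sharply once it has been fitted to task B, which is the behaviour that continual-learning approaches such as Nested Learning set out to mitigate.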
Mišel Čupković
Level 5 • 252 points to level up
@bili-piton-3689
It's not a bug, it's an unexpected learning opportunity.

Active 33m ago
Joined Aug 19, 2025
INTP
Dubai