Activity
Mon
Wed
Fri
Sun
Jan
Feb
Mar
Apr
May
Jun
Jul
Aug
Sep
Oct
Nov
What is this?
Less
More

Memberships

Flow State

801 members • Free

AI Systems & Soda

2.7k members • Free

Ai Titus+

61 members • $8/m

AI Automation Circle

4.7k members • Free

Ai Titus

812 members • Free

AI Automations by Kia

21.8k members • $1/month

AI Automation First Client

726 members • Free

AI Automation (A-Z)

116.3k members • Free

AI Automation Society

202.4k members • Free

11 contributions to Ai Titus
Next Big Leap in LLM/AI...
Worth reading and keeping an eye on.. Introducing Nested Learning: A new ML paradigm for continual learning We introduce Nested Learning, a new approach to machine learning that views models as a set of smaller, nested optimization problems, each with its own internal workflow, in order to mitigate or even completely avoid the issue of “catastrophic forgetting”, where learning new tasks sacrifices proficiency on old tasks. The last decade has seen incredible progress in machine learning (ML), primarily driven by powerful neural network architectures and the algorithms used to train them. However, despite the success of large language models (LLMs), a few fundamental challenges persist, especially around continual learning, the ability for a model to actively acquire new knowledge and skills over time without forgetting old ones. When it comes to continual learning and self-improvement, the human brain is the gold standard. It adapts through neuroplasticity — the remarkable capacity to change its structure in response to new experiences, memories, and learning. Without this ability, a person is limited to immediate context (like anterograde amnesia). We see a similar limitation in current LLMs: their knowledge is confined to either the immediate context of their input window or the static information that they learn during pre-training. The simple approach, continually updating a model's parameters with new data, often leads to “catastrophic forgetting” (CF), where learning new tasks sacrifices proficiency on old tasks. Researchers traditionally combat CF through architectural tweaks or better optimization rules. However, for too long, we have treated the model's architecture (the network structure) and the optimization algorithm (the training rule) as two separate things, which prevents us from achieving a truly unified, efficient learning system.
5 likes • 3d
Very interesting! 🤔🤔🤔 maybe thanks Titus!
4 likes • 4d
I absolutely love this guy’s videos! Thanks!
Prototype to Production
Google just dropped 40+ page guide on taking your AI Agent from prototype to production. 100% free. 🙌🏻
1 like • 19d
🙏🙏🙏
Vibe Coding Hidden Dangers
I had a conversation with an Indian guy, who has built an agent that interviews IT people for jobs, and he was going at vibe coding giving examples of how it can be completely insecure and that they prefer to build things coding themselves without any AI tools. Even though I think he was taking it to far, this actually made me think about the potential security issues and how to prevent them. I would really appreciate if guys who came across this issues or prevented them shared their experience. From my side, I found some videos and sources to study this subject for myself. Sharing a talk I found: https://www.youtube.com/watch?v=XaosRsgGSX8
0 likes • 20d
Here is a short vid I found about setting up security in n8n - https://www.youtube.com/shorts/8ADnqkkM5a4 UPD Also n8n introduced a new node - Guardrails Node, which is quite helpful to make the workflow more secure.
1 like • 26d
Gemini?🤔
1-10 of 11
Olga Kukh
3
40points to level up
@olga-kukh-5315
Staff training courses developer; Sales and communication trainer; Instructional designer; Recently started my journey learning how to make AI agents

Active 12h ago
Joined Sep 16, 2025