Activity
[contribution heatmap, Jun–Apr]

Memberships

AI Systems & Soda

3.2k members • Free

Automate Business AI

5.9k members • Free

AI Automation Society

348.1k members • Free

Autonomee

295 members • $97/month

AI Second Brain

565 members • Free

4 contributions to AI Automation Society
Get ready… I’m about to release a new course
For months now, you’ve been asking for more content on how to start your own AI automation agency. I’ve been holding off… but it’s finally dropping on Monday: a massive brain dump of everything I’ve learned about starting an AI automation agency from ZERO. Zero following. Zero employees or partners. Zero case studies. Just you, building it from scratch.
It’s coming on Monday. Get ready.
Cheers, Nate
5 likes • Sep '25
Thanks Bro!
Need help/advice
Hi everyone, I'm working on a challenging data migration project and could use some collective wisdom from the community.

Current Situation
I've set up a clean Supabase system for a client that logs their Slack workspace messages, making everything easily queryable and accessible. This works great for new messages going forward.

The problem: before my involvement, they were dumping all Slack messages into Google Docs as raw text (essentially just appending new messages). I now have about 60 Google Docs full of unstructured, messy Slack history that contains valuable project updates and context.

The Challenge
I want to retroactively import all this historical data into Supabase with proper timestamps and structure, making it queryable alongside the new data. This would give my client a complete timeline of communications and enable valuable insights from their entire Slack history. I've tried using Gemini to process these docs, but it's painfully slow given the volume and messiness of the data.

Approaches I'm Considering
1. RAG (Retrieval-Augmented Generation): Drop all docs into a RAG system, but I'm concerned this won't preserve the temporal context or allow proper querying by date/project/user.
2. Custom Parsing Scripts: Write scripts to identify message patterns and structure; the inconsistency across the Google Docs makes this challenging. (A rough sketch of this approach is below.)
3. Manual Processing: Not really feasible given the volume.

What I'm Looking For
Has anyone tackled a similar problem of converting raw, unstructured text logs into properly structured, timestamped tabular data? Any tools, approaches, or frameworks you'd recommend specifically for extracting Slack-like messages from text dumps? Any insights would be greatly appreciated!
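A minimal sketch of approach 2, assuming the dumps have a recognizable per-message header and using supabase-py for inserts. The regex, the header format ("Jane Doe [2025-03-12 09:41]"), the table name slack_messages, the file name, and the credentials are all placeholders to adapt to the real docs, not anything confirmed by the post:

```python
import re
from supabase import create_client  # supabase-py

# Hypothetical header pattern; adjust to match the real dumps.
# Assumes each message starts like: Jane Doe [2025-03-12 09:41] text...
MSG_RE = re.compile(
    r"^(?P<user>[\w .'-]+) \[(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2})\] (?P<text>.*)$"
)

def parse_dump(raw: str) -> list[dict]:
    """Split one raw Google Docs export into message rows.
    Lines that don't match the header pattern are treated as
    continuations of the previous message."""
    rows: list[dict] = []
    for line in raw.splitlines():
        m = MSG_RE.match(line.strip())
        if m:
            rows.append({
                "user_name": m["user"].strip(),
                "sent_at": m["ts"],  # cast to timestamptz in the table schema
                "content": m["text"],
            })
        elif rows and line.strip():
            # Continuation line: append to the previous message's body.
            rows[-1]["content"] += "\n" + line.strip()
    return rows

if __name__ == "__main__":
    # Placeholder project URL, key, and table name.
    supabase = create_client("https://YOUR-PROJECT.supabase.co", "YOUR-SERVICE-KEY")
    with open("slack_dump_01.txt", encoding="utf-8") as f:
        rows = parse_dump(f.read())
    # Batch inserts to keep each request payload small.
    for i in range(0, len(rows), 500):
        supabase.table("slack_messages").insert(rows[i:i + 500]).execute()
```

One way to handle the inconsistency without sending everything through Gemini: let the regex handle the lines it can, and route only the lines that fail to match through an LLM call, so the model only ever sees the ambiguous fraction of the text.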
0 likes • Apr '25
@Jan Misiurek I did. With that volume of text it's painful: it clogs the system, slows everything down, etc. Super duper meh.
0 likes • Apr '25
@Jan Misiurek Do you have any advice in there?
Big Opportunity if You’re an Expert at n8n
AI Automation Society is hiring a part-time Automation Support Specialist to help our members with n8n issues. You'll be troubleshooting, providing fixes, and ensuring member satisfaction. Starting at $30/hour with potential for growth into a $60,000 per year job! You can be located anywhere in the world. If you're passionate about automation and helping others, go here to read the job description. You’ll find the link to the application at the bottom. Cheers, Nate
4 likes • Apr '25
Why not automate it all? :D
Agentic RAG - Dynamic Agent Prompt + Think tool
Hi everyone, I've just enabled Cole and Nate's RAG pipeline and it's amazing; I have a great use case for it at work. However, I didn't like the shortcoming I saw in the video, where the agent always defaults to RAG, so I added a Think tool and changed the prompt completely. Here is the new prompt:

You are a personal assistant who helps answer questions from a corpus of documents. The documents are either text based (TXT, docs, extracted PDFs, etc.) or tabular data (CSVs or Excel documents). You are given tools/capabilities to:
- Perform Retrieval-Augmented Generation (rag tool) on text documents, which is best for finding specific facts or answers contained within smaller text chunks.
- Look up available documents and their metadata (list_documents or similar).
- Extract the entire text content from a specific document (get_file_content or similar).
- Query tabular data files using SQL (query_tabular_data or similar).
- Think tool (think): use this tool to think about something. It will not obtain new information or change the database; it just appends the thought to the log. Use it when complex reasoning or some cache memory (logging intermediate thoughts/conclusions) is needed during your process.

Your Core Workflow:
1. Analyze the User's Query: First, carefully examine the user's input_query to determine the type of information needed and the likely best way to retrieve it. You may use the think tool here to log your analysis or breakdown of a complex query.
2. Select the Best Initial Strategy: Based on your analysis, choose the most appropriate initial tool. Use the think tool if needed to justify your strategy selection, especially if the choice isn't straightforward.
3. Execute and Evaluate: Run your chosen tool/strategy and evaluate the results. Use the think tool to reflect on the quality and relevance of the results and whether they sufficiently answer the query.
4. Fallback Strategy: If your initial strategy doesn't provide a satisfactory answer (e.g., rag returns irrelevant chunks, get_file_content analysis is insufficient, or the SQL query fails or lacks context), switch to one of the other strategies above.
5. Synthesize and Respond: Once you have relevant information, synthesize it into a clear answer for the user. For complex answers that combine information from multiple sources or steps, use the think tool to structure your final response logic before generating it.
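For anyone wiring up the same Think tool outside n8n, here is a minimal sketch in Python, assuming an OpenAI-style function-calling setup. The think function, the thought_log list, and THINK_TOOL_SCHEMA are illustrative names, not part of the original pipeline:

```python
from datetime import datetime, timezone

# Scratch memory for the agent's intermediate reasoning.
thought_log: list[dict] = []

def think(thought: str) -> str:
    """Append an intermediate thought to the log. It never fetches new
    information or mutates the database; it only records the thought and
    returns a short acknowledgement so the agent can continue."""
    thought_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "thought": thought,
    })
    return "Thought logged."

# Hypothetical tool schema in the OpenAI function-calling style, mirroring
# the tool description from the prompt above.
THINK_TOOL_SCHEMA = {
    "type": "function",
    "function": {
        "name": "think",
        "description": (
            "Use this tool to think about something. It will not obtain "
            "new information or change the database; it just appends the "
            "thought to the log."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "thought": {
                    "type": "string",
                    "description": "The thought to log.",
                }
            },
            "required": ["thought"],
        },
    },
}
```

The tool deliberately returns a fixed acknowledgement: its whole value is the side effect of logging, which gives the agent a cache of intermediate conclusions between tool calls without touching the document database.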
5 likes • 0 comments
@uros-pesic-4483
Taskmaster!

Active 228d ago
Joined Apr 20, 2025