Activity

Memberships

AI Developer Accelerator

10.8k members • Free

6 contributions to AI Developer Accelerator
Any tips to run LLMs locally?
Hey guys, do you have any tips on how to run projects locally? I kind of struggle with this right now... I am using a laptop with 16 GB of RAM (15.3 GB usable) and a 4-core Intel i5 (2.60 GHz) processor running Ubuntu. No Nvidia graphics card, unfortunately... I am trying to use only small models such as phi, phi3 or llama2 with Ollama, but most of the time it simply gets stuck while running the code or returns weird characters instead of the agents' work results. There are no errors in the code; the program runs well at the beginning, but it usually freezes after 4-5 minutes...
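For context, here is a minimal sketch of the kind of local setup described above, using the ollama Python package; the model name, prompt, and the num_ctx / num_thread values are illustrative assumptions, not settings recommended in this thread. Streaming the reply makes it easier to tell whether the model is still generating tokens or has actually stalled.

```python
# Minimal local test with the ollama Python package (pip install ollama).
# Assumes the model has already been pulled, e.g. `ollama pull phi3`.
import ollama

stream = ollama.chat(
    model="phi3",  # small model, as mentioned in the post
    messages=[{"role": "user", "content": "Summarize what Ollama does in one sentence."}],
    options={
        "num_ctx": 2048,   # smaller context window -> less RAM pressure
        "num_thread": 4,   # match the 4-core i5 described above
    },
    stream=True,           # print tokens as they arrive instead of waiting
)

for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()
```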
0 likes • May '24
@Jason Rennie It works, but slowly... And when running a bunch of agents it's painfully slow (10-20 minutes between agents' replies)... And what is most annoying is that the agents' replies (the green text) are usually meaningless - no calculations, hallucinations, etc...
0 likes • May '24
@Paul Miller thanks for these details. I'm still window shopping... About the desktop-as-a-server option - how is the speed when it's accessed from another device, over the internet, from another location? I am also looking for something that is not too difficult to carry around... - I guess the best option would be a MacBook Pro with an M3 and 128 GB (but it's insanely expensive), and I expect the M4 to be even more expensive... - as a second (kind of mobile) option I was considering a mini PC... any thoughts on this idea?
biggest challenge - the hardware needed to run crews
In my opinion, the most challenging aspect of running crews of AI agents is the hardware needed to do it, as costs can easily add up while testing various modifications to the code.
- Having a powerful machine seems to be a must (using services like Lightning AI can only be a temporary solution, as paying $50-100 daily can easily be the case).
- Another cost can be generated by using paid models (like OpenAI) - here too, bills can skyrocket quickly.
- Using Lightning AI in combination with OpenAI would be extremely expensive.
(1) The best-case scenario is running open source models locally. But not all models will work on every machine; in this setup, usually only the smallest models can be used.
(2) The second-best option would be either running paid models locally or running (more advanced) open source models on virtual machines in the cloud.
(3) The most expensive situation would be running paid models on virtual machines in the cloud.
But the trick is finding the right model for each project. Doing so requires lots of testing... which will eventually (potentially) generate quite high costs. Personally, I only managed to replicate one CrewAI example, using a Lightning AI studio (free plan). After running it once, most of the free credits were gone... 😆 So if I plan to run it again, I would need to buy a subscription... What are your thoughts on this?
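As a rough sketch of scenario (1) above - wiring a CrewAI agent to a local open source model served by Ollama instead of a paid API - something like the following should work, assuming a 2024-era CrewAI release that accepts a LangChain LLM object; the model name, role, and task text are made up for illustration.

```python
# Run a tiny crew against a local Ollama model, so no per-token API costs apply.
from crewai import Agent, Task, Crew
from langchain_community.llms import Ollama

local_llm = Ollama(model="llama2")  # any model already pulled with `ollama pull`

researcher = Agent(
    role="Researcher",
    goal="Summarize the trade-offs of running LLMs locally vs. in the cloud",
    backstory="A pragmatic engineer on a tight budget.",
    llm=local_llm,   # point the agent at the local model instead of OpenAI
    verbose=True,
)

task = Task(
    description="Write a short cost comparison of local vs. cloud LLM setups.",
    expected_output="A few bullet points comparing the two options.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
print(crew.kickoff())
```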
CrewAI Tutorial for Beginners: Learn How To Use Latest CrewAI Features
Gear up to unlock AI's potential in our CrewAI Beginner's Tutorial. You'll grasp the newest CrewAI features to orchestrate AI agents with precision and craft workflows that tackle tasks effortlessly.
🎓 Learn to:
- Integrate major CrewAI updates for enhanced functionality
- Initiate projects and manage dependencies with confidence
- Design and assign tasks to AI agents seamlessly
- Implement tools and callbacks for efficient operations
- Execute Crew runs and marvel at your AI-driven results
For queries or a walk-through, drop a comment. Ready to transform challenges into automated tasks with CrewAI? This is the video for you!
Source code: https://github.com/bhancockio/crewai-updated-tutorial-hierarchical
0 likes • May '24
@Brandon Hancock thanks for mentioning that Ollama does not support async tasks. I was trying to replicate this tutorial locally and it didn't work... 🙂
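As a small sketch of the workaround implied by this comment: CrewAI tasks have an async_execution flag, and leaving it off is the simple fallback when the local Ollama backend does not handle async tasks. The agent below is assumed to be defined as in the earlier local-model sketch; the task text is illustrative.

```python
from crewai import Task

research_task = Task(
    description="Research the latest CrewAI features.",
    expected_output="A short bullet list of the new features.",
    agent=researcher,        # an Agent wired to a local Ollama model, as sketched earlier
    async_execution=False,   # keep the task synchronous when running against Ollama
)
```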
YouTube channel update and small favor to ask
Hey team! I just got back from a quick vacation in Mexico with my better half – needed to recharge those creative batteries! Even with the beach and margs, I couldn't stop thinking about ways to level up our channel and new video ideas. After a few too many margs, I realized I should just ask you guys instead of guessing. Genius, right?

So now that I'm back and ready to start pumping out some new content for you – I’ve put together a super quick survey. This is your chance to let me know what's the biggest challenge you’re facing right now with AI. Here's the link to the survey: https://forms.gle/u6KhaB4si1nD3eUy8 It’ll only take 5 minutes, but your feedback will directly shape the content I produce for you guys on YouTube.

Thanks a ton for your help! I’m all fired up to dive back into creating content that’s tailored just for you.

Cheers,
Brandon Hancock

P.S. Thank you guys so much for all the support you've given me and my family over the past year! I can't wait to see how much we grow in 2024!
0 likes • May '24
done
Hello
Hi everyone, this is a very good topic for a community. Thanks, Brandon, for setting this up!! Looking forward to interesting discussions.
Alex K
1
3 points to level up
@62625609
self-taught developer (mostly Python), very interested in new technologies (AI, Blockchain)

Active 184d ago
Joined May 2, 2024