Activity

Memberships

Zero to Pitch

2k members • Free

AI Business Trailblazers Hive

12.7k members • Free

AI Developer Accelerator

10.8k members • Free

Eon City

397 members • $27/m

Agent Zero

1.8k members • Free

4 contributions to AI Developer Accelerator
Failed to set up Poetry Environment
I tried installing pipx, but "pipx install poetry" was still showing the error "invalid syntax". So I ran the command "pip install poetry==1.2.0" and that gave me this error:

anaconda-client 1.12.3 requires platformdirs<5.0,>=3.10.0, but you have platformdirs 2.6.2 which is incompatible.
conda 24.5.0 requires platformdirs>=3.10.0, but you have platformdirs 2.6.2 which is incompatible.

I am not sure how to sort this out. Kindly help.
0 likes • Aug '24
You need to change the version of platformdirs that is installed. Run "pip install platformdirs==3.10.0", then try installing poetry again.
What have you built/automated with CrewAI?
Trying to get some inspiration and ideas for my next crew...
2 likes • Aug '24
I want an AI agent that can do 3 things: 1) search for jobs on Upwork, 2) complete the job, and 3) deposit the profits into my bank account. That would be great.
Favorite Open Source, fully autonomous, general-purpose AI agents
What is your favorite agent application, available for install complete with a web UI, automatic task execution, local and API models, code generation and execution, tool library use, a web crawler, image analysis, a vector DB, and ideally local PC mouse/keyboard control? Either a system built with LangGraph, LangChain, Langroid, AutoGen, CrewAI, vanilla Python, or similar. Recently I've been testing AIlice: https://github.com/myshell-ai/AIlice
It works well, but the codebase is complicated, so I'm looking for alternatives. Some alternatives are:
https://github.com/daveshap/ACE_Framework
https://github.com/brainqub3/meta_expert
https://github.com/frdel/agent-zero
Which other packages can you recommend?
Local vs cloud: help me choose
I need a new system because mine is 8 years old. Over the next year and a half I will be learning programming, full stack, DS & ML while I start my business. I quit my job and plan to change lanes, so I am living on savings, and my highest concern is economic efficiency. I saw @tom-welsh-8986's post and got some clarity on running locally. The issue is, I'm torn between 4 options:
1. M3 Max MacBook Pro with 64 GB RAM + cloud
2. ProArt laptop + cloud
3. PC 1 = 14700 + 4090 + 128 GB DDR4 = ~$4,100
4. PC 2 = Threadripper 7960X + A6000 + 128 GB DDR5 = ~$8,10
5. Cloud compute only (Paperspace vs AWS vs Vertex) = a rough estimate of $1,000-$2,000 per year minimum, up to ~$5,000-$7,000 per year. (The cloud compute cost was calculated from the average heavy API usage of an individual running agents, and a near-accurate estimate of the tokens that need to be read and generated based on the context window and frequency.)

I completely accept that this system is going to be obsolete in 3 years max. Currently my savings would let me spend a maximum of $4K on compute (PC/cloud); I can borrow 4 more from my parents. That's it. So I cannot even think of local servers. Which of the above would best suit the following use cases?
———————————————————
# Context & use case
Privacy is not the greatest concern, but the system will handle a lot of sensitive information. I plan on running multiple agents simultaneously, one of which will constantly loop through the following:
1. Figure out the context and essence of my question, by predicting and refining what kind of answer I expect with every question I post.
2. Ask for clarification to ensure its assumptions align with reality.
3. Split the question I asked into multiple questions to ensure I get a comprehensive response.
4. Rephrase the questions to get an answer that aligns with the requirements.
5. Determine the order/sequence in which to ask the questions to generate the best answer.
6. Generate multiple responses in separate threads to determine which aligns with the requirements the most.
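The loop described above could be sketched roughly as follows. This is a toy outline only: `ask_llm` and every other name here are hypothetical stand-ins, not functions from any particular agent framework, and the model call is stubbed so the control flow runs.

```python
# Toy sketch of the question-refinement loop. `ask_llm` is a hypothetical
# placeholder for whatever local or API model call the agent framework provides.

def ask_llm(prompt: str) -> str:
    # Placeholder: a real agent would call a model here.
    return f"answer to: {prompt}"

def refine_and_answer(question: str, n_threads: int = 3) -> str:
    # 1) Infer the context/essence of the question.
    essence = ask_llm(f"Summarize the intent of: {question}")
    # 2) Ask for clarification on the assumptions behind it.
    clarification = ask_llm(f"What assumptions need checking for: {essence}")
    # 3) Split into sub-questions for a comprehensive response.
    sub_questions = [f"{question} (aspect {i})" for i in range(1, 4)]
    # 4-5) Rephrase the sub-questions and pick an ordering.
    ordered = sorted(ask_llm(f"Rephrase: {q}") for q in sub_questions)
    # 6) Generate several candidate answers; picking the longest is a
    #    crude stand-in for "best aligned with the requirements".
    candidates = [ask_llm(" ".join(ordered)) for _ in range(n_threads)]
    return max(candidates, key=len)

print(refine_and_answer("Which hardware should I buy?"))
```

With a real model behind `ask_llm`, each numbered step becomes one or more model calls, which is why this workload multiplies token usage (and cost) per user question.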
1 like • Aug '24
I recently bought a Dell Aurora R16 PC with an RTX 4070, and I highly recommend these PCs; excellent value for money. The 4070 is great value: its 12 GB of VRAM can run most 7B Ollama models with partial quantization. A 4090 costs three times as much but will only speed up inference by about 35%. To run a 70B Ollama model locally you need two A6000 GPUs; it's much cheaper to run that via an API.
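As a rough sanity check on the VRAM claims above, here is a back-of-envelope estimate (an assumed rule of thumb, not Ollama's actual allocator): weight memory is roughly parameters × bits-per-weight ÷ 8, plus ~20% overhead for the KV cache and runtime buffers.

```python
# Back-of-envelope VRAM estimate for running a quantized model locally.
# The 20% overhead factor is an assumption, not a measured constant.

def vram_estimate_gb(params_billion: float, bits_per_weight: int) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return round(weights_gb * 1.2, 1)                  # +20% for KV cache/buffers

# A 7B model at 4-bit quantization fits comfortably in 12 GB of VRAM:
print(vram_estimate_gb(7, 4))   # ~4.2 GB
# A 70B model even at 4-bit overflows a single 24 GB card:
print(vram_estimate_gb(70, 4))  # ~42.0 GB
```

By this estimate a 70B model at 4-bit needs on the order of 42 GB, which is why it spans two 48 GB A6000s comfortably but not a single consumer GPU.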
Matthew Kelly
2
15 points to level up
@matthew-kelly-5720
Self taught programmer who works as an electrical engineer by day and hacks during the night.

Active 17d ago
Joined Jun 26, 2024