
Memberships

The AI Agency

303 members • Free

AI Developer Accelerator

10.8k members • Free

11 contributions to AI Developer Accelerator
How to pretrain an LLM with documents?
I have a lot of documents specific to a domain and I want to pretrain the model with that domain knowledge. How can I achieve this? Thanks in advance.
0 likes • May '24
Maybe have a look at SBERT; see more at https://youtu.be/W735DaBOKKo?feature=shared
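Since the comment above points to SBERT: a common lighter-weight alternative to actually pretraining a model on domain documents is to embed the documents and retrieve relevant passages at query time. A minimal sketch of the chunking step, with the embedding call hedged in comments (the `docs` list and the model name are illustrative assumptions, not from the thread):

```python
# Sketch: split domain documents into overlapping passages, which can
# then be embedded for retrieval instead of pretraining a model.

def chunk(text, size=500, overlap=50):
    """Split a document into overlapping character chunks."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

docs = ["..."]  # hypothetical: your domain documents as strings
passages = [c for d in docs for c in chunk(d)]

# Hedged: embedding needs the sentence-transformers package and a model
# download, so it is only sketched here.
# from sentence_transformers import SentenceTransformer
# model = SentenceTransformer("all-MiniLM-L6-v2")
# embeddings = model.encode(passages)
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one passage.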
Brain the size of a planet...
If I were Marvin the Paranoid Android from Douglas Adams' "The Hitch-hiker's Guide to the Galaxy", my morning would have gone something like: "Funny, how just when you think life can't possibly get any worse it suddenly does." I have spent the past hour trying to work out why my new crew wasn't working against my Ollama Llama 3 when it was working earlier. Typically I jump straight in and look at all the possible issues: libraries, bad code, numerous reads of woefully inadequate blog posts. And then I had an epiphany and checked the first thing I should have checked... YES, THE SERVICE WASN'T RUNNING!!! I had stopped Ollama earlier to delete a model and didn't restart it. Moral of the story: sometimes it's the little things that get you, not the code.
0 likes • May '24
Yeah, at my work problems very often turn out to be somewhere other than I expected.
What code Assistant do you like best?
I'm very curious what everyone is using when writing code for CrewAI. Definitely leave a comment to give context if you want. Since you can't pick multiple options, leave the other ones you use in the comments.
Poll
38 members have voted
1 like • May '24
I am mostly using locally running models, currently: deepseek-coder, starcoder, llama-coder, mixtral-8x7b, phi3. Still, the local models are not convincing and will hopefully get better in time. Curious about Gemma 2, coming in June.
Any tips to run LLMs locally ?
Hey guys, do you have any tips on how to run projects locally? I kind of struggle with this right now... I am using a laptop with 16 GB (15.3 usable) of RAM and a 4-core Intel i5 (2.60 GHz) processor running Ubuntu. No Nvidia graphics card, unfortunately... I am trying to use only small models such as phi, phi3, or llama2 with Ollama, but most of the time it simply gets stuck while running the code or returns weird characters instead of the agents' work results. I don't have any errors in the code; the program runs well in the beginning, but it usually freezes after 4-5 minutes...
1 like • May '24
@Tom Welsh It's also possible to run LLMs on it; please see: https://www.jetson-ai-lab.com/tutorial_nano-llm.html The Nano is still available until Jan 2027.
1 like • May '24
@Alex K Some years ago I used the Jetson Nano Developer Kit (4 GB) and did object detection on it. From @Tom Welsh's post I see that it is available until Jan 2027.
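For the CPU-only, 16 GB setup described in this thread, one knob worth knowing is Ollama's `num_ctx` option, since a smaller context window reduces memory use. A minimal sketch that builds a request for Ollama's local HTTP API; the payload fields follow Ollama's `/api/generate` endpoint, and actually sending it (commented out) assumes a running server with phi3 pulled:

```python
def build_generate_request(model, prompt, num_ctx=2048):
    """Build a payload for Ollama's /api/generate endpoint.
    A small num_ctx keeps memory use down on a CPU-only laptop."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
        "options": {"num_ctx": num_ctx},
    }

payload = build_generate_request("phi3", "Say hello in one word.")

# Hedged: sending requires a running Ollama server (default port 11434).
# import json, urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

If the laptop freezes after a few minutes, watching RAM while lowering `num_ctx` or switching to a smaller quantized model is a cheap first experiment.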
Ollama with Llama3-gradient
Capable of up to 1M tokens of context, but only with >100 GB of GPU memory. See https://ollama.com/library/llama3-gradient
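The >100 GB figure can be sanity-checked with a back-of-the-envelope KV-cache estimate. A minimal sketch, assuming Llama-3-8B-style attention shapes (32 layers, 8 KV heads via grouped-query attention, head dim 128, fp16); these defaults are my assumptions for illustration, not taken from the model card:

```python
def kv_cache_bytes(n_tokens, n_layers=32, n_kv_heads=8,
                   head_dim=128, dtype_bytes=2):
    """Estimate KV-cache size: a K and a V vector (factor 2)
    per layer, per KV head, per token, in fp16 (2 bytes)."""
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * n_tokens

gib = kv_cache_bytes(1_000_000) / 2**30
print(f"{gib:.0f} GiB")  # prints "122 GiB" with these assumed shapes
```

That 122 GiB is for the cache alone, before the ~16 GB of fp16 weights, which is roughly consistent with the ">100 GB GPU memory" note above.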
Thomas Block
Level 2 (8 points to level up)
@thomas-block-5151
Sw Engineer

Active 5d ago
Joined May 2, 2024