how to organize fine-tuned models
hey all, thanks in advance ... I'm working on a small project that I hope to grow. It's a market awareness monitor that gathers news (serper.dev) based on a list of managed keywords. We're finding 100+ articles per day of mixed value. The user reads the articles in an inbox form factor (Airtable interface) at least once per day and enriches the data with:

- relevance to the business (none, low, med, high)
- priority for the business (1-5)
- tags (unlimited custom tags for now, currently under 20)
- comments (free text explaining the choices of relevance, priority, and tags)

I think this is ripe for supervised fine-tuning of a model and want to try OpenAI's tools for it. The goals, most important first, are (a) to add a reference case to my portfolio that includes fine-tuning, and (b) to automate the process of enriching the data with relevance, priority, tags, and comments.

1. Is this a good match for supervised fine-tuning? If so, what's the best way to go about it? Is it all trial and error, or are there systematic ways to choose the base model, number of training rows, etc.? I've read the basics but haven't tried this yet.

2. Is OpenAI the right choice for this? The client is ok to spend some money (<$500 to train, <$20/mo ongoing) if it results in better automatic news curation for them.

3. What is the right structure for supervised models? Should I train one model on all 4 outputs (relevance, priority, tags, and comments)? Or one model per output? Or distinct sets of 2, or overlapping sets of 3? How do you go about thinking through all the options?

Appreciate any guidance here, since it will almost certainly save me time and money experimenting in the dark 🙏
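For concreteness, here's a rough sketch of what I imagine one training row could look like if I went with a single model emitting all four fields as one JSON object per article. The chat-format JSONL is what OpenAI's fine-tuning API expects; the field names, prompt wording, and sample record below are just my assumptions based on the Airtable schema, not a settled design:

```python
import json

# Hypothetical enriched article record, mirroring the Airtable fields
# described above (field names are illustrative, not a prescribed schema).
article = {
    "title": "Competitor X raises $50M Series B",
    "snippet": "Competitor X announced a $50M round to expand into new markets...",
    "relevance": "high",
    "priority": 4,
    "tags": ["competitor", "funding"],
    "comments": "Direct competitor; the raise signals expansion into our segment.",
}

# Illustrative system prompt; the exact wording would need iteration.
SYSTEM_PROMPT = (
    "You are a market-awareness analyst. Given a news article, return JSON "
    "with keys: relevance (none|low|med|high), priority (1-5), "
    "tags (list of strings), comments (short rationale)."
)

def to_training_row(a: dict) -> dict:
    """Convert one human-labeled article into an OpenAI chat fine-tuning row."""
    target = {
        "relevance": a["relevance"],
        "priority": a["priority"],
        "tags": a["tags"],
        "comments": a["comments"],
    }
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Title: {a['title']}\nSnippet: {a['snippet']}"},
            {"role": "assistant", "content": json.dumps(target)},
        ]
    }

# One JSON object per line (JSONL) is the upload format for fine-tuning.
rows = [to_training_row(article)]
jsonl = "\n".join(json.dumps(r) for r in rows)
```

The alternative (one model per output) would just mean four variants of `to_training_row`, each with a narrower target, which is part of what I'm trying to weigh up.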