
Memberships

AI Developer Accelerator

10.7k members • Free

2 contributions to AI Developer Accelerator
OpenAI Function Calling / tools feature
Hi everyone 👋, I am curious whether anyone has used the OpenAI Function Calling / tools feature. Here is a quick 5-minute video that explains it well: https://www.youtube.com/watch?v=Qor2VZoBib0 Basically, the functions are normal functions in your code: you pass the unstructured input/request and your function definitions/descriptions to the LLM, and the LLM comes back with suggestions on which functions (if any) in your own code should be called to process the request, as well as the structured arguments (!) you would pass to those functions. While AI frameworks are great for many use cases, this approach feels leaner and more transparent. In some cases, introducing a full AI framework into an existing production environment just isn't practical: maybe your team works primarily in C#, Node.js, or PHP, and you simply want to sprinkle in some AI-driven decision-making without adding major dependencies or new infrastructure. I am curious what others think about it. Has anyone else tried using OpenAI Function Calling (or a similar feature from another AI provider's API) on a project in production? If so, how did it work out for you in terms of reliability or performance?
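For anyone wondering what the request/response looks like in code, here is a minimal sketch using the OpenAI Python SDK; the function name, schema, and model choice are just illustrative placeholders, not from a real project:

```python
# pip install openai  (this sketch assumes the openai>=1.x Python SDK)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe one of your own functions to the model as a "tool"
# (name and parameters here are made up for illustration)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_shipping_status",
            "description": "Look up the shipping status of a customer order",
            "parameters": {
                "type": "object",
                "properties": {
                    "orderId": {"type": "string", "description": "The order number"},
                },
                "required": ["orderId"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Where is my order #4545?"}],
    tools=tools,
)

# The model only *suggests* a call; nothing runs until your code decides to run it
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name)       # e.g. "get_shipping_status"
    print(call.function.arguments)  # e.g. '{"orderId": "4545"}' -- a JSON string
```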
1 like • 21d
@Ignacio Huerta - one example is a customer service AI chat. A customer may ask to cancel or refund their order, get the order shipping status, ask for information about specific products, or ask general questions like the company returns policy. In your app you have functions (your own code) that handle order refunds and cancellations, check shipping status, get product information, etc. You pass your function descriptions and their arguments/parameters to the LLM (OpenAI API) along with the customer request (unstructured text). The LLM responds with a suggestion on which function your code should call and with the parameters formatted for that function.

For example, the customer may ask "Will this replacement part work with my refrigerator model XYZ?" The OpenAI API looks for the best function call match, based on the function descriptions you provided to it, and its response would include a call to check_product_compatibility (the name of the function in your code) with parameters {"refrigeratorModel":"XYZ", "replacementPart":"partABC"} - the replacement part info can be grabbed from the product detail page the customer is looking at. Your function can then get the specs for that product from your product database and compare them against the customer request (pushing that data to another LLM / OpenAI API call, if necessary). Or the customer may ask "Cancel order #4545" - the OpenAI API's response would include a call to cancel_order (the name of the function in your code) with the parameter {"orderId": "4545"}.

It is up to you (your code) to call that function - the OpenAI API response only suggests which function in your code should be called - and to do all the proper verification, authentication, etc. So if order #4545 does not exist for that customer, or if the order can't be cancelled, your code would respond accordingly. This seems to me like a very lean and controlled approach, which I like, especially when delivering quality and predictability to clients in production.
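To make the dispatch side of this concrete, here is a rough Python sketch of how your code might route the model's suggested call back to your own functions. The cancel_order and check_product_compatibility bodies are placeholder stubs, and the real verification/authentication logic stays entirely on your side:

```python
import json

# Stub business logic -- your real code would hit your order system / product DB
def cancel_order(orderId: str) -> str:
    # verify the order exists, belongs to the authenticated customer, and is cancellable
    return f"Order {orderId} has been cancelled."

def check_product_compatibility(refrigeratorModel: str, replacementPart: str) -> str:
    # look up the part and model specs in your product database and compare them
    return f"{replacementPart} fits refrigerator model {refrigeratorModel}."

# Map the function names you exposed to the LLM onto your real functions
DISPATCH = {
    "cancel_order": cancel_order,
    "check_product_compatibility": check_product_compatibility,
}

def handle_tool_call(call) -> str:
    """Route a tool call suggested by the LLM to your own code."""
    fn = DISPATCH.get(call.function.name)
    if fn is None:
        return "Sorry, I can't help with that request."
    args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
    return fn(**args)  # your code decides whether and how to actually run it
```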
Stuck in an AI coding loop - anyone familiar with Cursor?
Hey everyone, I'm building an AI-powered app and I keep running into technical issues that I can't seem to get past. Every time I try to fix something with AI assistance, I hit new errors and broken behavior. It's become a bit of a frustrating loop. I'm working with Cursor and AI coding tools, but I'm clearly missing something in how I'm approaching the technical implementation. If anyone here has experience building with AI tools and wouldn't mind chatting about general troubleshooting approaches, I'd really appreciate it. Happy to exchange messages or grab a quick informal chat if you have time. I'd be grateful for any guidance from someone who's been through similar challenges. Thanks! 🙏
1 like • 22d
Yep, happens all the time. In addition to the advice mentioned in the previous comments about moving in smaller chunks and testing each step, here is what has helped me:
- When Cursor is stuck in a dysfunctional loop, use ChatGPT: copy-paste the error / explain the problem there and see what it comes up with. If any solution seems reasonable, ask Cursor to implement it. ChatGPT often has other or better ideas.
- Try switching to a different Cursor model. Open a new chat window, pick a different model, feed it the context / markdown and the problem / errors, and see what it comes up with.
As frustrating as it may seem, the bright side is that developer jobs are not going away any time soon :-)
Elena K
@elena-kor-5538
A versatile professional skilled in full-stack dev, product/project management, and solving tough problems with elegant, entrepreneurial solutions.

Active 9d ago
Joined May 29, 2025