Activity
[Contribution heatmap: Mon–Sun × Jan–Nov]

Memberships

Q's Mastermind

5.1k members • Free

AI Assistant Lab | Sense

257 members • Free

AI Mastery Alliance

577 members • $27/m

AI Agents Academy

383 members • Free

Ai Automation Vault

14.4k members • Free

Agent Video Mastery

609 members • Free

The Chatbot Challenge

1.5k members • Free

Russ Ward’s AI Mastermind

404 members • Free

Aminos Community

6.1k members • Free

715 contributions to Assistable.ai
BYOK
If I bring my own key, does it still cost the same as if I used Assistable's key?
0 likes • 1d
@Jace Nelson It works better for larger prompts since it has a larger context window.
1 like • 11h
@Jace Nelson I use 4.1 for voice and 4.1 mini for chat. I normally use 2 separate bots for voice and chat.
Tooling Updated UI - Preparing For More Functionality
**This post is for the technical builders**

For those who don't use tooling: it's the actual key for enterprise use cases and the biggest differentiator, even internally, as you chase big projects and clients. We have set up abilities using tools, middleware, and external databases that you wouldn't even believe, and they are very high ticket (think north of $50k). I suggest it's something to take a look at if you haven't before.

As we build alongside you for fulfillment, we noticed some limitations that needed to be addressed. Here is the basis of the additional functionality: POST / PUT / PATCH / GET / DELETE, execution type (proxy, direct, workflow execution), headers, timeout in ms, and a tool menu flattened to a page instead of a popup.

Proxied tool calls are your traditional tool call for middleware as you see it today. Output is wrapped in an `args` object alongside a `meta_data` object. These are best suited for middleware usage (Make, n8n, BuildShip, etc.) because you get not only your agent's output but also the location ID, contact ID, etc.

Direct tool calls are exactly that: we don't wrap the request in anything; we send it raw, exactly as you have configured it. This is for when you want to attach directly to something like an MLS or a niche API. Configure headers (variable friendly), body parameters (and soon query parameters), and the HTTP method, and we'll send the request on your behalf just as you would anywhere and return the direct output.

Workflow execution is for the internal workflow engine we are adding, so we can supply you with a middleware if you don't already have one, or if you're looking for something more native to our functionality.

More to come. Happy Thursday!
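To make the proxied vs. direct distinction concrete, here is a minimal sketch of the two payload shapes the post describes. The `args` / `meta_data` wrapper and the `location_id` / `contact_id` fields come from the post; the helper names and example values are illustrative assumptions, not Assistable's actual implementation.

```python
import json

def build_proxied_payload(agent_output: dict, location_id: str, contact_id: str) -> dict:
    """Proxied call: agent output wrapped in `args` plus a `meta_data` object,
    suited for middleware like Make or n8n."""
    return {
        "args": agent_output,
        "meta_data": {
            "location_id": location_id,
            "contact_id": contact_id,
        },
    }

def build_direct_payload(agent_output: dict) -> dict:
    """Direct call: the raw body exactly as configured -- no wrapper at all."""
    return agent_output

# Hypothetical agent output for an MLS-style lookup.
output = {"street": "123 Main St", "max_price": 450000}

print(json.dumps(build_proxied_payload(output, "loc_123", "con_456"), indent=2))
print(json.dumps(build_direct_payload(output), indent=2))
```

The practical difference: middleware usually needs the routing context (`meta_data`) to look up the contact, while a third-party API expects only the exact body its spec defines.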
Tooling Updated UI - Preparing For More Functionality
1 like • 11h
I just SCREAMED!!!
How to Use Automatic Data Extraction?
Hey lovely community! A few of you were asking how to reliably extract any required information from users while the AI is actively in a conversation, beyond just basic name, email, and phone. While we already have tools for standard fields, the most powerful and reliable approach for advanced, custom, or business-specific data collection is using Automatic Data Extraction (Map Custom Fields) instead of Extraction Tools. Here's the guide on how to do it: https://help.assistable.ai/how-to/how-to-gather-and-extract-important-user-information-using-automatic-data-extraction @Ryan B, hope you find this useful. This directly solves the data capture reliability issue you were facing.
0 likes • 14h
Does this work for voice and chat, and will it pull in name and phone number, or is that still a separate tool?
AI SMS double processing
Hey everyone! First of all, I hope you are crushing the end of this year. Second, we are seeing some cases of the AI sending two messages simultaneously. This only occurs when a customer responds twice within a couple of seconds. Does anyone have a fix for this?
AI SMS double processing
1 like • 1d
If you increase your wait time to respond, it should fix this.
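The "wait time to respond" fix above is a debounce: buffer rapid inbound messages and reply once after a quiet period. Here is a minimal in-memory sketch of that pattern; the class and callback names are illustrative assumptions, since the platform exposes this only as a setting.

```python
import time
import threading

class MessageDebouncer:
    """Buffer rapid inbound messages per contact and respond once after a
    quiet period, so two texts sent seconds apart get a single AI reply.
    Illustrative sketch only -- not Assistable's internal implementation."""

    def __init__(self, wait_seconds: float, respond):
        self.wait_seconds = wait_seconds
        self.respond = respond   # callback: respond(contact_id, messages)
        self.buffers = {}        # contact_id -> list of pending messages
        self.timers = {}         # contact_id -> threading.Timer
        self.lock = threading.Lock()

    def on_message(self, contact_id: str, text: str) -> None:
        with self.lock:
            self.buffers.setdefault(contact_id, []).append(text)
            # Restart the quiet-period timer on every new message.
            if contact_id in self.timers:
                self.timers[contact_id].cancel()
            timer = threading.Timer(self.wait_seconds, self._flush, args=(contact_id,))
            self.timers[contact_id] = timer
            timer.start()

    def _flush(self, contact_id: str) -> None:
        with self.lock:
            messages = self.buffers.pop(contact_id, [])
            self.timers.pop(contact_id, None)
        if messages:
            self.respond(contact_id, messages)

replies = []
debouncer = MessageDebouncer(0.2, lambda cid, msgs: replies.append((cid, msgs)))
debouncer.on_message("c1", "Hi")
debouncer.on_message("c1", "Are you open today?")  # arrives moments later
time.sleep(0.5)
print(replies)  # one combined reply, not two
```

Raising the wait time widens the quiet period, trading a slower first reply for fewer duplicate responses.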
Feature Release: Chat History Token Optimization
So, when using your own OpenAI key (and even for us as a business), you notice the agent stack (tools, prompt, conversation history, RAG, etc.) starts to add up quickly, especially if you have a really involved process. We implemented a token optimization model that runs before our chat completions to ensure you get the cost savings, and I'll share some data at the end. :)

First, we are now truncating and summarizing conversation history. We noticed large chat completions coming through with 300-400+ message histories. This becomes expensive over time if it's a lead you've been working or following up with for a while, so we reduce that number and summarize the history to ensure the intelligence stays the same while token consumption goes way down (a 98% decrease on larger runs).

Second, we truncate large tool call outputs within the window that are not relevant to the current task. Meaning: if there are tool calls with large outputs (like get_availability) that are not relevant to the task at hand, we truncate the response so the agent still sees that the action happened, but the context is shorter. This saw a huge reduction in token consumption as well (a 96% decrease on larger runs).

Here is the before and after. This is the exact same conversation history, assistant ID, tools, custom fields, knowledge base, etc., but note the speed and cost difference; the output was the exact same message.

Differences:
- 35 seconds faster
- 95.95% cheaper

Before:

    "error_type": null,
    "usage_cost": {
      "notes": null,
      "tokens": {
        "output": 211,
        "input_total": 175948,
        "input_cached": 0,
        "input_noncached": 175948
      },
      "total_cost": 0.353584,
      "model_normalized": "gpt-4o",
      "models_encountered": ["gpt-4o"],
      "price_used_per_million": {
        "input": 2.5,
        "cached_input": 1.25,
        "output": 10
      }
    },
    "error_message": null,
    "run_time_seconds": 32.692,
    "returned_an_error": false

After:

    "run_time_seconds": 2.618,
    "returned_an_error": false
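The two optimizations described above can be sketched as a simple pre-completion pass: summarize everything older than a recent window, and truncate stale tool outputs inside that window. This is an illustrative assumption about the approach, not Assistable's internal model; the message shape, thresholds, and `summarize` stand-in are all hypothetical.

```python
MAX_RECENT = 20          # keep this many recent messages verbatim (illustrative)
TOOL_OUTPUT_LIMIT = 200  # chars kept for stale tool outputs (illustrative)

def summarize(messages):
    # Stand-in for an LLM summarization call over the older history.
    return {"role": "system",
            "content": f"[Summary of {len(messages)} earlier messages]"}

def optimize_history(history, relevant_tools=frozenset()):
    """Collapse old history into one summary and shorten tool outputs
    that are not relevant to the current task."""
    older, recent = history[:-MAX_RECENT], history[-MAX_RECENT:]
    optimized = [summarize(older)] if older else []
    for msg in recent:
        if (msg["role"] == "tool"
                and msg.get("name") not in relevant_tools
                and len(msg["content"]) > TOOL_OUTPUT_LIMIT):
            # Keep evidence the call happened, drop the bulk of the payload.
            msg = {**msg,
                   "content": msg["content"][:TOOL_OUTPUT_LIMIT] + " [truncated]"}
        optimized.append(msg)
    return optimized

# A long follow-up thread plus one large, now-irrelevant tool output.
history = [{"role": "user", "content": f"msg {i}"} for i in range(300)]
history.append({"role": "tool", "name": "get_availability", "content": "x" * 5000})
slim = optimize_history(history, relevant_tools={"book_appointment"})
print(len(slim))  # summary + the 20 most recent messages
```

The input tokens drop from the full 300+ message transcript to one summary plus a short window, which is where the 95%+ cost reduction in the before/after numbers comes from.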
0 likes • 2d
@Juanes Correa Are you sure no one is spamming one of your bots?
Brandon Duncan
Level 6 • 554 points to level up
@brandon-duncan-8295
REALTOR, Techie, Owner @ Vochat AI

Active 2h ago
Joined Jun 23, 2024