Tool Return Mapping
Doing some work on tool initialization and reference mapping to help with direct API calls, and even filtering the data you want sent back to the assistant - should see the mapping and request builder go into production tomorrow
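To make the idea concrete, here is a minimal sketch of what a return mapping could look like: the mapping syntax, field names, and `apply_return_mapping` helper are hypothetical illustrations, not the actual Assistable.ai schema.

```python
# Hypothetical sketch of tool return mapping: pick only the fields the
# assistant should see from a raw tool/API response. The mapping format
# and helper name are illustrative, not Assistable.ai's real schema.

raw_response = {
    "id": "cal_123",
    "slots": [{"start": "2025-01-06T09:00", "end": "2025-01-06T09:30"}],
    "internal_debug": {"latency_ms": 182, "region": "us-east-1"},
}

# Map assistant-facing keys to top-level keys in the raw response.
return_mapping = {
    "available_slots": "slots",
    "calendar_id": "id",
}

def apply_return_mapping(response: dict, mapping: dict) -> dict:
    """Return only the mapped fields; everything else is filtered out."""
    return {key: response.get(source) for key, source in mapping.items()}

print(apply_return_mapping(raw_response, return_mapping))
# {'available_slots': [...], 'calendar_id': 'cal_123'}  (internal_debug dropped)
```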
3 likes • 8h
@Jacovia Cartwright oh yes, very very soon
3 likes • 8h
@Harry Stokes yes! Async tool calling coming soon and yes
Tooling Updated UI - Preparing For More Functionality
**This post is for the technical builders.** For those who don't use tooling: it's the actual key for enterprise use cases and the biggest differentiator, even internally, as you chase big projects and clients. We have set up abilities using tools, middleware, and external databases that you wouldn't even believe, and they are very high ticket (think north of $50k). I suggest taking a look if you haven't before.

As we build alongside you for fulfillment, we noticed some limitations that needed to be addressed. Here is the basis of those additional functionalities:
- POST / PUT / PATCH / GET / DELETE request types
- Execution type (proxy, direct, workflow execution)
- Headers
- Timeout in ms
- Flattened tool menu onto a page instead of a popup

Proxied tool calls are your traditional tool call for middleware as you see it today. Output is wrapped in an args object with a meta_data object. These are best suited for middleware usage (Make, n8n, BuildShip, etc.) because you get not only your agent's output but also location ID, contact ID, etc.

Direct tool calls are exactly that: we don't wrap the output in anything, we just send the raw request exactly as you have configured it. This is for when you want to attach directly to something like an MLS or a niche API. Configure headers (variable friendly), body parameters (and soon query parameters), and the HTTP method. We'll send the request on your behalf just as you would anywhere and return the direct output. (See the sketch below for how the two request shapes differ.)

Workflow execution is for the internal workflow engine we are adding, so we can supply you with a middleware if you don't already have one, or if you're looking for something more native to our functionality.

More to come, happy Thursday
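As a rough illustration of proxied vs. direct execution: the wrapper fields below (args, meta_data, location_id, contact_id) follow the post's description, but the exact payload schema, IDs, and URL are assumptions for the example.

```python
# Hypothetical comparison of the two execution types described above.
# Exact field names and the {{variable}} syntax are assumptions.

agent_output = {"address": "123 Main St", "beds": 3}

# Proxied: output wrapped with metadata, aimed at middleware (Make, n8n, ...).
proxied_payload = {
    "args": agent_output,
    "meta_data": {
        "location_id": "loc_abc",  # illustrative IDs
        "contact_id": "con_xyz",
    },
}

# Direct: the raw request exactly as configured, e.g. straight to an MLS API.
direct_request = {
    "method": "POST",
    "url": "https://api.example-mls.com/listings/search",  # placeholder URL
    "headers": {"Authorization": "Bearer {{api_key}}"},    # variable-friendly
    "body": agent_output,                                  # no wrapper
}
```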
Feature Release: Chat History Token Optimization
So, when using your own OpenAI key (and even for us as a business), you notice that the agent stack (tools, prompt, conversation history, RAG, etc.) starts to stack up quickly, especially if you have a really involved process. We implemented a token optimization model before our chat completions to ensure you get the cost savings, and I'll share some data at the end :)

First, we are now truncating and summarizing conversation history. We noticed large chat completions coming through with 300-400+ message histories. This becomes expensive over time if it's a lead you've been working or following up with for a while, so we are reducing that number and summarizing the history to ensure the intelligence stays the same while token consumption goes way down (98% decrease on larger runs).

Second, we are truncating large tool call outputs within the window that are not relevant to the current task. Meaning: if there are tool calls with large outputs (like get_availability) that aren't relevant to the task at hand, we truncate the response so the agent still sees that the action happened, but the context is shorter. This saw a huge reduction in token consumption as well (96% decrease on larger runs). A rough sketch of this idea follows after the numbers below.

Here is the before and after. This is the exact same conversation history, assistant ID, tools, custom fields, knowledge base, etc., but note the speed and cost difference; the output was the exact same message.

Differences:
- ~30 seconds faster (32.7 s vs. 2.6 s)
- 95.95% cheaper

Before:
    "error_type": null,
    "usage_cost": {
      "notes": null,
      "tokens": {
        "output": 211,
        "input_total": 175948,
        "input_cached": 0,
        "input_noncached": 175948
      },
      "total_cost": 0.353584,
      "model_normalized": "gpt-4o",
      "models_encountered": ["gpt-4o"],
      "price_used_per_million": {
        "input": 2.5,
        "cached_input": 1.25,
        "output": 10
      },
      "error_message": null,
      "run_time_seconds": 32.692,
      "returned_an_error": false,

After:
    "run_time_seconds": 2.618,
    "returned_an_error": false,
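Here is a minimal sketch of the tool-output truncation idea, assuming a simple OpenAI-style message list; the relevance check, size threshold, and truncation marker are hypothetical illustrations, not the production logic.

```python
# Hypothetical sketch: truncate large tool outputs that are no longer
# relevant to the current task, keeping a stub so the agent still sees
# that the action happened. Threshold and relevance test are illustrative.

MAX_TOOL_CHARS = 500

def truncate_stale_tool_outputs(messages: list[dict], current_task: str) -> list[dict]:
    trimmed = []
    for msg in messages:
        content = msg.get("content") or ""
        is_big_tool_output = msg.get("role") == "tool" and len(content) > MAX_TOOL_CHARS
        # Naive relevance check for illustration: keyword overlap with the task.
        relevant = any(word in content.lower() for word in current_task.lower().split())
        if is_big_tool_output and not relevant:
            msg = {**msg, "content": "[output truncated: tool ran successfully]"}
        trimmed.append(msg)
    return trimmed

history = [
    {"role": "tool", "name": "get_availability", "content": "{...slots...}" * 200},
    {"role": "user", "content": "Actually, can you update my phone number?"},
]
print(truncate_stale_tool_outputs(history, "update contact phone number"))
```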
1 like • 10d
@Juanes Correa
0 likes • 2d
@Juanes Correa we saw a 95% reduction in tokens per run, are you using a model like 4o and doing a lot of volume?
Warning / Exciting Things For Voice V3
So, it's been a long process and a lot of work, literally re-writing our voice orchestration to be native, and we still have a small sprint before we start flowing traffic that way. Here is a general guide of some things that will change, be added, and be deprecated.

Rollout - first, users on our numbers will start seeing the traffic flow. Then older numbers. Then trunked numbers. Nothing tangible changes from your perspective besides lower latency.

Tooling - we are adding POST / GET / PUT / PATCH / DELETE as API request types. You can also configure headers, query parameters, path parameters, and body parameters, so fully customized API calls. We will support variables in these as well, so you can pass API keys or your access token for GHL. Here's the thing: tools now run through a proxy of ours that opens the door for a couple of things:
- Tools run as they do today
- Direct API calls (will be a toggle): instead of being wrapped in an args object with metadata, etc., we just send the API call. This gets rid of the need for middleware because you can make direct API calls.
- Workflow execution: not today or next week, but we are adding native workflows to the platform. You can configure a custom tool to trigger a workflow locally to run multi-agent frameworks and basic automation, so we will support a middleware internally.
(See the configuration sketch at the end of this post.)

Abilities - all stays the same, but we are adding agent teams in voice (chat following right after). So, add different assistants into a call to create a team environment, either through a "prompt change" (we just change the prompt dynamically) or through a team with multiple voices, etc.

AI Models - we will be running a custom model that's been fine-tuned for voice orchestration, i.e. an LLM that outputs text-to-speech friendly text to keep weird translations to a minimum. You will be able to select an OpenAI model and run it with your key if you prefer OpenAI output.
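For a concrete picture of the tooling changes, here is a hypothetical custom-tool configuration using the new request types and variable support; the field names, endpoint URL, and `{{...}}` placeholder syntax are assumptions for illustration, not the final V3 schema.

```python
# Hypothetical Voice V3 custom tool configuration. Field names, the URL,
# and the {{variable}} placeholder syntax are illustrative assumptions.

update_contact_tool = {
    "name": "update_contact",
    "execution_type": "direct",   # "proxy" | "direct" | "workflow"
    "method": "PUT",              # POST / GET / PUT / PATCH / DELETE
    "url": "https://api.example.com/contacts/{{contact_id}}",  # e.g. a GHL contact endpoint
    "headers": {
        "Authorization": "Bearer {{ghl_access_token}}",  # key passed as a variable
        "Content-Type": "application/json",
    },
    "query_params": {},
    "path_params": {"contact_id": "{{contact_id}}"},
    "body_params": {"phone": "{{new_phone}}"},
    "timeout_ms": 10000,
}
```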
2 likes • 3d
@Jason Dueck 1. We will pass it as it goes today as an option, but I can make a vid on it, so no big worries on it rn 2. I can take a look today while I'm playing around in there
2 likes • 2d
@Giles Parnell yeah, not trying to replace them, but we have some users that don't use GHL, and having native tools just helps with more in-depth builds. GHL workflows are still fine for most uses
Voice V3 Sample
This is me (clone), calling me
1 like • 3d
@Anthony Castiglia yes sir
0 likes • 3d
@Ben B call latency, we basically replaced our entire voice infrastructure