
Memberships

Assistable.ai

3k members • Free

AI Agency Growth

92 members • Free

43 contributions to Assistable.ai
Warning / Exciting Things For Voice V3
So, it's been a long process and a lot of work - literally re-writing our voice orchestration to be native - and we still have a small sprint before we start flowing traffic that way. Here is a general guide to some things that will change, be added, and be deprecated.

Rollout - First, users on our numbers will see the traffic flow first. Then older numbers. Then trunked numbers. Nothing tangible changes from your perspective besides lower latency.

Tooling - We are adding POST / GET / PUT / PATCH / DELETE to the API request types. You can also configure headers, query parameters, path parameters, and body parameters - so, fully customized API calls. We will support variables in these as well, so you can pass API keys or your access token for GHL. Here's the thing: tools now run through a proxy of ours that opens the door for a couple of things:
- Tools running as they do today
- Direct API calls (will be a toggle) - instead of being wrapped in an args object with metadata, etc., we will just send the API call. This removes the need for middleware because you can make direct API calls.
- Workflow execution - not today or next week, but we are adding native workflows to the platform. You will be able to configure a custom tool to trigger a workflow locally to run multi-agent frameworks and basic automation, so we will support middleware internally.

Abilities - All stays the same, but we are adding agent teams in voice (chat following right after). So you can add different assistants into a call to create a team environment, either through a "prompt change" (we just change the prompt dynamically) or through a team with multiple voices, etc.

AI Models - We will be running a custom model that's been fine-tuned for voice orchestration - i.e., an LLM that produces text-to-speech-friendly output to keep weird translations to a minimum.
You will be able to select an OpenAI model if you would like and run it with your key if you prefer OpenAI output.
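The custom tooling described above hasn't shipped yet, so the exact schema is unknown - but a fully customized API call with method, headers, query/path parameters, and variable substitution could look roughly like this sketch (all field names here are illustrative assumptions, not Assistable.ai's actual format):

```python
# Hypothetical sketch of a customized tool definition with {{variable}}
# substitution for secrets like a GHL access token. Field names are
# assumptions for illustration only.

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

def build_request(tool: dict, variables: dict) -> dict:
    """Expand a tool definition into a concrete HTTP request."""
    url = render(tool["url"], variables)  # path parameters live in the URL
    headers = {k: render(v, variables) for k, v in tool.get("headers", {}).items()}
    params = {k: render(v, variables) for k, v in tool.get("query", {}).items()}
    return {"method": tool["method"], "url": url, "headers": headers, "params": params}

tool = {
    "method": "POST",
    "url": "https://services.leadconnectorhq.com/contacts/{{contact_id}}",
    "headers": {"Authorization": "Bearer {{ghl_access_token}}"},
    "query": {"locationId": "{{location_id}}"},
}
req = build_request(tool, {
    "contact_id": "abc123",
    "ghl_access_token": "secret-token",
    "location_id": "loc42",
})
print(req["url"])  # https://services.leadconnectorhq.com/contacts/abc123
```

In "direct API call" mode as described, a request like this would be sent as-is rather than wrapped in an args object with metadata.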
0 likes • 1d
@Jorden Williams Definitely excited for this!
Voice V3 Sample
This is me (clone), calling me
1 like • 3d
That sounds so real it's insane. Can't wait to get my hands on it! @Jorden Williams Does V3 support fluent multilingual the same way that 2.5 does by chance?
0 likes • 3d
@Jorden Williams Game changer...
Telnyx Partnership, Native CRM, and All Updates Coming Soon
We've been massively busy in the past 8 weeks - there has been a lot of radio silence in the community from me, where I used to keep a tab of Skool open at all times. We are 'crossing the chasm' as far as the numbers we are seeing, so we have been investing in a lot of resources, like human capital, to scale even further alongside you.

TLDR:
- Massive deal with Telnyx allowing us to have native, clean telephony at scale: native texting, call center ops calling, and AI voice telephony
- 25+ new team members to help serve on support, success, dev, and enterprise sales
- Migration off Bubble and all related vendors (anything on the voice and chat side that is not native) in the next 30-45 days
- Native, lite, API-friendly CRM with full call center ops and native AI deployment in 30-45 days
- More focus on sustainability of AI deployment and enterprise-grade resources
- We now have friends in all the highest of places, allowing us not only to do things at scale and pass cost savings to you, but to pass the resources along as well

Loom attached. Happy Thursday
4 likes • Sep 25
God bless @Jorden Williams, thank you for all of your hard work, and for advocating for all of us and building a platform that is scalable and allows us to grow at a low cost. 💪🏼🙏
1 like • 10d
@Jorden Williams I'm very excited for the native lite CRM. I would love to not need to use a GHL account to complete my service offering.
Feature Release: Chat History Token Optimization
So, when using your own OpenAI key (and even for us as a business), you notice that the agent stack (tools, prompt, convo history, RAG, etc.) starts to stack up quick - especially if you have a really involved process. We implemented a token optimization model that runs before our chat completions to make sure you get the cost savings, and I'll share some data at the end :)

First, we are now truncating and summarizing conversation history - we noticed large chat completions coming through with 300-400+ message histories. This becomes expensive over time if it's a lead you've been working or following up with for a while, so we are reducing that number and summarizing the history to ensure the intelligence stays the same while token consumption goes way down (98% decrease on larger runs).

Another thing we are doing is truncating large tool call outputs within the window that are not relevant to the current task - meaning, if there are tool calls with large outputs (like get_availability) that are not relevant to the task at hand, we truncate the response so the agent still sees that the action happened, but the context is shorter. This saw a huge reduction in token consumption as well (96% decrease on larger runs).

Here is the before and after. This is the exact same conversation history, assistant ID, tools, custom fields, knowledge base, etc. - but see the speed and cost difference, and the output was the exact same message.

Differences:
- 35 seconds faster
- 95.95% cheaper

Before:
"error_type": null,
"usage_cost": {
  "notes": null,
  "tokens": {
    "output": 211,
    "input_total": 175948,
    "input_cached": 0,
    "input_noncached": 175948
  },
  "total_cost": 0.353584,
  "model_normalized": "gpt-4o",
  "models_encountered": ["gpt-4o"],
  "price_used_per_million": {
    "input": 2.5,
    "cached_input": 1.25,
    "output": 10
  }
},
"error_message": null,
"run_time_seconds": 32.692,
"returned_an_error": false

After:
"run_time_seconds": 2.618,
"returned_an_error": false
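The actual optimization model isn't published, but the two ideas in the post (summarize older history, clip stale tool outputs) can be sketched roughly like this - the thresholds and the stand-in summarizer are assumptions, not the real implementation:

```python
# Minimal sketch of the two optimizations described in the post.
# MAX_RECENT_MESSAGES, MAX_TOOL_OUTPUT_CHARS, and summarize() are
# illustrative assumptions; the real system presumably uses an LLM
# call and smarter relevance checks.

MAX_RECENT_MESSAGES = 20      # keep only the most recent turns verbatim
MAX_TOOL_OUTPUT_CHARS = 200   # clip stale tool outputs to this length

def summarize(messages):
    """Stand-in for an LLM summarization call over older history."""
    return f"[summary of {len(messages)} earlier messages]"

def optimize_history(messages):
    """Compress a long chat history before sending it to the model."""
    if len(messages) <= MAX_RECENT_MESSAGES:
        older, recent = [], messages
    else:
        older = messages[:-MAX_RECENT_MESSAGES]
        recent = messages[-MAX_RECENT_MESSAGES:]

    optimized = []
    if older:
        # Replace the old turns with one summary message.
        optimized.append({"role": "system", "content": summarize(older)})

    for msg in recent:
        # Truncate large tool outputs that aren't part of the current task
        # (crudely approximated here as "not the latest message").
        stale = msg is not recent[-1]
        if msg["role"] == "tool" and stale and len(msg["content"]) > MAX_TOOL_OUTPUT_CHARS:
            msg = {**msg, "content": msg["content"][:MAX_TOOL_OUTPUT_CHARS] + "...[truncated]"}
        optimized.append(msg)
    return optimized

history = [{"role": "user", "content": f"msg {i}"} for i in range(300)]
history.append({"role": "tool", "content": "x" * 5000})  # big get_availability-style output
history.append({"role": "user", "content": "book me for tomorrow"})
slim = optimize_history(history)
print(len(slim))  # 21: one summary message plus the 20 most recent turns
```

The input side shrinks from hundreds of messages to a summary plus a short recent window, which is where the ~96-98% input-token reductions quoted above would come from.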
0 likes • 10d
Thanks @Jorden Williams, this is great!
More than a business & community, it's a family
I won't get too sappy here, but I can't believe how much we all (that includes you) have accomplished in the last 2 years. It genuinely feels like a giant family, and it's super, super heartwarming to see.

Over the past two years we have had mountain highs and valley lows - we have experienced crazy growth, and the good and bad outcomes of crazy growth. We have built some of the best deployable agent technology in the agency and enterprise game (so I'm told), and none of that even compares to the family and community that has been built.

I always get asked: what's next? What are the plans? The plans are to double down on what we do best and to continuously innovate to serve our community, putting scale and efficiency in the hands of business owners who wouldn't have been able to do it without AI.

Love y'all, Happy Wednesday. I'm on a dev sprint, so expect a wave of patch posts and feature releases over the next week.
2 likes • 16d
Amen, God is great, and we are all blessed to share this short and precious life together. Thank you for all of your hard work which is changing lives and empowering many! 🙏
0 likes • 14d
@Jorden Williams amen brother 🙏🏻
Anthony Castiglia
Level 4 • 72 points to level up
@anthony-castiglia-6799
Matthew 19:26

Active 2m ago
Joined Aug 17, 2025