
Owned by Loyd

Memberships

Claw & Automate (1.1k members • Free)
AI Money Lab (59.3k members • Free)
Early AI-dopters (1k members • $64/month)
Applied AI Academy (3k members • Free)
Brendan's AI Community (23.7k members • Free)
The RoboNuggets Network (free) (34.8k members • Free)
N8nLab (6k members • Free)
Chase AI Community (44.3k members • Free)
Tech Snack University (17.3k members • Free)

292 contributions to Assistable.ai
Post-Call Webhook (BEEFED UP) *check out the rest of the posts while you're here*
The post-call webhook is deeply integrated into our managed service because it lets us integrate with other systems and keep external databases for specific use cases and clients. It's also extremely important for post-call processing. As you know, we used to send *enough* - but now we send everything you need. This includes the full variable list used during the call, the recording URL AND the transfer recording URL (the recording of the transferee and the contact talking), latency averages, tools called during the call, and more. A sample payload:

{
  "call_id": null,
  "call_type": "web_call",
  "from": null,
  "disconnection_reason": "agent_hangup",
  "user_sentiment": "positive",
  "call_summary": "A user called to schedule a demo appointment with an agent. They discussed available dates and times, and after some confusion regarding the correct date, the appointment was successfully booked for Thursday, March 13th at 12:30 PM. The user provided their email for confirmation and expressed gratitude at the end of the call.",
  "call_completion_reason": null,
  "recording_url": null,
  "call_time_ms": null,
  "call_time_seconds": null,
  "full_transcript": null,
  "start_timestamp": null,
  "end_timestamp": null,
  "added_to_wallet": true,
  "extractions": {},
  "called_tools": [
    "book_appointment",
    "get_availability",
    "update_user_details"
  ],
  "latency_averages": {
    "average_transcription_duration_ms": 274,
    "average_llm_first_token_duration_ms": 2162,
    "average_audio_first_token_duration_ms": 213,
    "average_end_user_perceived_latency_ms": 2649,
    "average_start_speaking_plan_extra_wait_duration_ms": 0,
    "turn_count": 11
  },
  "transfer_recording_url": null,
  "recordings": [],
  "transcript_object": [
    {
      "role": "agent",
      "content": " You're very welcome, Bernie!",
      "metadata": {
        "assistant_id": "assistant-676f56d1-9066-4796-8714-4683cce89b0f",
        "transcription_duration_ms": 377,
        "llm_first_token_duration_ms": 437,
        "audio_first_token_duration_ms": 167,
        "end_user_perceived_latency_ms": 981,
        "start_speaking_plan_extra_wait_duration_ms": 0
      }
    }
  ]
}
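If you keep an external database off this webhook, a small consumer that pulls out the fields most builds log is enough to get started. A minimal sketch in Python - the field names come from the sample payload above, but the function name and the choice of fields to keep are hypothetical:

```python
import json

def summarize_call(payload: dict) -> dict:
    """Reduce a post-call webhook payload to a few commonly logged fields."""
    latency = payload.get("latency_averages", {})
    return {
        "call_id": payload.get("call_id"),
        "sentiment": payload.get("user_sentiment"),
        "disconnected_by": payload.get("disconnection_reason"),
        "tools": payload.get("called_tools", []),
        "perceived_latency_ms": latency.get("average_end_user_perceived_latency_ms"),
        "turns": latency.get("turn_count"),
    }

# Trimmed-down version of the sample payload, as it would arrive on the wire.
sample = json.loads("""{
  "call_id": null,
  "user_sentiment": "positive",
  "disconnection_reason": "agent_hangup",
  "called_tools": ["book_appointment", "get_availability", "update_user_details"],
  "latency_averages": {"average_end_user_perceived_latency_ms": 2649, "turn_count": 11}
}""")
print(summarize_call(sample))
```

Wire this up behind whatever HTTP endpoint receives the webhook; the summarizing logic stays the same regardless of framework.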
0 likes • 16h
Dude, we need to connect... So many ideas...
0 likes • 13h
@Jorden Williams This is the reminder.
Hubspot Integration & Telephony-As-A-Service
We've got it working in HubSpot and Salesforce via a native caller and SMS provider with us. This is for AI use and non-AI use. At my last role, we used HubSpot and paid five figures to a company just to make calls - crazy. And we get a lot of requests to decouple, ingest other CRMs, etc. So we put a lot of time into rethinking how we built the platform and where we need to be for you and our clients. The reason a lot of this is taking so long is that we're rebuilding from the ground up for modular tech that's built to scale and easy to integrate. So: we have full A2P, telephony controls, and sub-account connection to HubSpot with FULL 2-WAY SYNC - Salesforce, FUB, and others shortly after. This should land shortly after the front-end migration, and on sister platforms shortly after that.
0 likes • 17h
nice
Test Calls In the Builder - Have AI Call you
So, recently on some of our enterprise builds, running the contact through the Make AI Call action has just been slow for testing. So we added this: you can search for your contact, select it, and make an outbound call to that contact right from the portal as you're testing.
0 likes • 17h
nice
Kimi K2 - LLM Latency & Feature Tagging
We added Kimi K2, locally hosted in different regions across the world in different data centers. So you don't have to worry about vendor updates to the model - we control it. The anchor sites are all over the world, so you should get consistently low latency wherever you / your contact is located. Kimi K2 is a voice-only model right now - chat gets the update at a later date and will default to gpt-4.1. It's fast and intelligent, with consistent 400ms turn-taking on English / US use cases.
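One way to sanity-check turn-taking numbers like these: in the post-call webhook payload earlier on this page, the end-user perceived latency appears to be the sum of the per-stage averages (transcription, LLM first token, audio first token, plus any start-speaking-plan wait). A quick check on the sample numbers - this decomposition is my reading of the fields, not a documented formula:

```python
# Per-stage averages from the sample webhook payload above (all in ms).
stages = {
    "average_transcription_duration_ms": 274,
    "average_llm_first_token_duration_ms": 2162,
    "average_audio_first_token_duration_ms": 213,
    "average_start_speaking_plan_extra_wait_duration_ms": 0,
}

# Summing the stages reproduces average_end_user_perceived_latency_ms (2649).
perceived_ms = sum(stages.values())
print(perceived_ms)
```

On that sample, the LLM first-token time dominates - which is exactly the stage a faster model like Kimi K2 would cut down.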
0 likes • 17h
What is the cost per minute? And how does it work if your assistant is set up to call and send SMS?
beta answer to latency and intelligence
So, dev work is taking a while with the front-end migration and Assistable's For Business platform (our first non-white-label workspace), so to hold you over on some stuff - this has been good:
0 likes • 2d
Are you using a mac studio for the hosting?
0 likes • 2d
I tried using the kimi and couldn’t get it to work. For some reason it was causing an error.
Loyd Hale (@loyd-hale-9984)
Level 5 • 252 points to level up
In the Healthcare niche
Active 8m ago • Joined Aug 10, 2024
INTP • Dallas Area, Texas