Build AI Restaurant Voice Receptionist (Complete Implementation)
Learn how to create a complete AI-powered restaurant voice receptionist with OpenAI Realtime API, reservation management, and email confirmations using Lovable Cloud.
Build a production-ready AI restaurant receptionist with real-time voice conversations, automatic reservation booking, and email confirmations - all in one comprehensive implementation.
Key Notes
  • Complete frontend with 4 pages: Live Demo, Conversations, Reservations, Settings
  • OpenAI Realtime API integration with WebRTC for voice
  • Database schema: agent_config, conversations, messages, reservations tables
  • Edge functions for session management and email confirmations
  • Function calling for creating reservations from voice conversations
  • Automated email confirmations using Resend API
  • Dark theme with lime green accents and audio visualizations
The Prompt Sequence
Follow these prompts in order to recreate the app. Copy any prompt to use it.
1
Complete Restaurant AI Voice Receptionist Implementation
Context: This is a comprehensive single-prompt implementation that recreates an entire AI restaurant voice receptionist application from scratch.
Copy >>>
Create a complete AI restaurant receptionist application with voice calling capabilities via Twilio and real-time web demo. This app should work immediately after setup.
CRITICAL: Getting Your Twilio Edge Function URL
After this prompt completes, you'll get a twilio-voice edge function URL (find it under Backend → Edge Functions). You need to configure this URL in your Twilio phone number settings:
Go to Twilio Console → Phone Numbers → Active Numbers
Select your phone number
Under "Voice & Fax" → "A CALL COMES IN"
Method: HTTP POST
Required Secrets (Add these after the app is created):
OPENAI_API_KEY (for OpenAI Realtime API)
RESEND_API_KEY (for email confirmations)
Frontend Requirements
Create a React app with dark theme (background: #0A0A0A, sidebar: #1A1A1A) and lime green accent (#84CC16).
App Structure:
Sidebar navigation with logo and 4 menu items:
Live Demo (Home icon)
Conversations (MessageSquare icon)
Reservations (Calendar icon)
Settings (Settings icon)
Page 1: Live Demo (/live-demo or /)
Hero section: "AI Voice Agent Demo" title with description
Center microphone button with pulsing animation (lime green glow)
Status indicator showing: "Ready", "Connecting...", "Listening", "Speaking"
Audio visualizer (animated bars when speaking)
Conversation transcript display (scrollable)
Use WebRTC connection to OpenAI Realtime API through edge function
Page 2: Conversations (/conversations)
Stats cards showing: Total Calls, Active Now, Avg Duration
Data table with columns: Customer Name, Date, Duration, Status
Load from 'conversations' table with real-time updates (a subscription sketch follows this list)
Filter and search capabilities
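For the real-time updates on this page, a hook along these lines works (a sketch only, assuming the Lovable-generated Supabase client at @/integrations/supabase/client; the conversations table may also need to be added to the realtime publication):

import { useEffect, useState } from "react";
import { supabase } from "@/integrations/supabase/client";

// Loads conversations and re-fetches whenever any row changes.
export function useConversations() {
  const [rows, setRows] = useState<any[]>([]);

  useEffect(() => {
    const load = async () => {
      const { data } = await supabase
        .from("conversations")
        .select("*")
        .order("started_at", { ascending: false });
      setRows(data ?? []);
    };
    load();

    // Subscribe to all changes on the conversations table and reload on each event
    const channel = supabase
      .channel("conversations-changes")
      .on("postgres_changes", { event: "*", schema: "public", table: "conversations" }, load)
      .subscribe();

    return () => { supabase.removeChannel(channel); };
  }, []);

  return rows;
}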
Page 3: Reservations (/reservations)
Stats cards: Total Reservations, Today's Reservations, This Week (a stats-query sketch follows this list)
Data table with columns: Name, Email, Date, Time, Guests, Status
Load from 'reservations' table
Status badges (confirmed/cancelled)
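The stats cards can be computed with head-only count queries like these (a sketch; "This Week" is interpreted here as the last 7 days, adjust as needed):

import { supabase } from "@/integrations/supabase/client";

// Returns counts for the three reservation stats cards.
export async function loadReservationStats() {
  const today = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  const weekStart = new Date(Date.now() - 6 * 86_400_000).toISOString().slice(0, 10);

  const [total, todayRes, week] = await Promise.all([
    supabase.from("reservations").select("*", { count: "exact", head: true }),
    supabase.from("reservations").select("*", { count: "exact", head: true }).eq("date", today),
    supabase.from("reservations").select("*", { count: "exact", head: true }).gte("date", weekStart),
  ]);

  return { total: total.count ?? 0, today: todayRes.count ?? 0, thisWeek: week.count ?? 0 };
}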
Page 4: Settings (/settings)
Form to edit agent configuration from 'agent_config' table:
Restaurant Name (input)
Restaurant Hours (textarea)
Menu (textarea)
Custom Instructions (textarea)
Save button with toast notifications
Load single config row on mount (a load/save sketch follows this list)
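Loading and saving the config can look like this (a sketch assuming the generated Supabase client; wire the results into the form state and toast notifications):

import { supabase } from "@/integrations/supabase/client";

// Fetch the single agent_config row (null if the default insert was skipped).
export async function loadAgentConfig() {
  const { data, error } = await supabase
    .from("agent_config")
    .select("*")
    .limit(1)
    .maybeSingle();
  if (error) throw error;
  return data;
}

// Persist edits from the Settings form back to the same row.
export async function saveAgentConfig(id: string, updates: {
  restaurant_name: string;
  restaurant_hours: string;
  menu: string;
  instructions: string;
}) {
  const { error } = await supabase
    .from("agent_config")
    .update({ ...updates, updated_at: new Date().toISOString() })
    .eq("id", id);
  if (error) throw error;
}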
Backend Requirements (Lovable Cloud/Supabase)
Database Schema:
agent_config table:
CREATE TABLE agent_config (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
restaurant_name TEXT NOT NULL DEFAULT 'Restaurant',
restaurant_hours TEXT NOT NULL DEFAULT 'Monday-Sunday: 5:00 PM - 10:00 PM',
menu TEXT NOT NULL DEFAULT 'Menu items',
instructions TEXT NOT NULL DEFAULT 'You are a friendly restaurant receptionist. Be helpful and professional.',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- RLS Policy
ALTER TABLE agent_config ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Public access to agent_config" ON agent_config FOR ALL USING (true) WITH CHECK (true);
-- Insert default config
INSERT INTO agent_config (restaurant_name, restaurant_hours, menu, instructions)
VALUES (
'Demo Restaurant',
'Monday-Sunday: 5:00 PM - 10:00 PM',
'Starters: Caesar Salad, Soup of the Day. Mains: Grilled Salmon, Ribeye Steak, Vegetarian Pasta. Desserts: Tiramisu, Chocolate Cake.',
'You are a friendly receptionist for our restaurant. Help customers make reservations and answer questions about our menu and hours. Always be professional and courteous.'
);
conversations table:
CREATE TABLE conversations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
customer_name TEXT,
status TEXT NOT NULL DEFAULT 'active',
started_at TIMESTAMPTZ NOT NULL DEFAULT now(),
ended_at TIMESTAMPTZ,
duration_seconds INTEGER,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
ALTER TABLE conversations ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Public access to conversations" ON conversations FOR ALL USING (true) WITH CHECK (true);
messages table:
CREATE TABLE messages (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
conversation_id UUID NOT NULL,
role TEXT NOT NULL,
content TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Public access to messages" ON messages FOR ALL USING (true) WITH CHECK (true);
reservations table:
CREATE TABLE reservations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
conversation_id UUID,
name TEXT NOT NULL,
email TEXT NOT NULL,
phone TEXT,
date DATE NOT NULL,
time TIME NOT NULL,
guests INTEGER NOT NULL,
status TEXT NOT NULL DEFAULT 'confirmed',
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
ALTER TABLE reservations ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Public access to reservations" ON reservations FOR ALL USING (true) WITH CHECK (true);
Edge Function 1: twilio-voice
Create supabase/functions/twilio-voice/index.ts with this EXACT code:
import { serve } from "https://deno.land/std@0.190.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2.51.0";
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
const SUPABASE_URL = Deno.env.get("SUPABASE_URL") as string;
const SUPABASE_SERVICE_ROLE_KEY = Deno.env.get("SUPABASE_SERVICE_ROLE_KEY") as string;
const supabase = createClient(SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY);
serve(async (req) => {
const url = new URL(req.url);
console.log("Request received:", url.pathname);
// ===== ENDPOINT: /twiml =====
if (url.pathname.endsWith("/twiml")) {
// Greet the caller, then stream the call audio to this function's /media-stream endpoint.
// Adjust the Stream url if your deployed function URL differs.
const streamUrl = `wss://${url.host}${url.pathname.replace("/twiml", "/media-stream")}`;
const twiml = `<?xml version="1.0" encoding="UTF-8"?>
<Response>
<Say>Please wait while I connect you to our AI receptionist</Say>
<Connect><Stream url="${streamUrl}" /></Connect>
</Response>`;
console.log("Returning TwiML");
return new Response(twiml, {
headers: {
"Content-Type": "application/xml",
"Access-Control-Allow-Origin": "*"
},
});
}
// ===== ENDPOINT: /media-stream (WebSocket) =====
if (url.pathname.endsWith("/media-stream")) {
console.log("Upgrading to WebSocket");
const { socket: twilioSocket, response } = Deno.upgradeWebSocket(req);
let openaiSocket: WebSocket | null = null;
let streamSid: string | null = null;
let conversationId: string | null = null;
const fnArgs: Record<string, string> = {};
const connectOpenAI = () => {
if (openaiSocket) return;
console.log("🔌 Connecting to OpenAI...");
// OpenAI Realtime WebSocket endpoint, authenticated via the subprotocols below
openaiSocket = new WebSocket(
"wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-12-17",
[
"realtime",
`openai-insecure-api-key.${OPENAI_API_KEY}`,
"openai-beta.realtime-v1",
]
);
openaiSocket.onopen = () => {
console.log("✅ OpenAI WebSocket connected successfully");
};
openaiSocket.onmessage = async (event) => {
const response = JSON.parse(event.data);
if (response.type !== "response.audio.delta" && response.type !== "input_audio_buffer.speech_started") {
console.log("🤖 OpenAI event:", response.type);
}
if (response.type === "session.created") {
console.log("🆗 OpenAI session.created received, loading agent config...");
let instructionsText = "You are a helpful restaurant receptionist.";
try {
const { data: config } = await supabase
.from('agent_config')
.select('*')
.limit(1)
.maybeSingle();
if (config) {
instructionsText = `You are a receptionist for ${config.restaurant_name}.\nHours: ${config.restaurant_hours}\nMenu: ${config.menu}\n${config.instructions}`;
}
} catch (e) {
console.error('Failed to load agent config:', e);
}
console.log("📝 Sending session.update with loaded instructions");
openaiSocket!.send(
JSON.stringify({
type: "session.update",
session: {
modalities: ["text", "audio"],
instructions: instructionsText,
voice: "alloy",
input_audio_format: "g711_ulaw",
output_audio_format: "g711_ulaw",
input_audio_transcription: { model: "whisper-1" },
turn_detection: {
type: "server_vad",
threshold: 0.5,
prefix_padding_ms: 300,
silence_duration_ms: 1000,
},
temperature: 0.8,
tools: [
{
type: "function",
name: "create_reservation",
description: "Create a restaurant reservation with customer details",
parameters: {
type: "object",
properties: {
name: { type: "string" },
email: { type: "string", description: "Customer email address" },
date: { type: "string", description: "YYYY-MM-DD" },
time: { type: "string", description: "HH:MM" },
guests: { type: "number" }
},
required: ["name", "email", "date", "time", "guests"],
additionalProperties: false
}
}
],
tool_choice: "auto",
},
})
);
}
if (response.type === "response.audio.delta" && streamSid) {
twilioSocket.send(
JSON.stringify({
event: "media",
streamSid: streamSid,
media: { payload: response.delta },
})
);
}
if (response.type === "response.audio.done") {
console.log("🔊 Audio response completed");
}
if (response.type === "response.function_call_arguments.delta") {
const { call_id, delta } = response;
fnArgs[call_id] = (fnArgs[call_id] || "") + delta;
}
if (response.type === "response.function_call_arguments.done") {
try {
const { call_id } = response;
const argsStr = fnArgs[call_id] || response.arguments || "{}";
delete fnArgs[call_id];
const args = JSON.parse(argsStr);
console.log("🛠️ Function args parsed:", args);
const insertPayload: any = {
date: args.date,
time: (args.time?.length === 5 ? args.time + ":00" : args.time) || null,
guests: Number(args.guests) || 1,
name: args.name,
email: args.email,
status: "confirmed",
};
if (conversationId) insertPayload.conversation_id = conversationId;
const { data: resv, error: resvErr } = await supabase
.from('reservations')
.insert(insertPayload)
.select('*')
.maybeSingle();
if (resvErr) {
console.error('❌ Failed to create reservation:', resvErr);
} else {
console.log('✅ Reservation stored:', resv?.id);
try {
const { data: config } = await supabase
.from('agent_config')
.select('restaurant_name')
.limit(1)
.maybeSingle();
await supabase.functions.invoke('send-reservation-confirmation', {
body: {
name: args.name,
email: args.email,
date: args.date,
time: args.time,
guests: args.guests,
restaurantName: config?.restaurant_name || 'Restaurant'
}
});
console.log('📧 Confirmation email sent');
} catch (emailErr) {
console.error('⚠️ Failed to send email:', emailErr);
}
}
openaiSocket!.send(JSON.stringify({ type: 'response.create' }));
} catch (e) {
console.error('❌ Error handling tool call:', e);
}
}
if (response.type === "conversation.item.created") {
console.log("💬", response.item);
}
if (response.type === "error") {
console.error("❌ OpenAI error event:", response.error);
}
};
openaiSocket.onerror = (error: Event) => {
console.error("❌ OpenAI WebSocket error:", error);
};
openaiSocket.onclose = (event: CloseEvent) => {
console.log("🔴 OpenAI WebSocket closed:", event.code, event.reason);
};
};
twilioSocket.onopen = () => {
console.log("✅ Twilio WebSocket connected");
};
twilioSocket.onmessage = async (event) => {
try {
const data = JSON.parse(event.data);
console.log("📥 Twilio event:", data.event);
if (data.event === "start") {
streamSid = data.start.streamSid;
console.log("📞 Stream started:", streamSid);
try {
const { data: conv, error: convErr } = await supabase
.from('conversations')
.insert({ status: 'active' })
.select('id')
.maybeSingle();
if (convErr) {
console.error('❌ Failed to create conversation:', convErr);
} else if (conv?.id) {
conversationId = conv.id;
console.log('🗂️ Conversation created:', conversationId);
}
} catch (e) {
console.error('❌ Error creating conversation:', e);
}
connectOpenAI();
}
if (data.event === "media" && !openaiSocket) {
if (!streamSid && data.streamSid) {
streamSid = data.streamSid;
console.log("ℹ️ Inferred streamSid from media:", streamSid);
}
console.log("⚠️ OpenAI not connected yet. Connecting now due to media event.");
connectOpenAI();
}
if (data.event === "media" && openaiSocket?.readyState === WebSocket.OPEN) {
openaiSocket.send(
JSON.stringify({
type: "input_audio_buffer.append",
audio: data.media.payload,
})
);
}
if (data.event === "stop") {
console.log("📞 Stream stopped");
try {
if (conversationId) {
const { error: updErr } = await supabase
.from('conversations')
.update({ status: 'completed', ended_at: new Date().toISOString() })
.eq('id', conversationId);
if (updErr) console.error('❌ Failed to update conversation:', updErr);
else console.log('🗂️ Conversation completed:', conversationId);
}
} catch (e) {
console.error('❌ Error updating conversation:', e);
}
openaiSocket?.close();
}
} catch (error) {
console.error("Error handling Twilio message:", error);
}
};
twilioSocket.onerror = (error) => {
console.error("❌ Twilio WebSocket error:", error);
};
twilioSocket.onclose = () => {
console.log("📞 Twilio disconnected");
openaiSocket?.close();
};
return response;
}
return new Response("Twilio Voice Agent - Use /twiml or /media-stream", {
status: 200,
headers: { "Content-Type": "text/plain" }
});
});
Edge Function 2: realtime-session
Create supabase/functions/realtime-session/index.ts:
import { serve } from "https://deno.land/std@0.190.0/http/server.ts";

const corsHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
};
serve(async (req) => {
if (req.method === "OPTIONS") {
return new Response(null, { headers: corsHeaders });
}
try {
const OPENAI_API_KEY = Deno.env.get("OPENAI_API_KEY");
if (!OPENAI_API_KEY) {
throw new Error("OPENAI_API_KEY is not set");
}
// OpenAI Realtime ephemeral session endpoint
const response = await fetch(
"https://api.openai.com/v1/realtime/sessions",
{
method: "POST",
headers: {
"Authorization": `Bearer ${OPENAI_API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "gpt-4o-realtime-preview-2024-12-17",
voice: "alloy",
instructions: "You are a helpful restaurant voice agent.",
}),
}
);
if (!response.ok) {
const errTxt = await response.text();
console.error("OpenAI session error:", errTxt);
return new Response(JSON.stringify({ error: "Failed to create session" }), {
status: 500,
headers: { ...corsHeaders, "Content-Type": "application/json" },
});
}
const data = await response.json();
return new Response(JSON.stringify(data), {
headers: { ...corsHeaders, "Content-Type": "application/json" },
});
} catch (error) {
console.error("Error:", error);
return new Response(JSON.stringify({ error: (error as Error).message }), {
status: 500,
headers: { ...corsHeaders, "Content-Type": "application/json" },
});
}
});
Edge Function 3: send-reservation-confirmation
Create supabase/functions/send-reservation-confirmation/index.ts:
import { serve } from "https://deno.land/std@0.190.0/http/server.ts";
import { Resend } from "https://esm.sh/resend@4.0.0";
const resend = new Resend(Deno.env.get("RESEND_API_KEY"));
const corsHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Headers": "authorization, x-client-info, apikey, content-type",
};
interface ReservationConfirmationRequest {
name: string;
email: string;
date: string;
time: string;
guests: number;
restaurantName: string;
}
const handler = async (req: Request): Promise<Response> => {
if (req.method === "OPTIONS") {
return new Response(null, { headers: corsHeaders });
}
try {
const { name, email, date, time, guests, restaurantName }: ReservationConfirmationRequest = await req.json();
console.log("Sending confirmation email to:", email);
const emailResponse = await resend.emails.send({
from: "Restaurant ",
to: [email],
subject: `Reservation Confirmation - ${restaurantName}`,
html: `
<h1>Reservation Confirmed!</h1>
<p>Dear ${name},</p>
<p>Your reservation at ${restaurantName} has been confirmed.</p>
<p><strong>Reservation Details:</strong></p>
<ul>
<li>Date: ${date}</li>
<li>Time: ${time}</li>
<li>Guests: ${guests}</li>
</ul>
<p>We look forward to seeing you!</p>
<p>Best regards,<br/>The ${restaurantName} Team</p>
`,
});
console.log("Email sent successfully:", emailResponse);
return new Response(JSON.stringify(emailResponse), {
status: 200,
headers: {
"Content-Type": "application/json",
...corsHeaders,
},
});
} catch (error: any) {
console.error("Error in send-confirmation function:", error);
return new Response(
JSON.stringify({ error: error.message }),
{
status: 500,
headers: { "Content-Type": "application/json", ...corsHeaders },
}
);
}
};
serve(handler);
Update supabase/config.toml:
[functions.twilio-voice]
verify_jwt = false

[functions.realtime-session]
verify_jwt = false

[functions.send-reservation-confirmation]
verify_jwt = false
Implementation: src/utils/RealtimeAudio.ts
Create this exact file for WebRTC audio handling:
import { supabase } from "@/integrations/supabase/client";
export class AudioRecorder {
private stream: MediaStream | null = null;
private audioContext: AudioContext | null = null;
private processor: ScriptProcessorNode | null = null;
private source: MediaStreamAudioSourceNode | null = null;
constructor(private onAudioData: (audioData: Float32Array) => void) {}
async start() {
try {
this.stream = await navigator.mediaDevices.getUserMedia({
audio: {
sampleRate: 24000,
channelCount: 1,
echoCancellation: true,
noiseSuppression: true,
autoGainControl: true
}
});
this.audioContext = new AudioContext({
sampleRate: 24000,
});
this.source = this.audioContext.createMediaStreamSource(this.stream);
this.processor = this.audioContext.createScriptProcessor(4096, 1, 1);
this.processor.onaudioprocess = (e) => {
const inputData = e.inputBuffer.getChannelData(0);
this.onAudioData(new Float32Array(inputData));
};
this.source.connect(this.processor);
this.processor.connect(this.audioContext.destination);
} catch (error) {
console.error('Error accessing microphone:', error);
throw error;
}
}
stop() {
if (this.source) {
this.source.disconnect();
this.source = null;
}
if (this.processor) {
this.processor.disconnect();
this.processor = null;
}
if (this.stream) {
this.stream.getTracks().forEach(track => track.stop());
this.stream = null;
}
if (this.audioContext) {
this.audioContext.close();
this.audioContext = null;
}
}
}
export class RealtimeChat {
private pc: RTCPeerConnection | null = null;
private dc: RTCDataChannel | null = null;
private audioEl: HTMLAudioElement;
private recorder: AudioRecorder | null = null;
private conversationId: string | null = null;
constructor(private onMessage: (message: any) => void) {
this.audioEl = document.createElement("audio");
this.audioEl.autoplay = true;
}
async init() {
try {
// Create conversation record
const { data: conv, error: convErr } = await supabase
.from('conversations')
.insert({ status: 'active' })
.select('id')
.single();
if (convErr) throw convErr;
this.conversationId = conv.id;
console.log('Conversation created:', this.conversationId);
// Get ephemeral token
const { data: tokenData, error: tokenError } = await supabase.functions.invoke("realtime-session");
if (tokenError || !tokenData?.client_secret?.value) {
throw new Error("Failed to get ephemeral token");
}
const EPHEMERAL_KEY = tokenData.client_secret.value;
// Create peer connection
this.pc = new RTCPeerConnection();
// Set up remote audio
this.pc.ontrack = e => this.audioEl.srcObject = e.streams[0];
// Add local audio track
const ms = await navigator.mediaDevices.getUserMedia({ audio: true });
this.pc.addTrack(ms.getTracks()[0]);
// Set up data channel
this.dc = this.pc.createDataChannel("oai-events");
this.dc.addEventListener("message", async (e) => {
const event = JSON.parse(e.data);
console.log("Received event:", event);
this.onMessage(event);
// Handle session.created - send configuration
if (event.type === 'session.created') {
const { data: config } = await supabase
.from('agent_config')
.select('*')
.limit(1)
.maybeSingle();
const instructions = config
? `You are a receptionist for ${config.restaurant_name}.\nHours: ${config.restaurant_hours}\nMenu: ${config.menu}\n${config.instructions}`
: "You are a helpful restaurant receptionist.";
this.dc!.send(JSON.stringify({
type: 'session.update',
session: {
modalities: ['text', 'audio'],
instructions: instructions,
voice: 'alloy',
input_audio_format: 'pcm16',
output_audio_format: 'pcm16',
input_audio_transcription: { model: 'whisper-1' },
turn_detection: {
type: 'server_vad',
threshold: 0.5,
prefix_padding_ms: 300,
silence_duration_ms: 1000
},
tools: [
{
type: 'function',
name: 'create_reservation',
description: 'Create a restaurant reservation',
parameters: {
type: 'object',
properties: {
name: { type: 'string' },
email: { type: 'string', description: 'Customer email address' },
date: { type: 'string', description: 'YYYY-MM-DD' },
time: { type: 'string', description: 'HH:MM' },
guests: { type: 'number' }
},
required: ['name', 'email', 'date', 'time', 'guests']
}
}
],
tool_choice: 'auto',
temperature: 0.8
}
}));
}
// Handle function calls
if (event.type === 'response.function_call_arguments.done') {
const args = JSON.parse(event.arguments);
// Save reservation
const { data: resv, error: resvErr } = await supabase
.from('reservations')
.insert({
conversation_id: this.conversationId,
name: args.name,
email: args.email,
date: args.date,
time: args.time,
guests: args.guests,
status: 'confirmed'
})
.select()
.single();
if (!resvErr && resv) {
// Send confirmation email
const { data: config } = await supabase
.from('agent_config')
.select('restaurant_name')
.limit(1)
.maybeSingle();
await supabase.functions.invoke('send-reservation-confirmation', {
body: {
name: args.name,
email: args.email,
date: args.date,
time: args.time,
guests: args.guests,
restaurantName: config?.restaurant_name || 'Restaurant'
}
});
}
}
});
// Create and set local description
const offer = await this.pc.createOffer();
await this.pc.setLocalDescription(offer);
// Connect to OpenAI
const baseUrl = "https://api.openai.com/v1/realtime"; // Realtime SDP endpoint for WebRTC
const model = "gpt-4o-realtime-preview-2024-12-17";
const sdpResponse = await fetch(`${baseUrl}?model=${model}`, {
method: "POST",
body: offer.sdp,
headers: {
Authorization: `Bearer ${EPHEMERAL_KEY}`,
"Content-Type": "application/sdp"
},
});
const answer = {
type: "answer" as RTCSdpType,
sdp: await sdpResponse.text(),
};
await this.pc.setRemoteDescription(answer);
console.log("WebRTC connection established");
} catch (error) {
console.error("Error initializing chat:", error);
throw error;
}
}
async sendText(text: string) {
if (!this.dc || this.dc.readyState !== 'open') {
throw new Error('Data channel not ready');
}
this.dc.send(JSON.stringify({
type: 'conversation.item.create',
item: {
type: 'message',
role: 'user',
content: [{ type: 'input_text', text }]
}
}));
this.dc.send(JSON.stringify({type: 'response.create'}));
}
async disconnect() {
// Update conversation status
if (this.conversationId) {
await supabase
.from('conversations')
.update({
status: 'completed',
ended_at: new Date().toISOString()
})
.eq('id', this.conversationId);
}
this.recorder?.stop();
this.dc?.close();
this.pc?.close();
}
}
Implementation: src/components/VoiceInterface.tsx
import React, { useEffect, useRef, useState } from 'react';
import { Button } from '@/components/ui/button';
import { useToast } from '@/components/ui/use-toast';
import { RealtimeChat } from '@/utils/RealtimeAudio';
import { Mic, MicOff } from 'lucide-react';
interface VoiceInterfaceProps {
onSpeakingChange: (speaking: boolean) => void;
}
const VoiceInterface: React.FC<VoiceInterfaceProps> = ({ onSpeakingChange }) => {
const { toast } = useToast();
const [isConnected, setIsConnected] = useState(false);
const chatRef = useRef<RealtimeChat | null>(null);
const handleMessage = (event: any) => {
console.log('Received message:', event);
if (event.type === 'response.audio.delta') {
onSpeakingChange(true);
} else if (event.type === 'response.audio.done') {
onSpeakingChange(false);
}
};
const startConversation = async () => {
try {
chatRef.current = new RealtimeChat(handleMessage);
await chatRef.current.init();
setIsConnected(true);
toast({
title: "Connected",
description: "Voice interface is ready",
});
} catch (error) {
console.error('Error starting conversation:', error);
toast({
title: "Error",
description: error instanceof Error ? error.message : 'Failed to start conversation',
variant: "destructive",
});
}
};
const endConversation = () => {
chatRef.current?.disconnect();
setIsConnected(false);
onSpeakingChange(false);
toast({
title: "Disconnected",
description: "Conversation ended",
});
};
useEffect(() => {
return () => {
chatRef.current?.disconnect();
};
}, []);
return (
  <div className="flex flex-col items-center gap-4">
    {/* Minimal controls; style to match the dark theme and pulse-glow animation */}
    {!isConnected ? (
      <Button onClick={startConversation} className="animate-pulse-glow"><Mic className="mr-2" />Start Conversation</Button>
    ) : (
      <Button onClick={endConversation} variant="destructive"><MicOff className="mr-2" />End Conversation</Button>
    )}
  </div>
);
};
export default VoiceInterface;
Styling Requirements
Add this animation to your global CSS (index.css):
@keyframes pulse-glow {
0%, 100% {
box-shadow: 0 0 20px rgba(132, 204, 22, 0.5);
}
50% {
box-shadow: 0 0 40px rgba(132, 204, 22, 0.8);
}
}
.animate-pulse-glow {
animation: pulse-glow 2s ease-in-out infinite;
}
Post-Setup Instructions
After the app is created:
Add Required Secrets:
Go to Backend → Secrets
Add OPENAI_API_KEY (get from OpenAI platform)
Add RESEND_API_KEY (get from resend.com)
Verify Resend Domain:
Verify the build-loop.ai domain in your Resend account
Configure Twilio:
Find your edge function URL in Backend → Edge Functions
Go to Twilio Console → Phone Numbers → Active Numbers
Select your number
Under "Voice & Fax" → "A CALL COMES IN" → Set to Webhook
Paste the URL
Method: HTTP POST
Save
Test the App:
Open the Live Demo page for web testing
Call your Twilio number for phone testing
Check Conversations and Reservations pages for data
Verify emails are being sent (a quick test sketch follows this list)
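To check email delivery without placing a call, the confirmation function can be invoked directly (a sketch; the recipient address is a placeholder):

import { supabase } from "@/integrations/supabase/client";

// Smoke-tests the send-reservation-confirmation edge function with sample data.
export async function testConfirmationEmail() {
  const { data, error } = await supabase.functions.invoke("send-reservation-confirmation", {
    body: {
      name: "Test Guest",
      email: "you@example.com", // replace with an inbox you control
      date: "2025-01-01",
      time: "19:00",
      guests: 2,
      restaurantName: "Demo Restaurant",
    },
  });
  console.log({ data, error }); // expect the Resend response object on success
}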
Critical Implementation Notes
All RLS policies must be set to public access (true) for demo purposes
Use WebRTC (RTCPeerConnection) for web demo, not WebSockets
Twilio edge function uses WebSocket (Deno.upgradeWebSocket)
Audio format for Twilio: g711_ulaw
Audio format for web: pcm16
Sample rate: 24kHz for both
Always use server VAD (Voice Activity Detection)
Tool calling must use exact schema provided
Email sender: hi@build-loop.ai
This prompt should create a fully functional AI restaurant receptionist that works via both web interface and phone calls through Twilio!
---
## Complete Implementation Plan for Single-Prompt Recreation
This comprehensive prompt will recreate your entire AI restaurant receptionist application with **Twilio phone integration** from scratch. Here's what makes this the definitive version:
### 🎯 Key Improvements Over Previous Version:
1. **Twilio Integration is Primary Focus**
- Complete `twilio-voice` edge function with both `/twiml` and `/media-stream` endpoints
- Clear instructions on WHERE to find the edge function URL after creation
- Step-by-step guide on configuring Twilio phone number
- Proper WebSocket handling for phone calls
2. **Dual Interface Support**
- Web demo using WebRTC (RTCPeerConnection) for browser testing
- Phone calls using Twilio WebSocket for production use
- Both use same backend (reservations, emails, database)
3. **Complete Technical Specifications**
- Exact audio formats: g711_ulaw for Twilio, pcm16 for web
- Proper session configuration with server VAD
- Function calling schema for reservations
- Email confirmation integration
4. **Database Schema with Defaults**
- All 4 tables with complete column definitions
- Default values to prevent insertion errors
- Public RLS policies for demo purposes
- Sample data insert for agent_config
5. **Three Edge Functions (Complete Code)**
- `twilio-voice`: Handles phone calls via Twilio
- `realtime-session`: Creates OpenAI ephemeral tokens for web demo
- `send-reservation-confirmation`: Sends emails via Resend
6. **Frontend Components**
- Complete dark theme styling
- 4 pages: Live Demo, Conversations, Reservations, Settings
- Real-time updates and data tables
- Audio visualizations and status indicators
### 📋 Post-Prompt Setup Checklist:
**Step 1: Add Secrets**
- `OPENAI_API_KEY` (from OpenAI platform)
- `RESEND_API_KEY` (from resend.com)
**Step 2: Verify Email Domain**
- Verify `build-loop.ai`
**Step 3: Get Edge Function URL**
- Navigate to Backend → Edge Functions in Lovable
- Find `twilio-voice` function
**Step 4: Configure Twilio**
- Twilio Console → Phone Numbers → Active Numbers
- Select your number
- Voice & Fax → "A CALL COMES IN"
- Set to: Webhook
- Method: HTTP POST
- Save
**Step 5: Test Everything**
- Web: Open Live Demo page, click microphone
- Phone: Call your Twilio number
- Verify: Check Conversations and Reservations pages
- Email: Confirm emails are delivered
### 🔑 Critical Technical Details:
**Audio Configuration:**
- Twilio calls: g711_ulaw format (telephone quality)
- Web demo: pcm16 format (high quality)
- Sample rate: 24kHz for both
- Server VAD enabled for automatic turn detection
**Function Calling:**
- Tool name: `create_reservation`
- Required fields: name, email, date, time, guests
- Auto-triggers email confirmation
- Persists to database with conversation_id
**Email System:**
- Sender: `hi@build-loop.ai`
- Uses Resend API
- HTML template with reservation details
- Automatic after successful reservation
**Database Design:**
- `agent_config`: Single row with restaurant settings
- `conversations`: Tracks all calls/sessions
- `messages`: Conversation history (optional logging)
- `reservations`: All bookings with email addresses
💡Requires OPENAI_API_KEY and RESEND_API_KEY secrets to be configured. Uses OpenAI Realtime API with WebRTC for voice, and Resend for email confirmations.