Owned by Maxime

Match Founders

10 members • Free

We connect technical builders and business cofounders. Show your work, get matched on skills + timezone + commitment, go from idea to traction NOW

Memberships

Agency Vault

151 members • Free

Skoolers

179.6k members • Free

High Ticket AI Builders

49 members • Free

Wealth Academy Lite

34.3k members • Free

Amplify Views

27.4k members • Free

FBACADEMIE™

1.8k members • Free

Amazon FBA Empires FREE

3.1k members • Free

The Trading Connect

351 members • Free

The FBA Advantage

657 members • $1/month

1 contribution to Match Founders
🧠 AceClip.com – Co-Founder Brief & Invitation
👋 Introduction

Hey builders, creators, and AI tinkerers, I'm Maxime Yao, a 20-year-old Business student at the University of Exeter and a Product Manager intern at Amazon. For the past two months, I've been building AceClip.com solo: an AI-powered system that turns long-form videos (podcasts, interviews, lectures) into short, searchable, personalized knowledge clips. I'm now looking for a technical co-founder to help take AceClip from a working MVP to a production-ready web platform. If you're passionate about AI, education, and the future of knowledge discovery, this might resonate with you.

🎬 What Is AceClip?

AceClip is your "personal YouTube brain." It helps users find, organize, and revisit the smartest moments buried inside hours of video content.

Core functions:
- 🧠 Automatically detects the most insightful moments in podcasts & interviews
- ✂️ Generates professional short-form clips with captions, face cropping, and speaker tracking
- 🔍 Supports semantic video search using RAG + vector embeddings
- 📚 Builds a personalized knowledge library: an encyclopedia of ideas tailored to you

Think of it as the next evolution of intentional learning: turning internet noise into structured, actionable wisdom.

⚙️ How It Works – Under the Hood
- Language & video stack: Python | OpenCV | FFmpeg | GPT for semantic understanding
- Perception & audio: InsightFace for facial recognition, pyannote.audio for speaker diarization
- Knowledge layer: RAG + embeddings for search; DeepSeek-OCR for 10× transcript compression
- Performance: processes 1 hour of video in ≈15 minutes on cloud GPUs
- Scalability: embedding billions of minutes for <$100 one-time processing and $10–20 monthly storage
- Codebase: ~15K modular lines (AI-assisted); local backend fully automated at MVP stage

The system chunks transcripts into 3–7 minute segments (≈1,000–1,500 words), indexes metadata (speakers, timestamps, topics), and enables natural-language queries across millions of clips.
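To make the chunk-and-search idea concrete, here is a minimal Python sketch, not AceClip's actual code: the names Clip, embed, chunk_transcript, and search are hypothetical, and embed() is a toy stand-in for whatever embedding model the real knowledge layer uses. It splits a word-timed transcript into fixed-size chunks, attaches video/speaker/timestamp metadata, and ranks clips by cosine similarity to a natural-language query.

```python
# Illustrative sketch only: toy chunk-and-search pipeline, not AceClip's code.
from dataclasses import dataclass
import math

@dataclass
class Clip:
    video_id: str
    speaker: str
    start_s: float
    end_s: float
    text: str
    vector: list[float] | None = None

def embed(text: str) -> list[float]:
    # Placeholder for a real embedding model (sentence-transformer, API, etc.).
    # Here: a tiny bag-of-words hashing trick, L2-normalised so that a dot
    # product behaves like cosine similarity.
    vec = [0.0] * 64
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk_transcript(words, video_id, speaker, words_per_chunk=1200):
    """Split a word-timed transcript into ~1,000-1,500-word chunks (3-7 min of speech).

    `words` is a list of (word, start_seconds, end_seconds) tuples.
    """
    clips = []
    for i in range(0, len(words), words_per_chunk):
        window = words[i:i + words_per_chunk]
        text = " ".join(w for w, _, _ in window)
        clips.append(Clip(video_id, speaker, window[0][1], window[-1][2], text, embed(text)))
    return clips

def search(query: str, index: list[Clip], top_k: int = 3) -> list[Clip]:
    """Rank indexed clips by cosine similarity to a natural-language query."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, c.vector)), c) for c in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:top_k]]

# Usage: index one fake transcript, then query it in natural language.
index = chunk_transcript(
    [("compounding", 0.0, 0.4), ("beats", 0.4, 0.8), ("timing", 0.8, 1.2)],
    video_id="vid_001", speaker="guest",
)
print(search("does compounding matter more than timing?", index)[0].text)
```

In a production setup the embed() stub would be replaced by the actual embedding model and the in-memory list by a vector database, which is where the RAG retrieval layer described above would plug in.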
Maxime Yao
Level 1 • 4 points to level up
@maxime-yao-2206
Founder of AceClip.com ("We Ace your Clips"). Currently a 20-year-old Business student at the University of Exeter.

Active 17d ago
Joined Oct 31, 2025
London