AI + Long-Term Memory + Vision + Voice + Tools
Imagine an AI assistant that doesn't just chat but remembers, recognizes faces, manages your GitHub repos, and even sees through your camera to answer questions in real time! Meet Onyx AI Assistant, designed to take virtual assistance to the next level.

🎥 See Onyx in Action
[Loom Demo Video] - https://www.loom.com/share/1264867b9c0042f3b5294d2f784d722e

📌 Key Features:
- Persistent Memory: Onyx recalls past conversations, making every interaction more personalized.
- Face Recognition: Identifies faces, making it uniquely aware of its users.
- Camera and Screen Vision: Answers questions about objects in view or whatever is displayed on the screen.
- Google Lens Integration: Identifies objects in front of the camera and performs a Google Lens search via Serper for quick, relevant results.
- GitHub Integration: Easily create and clone repos from the app itself.

Onyx leverages Qdrant as a vector store to power conversational memory and enable retrieval-augmented generation (RAG). For graph-based RAG, it integrates Neo4j alongside Qdrant, creating a seamless, efficient memory system. This implementation is made possible by the open-source mem0ai library, which powers Onyx's memory capabilities. Note that Gemini doesn't yet support graph-based RAG through Mem0, but OpenAI does. Sketches of how these pieces could fit together follow the repo link below.

Onyx is available on GitHub for both text-based (main branch) and voice-to-voice (speech-based-assistant branch) interactions.
Repo Link - https://github.com/Divyanshu9822/onyx-ai-assistant
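🛠️ Implementation Sketches

To give a rough idea of how mem0 ties Qdrant and Neo4j together, here is a minimal configuration sketch using the mem0ai library. The hosts, credentials, user ID, and model name are placeholder assumptions, not values from the Onyx repo.

```python
from mem0 import Memory

# Minimal mem0 config combining a Qdrant vector store with a Neo4j graph
# store. All connection details below are placeholder assumptions.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j://localhost:7687",
            "username": "neo4j",
            "password": "password",
        },
    },
    # Graph memory currently works with OpenAI models rather than Gemini,
    # per the note above.
    "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini"}},
    "version": "v1.1",  # graph store support uses the v1.1 config schema
}

memory = Memory.from_config(config_dict=config)

# Store a fact from a conversation, then retrieve it later for RAG.
memory.add("I prefer dark mode in all my apps.", user_id="divyanshu")
results = memory.search("What UI preferences do I have?", user_id="divyanshu")
```

At chat time, the top search hits would be prepended to the LLM prompt so the assistant can answer with personal context.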
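For the face recognition feature, one common approach is the face_recognition library: encode a reference photo of each known user, then compare encodings from incoming camera frames. This is a sketch of the general technique with hypothetical file names, not necessarily what Onyx ships.

```python
import face_recognition

# Encode the known user's face from a reference photo (hypothetical file).
known_image = face_recognition.load_image_file("known_user.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face found in a captured camera frame and compare.
frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    is_match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Recognized known user" if is_match else "Unknown face")
```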
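Camera vision can be sketched as: grab a frame with OpenCV and pass it to a multimodal model. This example assumes Gemini via the google-generativeai SDK; the model name and prompt are illustrative.

```python
import cv2
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name

# Grab a single frame from the default webcam.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    # OpenCV returns BGR arrays; convert to an RGB PIL image for the SDK.
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    response = model.generate_content(
        ["What object is in front of the camera?", image]
    )
    print(response.text)
```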
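The Google Lens lookup goes through Serper. This sketch assumes Serper exposes a Lens endpoint at https://google.serper.dev/lens that takes a publicly reachable image URL, so a captured frame would first need to be uploaded somewhere; check Serper's docs for the exact endpoint and parameters.

```python
import requests

SERPER_API_KEY = "YOUR_SERPER_API_KEY"  # placeholder key


def lens_search(image_url: str) -> dict:
    """Google Lens search via Serper (assumed /lens endpoint; see Serper docs)."""
    resp = requests.post(
        "https://google.serper.dev/lens",
        headers={"X-API-KEY": SERPER_API_KEY, "Content-Type": "application/json"},
        json={"url": image_url},
    )
    resp.raise_for_status()
    return resp.json()


# Example: identify whatever is in an already-hosted camera frame.
print(lens_search("https://example.com/frame.jpg"))
```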
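For the GitHub integration, PyGithub plus a git subprocess is one plausible wiring for the create-and-clone flow; the token and repo name below are placeholders, and Onyx may implement this differently.

```python
import subprocess

from github import Github  # PyGithub

gh = Github("YOUR_GITHUB_TOKEN")  # personal access token with repo scope


def create_repo(name: str, private: bool = True):
    """Create a repository under the authenticated user."""
    return gh.get_user().create_repo(name=name, private=private)


def clone_repo(clone_url: str, dest: str) -> None:
    """Shell out to git to clone the repository locally."""
    subprocess.run(["git", "clone", clone_url, dest], check=True)


repo = create_repo("onyx-demo-repo")  # hypothetical repo name
clone_repo(repo.clone_url, "./onyx-demo-repo")
```

In the assistant, functions like these would typically be registered as LLM tools so a chat request such as "create a repo called onyx-demo-repo" can trigger them directly.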