FeedMonky helps you collect customer feedback and uses a fine-tuned AI to generate TODOs from long feedback submissions. At any given time, you can see how many times each feature has been requested, helping you ship the features your customers want most.
Vera serves as a personalized learning assistant, helping you, your students, or your children overcome challenges and excel in their studies.
Build custom AI assistants with no code: resolve customer queries, empower employees with knowledge base support, capture leads, and automate appointments. Explore endless use cases, from personal study assistants to launching your own no-code AI business.
Sick of bullshit Google search results? Trust Reddit more, but find sifting through it a pain? RedditRecs collates comments across Reddit and lists monitors by popularity. Skip irrelevant comments and zoom in on the ones you care about.
Daily, Nightly: Submit a journal review of your day every night at 9pm, and receive daily positive affirmations, with AI picking the ones that fit you best. I created this app to model the daily routine that has helped me regulate my own emotions.
Wanderboat is a travel platform that uses AI to find and rank the best points of interest, complete with videos, images, and insights. From signature dishes to photo spots, you can ask questions freely in-chat, in-document, or on-map for personalized travel experiences.
JetCode is a breakthrough AI software development platform that transforms project requirements into precise coding guides. Designed for engineering managers and teams, it streamlines the coding process for web, mobile, and other platform development.
ChatTTS is a voice generation model, available on GitHub at 2noise/chattts, designed specifically for conversational scenarios. It is well suited to applications such as dialogue tasks for large language model assistants, as well as conversational audio and video introductions. The model supports both Chinese and English, achieving high quality and naturalness in speech synthesis through training on approximately 100,000 hours of Chinese and English data. The project team also plans to open-source a base model trained on 40,000 hours of data, which will aid the academic and developer communities in further research and development.
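For readers who want to try the model, here is a minimal sketch of batch text-to-speech with the ChatTTS Python package, based on the usage pattern shown in the project's README; exact method names (e.g. `Chat.load` vs. older `Chat.load_models`) and output shapes can vary between versions, so treat this as illustrative rather than definitive.

```python
# Minimal ChatTTS usage sketch (assumes the ChatTTS package and torchaudio are installed).
# API details may differ by version; check the 2noise/chattts README for the current interface.
import torch
import torchaudio
import ChatTTS

chat = ChatTTS.Chat()
chat.load(compile=False)  # compile=True can improve inference speed on supported setups

# One waveform is returned per input string; English and Chinese are both supported.
texts = ["Hello, welcome to the demo.", "This sentence is synthesized by ChatTTS."]
wavs = chat.infer(texts)

# Save each result as a 24 kHz WAV file; wavs[i] is expected to be a (1, num_samples) array.
for i, wav in enumerate(wavs):
    torchaudio.save(f"output_{i}.wav", torch.from_numpy(wav), 24000)
```

Because the model targets conversational speech, short dialogue-style sentences like the ones above tend to showcase its strengths better than long, formal passages.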