SelenaCore Update: Personalizing Voice Intelligence and Dynamic Intent Management

In our latest update, SelenaCore takes a significant step forward in making human-machine interaction more natural and adaptable. We are introducing Per-Voice Settings, allowing users to fine-tune individual voice profiles for a truly personalized experience. This is complemented by critical audio stability fixes that ensure a seamless listening experience.
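To illustrate the idea behind Per-Voice Settings, here is a minimal sketch of what per-profile tuning could look like. The class and field names (`VoiceProfile`, `speed`, `pitch`, `volume`) are hypothetical and not taken from SelenaCore's actual code:

```python
from dataclasses import dataclass

# Hypothetical sketch: each voice profile carries its own tunable
# parameters instead of one global audio configuration.
@dataclass
class VoiceProfile:
    name: str
    speed: float = 1.0   # playback rate multiplier
    pitch: float = 0.0   # semitone offset
    volume: float = 1.0  # linear gain

class VoiceSettings:
    """Registry of per-voice profiles, created lazily with defaults."""
    def __init__(self):
        self._profiles: dict[str, VoiceProfile] = {}

    def get(self, name: str) -> VoiceProfile:
        # Create a default profile on first access so every voice is tunable.
        if name not in self._profiles:
            self._profiles[name] = VoiceProfile(name)
        return self._profiles[name]

# Tuning one voice leaves the others at their defaults.
settings = VoiceSettings()
settings.get("selena").speed = 1.2
print(settings.get("selena").speed)    # 1.2
print(settings.get("narrator").speed)  # 1.0
```

The key design point is isolation: adjusting one profile never touches another, which is what makes the experience feel personalized per voice.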
A major architectural improvement has been implemented in commit ba560ca: the transition to DB-driven intents. Previously, intents — the mappings from recognized commands to actions — were defined rigidly in the system itself; now they are managed in the database. This allows SelenaCore to pick up new commands dynamically, making the platform more flexible for future expansions without requiring complex code changes.
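The shape of a DB-driven intent system can be sketched roughly as follows. This is an illustrative example only — the table schema, phrases, and handler names are invented here, not SelenaCore's actual schema:

```python
import sqlite3

# Hypothetical sketch: command phrases live in a database table, so a new
# intent can be added with an INSERT instead of a code change and redeploy.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE intents (
        phrase  TEXT PRIMARY KEY,
        intent  TEXT NOT NULL,
        handler TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO intents VALUES (?, ?, ?)",
    [
        ("turn on the lights", "lights_on", "lights.turn_on"),
        ("what time is it", "ask_time", "clock.speak_time"),
    ],
)

def resolve_intent(utterance: str):
    # Exact-match lookup for clarity; a real system would layer fuzzy
    # or model-based matching on top of the stored phrases.
    return conn.execute(
        "SELECT intent, handler FROM intents WHERE phrase = ?",
        (utterance.lower().strip(),),
    ).fetchone()

# Teaching the assistant a new command at runtime is just a row insert.
conn.execute("INSERT INTO intents VALUES (?, ?, ?)",
             ("play some music", "play_music", "media.play"))
print(resolve_intent("play some music"))  # ('play_music', 'media.play')
```

This is the flexibility the post refers to: the set of understood commands becomes data rather than code.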
Furthermore, we have integrated Dual Piper TTS support, doubling down on our high-performance local text-to-speech capabilities. This ensures faster response times and more reliable audio generation. To help users navigate these new features, we have completely rewritten our documentation, making it easier for everyone to understand how to leverage these tools.
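One plausible reading of dual TTS support is a primary/secondary failover arrangement, which would account for the reliability claim. The sketch below illustrates that pattern with stand-in synthesizer functions; it does not call Piper's real API, and all names here are hypothetical:

```python
from typing import Callable, Optional

# Hypothetical stand-in for a Piper-style synthesizer: returns audio bytes
# for the given text, or raises if the engine is unavailable.
def make_engine(name: str, healthy: bool = True) -> Callable[[str], bytes]:
    def synthesize(text: str) -> bytes:
        if not healthy:
            raise RuntimeError(f"{name} unavailable")
        return f"[{name} audio for: {text}]".encode()
    return synthesize

class DualTTS:
    """Try the primary engine first; fall back to the secondary on failure."""
    def __init__(self, primary, secondary):
        self.engines = [primary, secondary]

    def speak(self, text: str) -> Optional[bytes]:
        for engine in self.engines:
            try:
                return engine(text)
            except RuntimeError:
                continue  # this engine failed; try the next one
        return None  # no engine could produce audio

# With the primary down, synthesis transparently uses the secondary.
tts = DualTTS(make_engine("piper-a", healthy=False), make_engine("piper-b"))
print(tts.speak("Hello").decode())  # [piper-b audio for: Hello]
```

Running both engines locally keeps latency low, and the failover path means a single engine fault no longer interrupts audio generation.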
These updates, including the voice refinements in commit 0571238, reinforce our commitment to building a smart, user-friendly, and highly customizable AI ecosystem.