SelenaCore Update: Boosting AI Performance with ONNX Runtime Integration

We have reached a significant milestone in how SelenaCore understands your commands: our embedding engine now runs on ONNX Runtime. Previously we relied on the sentence-transformers library, which, while capable, consumed more system resources than necessary for local processing. Switching to ONNX (Open Neural Network Exchange) significantly reduces the memory footprint and speeds up converting your speech or text into actionable intents. This is especially important for users running SelenaCore on lightweight hardware, where it keeps the AI snappy and responsive without heavy overhead. The change, implemented in commit 594c479, marks a transition toward a more portable, high-performance machine learning backend. Our goal remains a seamless experience in which the intelligence of the system never compromises the speed of your smart home.
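For readers curious about what this migration involves under the hood: when you move from sentence-transformers to a raw ONNX model, the pooling and normalization that the library used to do for you must be reimplemented on the model's token-level output. The sketch below is illustrative only (it is not SelenaCore's actual code) and assumes a transformer exported to ONNX that emits per-token embeddings plus an attention mask; the toy arrays stand in for real model output.

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings into one sentence vector, ignoring padding.

    token_embeddings: (batch, seq_len, dim) raw model output
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask[..., np.newaxis].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)  # avoid divide-by-zero
    return summed / counts

def l2_normalize(vectors: np.ndarray) -> np.ndarray:
    """Scale each vector to unit length so dot product equals cosine similarity."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.clip(norms, 1e-9, None)

# Toy example: one sentence, 3 tokens (the last is padding), 2-dim embeddings
tokens = np.array([[[1.0, 0.0], [0.0, 1.0], [9.0, 9.0]]])
mask = np.array([[1, 1, 0]])
sentence_vec = l2_normalize(mean_pool(tokens, mask))
# Padding token is masked out: mean of [1,0] and [0,1], then normalized
```

In practice the `tokens` array would come from an `onnxruntime` inference call rather than being hand-written, but the pooling logic is the same either way.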
What has changed:
- Replaced sentence-transformers with ONNX Runtime for text embeddings.
- Reduced CPU and RAM usage during command processing.
- Improved overall system responsiveness and intent recognition speed.
- Enhanced compatibility with low-power hardware like Raspberry Pi.
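To give a feel for how embeddings drive intent recognition, here is a minimal, hypothetical sketch of matching a command embedding against a set of precomputed intent embeddings. The intent names, vectors, and `best_intent` helper are illustrative assumptions, not SelenaCore internals; it assumes all vectors are already L2-normalized so the dot product is cosine similarity.

```python
import numpy as np

def best_intent(query_vec: np.ndarray, intent_vecs: np.ndarray, intent_names: list):
    """Return the intent whose embedding is most similar to the query.

    Assumes all vectors are L2-normalized, so the dot product
    equals cosine similarity.
    """
    scores = intent_vecs @ query_vec
    idx = int(np.argmax(scores))
    return intent_names[idx], float(scores[idx])

# Toy unit vectors standing in for real sentence embeddings
intent_names = ["lights_on", "lights_off", "play_music"]
intent_vecs = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [0.70710678, 0.70710678],
])
query = np.array([0.98, 0.19899749])  # unit-length query, closest to "lights_on"
name, score = best_intent(query, intent_vecs, intent_names)
```

Because the heavy lifting is a single matrix-vector product, this matching step stays cheap even on a Raspberry Pi; the expensive part is producing the embeddings, which is exactly what the ONNX Runtime migration accelerates.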