Releases: shubham0204/SmolChat-Android
v12
- Add support for the Portuguese language (alongside existing support for English and Simplified Chinese).
- Clear the model's context/memory when clearing the chat, i.e. when deleting all messages from the current chat.
- Avoid reloading the model when the screen is rotated (see the sketch after this list).
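A minimal sketch of how the rotation fix can work, assuming the loaded model is held by a wrapper class; `SmolLM` and its `load`/`close` methods are placeholders here, not the app's actual API. The idea is to keep the instance in a `ViewModel`, which survives configuration changes such as rotation:

```kotlin
import androidx.lifecycle.ViewModel

// Placeholder wrapper around the native llama.cpp bindings; method names are assumptions.
class SmolLM {
    fun load(modelPath: String) { /* allocate the llama.cpp model/context via JNI */ }
    fun close() { /* free the native memory */ }
}

class ChatViewModel : ViewModel() {

    // The ViewModel outlives the Activity across rotations, so the model loaded
    // here is not reloaded when the orientation changes.
    private var model: SmolLM? = null

    fun ensureModelLoaded(modelPath: String) {
        if (model == null) {
            model = SmolLM().also { it.load(modelPath) }
        }
    }

    override fun onCleared() {
        // Release the native memory only when the ViewModel itself is destroyed.
        model?.close()
        model = null
    }
}
```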
UI Improvements
- Earlier, the user had to click on the folder name to expand it. Clicking on the chevron icon now also expands the folder.
- Task names are now appended with `[Task]` in the chats list to distinguish them from other chats.
v11
- Fixed a bug where the app's memory usage kept increasing after switching models, i.e. the memory acquired by the previous model was not released when a different model was selected (see the sketch after this list)
- Sync with upstream llama.cpp
- Align default inference parameters with those found in the `llama` executable
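A short sketch of the idea behind the memory fix; `NativeModel` and `ModelManager` are illustrative names, not the app's classes. The loaded model is treated as a closeable native resource and released before a different one is loaded:

```kotlin
// Illustrative only: the real app wraps llama.cpp through JNI.
class NativeModel(private val path: String) : AutoCloseable {
    fun load() { /* allocate the llama.cpp model and context natively */ }
    override fun close() { /* free the native memory held by this model */ }
}

class ModelManager {
    private var current: NativeModel? = null

    fun switchTo(path: String) {
        current?.close()                          // release the previous model's memory
        current = NativeModel(path).apply { load() }
    }
}
```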
UI Improvements
- Chat message actions like share/copy/edit are now available in a dialog which appears when the message is long-pressed
- Fix misleading/overflowing icons to enhance UX
- Preserve query text in the search box when a model is opened while browsing HuggingFace
v10
- Sync with llama.cpp upstream
- The app now uses a new set of icons for a more aesthetic/refreshed look
- Fixed a bug where the app's memory usage kept increasing after switching models, i.e. the memory acquired by the previous model was not released when a different model was selected
v9
- fix a bug in `ChatActivity` causing a `NullPointerException` when the app is launched (this is the bug causing most crashes according to Google Play)
- make the 'Download Models' screen scrollable so it works correctly on small-screen devices
- add support for 16 KB page sizes (see the sketch after this list)
- improve the `SmolLM` API and document its methods
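One way to enable 16 KB page-size support when the native code is built with CMake and NDK r27 or newer through the Android Gradle plugin; this is a build-file fragment under those assumptions, not the project's actual configuration:

```kotlin
// Module-level build.gradle.kts (fragment)
android {
    defaultConfig {
        externalNativeBuild {
            cmake {
                // NDK r27+ toolchain option that aligns native libraries
                // to 16 KB page boundaries.
                arguments += "-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON"
            }
        }
    }
    packaging {
        jniLibs {
            // Keep .so files uncompressed so the alignment is preserved in the APK.
            useLegacyPackaging = false
        }
    }
}
```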
v8
- Allow grouping of chats into folders
- Add an option to show device RAM usage on the chat screen (see the sketch after this list)
- The app can now run on emulated Android devices
- Sync with upstream llama.cpp
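A sketch of how a device-RAM indicator can be computed from Android's `ActivityManager`; the helper name and the "used / total" formatting are illustrative, not the app's actual implementation:

```kotlin
import android.app.ActivityManager
import android.content.Context

// Returns a short "used / total" RAM summary suitable for a chat-screen label.
fun ramUsageSummary(context: Context): String {
    val activityManager =
        context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memoryInfo = ActivityManager.MemoryInfo()
    activityManager.getMemoryInfo(memoryInfo)

    val usedMb = (memoryInfo.totalMem - memoryInfo.availMem) / (1024 * 1024)
    val totalMb = memoryInfo.totalMem / (1024 * 1024)
    return "$usedMb MB / $totalMb MB"
}
```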
UI/UX changes
- Improved wizard for adding new models (easy to follow for beginners and non-technical users)
- Improved top app bar in the chat screen
- Improved chat list drawer with a 'marker' to indicate the currently selected chat
- New app icon
- New color scheme for the app, new font (San Francisco)
v7
- Add CPU extensions for the `armv7` architecture (32-bit Android devices) to improve inference latency
- Allow editing the last 'user' message in the chat
- Add 'Copy' and 'Share' actions for messages posted by the user (#68)
- Sync with upstream llama.cpp
Minor UI changes
- Show the model size in GB, up to 2 decimal places, in the HuggingFace model explorer
- Check if the selected file is a GGUF (see the sketch after this list)
- Show the model name in the model delete dialog
- Fix rendering of the model's thinking response
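A minimal sketch of a GGUF check, relying on GGUF files beginning with the 4-byte ASCII magic `GGUF`; the helper name and the use of `java.io.File` are assumptions, since the app may read the selection through a content `Uri` instead:

```kotlin
import java.io.File

// Returns true if the file starts with the GGUF magic bytes ("GGUF").
fun isLikelyGgufFile(file: File): Boolean {
    if (!file.isFile || file.length() < 4) return false
    val expectedMagic = byteArrayOf(
        'G'.code.toByte(), 'G'.code.toByte(), 'U'.code.toByte(), 'F'.code.toByte()
    )
    file.inputStream().use { stream ->
        val magic = ByteArray(4)
        return stream.read(magic) == 4 && magic.contentEquals(expectedMagic)
    }
}
```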
v7-fdroid
- Add CPU extensions for the `armv7` architecture (32-bit Android devices) to improve inference latency
- Add 'Copy' and 'Share' actions for messages posted by the user (#68)
- Sync with upstream llama.cpp
Minor UI changes
- Show the model size in GB, up to 2 decimal places, in the HuggingFace model explorer
- Check if the selected file is a GGUF
- Show the model name in the model delete dialog
- Fix rendering of the model's thinking response
v6
- The app can now receive query text from other apps: on tapping 'Share' in another app, SmolChat is listed as one of the options (see the sketch after this list).
- The app can create dynamic shortcuts for specific tasks. These shortcuts can also be added to the home screen for quick access. (#2)
- Migrated from ObjectBox to Room (particularly for #58)
- The app now has a custom icon (do let me know if it can be improved)
- There was an issue with the CI that caused the build process to not strip symbols from native libraries, increasing the size of the APK. This has been fixed, reducing the APK size.
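A sketch of how shared text typically reaches the app, assuming the chat activity declares an `ACTION_SEND` intent filter for `text/plain` in the manifest; the function name is illustrative:

```kotlin
import android.content.Intent

// Extracts text shared from another app via the system share sheet, if present.
fun extractSharedText(intent: Intent?): String? =
    if (intent?.action == Intent.ACTION_SEND && intent.type == "text/plain") {
        intent.getStringExtra(Intent.EXTRA_TEXT)
    } else {
        null
    }
```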
v5
v0.0.4
- The 'Download Models' screen now includes an interface to browse HuggingFace models and download them directly from the app (#17)
- Improved error handling for errors occurring in the native code (#31)
- The 'Chat Settings' screen now includes a field to configure the model's context size (#34); see the sketch after this list
- Sync with upstream llama.cpp (particularly for DeepSeek support)
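A small sketch of how a user-configured context size could flow from the settings screen into model initialization; `ChatSettings` and `ModelLoader` are hypothetical names, and the value ultimately corresponds to llama.cpp's context-length parameter (`n_ctx`):

```kotlin
// Hypothetical settings object backed by the 'Chat Settings' screen.
data class ChatSettings(
    val contextSize: Int = 2048,   // forwarded to llama.cpp as the context length (n_ctx)
    val temperature: Float = 0.8f,
)

// Hypothetical loader that forwards the configured context size to the native layer.
class ModelLoader {
    fun loadModel(modelPath: String, settings: ChatSettings) {
        // In the real app this would cross JNI into llama.cpp; here it is only a stub.
        println("Loading $modelPath with n_ctx=${settings.contextSize}, temperature=${settings.temperature}")
    }
}
```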