
· 2 min read
Yung-Hsiang Hu

Feature Updates

  1. Consolidated offline English documentation #39
  2. Support for direct integration with third-party APIs; the default models now include Groq Llama 3.1 70B (see the sketch after this list).
  3. A Pre-built Docker Image is provided for faster installation.
  4. The Docker version directly supports CUDA, eliminating the need for repeated container installation.
  5. Added Kuwa Javascript Library for calling Multi-chat API.
  6. Ability to directly edit Bot settings by clicking the Bot image within the chat room.
  7. RAG supports caching Embedding models and Vector DB.
  8. SearchQA facilitates integration with third-party search engines.
  9. DocQA/WebQA offers a Fallback mode, allowing direct interaction with an LLM.
  10. Pipes can now specify parameters from the prompt.
  11. Added Media Converter for editing and splicing videos or audio.
  12. The default Gemini model has been updated to Gemini 1.5 Flash.
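
As a rough illustration of item 2 above, third-party OpenAI-compatible endpoints such as Groq's are typically called as shown below. This is only a sketch using the openai Python SDK; the model name and environment variable are assumptions, and Kuwa itself wires this up through its executor configuration rather than user code:

```python
import os

from openai import OpenAI

# Sketch only: a direct call to Groq's OpenAI-compatible endpoint.
# The model name and environment variable are assumptions; Kuwa's
# third-party API integration issues requests like this internally.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="llama-3.1-70b-versatile",  # assumed Groq model identifier
    messages=[{"role": "user", "content": "Hello from Kuwa!"}],
)
print(response.choices[0].message.content)
```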

Bug Fixes

  1. Docker WARN message "FromAsCasing: 'as' and 'FROM' keywords' casing do not match" #38
  2. The Executor diagram disappears in the Docker version
  3. Garbled Chinese text on the Windows version
  4. Chinese file paths are not supported when creating a Vector DB on the Windows version
  5. Overly long file names exceed the path length limit on the Windows version
  6. ChatGPT executor does not follow custom context_window

· 2 min read
Yung-Hsiang Hu

Feature Updates

  1. Added Pipe executor, which can execute programs (tools) within a specified directory, such as directly executing Python programs output by models via Python interpreter
  2. Provided Calculator, Iconv and Python example tools that can be called via Pipe executor
  3. Added Uploader executor to allow users to upload files to a specified directory, including tools, RAG knowledge bases, or website components
  4. Supported Bot export and import, allowing export of Bot name, description, icon, and Modelfile as a single Bot file, similar to an application configuration file; installation can automatically import default Bot files
  5. Allows users to choose the sorting method for Bots in chat rooms
  6. Supported Bot icon replacement
  7. Added a Kuwa API server compatible with the OpenAI API (see the sketch after this list)
  8. Provided default examples for connecting to cloud multimodal APIs: gpt-4o-mini-vision, DALL-E, Gemini pro 1.5 vision
  9. Supported setting the upper limit of uploaded files via Web interface
  10. Supported installation and execution in environments with Web proxy within enterprises
  11. Supported acceleration of model inference using Intel GPU
  12. Added automatic installation and update scripts for Docker version, thanks to @wcwutw
  13. The RAG toolchain's default embedding model has been replaced with Microsoft's intfloat/multilingual-e5-small model, licensed under MIT
  14. RAG (DocQA, WebQA, SearchQA, DB QA) added display_hide_ref_content, retriever_ttl_sec parameters
  15. Added more default downloadable models with tool support, including Meta Llama 3.1 8B with function calling and the lightweight Google Gemma 2 2B
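
Regarding item 7, an OpenAI-compatible Kuwa API server means existing OpenAI SDK clients can simply point at a Kuwa deployment. A minimal sketch follows; the base URL path, API token, and bot name are assumptions and depend on your deployment:

```python
from openai import OpenAI

# Sketch only: point a standard OpenAI SDK client at a Kuwa deployment.
# The base URL path, API token, and bot name are assumptions; check your
# own deployment for the actual values.
client = OpenAI(
    base_url="http://localhost/v1.0",  # assumed Kuwa API endpoint
    api_key="your-kuwa-api-token",     # token issued from the Kuwa web UI (assumed)
)

reply = client.chat.completions.create(
    model="your-bot-name",             # assumed bot identifier
    messages=[{"role": "user", "content": "Summarize this meeting transcript."}],
)
print(reply.choices[0].message.content)
```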

Bug Fixes

  1. #21: Docker version does not generate https:// links after a reverse proxy, thanks to @Phate334
  2. #23: Two-minute timeout issue, thanks to @x85432
  3. #24: Modelfile parsing issue
  4. #25: Importing Prompts does not apply Modelfile
  5. windows\src\tee.bat is falsely flagged as a virus
  6. RAG reference data does not display original file names
  7. Updated Windows version dependency download link

· 2 min read
Yung-Hsiang Hu

Feature Updates

  1. Customized Bot Permissions: Configure the Bot's readable and executable permissions at system, community, group, and individual levels
  2. Customized Upload File Policy: Admin can set maximum upload file size and allowed file types
  3. Tool Samples: Added samples for Copycat, a token counter, etc. (see the sketch after this list)
  4. Pre-defined Model Profiles: Provided profiles for LLaVA and other fine-tuned models
  5. UX Optimization: Beautified icons and chat lists
  6. Updated Default Models: the ChatGPT Executor now defaults to GPT-4o, and the Gemini Executor defaults to Gemini 1.5 Pro
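
To give a feel for item 3, a token-counter tool can be little more than a script that reads text and reports a count. The sketch below uses the tiktoken library and is a generic illustration, not the sample actually bundled with Kuwa:

```python
import sys

import tiktoken

# Generic sketch of a token-counter tool, not the sample shipped with Kuwa.
# cl100k_base (used by GPT-4-class models) is an assumed default encoding.
def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

if __name__ == "__main__":
    text = sys.stdin.read()
    print(f"{count_tokens(text)} tokens")
```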

Bug Fixes

  1. File names containing whitespace are parsed incorrectly when uploading
  2. Language is not saved after logout
  3. Dependency issue of Llamacpp Executor
  4. Color and line breaks not supported in Windows version logs
  5. The first message in a group chat is always sent, even when using multi-chat single-turn Q&A
  6. Windows version DocQA default parameters may exceed the context window

New Tutorials

Customizing RAG Parameters Tutorial: https://kuwaai.tw/blog/rag-param-tutorial
Customizing Tool Tutorial: https://kuwaai.tw/blog/rag-param-tutorial

· 5 min read
Yung-Hsiang Hu

Hi everyone, Kuwa v0.3.1 is out. This update focuses mainly on multimodal input and output, which now supports both speech and images. Combined with the previously launched Bot system and group chat functions, this enables practical applications such as meeting summaries, speech summaries, simple image generation, and image editing:

  1. Supports the Whisper speech-to-text model, which can output transcripts from uploaded audio files and features multi-speaker recognition and timestamps (see the sketch after this list).
  2. Supports the Stable Diffusion image generation model, which can generate images from text input or modify uploaded images based on user instructions.
  3. Huggingface executor supports integration with vision-language models such as Phi-3-Vision and LLaVA.
  4. RAG supports direct parameter adjustment through the Web UI and Modelfile, simplifying the calibration process.
  5. RAG supports displaying original documents and cited passages, making it easier to review search results and identify hallucinations.
  6. Supports importing pre-built RAG vector databases, facilitating knowledge sharing across different systems.
  7. Simplified selection of various open models during installation.
  8. Multi-chat Web UI supports direct export of chat records in PDF, Doc/ODT formats.
  9. Multi-chat Web UI supports Modelfile syntax highlighting, making it easy to edit Modelfiles.
  10. Kernel API supports passing website language, allowing the Executor to customize based on user language.
  11. The Executor removes the default System prompt to avoid compromising model performance.
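
For item 1, the sketch below shows roughly what the underlying speech-to-text step looks like using the open-source openai-whisper package. The model size and file name are assumptions, and Kuwa's executor wraps this kind of call and adds multi-speaker recognition on top:

```python
import whisper

# Sketch only: plain openai-whisper transcription with segment timestamps.
# The model size and audio file name are assumptions; Kuwa's Whisper
# executor wraps a call like this and adds multi-speaker recognition.
model = whisper.load_model("base")
result = model.transcribe("meeting.mp3")

for segment in result["segments"]:
    print(f"[{segment['start']:7.2f}s - {segment['end']:7.2f}s] {segment['text']}")
```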

· 2 min read
Yung-Hsiang Hu

Hello everyone, after receiving feedback and suggestions from the community, we have launched the official version of kuwa-v0.3.0 to better meet your needs.

The main differences from the previous version, kuwa-v0.2.1, are the addition and enhancement of features such as Bot, Store, the RAG toolchain, and system updates, as well as a new integrated chat and group chat interface:

  1. Bot allows users to create Bot applications with no code; they can adjust the System prompt, preset chat records, and User prompt prefixes and suffixes to implement functions such as role playing or executing specific tasks, or use Ollama model files to build more powerful applications;
  2. Store allows users to build and maintain their own Bot application store and share Bots with each other;
  3. RAG toolchain allows users to create their own vector databases by simply dragging and dropping local file folders, and then use the existing DBQA function to perform Q&A;
  4. The new integrated interface directly supports both group chats and single-model chats; it can also import Prompt Sets or upload files at any time and works with the related RAG functions;
  5. The Windows version adds SearchQA, which connects to Google Search to support Q&A over web results;
  6. Added Docker startup script to simplify Docker startup;
  7. The Executor can be connected directly to Ollama to use the models and applications Ollama supports (see the sketch after this list);
  8. You can use update.bat to quickly update to the latest released version without re-downloading the .exe installer.
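
For item 7, the Executor talks to a locally running Ollama server over its public REST API. A minimal sketch of the kind of request involved; the model name is an assumption, and Kuwa's Ollama executor issues such requests on your behalf:

```python
import requests

# Sketch only: a direct call to Ollama's public /api/chat endpoint.
# The model name is an assumption; Kuwa's Ollama executor sends
# requests like this for you once it is connected.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
    timeout=120,
)
print(response.json()["message"]["content"])
```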

· One min read
Yung-Hsiang Hu

Hello everyone,

The TAIDE team released the Llama3-TAIDE-LX-8B-Chat-Alpha1 model today. Friends who use the Kuwa TAIDE edition only need to update to the latest version, v0.2.1, to try the newest TAIDE model. In addition to updating the TAIDE model, this release also expands support for local models and fixes some minor problems; we hope it gives everyone a better user experience.

info

Download link for kuwa-taide-v0.2.1 single executable file: Google Drive

· 2 min read
Yung-Hsiang Hu

Hello friends,

The TAIDE model was released today, and we are happy to release a customized Kuwa system for Windows with the TAIDE LX 7B Chat 4-bit model built in.

info

Download link for the kuwa-taide-v0.2.0 single executable file: Google Drive
kuwa-taide-v0.2.0 documentation: kuwa-taide-0415.pdf

· 4 min read
Yung-Hsiang Hu
info

This version does not include the TAIDE model itself, and a version pre-loaded with the TAIDE model is expected to be released after the TAIDE model is publicly available.

Hello to our community friends,

After collecting everyone's feedback, we plan to roll out the long-awaited RAG feature in v0.2.0. The RAG part has been internally tested, so we are releasing v0.2.0-beta to invite everyone to test it out and see if it meets your expectations.
In addition, this update also provides a way to connect to the TAIDE API and TAIDE models.
At the same time, we have also adjusted the system installation script and fixed some known bugs, making the entire system more stable, easier to extend, and easier to use.
If you have any suggestions or if you think there is room for improvement, please let us know!

· 2 min read
Ching-Pao Lin

Hello developers and users,

After receiving feedback from many users since the initial release, we are pleased to announce the stable release of v0.1.0. In this version, we have made some adjustments to the installation process for the Windows version. We have also simultaneously released a Docker version, allowing users to quickly install and adjust the environment structure as needed. Additionally, we have fixed some minor bugs that were known in previous versions.