NVIDIA updates ChatRTX with new models, voice recognition, media search, and more with AI

NVIDIA has just updated its ChatRTX AI chatbot with support for new LLMs, new media search capabilities, and speech recognition technology. Check it out:

The latest version of ChatRTX supports more LLMs, including Gemma, Google's latest open LLM that runs locally. Gemma was developed with the same research and technology that Google used to create the Gemini models, and it is designed for responsible AI development.

ChatRTX now supports ChatGLM3, an open, bilingual (English and Chinese) LLM based on the General Language Model framework. The updated version also lets users interact with image data through OpenAI's Contrastive Language-Image Pre-training (CLIP). As NVIDIA explains, CLIP is a neural network that, through training and refinement, learns visual concepts from natural language supervision: a model that recognizes what the AI "sees" in image collections.
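
To illustrate the idea behind CLIP outside of ChatRTX itself, here is a minimal sketch that scores a photo against a few candidate text descriptions using the openly published CLIP weights through Hugging Face's transformers library. The image path and captions are hypothetical placeholders, and this is not NVIDIA's implementation, just the general technique.

```python
# Minimal CLIP sketch: score one image against candidate text descriptions.
# Assumes `pip install torch transformers pillow`; "photo.jpg" and the
# captions below are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
captions = ["a dog on the beach", "a birthday cake", "a mountain at sunset"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher probability = better match between the image and that caption.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.3f}  {caption}")
```

Running the same scoring over a folder of photos is, in essence, how natural-language photo search can work without any manual metadata tagging.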

  • ChatRTX adds Gemma, Google's latest LLM, and ChatGLM3, an open, bilingual (English and Chinese) LLM, to its growing list of supported models, giving users additional flexibility.
  • New photo support allows ChatRTX users to easily search and interact locally with their photo data without the need for complex metadata tagging, thanks to OpenAI's Contrastive Language-Image Pre-training (CLIP).
  • ChatRTX users can now talk to their data, with added support for Whisper, an AI-powered automatic speech recognition system that allows ChatRTX to understand spoken queries.

On the voice recognition side, ChatRTX now supports Whisper, an AI-powered automatic speech recognition system that processes spoken language, so users can send voice requests to the app and receive text-based responses from ChatRTX.
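
As a rough illustration of what an ASR system like Whisper does, independent of how ChatRTX integrates it, here is a minimal sketch using the open-source openai-whisper package; the audio file name is a hypothetical placeholder.

```python
# Minimal Whisper sketch: transcribe a spoken query to text.
# Assumes `pip install openai-whisper` and ffmpeg installed on the system;
# "voice_query.wav" is a hypothetical placeholder file.
import whisper

model = whisper.load_model("base")        # smaller model; trades accuracy for speed
result = model.transcribe("voice_query.wav")
print(result["text"])                     # recognized text a chatbot could then answer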

You can download ChatRTX right here (11.6 GB download).