
Brave Browser Integrates RTX-Accelerated AI with Leo AI and Ollama

Zach Anderson
Oct 02, 2024 19:53

Brave browser users can now experience enhanced AI interactions with the integration of RTX-accelerated Leo AI and Ollama, offering local processing for improved privacy and efficiency.


The Brave browser, known for its privacy focus, has introduced a powerful AI assistant, Leo AI, enhanced by RTX-accelerated local large language models (LLMs) through a collaboration with Ollama, according to the NVIDIA Blog. This integration aims to improve user experience by providing efficient, locally processed AI capabilities.

Enhanced AI Experience with RTX Acceleration

Brave’s Leo AI, powered by NVIDIA’s RTX technology, lets users summarize articles, extract insights, and get answers to questions directly within the browser. The acceleration comes from NVIDIA’s Tensor Cores, which are built for AI workloads and perform large numbers of calculations in parallel. Through the collaboration with Ollama, Brave taps the open-source llama.cpp library, an inference runtime that includes optimizations specifically for NVIDIA’s RTX GPUs.
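
As an illustration of this local inference path (not Brave’s actual integration code), the sketch below sends a summarization prompt to Ollama’s documented HTTP API on its default local port; Ollama executes the request with llama.cpp, which is where the RTX acceleration applies. The article text, the "llama3" model name, and the use of Python’s requests library are assumptions made for the example.

import requests

# Hypothetical article text; Leo AI issues a similar request against the
# local Ollama server, which runs it through llama.cpp on the GPU.
article_text = "..."

response = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    json={
        "model": "llama3",   # Llama 3 8B, pulled beforehand with `ollama pull llama3`
        "prompt": "Summarize this article in three sentences:\n\n" + article_text,
        "stream": False,     # return a single JSON object instead of a stream
    },
    timeout=120,
)
print(response.json()["response"])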

Advantages of Local AI Processing

Running AI models locally on a PC provides significant privacy benefits, since data never has to be sent to external servers. Local processing keeps user data on the device and keeps the assistant available without relying on cloud services. It also lets users work with a range of specialized models, such as bilingual or code-generation models, without incurring cloud service fees.
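
For instance, a user might keep a general chat model, a code model, and a bilingual model installed side by side and pick whichever fits the task. The short sketch below, which assumes Ollama is running locally and uses Python’s requests library, simply lists the models currently installed; the model names shown in the comment are illustrative.

import requests

# Ask the local Ollama server which models are installed; switching between
# them is then just a matter of choosing a different model name per request.
tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()
for model in tags["models"]:
    print(model["name"])   # e.g. "llama3:latest", "codellama:latest"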

Technical Integration and Performance

Brave’s integration with Ollama and RTX technology offers a responsive AI experience, with the Llama 3 8B model achieving processing speeds of up to 149 tokens per second. This setup ensures quick responses to user queries and content requests, enhancing the overall browsing experience with Leo AI.
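
Throughput figures like the 149 tokens per second quoted above can be derived from the timing fields Ollama returns with each response: eval_count is the number of generated tokens and eval_duration is the generation time in nanoseconds. The sketch below, again assuming a local Ollama install, the "llama3" model, and Python’s requests library, computes tokens per second from those fields.

import requests

result = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Explain Tensor Cores in one paragraph.",
        "stream": False,
    },
    timeout=120,
).json()

# eval_count = generated tokens, eval_duration = generation time in nanoseconds
tokens_per_second = result["eval_count"] / (result["eval_duration"] / 1e9)
print(f"{tokens_per_second:.1f} tokens/s")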

Getting Started with Leo AI and Ollama

Users interested in these AI capabilities can install Ollama from its official website. Once it is installed, Brave’s Leo AI can be configured to use local models through Ollama, with the flexibility to switch between cloud and local models as needed. Developers can learn more about using Ollama and llama.cpp through resources provided by NVIDIA.
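
As a rough end-to-end check, the sketch below assumes Ollama has been installed and a model pulled (for example with the command ollama pull llama3), verifies that the local server is reachable on its default port, and sends a single request through Ollama’s documented chat endpoint. The question text and model name are placeholders; Leo AI itself is then pointed at the same local Ollama endpoint from Brave’s settings.

import requests

OLLAMA_URL = "http://localhost:11434"   # Ollama's default local endpoint

# Confirm the local server is up before pointing Leo AI at it.
assert requests.get(OLLAMA_URL, timeout=5).ok

reply = requests.post(
    OLLAMA_URL + "/api/chat",
    json={
        "model": "llama3",   # any locally pulled model works here
        "messages": [{"role": "user", "content": "What are the benefits of local AI inference?"}],
        "stream": False,
    },
    timeout=120,
).json()
print(reply["message"]["content"])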


