Harnessing Local AI: Unleashing the Power of .NET Smart Components and Llama2
Published Apr 10, 2024

Hi!

.NET Smart Components are an amazing example of how to use AI to enhance the user experience in something as popular as a combobox.

.NET Smart Components also support the use of local LLMs, so in this post I’ll show how to configure these components to use a local Llama 2 inference server. The following image shows the Smart TextArea doing completions against a local server; on the right, we can see the local server journal with the HTTP requests arriving at the server.

[Animation: Smart TextArea completions running against a local Ollama server, with the server journal showing the incoming HTTP requests]

Introduction to .NET Smart Components

.NET Smart Components are a groundbreaking addition to the .NET ecosystem, offering AI-powered UI controls that seamlessly integrate into your applications. These components are designed to enhance user productivity by providing intelligent features:

  • Smart Paste simplifies data entry by automatically filling out forms using data from the user’s clipboard.
  • Smart TextArea enhances the traditional textarea by providing autocomplete capabilities for sentences, URLs, and more.
  • Smart ComboBox improves the traditional combo box by offering suggestions based on semantic matching.
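
To make that concrete, here’s a rough sketch of how the three components might appear in Blazor markup. The parameter names follow my reading of the repo’s README and may differ between versions, so treat this as illustrative rather than definitive:

@* Illustrative Blazor markup; parameter names may differ by version *@
<EditForm Model="person">
    <InputText @bind-Value="person.Name" />
    <InputText @bind-Value="person.Email" />
    <SmartPasteButton DefaultIcon />   @* fills the fields above from clipboard text *@
</EditForm>

<SmartTextArea UserRole="Customer support agent replying to tickets" />
@* ^ suggests sentence completions as the user types *@

<SmartComboBox Url="api/categories" />
@* ^ ranks suggestions from your endpoint by semantic similarity *@

@code {
    class Person { public string? Name { get; set; } public string? Email { get; set; } }
    readonly Person person = new();
}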

These components are currently available for Blazor, MVC, and Razor Pages with .NET 6 and later.

The Importance of Local LLMs like Llama2

Local Large Language Models (LLMs) like Llama2 offer significant advantages, particularly in terms of data privacy and security. For example, running LLMs locally allows organizations to process sensitive data without exposing it to external services.

Llama2 is an open-source model that provides robust performance across various tasks, including common-sense reasoning, mathematical abilities, and general knowledge. It supports a context length of 4096 tokens, which is double that of its predecessor, Llama1. This makes Llama2 an ideal choice for organizations looking to leverage AI while maintaining control over their data and infrastructure.

How to run .NET Smart Components with a Local Ollama Inference Server

In a previous post, I shared how to run a local Ollama inference server on Ubuntu (blog). Lucky for us, you can now also do this on Windows.
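
If you haven’t set it up yet, the steps are essentially the same on both platforms. A minimal sketch using the Ollama CLI (note that the model name here matches the DeploymentName we’ll configure below):

# Grab the llama2 model for local inference
ollama pull llama2

# Start the server if it isn't already running
# (the Windows installer usually keeps it running in the background)
ollama serve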

And once you clone the main Smart Components repository, you only need to add a small change to run the samples locally.
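
For reference, cloning looks like this (I’m assuming the repo still lives under the dotnet-smartcomponents organization on GitHub; check the official announcement for the current URL):

# Assumed location of the .NET Smart Components repository
git clone https://github.com/dotnet-smartcomponents/smartcomponents.git
cd smartcomponents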

  • Open the file RepoSharedConfig.json


  • Add the following configuration to use the local Ollama model:
{
  "SmartComponents": {
    // local demo with an ollama self-hosted server
    "SelfHosted": true,
    "DeploymentName": "llama2",
    "Endpoint": "http://localhost:11434"
  }
}
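
Here "DeploymentName" is the Ollama model name and "Endpoint" is the local server URL (11434 is Ollama’s default port). If you want to double-check that the server is reachable before running the samples, a tiny diagnostic like this works. This is my own helper snippet, not part of the Smart Components samples; it uses Ollama’s /api/tags endpoint, which lists the models pulled locally:

// Quick diagnostic (not part of the Smart Components samples).
// Ollama's /api/tags endpoint lists the locally available models.
// Run as a .NET 6+ top-level program with implicit usings enabled.
using var http = new HttpClient();
var json = await http.GetStringAsync("http://localhost:11434/api/tags");
Console.WriteLine(json.Contains("llama2")
    ? "llama2 is available locally."
    : "llama2 not found - run 'ollama pull llama2' first.");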

And that’s it! Now you can run either the Blazor or the MVC demo, and it will use the local Ollama server to run the completions!
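
For example, from the repo root (the exact sample folder names may vary, so check the repo’s samples directory):

# <path-to-sample> is a placeholder for the Blazor or MVC sample folder
cd <path-to-sample>
dotnet run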

And hey, let’s keep an eye on the Smart Components; they are going to provide an amazing new user experience powered by AI!

Happy coding!

Greetings

El Bruno
