docker
71 Topics

Updates to the Windows Container Runtime support
Over the next year, Microsoft will transition support for the Mirantis Container Runtime (previously known as Docker Engine – Enterprise) to Mirantis support services. Windows Server containers will continue to function regardless of the runtime; what changes is the coordination of the associated technical support, previously provided by Microsoft and Mirantis. The Mirantis Container Runtime will continue to be available from, and supported by, Mirantis. For more information, see Mirantis's blog.

Using Visual Studio Code from a docker image locally or remotely via VS Online
A development container is a running Docker container with a well-defined tool/runtime stack and its prerequisites. The Remote - Containers extension in the Remote Development extension pack allows you to open any folder mounted into or inside a dev container and take advantage of VS Code's full development feature set.

High Performance Real time object detection on Nvidia Jetson TX2
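A dev container is typically described by a devcontainer.json file in the project's .devcontainer folder. A minimal sketch follows; the base image and extension ID are illustrative choices, not taken from the article:

```json
{
  // Name shown in VS Code's remote indicator
  "name": "python-sample",
  // Prebuilt dev container base image (illustrative)
  "image": "mcr.microsoft.com/devcontainers/python:3",
  "customizations": {
    "vscode": {
      // Extensions installed inside the container, not on the host
      "extensions": ["ms-python.python"]
    }
  }
}
```

With the extension installed, running "Reopen in Container" from the Command Palette builds this environment and attaches VS Code to it, so the toolchain lives entirely inside the container.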
Real-time object detection on my Nvidia Jetson TX2. "Real time" is a loosely defined term; here it simply means low latency and high throughput, in contrast to the store-and-process pattern, where storage is used as an interim stage.

Use NGINX to load balance across your Docker Swarm cluster
First published on TECHNET on Apr 19, 2017. A practical walkthrough in six steps: this basic example demonstrates NGINX and swarm mode in action, providing a foundation for applying these concepts to your own configurations.

Deploy a Docker multi-container application on Azure Web Apps
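The core of such a setup is an nginx.conf that proxies incoming traffic to the swarm service's published port. A minimal sketch, in which the upstream host name and port are assumptions for illustration rather than values from the walkthrough:

```nginx
events {}

http {
    # Backend pool: in swarm mode, a service name resolves to a
    # virtual IP that distributes connections across replicas.
    upstream swarm_app {
        server app:8080;  # illustrative service name and port
    }

    server {
        listen 80;
        location / {
            # Forward all requests to the swarm service
            proxy_pass http://swarm_app;
        }
    }
}
```

Because swarm's routing mesh already balances across replicas behind the service's virtual IP, NGINX here mainly serves as the stable, externally reachable entry point.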
First published on MSDN on Oct 24, 2018. In the last post of my series about Docker, we saw how easy it is, thanks to Docker Compose, to deploy an application composed of multiple components running in different containers.

Exploring the Advanced RAG (Retrieval Augmented Generation) Service
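Azure Web Apps accepts a Docker Compose file to describe such a multi-container deployment. A hedged sketch, not the compose file from the series; the service names, image references, and ports are assumptions:

```yaml
version: '3.4'
services:
  web:
    # Application image from your own registry (placeholder reference)
    image: <registry>/myapp-web:latest
    ports:
      - "80:80"        # Web Apps routes HTTP traffic to the exposed port
    depends_on:
      - redis
  redis:
    # Supporting component running in its own container
    image: redis:alpine
```

The same file drives both local testing with `docker-compose up` and the Azure deployment, which is what makes the Compose-based workflow convenient.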
In the ever-evolving landscape of AI, LLM + RAG (Retrieval Augmented Generation) is a typical use scenario. Retrieving accurate, relevant chunks from complicated documents and using them to improve LLM response quality is a challenge, and so far no single RAG technique addresses all requirements. Developers need to evaluate different advanced RAG techniques to find out which one suits their scenario, weighing accuracy, response speed, cost, and so on. To help with this, I developed the AdvancedRAG service using Azure AI Document Intelligence, Azure OpenAI, LlamaIndex, LangChain, Gradio, and more. Encapsulated in a Docker container, this service offers a streamlined way to experiment with different indexing techniques, evaluate their accuracy, and optimize performance for various RAG use cases. Whether you're building a quick MVP, a proof of concept, or simply exploring different indexing strategies, this service provides a versatile playground.
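Since the service ships as a Docker container, trying it out amounts to a single run command. The sketch below is purely illustrative: the image name, port, and environment variable names are assumptions (7860 is the Gradio default port), not documented values of the service:

```shell
# Hypothetical invocation -- image name, port, and variable names
# are placeholders, not taken from the article.
docker run -d -p 7860:7860 \
  -e AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/" \
  -e AZURE_OPENAI_API_KEY="<key>" \
  advanced-rag-service:latest
```

Passing the Azure credentials as environment variables keeps secrets out of the image, and publishing the Gradio port makes the experimentation UI reachable from the host browser.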