Protect patient privacy across languages with the de-identification service's preview expansion
Machine learning and analytics are transforming healthcare by streamlining clinical workflows, powering AI models, and unlocking new insights from patient data. These innovations are fueled by textual data rich in Protected Health Information (PHI). To be used for research, innovation, and operational improvements, this data must be responsibly de-identified to protect patient privacy. Manual de-identification can be slow, expensive, and error-prone, creating bottlenecks that delay progress and limit collaboration. De-identification is more than a compliance standard; it is the key to unlocking healthcare data's full potential while maintaining patient privacy and trust.

Today, we are excited to announce the expansion of the Azure Health Data Services de-identification service to support five new preview language-locale combinations:

- Spanish (United States)
- German (Germany)
- French (France)
- French (Canada)
- English (United Kingdom)

This language expansion enables global healthcare organizations to unlock insights from data beyond English while continuing to adhere to regulatory standards.

Why Language Support Matters

Healthcare data is generated in many languages around the world, and each one comes with its own linguistic structure, formatting, and privacy considerations. By expanding support to multiple preview languages such as Spanish, French, German, and English, our de-identification service allows organizations to unlock data from a broader range of countries and regions. But language alone isn't the whole story. Different locales within the same language (French in France vs. Canada, or English in the UK vs. the US) often format PHI in unique ways. Addresses, medical institutions, and identifiers can all look different depending on the region. Our service is designed to recognize and accurately de-identify these locale-specific patterns, supporting privacy and compliance wherever the data originates.

How It Works

The Azure Health Data Services de-identification service empowers healthcare organizations to protect patient data through three key operations:

- TAG detects and annotates PHI in unstructured text.
- REDACT obfuscates PHI to prevent exposure.
- SURROGATE replaces PHI with realistic, synthetic surrogates, preserving data utility while ensuring privacy.

Our service leverages state-of-the-art machine learning models to identify and handle sensitive information, supporting compliance with HIPAA's Safe Harbor standards and unlinked pseudonymization aligned with GDPR principles. By maintaining entity consistency and temporal relationships, organizations can use de-identified data for research, analytics, and machine learning without compromising patient privacy. A minimal illustration of calling these operations is shown below.
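To make the three operations concrete, here is a minimal, illustrative sketch of calling the service over REST from Python. The endpoint URL, scope, request fields, and API path are assumptions for illustration only; consult the de-identification service documentation for the actual API contract and authentication details.

    import requests
    from azure.identity import DefaultAzureCredential

    # Hypothetical service URL and request shape -- replace with values from your
    # de-identification resource and the official documentation.
    ENDPOINT = "https://<your-deid-service-endpoint>"
    credential = DefaultAzureCredential()
    token = credential.get_token("https://deid.azure.com/.default").token  # scope is an assumption

    note = "Mme Dupont, nee le 3 mars 1958, a ete admise a l'Hopital Saint-Louis."

    payload = {
        "inputText": note,         # field name is an assumption
        "operation": "Surrogate",  # one of Tag | Redact | Surrogate
    }

    response = requests.post(
        f"{ENDPOINT}/deid?api-version=<api-version>",  # path and version are placeholders
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    print(response.json())  # tagged entities or surrogated text, depending on the operation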
Unlocking New Use Cases

By expanding the service's language support, organizations can now address some of the most pressing data challenges in healthcare:

- Reduce organizational liability by meeting evolving privacy standards.
- Enable secure data sharing across institutions and regions.
- Unlock AI opportunities by training models on multilingual, de-identified data.
- Share de-identified data across institutions to create larger, more diverse datasets.
- Conduct longitudinal research while preserving patient privacy.

Proven Accuracy

Researchers at the University of Oxford recently conducted a comprehensive comparative study evaluating multiple automated de-identification systems across 3,650 UK hospital records. Their analysis compared both task-specific transformer models and general-purpose large language models. The Azure Health Data Services de-identification service achieved the highest overall performance among the nine evaluated tools, with a recall of 0.95. The study highlights how robust de-identification enables large-scale, privacy-preserving EHR research and supports the responsible use of AI in healthcare. Read the full study here: Benchmarking transformer-based models for medical record deidentification.

Preview: Your Feedback Matters

This multilingual feature is now available in preview. We invite healthcare organizations, research institutions, and clinicians to:

- Try it out: Overview of the de-identification service in Azure Health Data Services | Microsoft Learn.
- Provide feedback to help refine the service: Azure Health Data Services multilingual de-identification service feedback form.
- Join us in shaping the future of privacy-preserving healthcare innovation.

At Microsoft, we are committed to helping healthcare providers, payors, researchers, and life sciences companies unlock the value of data while maintaining the highest standards of patient privacy. The Azure Health Data Services de-identification service empowers organizations to accelerate AI and analytics initiatives safely, supporting innovation and improving patient outcomes across the healthcare ecosystem. Explore Azure Health Data Services to see how our solutions help organizations transform care, research, and operational efficiency.

Fine-Tuning Healthcare AI Models: Custom Segmentation for Your Healthcare Data
This post is part of our healthcare AI fine-tuning series:

- MedImageInsight Fine-Tuning - Embeddings and classification
- MedImageParse Fine-Tuning - Segmentation and spatial understanding (you are here)
- CXRReportGen Fine-Tuning - Clinical findings generation

Introduction

MedImageParse now supports fine-tuning, allowing you to adapt Microsoft's open-source biomedical foundation model to your healthcare use cases and data. Adapting the model can take as little as an hour to add new segmentation targets, support new modalities, or significantly boost performance on your data. We'll demonstrate how we achieved large performance gains across multiple metrics on a public dataset.

Biomedical clinical applications often need highly specialized models, but training one from scratch is expensive and data-intensive. Traditional approaches require thousands of annotated images, weeks of compute time, and deep machine learning expertise just to get started. Fine-tuning offers a practical alternative: by starting with a strong foundation model and adapting it to your specific domain, you can achieve production-ready performance with hundreds of examples and hours of training time. Everything you need to start fine-tuning is available now, including a ready-to-use AzureML pipeline, complete workflow notebooks, and deployment capabilities.

We fine-tuned MedImageParse on the CDD-CESM mammography dataset (the specialized CESM modality for lesion segmentation) to demonstrate domain adaptation on data under-represented in pre-training. Follow along: the complete example is in our GitHub repository as a ready-to-run notebook.

What is MedImageParse?

MedImageParse (MIP) is Microsoft's open-source implementation of BiomedParse. It comes with a permissive MIT license and is designed for integration into commercial products. MIP is a powerful, flexible foundation model for text-prompted medical image segmentation: it accepts an image and one or more prompts (e.g., "neoplastic cells in breast pathology" or "inflammatory cells") and then accurately identifies and segments the corresponding structures within the input image. Trained on a wide range of biomedical imaging datasets and tasks, MIP captures robust feature representations that transfer well to new domains. Furthermore, it operates efficiently on a single GPU, making it a practical tool for research laboratories without extensive computational resources. Built with adaptability in mind, the model can be fine-tuned on your own datasets to refine segmentation targets, accommodate unique imaging modalities, or improve performance on local data distributions. Its modest computational footprint, paired with this flexibility, positions MIP as a strong starting point for custom medical imaging solutions.

When to Fine-tune (and When NOT to)

Fine-tuning can transform MedImageParse into your own clinical asset that's aligned with your institution's needs. But how do you know if that's the right approach for your use case? Fine-tuning makes sense when you're working with specialized imaging protocols (custom equipment or acquisition parameters), rare structures not well represented in general datasets, or when you need high precision for quantitative analysis. You'll need some high-quality annotated examples to see meaningful improvements; more is better, but thousands aren't required. Simpler approaches might work instead if the pre-trained model already performs reasonably well on standard anatomies and common pathologies. If you're still in exploratory mode figuring out what to measure, start with the base model first to establish a strong baseline for your use case; a minimal sketch of that baseline call is shown below.
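As a rough illustration of establishing that baseline, the sketch below sends one image and a text prompt to an already-deployed MedImageParse endpoint using the Azure ML SDK. The request schema (base64 image plus prompt text) and the endpoint name are assumptions for illustration; the exact payload format is defined by the model card and sample notebooks.

    import base64
    import json
    import tempfile
    from azure.ai.ml import MLClient
    from azure.identity import DefaultAzureCredential

    ml_client = MLClient.from_config(credential=DefaultAzureCredential())

    # Encode a test image; the request schema below is an assumption for illustration.
    with open("cesm_example.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    request = {
        "input_data": {
            "columns": ["image", "text"],
            "data": [[image_b64, "neoplastic cells in breast pathology & inflammatory cells"]],
        }
    }

    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
        json.dump(request, tmp)
        request_file = tmp.name

    # invoke() returns the raw response from the scoring endpoint
    raw = ml_client.online_endpoints.invoke(
        endpoint_name="medimageparse-baseline",  # hypothetical endpoint name
        request_file=request_file,
    )
    print(json.loads(raw))  # expected: one segmentation mask per prompt (format depends on the model card)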
Our example shows how fine-tuning can deliver significant performance gains even with modest resources: with about one hour of GPU time and 200-500 annotated images, fine-tuning produced clear improvements across multiple metrics.

The Fine-tuning Pipeline: From Data to Deployed Model

To demonstrate fine-tuning in action, we used the CDD-CESM mammography dataset: a collection of Contrast-Enhanced Spectral Mammography (CESM) images with expert-annotated breast lesion masks. CESM is a specialized imaging modality that wasn't well represented in MedImageParse's original training data. The dataset [1] (which can be downloaded from our HuggingFace location or from its original TCIA page) includes predefined splits with high-quality segmentation annotations.

Why AzureML Pipelines?

Before diving into the workflow, it's worth understanding why we use AzureML pipelines for this process. Every experiment is tracked with full versioning, so you always know exactly what you ran and can reproduce results months later. The pipeline handles multi-GPU distribution automatically without code changes, making it easy to scale up. The modular design lets you mix and match components for your specific needs: swap data preprocessing, adjust training parameters, or change deployment strategies independently. Training metrics, validation curves, and resource utilization are logged automatically, giving you full visibility into the process. Learn more about Azure ML pipelines.

Fine-Tuning Workflow

Setup: Upload data and configure compute

The first step uploads your training data and configuration to AzureML as versioned assets. You'll configure a GPU compute cluster (H100 or A100 instances recommended) that will handle the training workload.

    # Create and upload training data folder
    training_data = Data(
        path="CDD-CESM",
        type=AssetTypes.URI_FOLDER,
        description=f"{name} training data",
        name=f"{name}-training_data",
    )
    training_data = ml_client.data.create_or_update(training_data)

    # Create and upload parameters file
    parameters = Data(
        path="parameters.yaml",
        type=AssetTypes.URI_FILE,
        description=f"{name} parameters",
        name=f"{name}-parameters",
    )
    parameters = ml_client.data.create_or_update(parameters)

Fine-tuning: The medimageparse_finetune component

The fine-tuning component takes three inputs:

- The pre-trained MedImageParse model (foundation weights)
- Your annotated dataset
- Training configuration (learning rate, batch size, augmentation settings)

During training, the pipeline applies augmentation, tracks validation metrics, and checkpoints periodically. The output is an MLflow-packaged model: a portable artifact that bundles the model weights and preprocessing code, ready to deploy in AzureML or AI Foundry. The pipeline uses parameter-efficient fine-tuning techniques to adapt the model while preserving the broad knowledge from pre-training. This means you get specialized performance without catastrophic forgetting of the base model's capabilities.
    # Get the pipeline component
    finetune_pipeline_component = ml_registry.components.get(
        name="medimageparse_finetune", label="latest"
    )

    # Get the latest MIP model
    model = ml_registry.models.get(name="MedImageParse", label="latest")

    # Create the pipeline
    @pipeline(name="medimageparse_finetuning" + str(random.randint(0, 100000)))
    def create_pipeline():
        mip_pipeline = finetune_pipeline_component(
            pretrained_mlflow_model=model.id,
            data=data_assets["training_data"].id,
            config=data_assets["parameters"].id,
        )
        return {"mlflow_model_folder": mip_pipeline.outputs.mlflow_model_folder}

    # Submit the pipeline
    pipeline_object = create_pipeline()
    pipeline_object.compute = compute.name
    pipeline_object.settings.continue_on_step_failure = False
    pipeline_job = ml_client.jobs.create_or_update(
        pipeline_object, experiment_name="medimageparse_finetune_experiment"
    )

Deployment: Register and serve the model

After training, the model can be registered in your AzureML workspace with version tracking. From there, deployment to a managed online endpoint takes a single command. The endpoint provides a scalable REST API backed by GPU compute for optimal inference performance.

    # Register the model
    run_model = Model(
        path=f"azureml://jobs/{pipeline_job.name}/outputs/mlflow_model_folder",
        name=f"MIP-{name}-{pipeline_job.name}",
        description="Model created from run.",
        type=AssetTypes.MLFLOW_MODEL,
    )
    run_model = ml_client.models.create_or_update(run_model)

    # Create endpoint and deployment with the fine-tuned model
    endpoint = ManagedOnlineEndpoint(name=name)
    endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()

    deployment = ManagedOnlineDeployment(
        name=name,
        endpoint_name=endpoint.name,
        model=run_model.id,
        instance_type="Standard_NC40ads_H100_v5",
        instance_count=1,
    )
    deployment = ml_client.online_deployments.begin_create_or_update(deployment).result()

Testing: Text-prompted inference

With the endpoint deployed, you can send test images along with text prompts describing what to segment. For the CDD-CESM example, we use the text prompt "neoplastic cells in breast pathology & inflammatory cells". The model returns multiple segmentation masks for different detected regions. Text-prompting lets you switch focus on the fly (e.g., "tumor boundary" vs. "inflammatory infiltration") without retraining or reconfiguring the model.

Results

Fine-tuning made a huge difference in how well the model works. The Dice score, which shows how closely the model's results match the actual regions, more than doubled, from 0.198 to 0.486. The IoU, another measure of overlap, nearly tripled, going from 0.139 to 0.383. Sensitivity jumped from 0.251 to 0.535, which means the model found more real positives.

    Metric        Base    Fine-tuned   Δ Abs     Δ Rel
    Dice (F1)     0.198   0.486        +0.288    +145%
    IoU           0.139   0.383        +0.244    +176%
    Sensitivity   0.251   0.535        +0.284    +113%
    Specificity   0.971   0.987        +0.016    +1.6%
    Accuracy      0.936   0.963        +0.027    +2.9%

These improvements really matter in practice. When the Dice and IoU scores go up, it means the model is better at outlining the exact shape and size of problem areas, which helps doctors get more accurate measurements and track changes over time. The jump in sensitivity means the model is finding more actual lesions, while keeping specificity above 98% makes sure there aren't a lot of false alarms. The gain in accuracy is notable, but the larger improvements in overlap and recall matter most for getting precise results in medical images. A small sketch of how these metrics are computed appears below.
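For readers who want to reproduce these kinds of numbers on their own validation set, here is a small, self-contained sketch of how Dice, IoU, sensitivity, specificity, and accuracy can be computed from binary masks with NumPy. This is generic metric code, not the evaluation script used in the pipeline, and the .npy file names are hypothetical.

    import numpy as np

    def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> dict:
        """Compute overlap metrics for two binary masks of the same shape."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        tp = np.logical_and(pred, truth).sum()
        fp = np.logical_and(pred, ~truth).sum()
        fn = np.logical_and(~pred, truth).sum()
        tn = np.logical_and(~pred, ~truth).sum()
        return {
            "dice": 2 * tp / (2 * tp + fp + fn + eps),
            "iou": tp / (tp + fp + fn + eps),
            "sensitivity": tp / (tp + fn + eps),
            "specificity": tn / (tn + fp + eps),
            "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        }

    # Example: compare a predicted lesion mask against the expert annotation
    pred_mask = np.load("pred_mask.npy")    # hypothetical file names
    truth_mask = np.load("truth_mask.npy")
    print(segmentation_metrics(pred_mask, truth_mask))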
Try It on Your Own Data

To successfully implement this solution in your organization, focus first on the core requirements and resources that will ensure a seamless transition. The following points outline these essentials so you can move efficiently from planning to deployment and set your team up for optimal results.

- Dataset size: Start with 200-500 annotated images. This is enough to see meaningful performance improvements without requiring massive data collection efforts. More data generally helps, but you don't need thousands of examples to get started.
- Annotation quality: High-quality segmentation masks are critical. Invest in precise boundary delineations (pixel-level accuracy where possible), consistent annotation protocols across all images, and quality control reviews to catch and correct errors.
- Annotation effort: Budget enough time per image for careful annotation. Apply active learning approaches to focus effort on the most informative samples, and start with a smaller pilot dataset (100-150 images) to validate the approach before scaling up.
- Training compute: A100 or H100 GPUs are recommended (one device with multiple GPUs is sufficient for runs of a few hundred images). For the CDD-CESM dataset, we used NC-series VMs (single-node) with 8 GPUs, and training on 300 images took around 30 minutes for 10 epochs. If you're training on larger datasets (thousands of images), consider upgrading to ND-series VMs, which offer better multi-node performance and let you train on large volumes of data faster.

Where to Go from Here?

So, what does this mean for your workflows and clinical teams? Foundation models like MedImageParse provide significant power and performance. They're flexible, with text-prompted multi-task capabilities that can integrate into existing workflows without retooling, and they are relatively cheap to use for inference. This means faster review, more precise assessments, and independence from vendor development timelines. These models are not adapted to your institution and use cases out of the box, yet developing a foundation model from scratch is prohibitively expensive. Fine-tuning bridges that gap: you can boost performance on your data and adapt the model to your use case at a fraction of the cost. You control what the model learns, how it fits your workflow, and how it is validated for your context. We've provided the complete tools to do that: the fine-tuning notebook walks through the entire process, from data preparation to deployment. By following this workflow and collecting annotated data from your institution (see "Try It on Your Own Data" above for requirements), you can deploy MedImageParse tailored to your institution and use cases.

References

[1] Khaled R., Helal M., Alfarghaly O., Mokhtar O., Elkorany A., El Kassas H., Fahmy A. Categorized Digital Database for Low energy and Subtracted Contrast Enhanced Spectral Mammography images [Dataset]. (2021) The Cancer Imaging Archive. DOI: 10.7937/29kw-ae92. https://www.cancerimagingarchive.net/collection/cdd-cesm/

Azure OpenAI GPT model to review Pull Requests for Azure DevOps
In recent months, the use of Generative Pre-trained Transformer (GPT) models for natural language processing (NLP) has gained significant traction. GPT models, which are based on the Transformer architecture, can generate text from arbitrary sources of input data and can be trained to identify errors and detect anomalies in text. As such, GPT models are increasingly being used for a variety of applications, ranging from natural language understanding to text summarization and question answering.

In the software development world, developers use pull requests to submit proposed changes to a codebase. However, reviews by other developers can sometimes take a long time and are not always accurate, and in some cases these reviews can introduce new bugs and issues. To reduce this risk, during my research I found that integrating GPT models is possible: we can add the Azure OpenAI service as a pull request reviewer for the Azure Pipelines service. The GPT models are trained on developer codebases and are able to detect potential coding issues such as typos, syntax errors, style inconsistencies, and code smells. In addition, they can also assess code structure and suggest improvements to the overall code quality. Once the GPT models have been trained, they can be integrated into the Azure Pipelines service so that they can automatically review pull requests and provide feedback. This helps to reduce the time taken for code reviews, as well as the likelihood of introducing bugs and issues. A simplified sketch of this idea is shown below.
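As a rough illustration of the idea (not the exact pipeline task described in this post), the snippet below lists the files changed in an Azure DevOps pull request via the Git REST API and asks an Azure OpenAI chat deployment to review them. Organization, project, repository, PR id, deployment name, and environment variable names are placeholders; verify the API paths and response fields against the current Azure DevOps documentation.

    import os
    import requests
    from openai import AzureOpenAI

    # Placeholders -- substitute your own organization, project, repository, and PR id
    ORG, PROJECT, REPO, PR_ID = "my-org", "my-project", "my-repo", 42
    ADO_PAT = os.environ["AZURE_DEVOPS_PAT"]

    # Fetch the list of changed files for the pull request (Azure DevOps Git REST API)
    pr_url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/git/repositories/"
        f"{REPO}/pullRequests/{PR_ID}/iterations/1/changes?api-version=7.0"
    )
    changes = requests.get(pr_url, auth=("", ADO_PAT)).json()
    changed_paths = [c["item"]["path"] for c in changes.get("changeEntries", [])]

    # Ask an Azure OpenAI chat deployment to review the changes
    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version="2024-02-01",
    )
    review = client.chat.completions.create(
        model="gpt-4o",  # name of your Azure OpenAI deployment
        messages=[
            {"role": "system", "content": "You are a code reviewer. Flag bugs, style issues, and code smells."},
            {"role": "user", "content": f"Review the files changed in this pull request: {changed_paths}"},
        ],
    )
    print(review.choices[0].message.content)  # post this back to the PR as a comment thread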
Fine-Tuning Healthcare AI Models: Discovering the Power of Finetuning MedImageInsight on Your Data

This post is part of our healthcare AI fine-tuning series:

- MedImageInsight Fine-Tuning - Embeddings and classification (you are here)
- MedImageParse Fine-Tuning - Segmentation and spatial understanding
- CXRReportGen Fine-Tuning - Clinical findings generation

Introduction

MedImageInsight (MI2) is Microsoft's open-source foundation model that is revolutionizing medical imaging analysis. Developed by Microsoft Health and Life Sciences, MedImageInsight is designed as a "generalist" foundation model, offering capabilities across diverse medical imaging fields. MI2 achieves state-of-the-art or human expert-level results in tasks like classification, image search, and 3D medical image retrieval. Its features include:

- Multi-domain versatility: Trained on medical images from fourteen different domains such as X-ray, CT, MRI, dermoscopy, OCT, fundus photography, ultrasound, histopathology, and mammography.
- State-of-the-art (SOTA) performance: Achieves SOTA or human expert-level results in tasks like classification, image-image search, and fine-tuning on public datasets, with proven excellence in CT 3D medical image retrieval, disease classification for chest X-ray, dermatology, OCT imaging, and even bone age estimation.
- Regulatory-ready features: When used on downstream tasks, MI2 allows for sensitivity/specificity adjustments to meet clinical regulatory requirements.
- Transparent decision-making: Provides evidence-based decision support through image-image and image-text search, enhancing explainability.
- Efficient report generation: When paired with a text decoder, it delivers near state-of-the-art report generation using only 7% of the parameters of comparable models.
- 3D capability: Leverages 3D image-text pre-training to achieve state-of-the-art performance for 3D medical image retrieval.
- Fairness: Outperforms other models in AI fairness evaluations across age and gender in independent clinical assessments.

MI2 is available now through the Azure AI Foundry model catalog (docs) and has already demonstrated its value across numerous applications. We've made it even easier for you to explore its capabilities with our repository full of examples and code for you to try. It covers:

- Outlier detection: Encoding CT/MR series to spot anomalies.
- Zero-shot classification with text labels: Identifying conditions without prior training (a minimal sketch appears at the end of this section).
- Adapter training: Specializing in specific classification tasks.
- Exam parameter detection: Normalizing MRI series and extracting critical details.
- Multimodal adapter analysis: Merging insights from radiology and pathology.
- Image search: Finding similar cases to aid diagnosis using both 2D images and 3D volumes (cross-sectional imaging).
- Model monitoring: Ensuring consistent performance over time (code coming soon).

While these capabilities are impressive on their own, the true potential of MI2 lies in its adaptability. This is where fine-tuning comes in: the ability to customize this powerful foundation model for specific clinical applications at your institution. Fine-tuning, currently available in public preview, can transform this foundation model into production-ready, clinical-grade assets tailored to your specific needs and workflow while maintaining regulatory compliance.

Note: This blog post demonstrates how MedImageInsight can be fine-tuned for new data. This example is illustrative; however, the same process can be used to develop production-ready clinical assets when following appropriate regulatory guidelines.
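To make the embedding-based workflow concrete, here is a minimal sketch of zero-shot classification by cosine similarity between an image embedding and a set of text-label embeddings. It assumes you have already generated and saved embeddings from an MI2 deployment; the .npy file names and candidate labels are hypothetical.

    import numpy as np

    # Embeddings produced by an MI2 deployment and saved earlier (file names are hypothetical);
    # image_emb has shape (d,), text_embs has shape (num_labels, d).
    image_emb = np.load("image_embedding.npy")
    text_embs = np.load("label_text_embeddings.npy")
    labels = ["x-ray chest normal", "x-ray chest pneumonia", "x-ray chest pleural effusion"]

    # Cosine similarity between the image embedding and each label embedding
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = text_embs @ image_emb

    print(labels[int(np.argmax(scores))])  # zero-shot prediction: the label with the highest similarity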
Teaching an Old (Actually New) AI New Tricks

MedImageInsight's architecture offers distinct advantages for fine-tuning:

- Lightweight design: MI2 uses a DaViT image encoder (360M parameters) and a language encoder (252M parameters).
- Efficient scale: With a total of only 0.61B parameters compared to multi-billion-parameter alternatives, MI2 requires significantly less computational resources than comparable models.
- Training flexibility: The model supports both image-text and image-label pairs for different training approaches.
- Solid foundation: Pre-trained on 3.7M+ diverse medical images, MI2 starts with robust domain knowledge.

MI2 is ideal for fine-tuning to specific medical imaging domains, allowing for clinical applications that integrate into healthcare workflows after validation. The model maintains its strengths while adapting to specialized patterns and requirements.

Using AzureML Pipelines for an MI2 Glow Up

The Azure Machine Learning (AzureML) pipeline streamlines the fine-tuning process for MI2 with distributed training on GPU clusters. This end-to-end workflow, available now as a public preview, manages everything from data preparation to model registration in a reproducible manner. We've released five components into public preview to enable you to fine-tune MI2 and simplify related processes like generating a classifier model:

- MedImageInsight model finetuning core component (component) is the core of the fine-tuning process and trains the MedImageInsight model. It requires four separate TSV files as input (an image TSV and a text TSV for training, plus the same two files for evaluation), a TSV file of all the possible text strings, and a training configuration YAML file. This component supports distributed training on a multi-GPU cluster.
- MedImageInsight embedding generation component (component) creates embeddings from images using the MedImageInsight model. It allows customization of image quality and dimensions, and outputs a pickled NumPy array containing embeddings for all processed images.
- MedImageInsight adapter finetune component (component) takes NumPy arrays of training and validation data along with their associated text labels (from TSV). It trains a specialized 3-layer model designed for classification tasks and optimizes performance for specific domains while maintaining MI2's core capabilities.
- MedImageInsight image classifier assembler component (component) combines your fine-tuned embedding model with a label file into a deployable image classifier. It takes the fine-tuned MI2 embedding model, text labels, and an optional adapter model, then packages them into a unified MLflow model ready for deployment. The resulting model package can operate in either zero-shot mode or with a custom adapter model.
- MedImageInsight pipeline component (component) provides an end-to-end pipeline that integrates all of the above into one workflow. It is a simple pipeline that trains, evaluates, and outputs the embedding and classification models.

Example Dataset: GastroVision

To demonstrate MI2's fine-tuning capabilities, we're using the GastroVision dataset [1] as a real-world example. It's important to note that our goal here is not to build the ultimate gastroenterology classifier, but rather to showcase how MI2 can be effectively fine-tuned.
The techniques demonstrated can be applied to your institution's data to create customized embedding models that support not only classification, but all the applications we've mentioned, from zero-shot classification and outlier detection to image search and multimodal analysis. The GastroVision dataset offers an excellent test case for several reasons:

- Open-access dataset: 8,000 endoscopy images collected from two hospitals in Norway and Sweden.
- Diverse classes: Spans 27 distinct classes of gastrointestinal findings with significant class imbalance.
- Real-world challenges: High similarity between certain classes, multi-center variability, and rare findings with limited examples.
- Recent publication: Published in 2023 and not included in MI2's original training data.

With approximately 8,000 endoscopic images labeled across 27 different classes, this dataset provides a practical context for fine-tuning MI2's embedding capabilities. By demonstrating how MI2 can adapt to new data, we illustrate how you might fine-tune the model on your own data to create production-ready, clinical-grade specialized embedding models tailored to your unique imaging environments, equipment, and patient populations.

The Code: Getting the Data Prep'd

The first step in fine-tuning MI2 is preparing your dataset. For the GastroVision dataset, we need to preprocess the images and structure the data in a format suitable for training:

    import base64
    import glob
    import os
    from io import BytesIO

    import pandas as pd
    from PIL import Image
    from tqdm import tqdm

    # folder_to_label and labels_to_view are lookup dictionaries defined elsewhere in the notebook
    def gastro_folder_to_text(folder_name):
        label = folder_to_label[folder_name]
        view = labels_to_view[label]
        return f"endoscopy gastrointestinal {view} {label}"

    gastrovision_root_directory = "/home/azureuser/data/Gastrovision"

    text_to_label = {}
    folders = os.listdir(gastrovision_root_directory)
    for folder in folders:
        label = folder_to_label[folder]
        text = gastro_folder_to_text(folder)
        text_to_label[text] = label

    data = []
    files = list(glob.glob(os.path.join(gastrovision_root_directory, "**/*.jpg"), recursive=True))
    for file_path in tqdm(files, ncols=120):
        folder = os.path.basename(os.path.dirname(file_path))
        filename = os.path.basename(file_path)
        text = gastro_folder_to_text(folder)
        with Image.open(file_path) as img:
            img = img.resize((512, 512)).convert("RGB")
            buffered = BytesIO()
            img.save(buffered, format="JPEG", quality=95)
            img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
        data.append([f"{folder}/{filename}", img_str, text])

    df = pd.DataFrame(data, columns=["filename", "image", "text"])

This preprocessing pipeline does several important tasks:

- Resizing and standardizing images to 512x512 pixels.
- Converting images to base64 encoding for efficient storage.
Then we convert the encoded images into the TSV files:

    import csv
    import json

    from sklearn.model_selection import train_test_split

    # Function to format text as JSON
    def format_text_json(row):
        return json.dumps(
            {
                "class_id": text_index[row["text"]],
                "class_name": row["text"],
                "source": "gastrovision",
                "task": "classification",
            }
        )

    # Filter the dataframe to only include the top 22 text captions
    df_filtered = df[df["text"].isin(df["text"].value_counts().index[:22])].reset_index(drop=True)

    # Get unique texts from the filtered dataframe
    unique_texts = df_filtered["text"].unique()

    # Save the unique texts to a text file
    with open("unique_texts.txt", "w") as f:
        for text in unique_texts:
            f.write(text + "\n")

    # Create a dictionary to map text labels to indices
    text_index = {label: index for index, label in enumerate(unique_texts)}

    # Apply the formatting function to the text column
    df_filtered["text"] = df_filtered.apply(format_text_json, axis=1)

    # Split the dataframe into training, validation, and test sets
    train_df, val_test_df = train_test_split(
        df_filtered, test_size=0.4, random_state=42, stratify=df_filtered["text"]
    )
    validation_df, test_df = train_test_split(
        val_test_df, test_size=0.5, random_state=42, stratify=val_test_df["text"]
    )

    # Create separate dataframes for images and labels and save them to TSV files
    def split_and_save_tsvs(aligned_df, prefix):
        image_df = aligned_df[["filename", "image"]]
        text_df = aligned_df[["filename", "text"]]
        text_df.to_csv(
            f"{prefix}_text.tsv",
            sep="\t",
            index=False,
            header=False,
            quoting=csv.QUOTE_NONE,
        )
        image_df.to_csv(f"{prefix}_images.tsv", sep="\t", index=False, header=False)

    split_and_save_tsvs(train_df, "train")
    split_and_save_tsvs(validation_df, "validation")
    split_and_save_tsvs(test_df, "test")

In addition to the resizing and encoding above, this step handles:

- Filtering to include only classes with sufficient samples.
- Creating label mappings for classification.
- Splitting data into training, validation, and test sets.
- Exporting processed data as TSV files for AzureML.

After preparing the datasets, we need to upload them to AzureML as data assets:

    name = "gastrovision"
    assets = {
        "image_tsv": "train_images.tsv",
        "text_tsv": "train_text.tsv",
        "eval_image_tsv": "validation_images.tsv",
        "eval_text_tsv": "validation_text.tsv",
        "label_file": "unique_texts.txt",
    }

    data_assets = {
        key: Data(
            path=value,
            type=AssetTypes.URI_FILE,
            description=f"{name} {key}",
            name=f"{name}-{key}",
        )
        for key, value in assets.items()
    }

    for key, data in data_assets.items():
        data_assets[key] = ml_client.data.create_or_update(data)
        print(
            f"Data asset {key} created or updated.",
            data_assets[key].name,
            data_assets[key].version,
        )

These uploaded assets are versioned in AzureML, allowing for reproducibility and tracking of which specific data was used for each training run. Before kicking off training, it can also be worth sanity-checking the TSVs you just wrote; a small sketch follows below.
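As an optional sanity check (not part of the original notebook), you can read back one row of the image TSV and confirm that the base64 payload decodes to a valid 512x512 RGB image before launching a training job:

    import base64
    from io import BytesIO

    import pandas as pd
    from PIL import Image

    # Read the first row of the training image TSV (filename <tab> base64-encoded JPEG)
    row = pd.read_csv(
        "train_images.tsv", sep="\t", header=None, names=["filename", "image"], nrows=1
    ).iloc[0]
    img = Image.open(BytesIO(base64.b64decode(row["image"])))

    print(row["filename"], img.size, img.mode)  # expect a 512x512 RGB image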
The Code: Cue the Training Montage

In the notebook, we demonstrate a straightforward example of fine-tuning using the pipeline component, but you can integrate these components into larger pipelines that train more complex downstream tasks such as exam parameter classification, report generation, analysis of 3D scans, and more.

    conf_file = "train-gastrovision.yaml"
    data = Data(
        path=conf_file,
        type=AssetTypes.URI_FILE,
        description=f"{name} conf_files",
        name=f"{name}-conf_files",
    )
    data_assets["conf_files"] = ml_client.data.create_or_update(data)

    # Get the pipeline component
    finetune_pipeline_component = ml_registry.components.get(
        name="medimage_insight_ft_pipeline", label="latest"
    )

    # Get the latest MI2 model
    model = ml_registry.models.get(name="MedImageInsight", label="latest")

    @pipeline(name="medimage_insight_ft_pipeline_job" + str(random.randint(0, 100000)))
    def create_pipeline():
        mi2_pipeline = finetune_pipeline_component(
            mlflow_embedding_model_path=model.id,
            compute_finetune=compute.name,
            instance_count=8,
            **data_assets,
        )
        return {
            "classification_model": mi2_pipeline.outputs.classification_mlflow_model,
            "embedding_model": mi2_pipeline.outputs.embedding_mlflow_model,
        }

    pipeline_object = create_pipeline()
    pipeline_object.compute = compute.name
    pipeline_object.settings.continue_on_step_failure = False
    pipeline_job = ml_client.jobs.create_or_update(pipeline_object, experiment_name=name)
    pipeline_job_run_id = pipeline_job.name
    pipeline_job

This pipeline approach offers several advantages:

- Access to modular components (you can use only parts of the pipeline if needed)
- Distributed training across multiple compute instances
- Built-in monitoring and logging
- Seamless integration with the AzureML model registry

The Code: Saving and Deploying your Models

After the training job is completed, we register the model in the AzureML registry and deploy it as an online endpoint:

    # Create a Model to register
    run_model = Model(
        path=f"azureml://jobs/{pipeline_job.name}/outputs/classification_model",
        name=f"classifier-{name}-{pipeline_job.name}",
        description="Model created from run.",
        type=AssetTypes.MLFLOW_MODEL,
    )

    # Register the Model
    run_model = ml_client.models.create_or_update(run_model)

    # Create endpoint and deployment with the classification model
    endpoint = ManagedOnlineEndpoint(name=name)
    endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()

    deployment = ManagedOnlineDeployment(
        name=name,
        endpoint_name=endpoint.name,
        model=run_model.id,
        instance_type="Standard_NC6s_v3",
        instance_count=1,
    )
    deployment = ml_client.online_deployments.begin_create_or_update(deployment).result()

This deployment process creates a scalable API endpoint that can be integrated into your workflows, with built-in monitoring and scaling capabilities.

Results and Making Sure It Works

After fine-tuning MI2 on the GastroVision dataset, we can validate the quality of the resulting embeddings by evaluating their performance on a classification task.

    Method                         Macro Prec.  Macro Recall  Macro F1  Micro Prec.  Micro Recall  Micro F1  MCC    mAUC
    ResNet-50 [2]                  0.437        0.437         0.433     0.681        0.681         0.681     0.641  -
    Pre-trained DenseNet-121 [3]   0.738        0.623         0.650     0.820        0.820         0.820     0.798  -
    Greedy Soup (GenAI Aug) [4]    0.675        0.600         0.615     0.812        0.812         0.812     0.790  -
    Greedy Soup (Basic Aug) [4]    0.762        0.639         0.666     0.832        0.830         0.830     0.809  -
    MI Finetune                    0.736        0.772         0.740     0.834        0.860         0.847     0.819  0.990

Using a KNN classifier on the fine-tuned embeddings, we achieve an impressive mAUC of 0.990 and state-of-the-art results on the other metrics. Though our goal was not to create the ultimate gastroenterology classifier, these results demonstrate that with minimal fine-tuning, MI2 produces embeddings that can power a state-of-the-art classifier using nothing more than KNN. A minimal sketch of that evaluation approach follows below.
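For reference, here is a minimal sketch of that kind of evaluation using scikit-learn, assuming you have NumPy arrays of embeddings and integer labels for the training and test splits (the .npy file names and the choice of k are hypothetical):

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.neighbors import KNeighborsClassifier

    # Embeddings produced by the MI2 embedding model for each split (hypothetical files)
    X_train, y_train = np.load("train_embeddings.npy"), np.load("train_labels.npy")
    X_test, y_test = np.load("test_embeddings.npy"), np.load("test_labels.npy")

    knn = KNeighborsClassifier(n_neighbors=5)  # k is a tunable assumption
    knn.fit(X_train, y_train)

    # Macro-averaged one-vs-rest AUC over all classes
    probs = knn.predict_proba(X_test)
    mauc = roc_auc_score(y_test, probs, multi_class="ovr", average="macro")
    print(f"mAUC: {mauc:.3f}")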
The real potential here goes far beyond classification. Imagine applying this same fine-tuning approach to your institution's specific imaging data. The resulting domain-adapted model would provide enhanced performance across all of MI2's capabilities:

- More accurate outlier detection in your specific patient population
- More precise image retrieval for similar cases in your database
- Better multimodal analysis combining your radiology and pathology data
- Enhanced report generation tailored to your clinical workflows

MI2's efficient architecture (0.36B/0.25B parameters for the image/text encoders respectively) can be effectively adapted to specialized domains while maintaining its full range of capabilities. The classification metrics validate that the fine-tuning process has successfully adapted the embedding space to better represent your specific medical domain.

Your Turn to Build!

Fine-tuning MedImageInsight represents a significant opportunity to extend the capabilities of this powerful foundation model into specialized medical imaging domains and subspecialties. Through our demonstration with the GastroVision dataset, we have shown how MI2's architecture, with just 0.36B and 0.25B parameters for the image and text encoders respectively, can be efficiently adapted to new tasks with competitive or superior performance compared to traditional approaches. The key benefits of fine-tuning MI2 include:

- Efficiency: Achieving high performance with minimal data and computational resources
- Versatility: Adapting to specialized domains while preserving multi-domain capabilities
- Practicality: Streamlined workflow from training to deployment using AzureML

The fine-tuning process described here provides a pathway for healthcare institutions to develop production-ready, clinical-grade AI assets. By fine-tuning MedImageInsight and incorporating appropriate validation, testing, and regulatory compliance measures, the model can be transformed from a foundation model into specialized clinical tools optimized for your specific use cases and patient populations. With your fine-tuned model, you gain several distinct advantages:

- Enhanced domain adaptation: Models that better understand the unique characteristics of your patient population and imaging equipment
- Improved rare condition detection: Higher sensitivity for conditions specific to your specialty or patient demographics
- Reduced false positives: Better differentiation between similar-appearing conditions common in your practice
- Customized explanations: More relevant evidence-based decisions through image-image search from your own database

As healthcare institutions increasingly adopt AI for medical imaging analysis, the ability to fine-tune models for specific patient populations, imaging equipment, and clinical specialties becomes crucial. MedImageInsight's efficient architecture and adaptability make it an ideal foundation for building specialized medical imaging solutions that can be deployed in resource-constrained environments. We encourage you to try fine-tuning MedImageInsight with your own specialized datasets using our sample Jupyter Notebook as your starting point. The combination of MI2's regulatory-friendly features with domain-specific adaptations opens new possibilities for transparent, efficient, and effective AI-assisted medical imaging analysis.

References

[1] Jha, D. et al. (2023). GastroVision: A Multi-class Endoscopy Image Dataset for Computer Aided Gastrointestinal Disease Detection. ICML Workshop on Machine Learning for Multimodal Healthcare Data (ML4MHD 2023).
[2] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition.
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778 (2016).
[3] Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708 (2017).
[4] Fahad, M. et al. (2025). Deep insights into gastrointestinal health. Biomedical Signal Processing and Control, 102, 107260.

Microsoft Azure continues to expand scalability for Healthcare EHR Workloads
Microsoft Azure has reached a new milestone for Epic Chronicles Operational Database (ODB) scalability with the Standard_M416bs_v3 (Mbv3) VM. It can now scale up to 110 million GRefs/s (Global References per second) in the ECP configuration and up to 39 million GRefs/s in the SMP configuration, improving upon the previous Azure benchmarks of 65 million GRefs/s and 20 million GRefs/s respectively. Microsoft Azure can now host 96% of the Epic customer base, enabling healthcare organizations to run their EHR systems on Azure.

New VM Size Purpose-Built for Epic's Chronicles ODB

The Standard_M416bs_v3 VM, newly added to Azure's Mbv3 series, is purpose-built to meet the growing performance and scalability demands of large healthcare EHR environments. With higher CPU capacity, expanded memory, and improved remote storage throughput, it delivers the reliability needed for mission-critical workloads at scale. Key specifications include:

- Mbv3 processor performance: Built on 4th Gen Intel® Xeon® Scalable processors, the Mbv3 series is optimized for high memory and storage performance, supporting workloads up to 4 TB of RAM with an NVMe interface for faster remote disk access.
- Compute capacity: The Standard_M416bs_v3 delivers 416 vCPUs - more than twice the capacity of previous Mbv3 sizes - for stronger performance.
- Storage performance: Achieves up to 550,000 IOPS and 10 GBps remote disk bandwidth using Azure Ultra Disk.
- Performance optimization: Enhanced by Azure Boost, the M416bs_v3 provides low-latency, high remote storage performance, making it ideal for storage throughput-intensive applications such as Epic ODB, relational databases, and analytics workloads.
- Available regions: M416bs_v3 is available in 4 regions - East US, East US 2, Central US, and West US 2.

Explore Epic on Azure to learn more. Epic and Chronicles are trademarks of Epic Systems Corporation.

Transforming Clinical Workflows with Microsoft Dragon Copilot's Partner-Driven AI Innovations
Healthcare software integration is notoriously complex, demanding technical rigor, regulatory compliance, organizational alignment, and workflow transformation. Clinicians face an overwhelming cognitive load: during a typical primary care visit, a physician may juggle over 150 discrete data points and make a dozen crucial decisions in less than 15 minutes. In this environment, AI must do more than offer intelligence. It must deliver the right information, at the right time, without increasing administrative burden or disrupting the clinician's focus on patient care.

Microsoft is expanding Dragon Copilot by enabling an open ecosystem of developer-built AI solutions for specific tasks and specialties, allowing healthcare organizations to access clinical intelligence at the point of care while further streamlining workflows, reducing administrative burden, boosting productivity, and enhancing revenue integrity. Dragon Copilot's partner-driven AI extensibility empowers technology providers to deliver specialized solutions directly into clinical workflows, accelerating AI adoption at the point of care and enabling better outcomes for patients and clinicians. By opening Dragon Copilot, Microsoft enables the creation of seamless, scalable solutions that help clinicians reduce administrative tasks and improve patient care. Developers gain an opportunity to collaborate with industry leaders, shape healthcare technology, and deliver impactful AI applications for clinicians and patients.

Partner Success Spotlight: Canary Speech's Innovative Solution

Canary Speech is a Utah-based, AI-powered voice biomarker health tech company that uses patented real-time vocal analysis to screen for mental health and neurological disorders. Canary's technology swiftly captures and analyzes speech data, supporting clinicians in identifying cognitive and behavioral changes. Recently, Canary Speech launched Canary Ambient™, an API-first solution for real-time voice analysis in healthcare and contact centers. This software provides actionable insights from patient-clinician conversations by tracking speech patterns for real-time assessments of cognitive and behavioral health conditions. Canary Speech advances speech and language applications across health systems, payers, and pharmaceutical markets.

The collaboration with Dragon Copilot is seamless and secure. Canary Speech receives patient audio, ideally during the encounter, via infrastructure designed to enable HIPAA compliance and newly developed endpoints. Their AI processes the audio, then surfaces summarized cognitive and behavioral assessments using Microsoft's Adaptive Card framework (an illustrative card payload is sketched below). These interactive cards appear natively within Dragon Copilot, enabling clinicians to quickly review findings and to initiate follow-up assessments or treatment plans, all without leaving their workflow. The Canary Speech solution with Dragon Copilot is currently being validated with real clinicians and patients. It demonstrates how Dragon Copilot can share not only encounter audio, but also clinical notes, patient context, and relevant history with trusted partners, expanding the universe of AI-powered insights available at the point of care.
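For developers unfamiliar with Adaptive Cards, here is a minimal, illustrative card payload of the kind a partner extension might return, expressed as a Python dictionary. The field values are invented for illustration; see the Adaptive Cards schema and the Dragon Copilot extension documentation for the exact contract.

    import json

    # Minimal Adaptive Card payload; content values are illustrative only.
    card = {
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "type": "AdaptiveCard",
        "version": "1.5",
        "body": [
            {"type": "TextBlock", "text": "Cognitive and behavioral screening summary", "weight": "Bolder"},
            {"type": "TextBlock", "text": "Speech-pattern indicators within expected range.", "wrap": True},
        ],
        "actions": [
            {"type": "Action.Submit", "title": "Order follow-up assessment", "data": {"action": "follow_up"}}
        ],
    }

    print(json.dumps(card, indent=2))  # serialized card that the host application would render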
How AI Applications Work: Technical Enablement for Continuous Innovation

Dragon Copilot's extensibility framework supports partners in surfacing AI insights and enrichments at multiple points in the clinical workflow:

- Pre-encounter: Provide AI-generated insights to clinicians before they see the patient, supporting preparation and personalized care planning.
- During the encounter: Surface real-time, contextual recommendations while the provider is examining the patient, ensuring decisions are timely and informed.
- Post-encounter: Deliver follow-up insights after the clinical note is generated, enabling comprehensive care coordination and outcome tracking.

Microsoft's Adaptive Card framework makes it easy for developers to build interactive, platform-agnostic content blocks that integrate directly into Dragon Copilot. These cards support rich text, reference links, action buttons, and in-card feedback, creating a unified and intuitive user experience for clinicians. The diagram below shows the data flow between Microsoft Dragon Copilot and partner endpoints, which gives extensions such as Canary access to the encounter audio data via infrastructure designed to enable HIPAA compliance. In addition to audio, other data elements such as the turn transcript and the clinical note can also be made available to extensions.

Driving Innovation Together through Partnership

Our commitment to driving innovation through collaboration is sustained by actively engaging with partners and continually analyzing the market to keep our solutions effective and aligned with evolving needs. At this time, we have a growing list of early partners that have signaled interest in bringing their solutions to the point of care through Dragon Copilot; they represent the leading edge of healthcare technology, each bringing specialized expertise and transformative solutions to the table. Their work exemplifies how collaborative innovation can address real-world challenges across the care continuum.

Artisight's Smart Hospital Platform brings hands-free, device-free documentation directly to the bedside. By pairing a clinician's unique voice profile with their secure RTLS badge, Artisight eliminates logins and shared device interactions to allow care teams to focus on patients, not paperwork. Ambient documentation runs seamlessly in the background, generating accurate clinical notes while freeing clinicians to deliver more meaningful, face-to-face care. This integration streamlines workflows, reduces administrative burden, and drives measurable improvements in efficiency, compliance, and the overall patient experience.

Atropos Health is the leader in translating real-world clinical data into personalized real-world evidence (RWE) and insights. Atropos Health is the developer of GENEVA OS®, the operating system for rapid healthcare evidence across a robust network of real-world data. Healthcare and life science organizations work with Atropos Health to close evidence gaps from bench to bedside, improving individual patient outcomes with data-driven care, expediting research that advances the field of medicine, and more. We aim to transform healthcare with timely, relevant real-world evidence.

Cohere Health's prior authorization application automates care requests, eliminating the administrative burden for clinicians and enabling real-time care approvals. Cohere's solution provides critical transparency right at the point of care while collecting relevant clinical documentation, patient summaries, and orders.
It shares prior authorization requirements and offers in-the-moment guidance to physicians, helping them secure faster approvals and ensure patients receive the care they need more quickly.

Elsevier's ClinicalKey AI delivers fast, reliable clinical insights at the point of care, combining proprietary medical content, leading textbooks, top scientific journals, government publications, and clinical guidelines into one trusted platform. Powered by advanced AI and backed by expert human review, it ensures answers are clinically relevant and backed by research. With privacy built in and citations clinicians can trust, ClinicalKey AI helps healthcare professionals make confident, informed decisions faster.

Ensemble's revenue cycle intelligence engine, EIQ, helps health systems proactively prevent payment denials, improve revenue integrity, and accelerate cash flow. Drawing on insights from more than 80,000 denial audit letters and 80 million annual claim transactions, EIQ analyzes clinical documentation against historic payer patterns to detect nuanced issues that could trigger DRG downgrades or medical necessity rejections. It flags issues for operator intervention with context-aware prompts designed to ensure diagnoses are backed by precise clinical evidence and meet payer documentation requirements. Beyond denial prevention, EIQ strengthens the documentation foundation needed for downstream audits, ensuring that every claim is defensible and complete.

hellocare.ai is a leading provider of AI-assisted virtual care solutions. Headquartered in Clearwater, FL, the company supports more than 80 health systems across the U.S. and is rapidly expanding globally. hellocare.ai helps health systems deliver high-quality, patient-centered care while improving clinical efficiency and staff wellbeing. Its fully integrated platform includes AI-Assisted Virtual Nursing, Virtual Sitting, Patient Engagement, Digital Whiteboard, Digital Room Signage, Ambient Documentation, Hospital-at-Home, Remote Patient Monitoring (RPM), and Digital Clinic, seamlessly embedding into existing healthcare EHRs, infrastructure, and care delivery models to power the next generation of healthcare.

Humata Health is revolutionizing clinical clearance, moving beyond the narrow confines of traditional prior authorization to address the entire patient journey, from referrals and specialty drugs to post-acute care. By bringing providers and payers together on a single, shared infrastructure, its technology uses powerful AI and automation to streamline fragmented approvals. This collaborative approach unlocks faster, smarter, and more confident decisions, ensuring patients receive the appropriate care they deserve, exactly when they need it. Driven by a vision to see care move forward, Humata Health is built for yes.

Lightbeam Health Solutions helps healthcare organizations manage and scale population health programs to improve performance within risk-based contracts. Leveraging clinical, health plan, and social determinants of health data, Lightbeam identifies and delivers actionable insights to clinicians and patients for proactive intervention. Proven outcomes include quality improvement, risk adjustment accuracy, reduced cost of care and avoidable utilization, and a better experience for clinicians and patients alike.

Optum is collaborating with Microsoft to integrate ambient listening technology into clinical settings with Optum Real, helping physicians reduce documentation time and focus more on patient care.
This integration supports smarter, real-time clinical workflows through AI-powered automation.

Pangaea Data's AI platform helps close care gaps by finding untreated and under-treated patients – including those who are undiagnosed, misdiagnosed, or miscoded – across hard-to-diagnose conditions. Seamlessly integrated with electronic health record (EHR) systems, the platform processes patients' medical records along with their conversation with the treating clinician at the point of care, empowering clinicians to address care gaps without disrupting workflows. This enables better outcomes, lower costs through earlier diagnosis, smarter triaging, and more appropriate levels of revenue through faster pre-authorizations and treatment initiation. Built and deployed on Microsoft Azure, the platform ensures full compliance with privacy standards.

Press Ganey turns patient-clinician conversations into actionable insights. By analyzing both content and tone alongside patient experience data, the platform helps organizations address concerns early and gives clinicians a more complete understanding of their patients.

As AI reshapes healthcare, Regard is working towards its mission to ensure every patient receives exceptional care while giving physicians more time with their patients. Regard's Proactive Documentation platform reviews all data in the medical record, and from physician-patient conversations, to recommend diagnoses to providers and surface critical clinical context. This diagnosis-first approach delivers accurate documentation at the point of care, supporting both clinical and revenue cycle teams by ensuring no critical information is missed.

Rhyme eliminates prior authorization through touchless workflows and a new, sustainable approach to gold carding. They help 89 of the largest hospital systems and over 300 payers work together to process over 5 million auths per year, save millions of dollars, reduce claim denials, lower patient costs, and accelerate time to care.

RhythmX AI enables clinicians to deliver hyper-personalized treatments that expand access and capture measurable value. The platform optimizes medical economics in both fee-for-service and value-based care settings. Built on more than $1B in R&D and validated through coding reviews and 25,000+ clinical assessments, it unifies 8+ data sources – including clinical, financial, payer, and formulary data – and can incorporate health system–specific guidelines and pathways into real-time care recommendations. RhythmX AI drives care orchestration across primary, specialty, and emergency settings, enabling comprehensive diagnoses, optimized specialist referrals, and measurable ROI (e.g., $57M annually for 200 PCPs). With access to 300M patient records, 4.4B annual claims, 1.8M clinicians, and 300K facilities, the platform provides broad data coverage, consistent accuracy in suspected condition identification, and scalable precision-care capabilities across diverse care environments.

Through these strategic collaborations, Dragon Copilot continues to expand its reach and deepen its impact, supporting a dynamic environment where developers, clinicians, and organizations can thrive. The collective expertise of our partners is not only shaping the future of healthcare technology but also driving meaningful change for patients and clinicians everywhere.

Join Us in Shaping the Future of Healthcare AI

Enabling partner-driven AI applications in Dragon Copilot ushers in a transformative period of collaborative innovation in healthcare.
Through Dragon Copilot's extensibility framework, we're accelerating the delivery of clinical intelligence, reducing friction, and ensuring that every provider has the insights they need to deliver the highest quality care. Whether you're building the next breakthrough AI application or seeking to bring transformative solutions to clinicians worldwide, we invite you to join our community. Ready to shape the future of healthcare AI? Explore our developer resources, connect with our team, and become a Dragon Copilot extensibility partner today.

- Read our documentation on how extensions for Dragon Copilot work and how to build your own: https://learn.microsoft.com/en-us/industry/healthcare/dragon-copilot/extensions
- Check out the sample repo with sample code and more: https://github.com/microsoft/dragon-copilot-extension-samples
- Contact dragon_extensions@microsoft.com

Medical device disclaimer: Microsoft products and services (1) are not designed, intended or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customers/partners are responsible for ensuring solutions comply with applicable laws and regulations.

Generative AI disclaimer: Generative AI does not always provide accurate or complete information. AI outputs do not reflect the opinions of Microsoft. Customers/partners will need to thoroughly test and evaluate whether an AI tool is fit for the intended use and identify and mitigate any risks to end users associated with its use. Customers/partners should thoroughly review the product documentation for each tool.

Healthcare agent service in Microsoft Copilot Studio is now Generally Available
Healthcare organizations continue to face immense challenges: workforce shortages, rising costs, and growing demands for patient care. Clinical staff are overburdened, leading to stress, burnout, and further shortages. Generative AI presents a powerful opportunity when it can automate administrative workflows, surface relevant insights, and assist clinical staff with contextual, credible, and up-to-date information. With that opportunity in mind, we are excited to announce General Availability (GA) of healthcare agent service in Microsoft Copilot Studio.

Building responsible, AI-powered healthcare agents

With healthcare agent service, organizations can create healthcare-specialized AI applications that use generative AI within a framework that promotes trust, compliance, and real-world clinical scenarios. Agents combine built-in credible medical sources, such as FDA, CDC, MedlinePlus, MSD Manuals, DailyMed, and more, with the organization’s own knowledge sources and plugins, while leveraging healthcare-specific actions. Customers can define the intended healthcare roles, such as healthcare professionals or patients, so the behavior is relevant and appropriate for the audience and use case. Pre-built use cases include clinical documentation assistance, patient self-service, helping healthcare professionals triage by organizing information, finding medication information, accessing recent clinical guidelines, and more.

Because responsible AI in healthcare is a top priority, healthcare agent service is infused with safeguards that are reinforced by a healthcare-adapted orchestrator optimized for safety. Clinical, chat, and compliance safeguards help keep interactions evidence-based and trustworthy, increasing the reliability and accuracy of generated responses and adherence to the highest standards of safety, privacy, and regulatory compliance. Healthcare agent service underscores our ongoing commitment to responsible AI in healthcare by offering customers a reliable, production-ready foundation for healthcare solutions that can help support patients and medical professionals.

Extending Dragon Copilot with conversational solutions

Healthcare agent service provides a framework for building conversational AI applications that can be integrated directly into Dragon Copilot, giving partners and healthcare organizations the ability to extend its functionality in a scalable, compliant way. Today, Information Assist in Dragon Copilot, built on healthcare agent service, delivers safeguarded generative AI answers grounded in trusted sources and enriched with patient history and context, ensuring clinicians receive accurate, timely, and context-aware insights. Clinicians can effortlessly access a broad range of clinical topics directly within their workflow using natural language, surfacing insights from leading, trusted healthcare content partners that promote more informed clinical decisions with less administrative work. Partners and healthcare organizations can use healthcare agent service to create tailored solutions with built-in safeguards that help ensure output meets healthcare standards and supports safe decision-making at the point of care. These solutions can be integrated directly into Dragon Copilot to enhance both clinical and financial performance.

Real-world impact with customers

Healthcare organizations are already adopting healthcare agent service to bring generative AI into real-world care settings.
Early adopters are seeing meaningful impact in reducing administrative burden, improving patient experience, and empowering clinicians with trusted information.

Bayer Pharmaceuticals has recently worked with Microsoft to enable new agentic AI workflows for drug submission using healthcare agent service in Copilot Studio:

“We have collaborated with Microsoft to build an AI-powered multi-agent decision board using the healthcare agent service in Copilot Studio. This multi-agent decision board revolutionizes how we strategize drug submissions, pricing, and patient targeting for global market access. By simulating expert board discussions and synthesizing diverse data—from regulatory approvals to health economics and real-world evidence—the system streamlines the complex process of securing drug reimbursement. Healthcare agent service helped us get results quicker, empowering teams to make smarter, data-driven decisions without replacing human expertise, which would enable better access to life-changing therapies for patients worldwide. Importantly, this tool is not limited to pharmaceutical companies. It also supports decision-making for health authorities, NGOs, and other stakeholders across the healthcare ecosystem—enabling more informed, collaborative, and impactful choices that benefit public health at large.” — Shay Zohar, local Market Access Director and member of Bayer Pharmaceutical’s global Early Access team

Allgemeines Krankenhaus (AKH) Wien, the largest hospital in Vienna, Austria, and the Medical University of Vienna collaborated with Microsoft to extend Dragon Copilot with healthcare agent service and automate pre-anesthesia intake.

“Transforming pre-anesthesia assessments with AI agents for greater efficiency has a great potential to decrease the administrative burden on anesthesiologists. In this project we used healthcare agent service to extend Dragon Copilot with AI-powered agents that automate pre-anesthesia intake to enhance clinical documentation, significantly reducing the administrative workload for anesthesiologists. By orchestrating conversational and workflow agents, the solution interacts with patients, completes assessments, checks for data conflicts, and generates clinical notes, all consolidated for physician review in Dragon Copilot.” — Dr. Oliver Kimberger, Professor for Perioperative Information Management at the Department of General Anesthesia and Intensive Care Medicine, AKH Wien

Empowering healthcare innovation

Healthcare agent service offers a low-code interface for building and deploying custom AI solutions with chat, compliance, and clinical safeguards that support safety and accuracy in generative AI. With seamless integration and the ability to extend the capabilities of Dragon Copilot, you gain the flexibility to tailor solutions to your organization’s evolving needs.

Learn more in the healthcare agent service in Copilot Studio documentation
Explore the possibilities with Microsoft Copilot Studio
Expand your knowledge about Microsoft for Healthcare
Discover how we are shaping the future of health with cutting-edge solutions and collaborative efforts here

Medical Device Disclaimer: Microsoft products and services (1) are not designed, intended or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment.
Customers/partners are responsible for ensuring solutions comply with applicable laws and regulations.

Generative AI Disclaimer: Generative AI does not always provide accurate or complete information. AI outputs do not reflect the opinions of Microsoft. Customers/partners will need to thoroughly test and evaluate whether an AI tool is fit for the intended use and identify and mitigate any risks to end users associated with its use.

Agentic AI in Healthcare
Healthcare organizations are at a crossroads where rising patient loads, complex data, and administrative burdens demand new solutions. Agentic AI – AI systems capable of autonomous action – is emerging as a catalyst for transformation, promising to act not just as tools but as collaborative digital team members. Microsoft’s ecosystem of AI technologies provides a robust foundation to harness agentic AI in healthcare. This report offers a comprehensive overview of agentic AI, distinguishes it from traditional AI, and explores its role in clinical workflows, administrative efficiency, patient engagement, and data governance. It also examines how Microsoft’s offerings (Microsoft 365 Copilot, Azure Health Data Services, Microsoft Fabric, Copilot Studio, and more) enable these advances responsibly and in compliance with healthcare regulations like HIPAA.

Copilot Chat: Prompting
To start a new prompt, head over to Copilot Chat and hit the blue chat button in the upper right corner.

🔄 When should I start a new chat? A good rule of thumb: hit that button whenever you're switching contexts or subject areas. This helps keep Copilot focused and prevents information from getting muddled. 🍸

🧪 How do I improve my prompts? To get the best results, use the GCSE Formula—that’s:

Goal: What do you want Copilot to do?
Context: What background info will help?
Source: Where should Copilot pull from?
Expectations: What kind of output do you want?

🧩 Example

Here’s a basic prompt: Give me a concise summary of recent news about Pfizer.

Now let’s expand it to include our other key ingredients using the GCSE Formula: Summarize the latest news about Pfizer from reputable sources like Reuters or Bloomberg. Focus on developments in their vaccine pipeline and financial performance. Keep it concise—under 150 words.

🎯 Challenge

Try using the GCSE Formula in your next prompt and compare it to using just the goal. See how your results stack up!
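If you prefer to draft prompts outside the chat window, the four GCSE ingredients are also easy to assemble in code. The sketch below is purely illustrative and does not call Copilot; the build_gcse_prompt helper and its parameter names are hypothetical, used only to show how the ingredients combine into the expanded prompt above.

```python
# Minimal, illustrative sketch of composing a GCSE-style prompt.
# Nothing here calls Copilot; it only builds the text you would paste into the chat box.

def build_gcse_prompt(goal: str, context: str, source: str, expectations: str) -> str:
    """Combine the four GCSE ingredients into one prompt string."""
    return " ".join([
        f"{goal} from {source}.",  # Goal + Source: what to do and where to pull from
        context,                   # Context: background that narrows the task
        expectations,              # Expectations: the shape and length of the output
    ])

prompt = build_gcse_prompt(
    goal="Summarize the latest news about Pfizer",
    context="Focus on developments in their vaccine pipeline and financial performance.",
    source="reputable sources like Reuters or Bloomberg",
    expectations="Keep it concise, under 150 words.",
)
print(prompt)
```

Running the sketch prints the same expanded prompt shown in the example, which you can paste directly into Copilot Chat or adapt for your own topic.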
Copilot Chat: Downloads

On PC and Mac: Follow the download links below to install the Copilot Chat desktop app. Double-click the installer when prompted, and you're in.
Windows: Microsoft 365 Copilot - Free download and install on Windows | Microsoft Store
MacOS: Microsoft 365 Copilot on the App Store

On Mobile: Scan the QR code to download the app to your device.

In Your Browser: Prefer not to download anything? You can also access Copilot Chat from Microsoft 365 Copilot Chat.

Once you're in, try starting a conversation in the prompt box. Not sure where to begin? No worries—use or tweak one of the suggested prompts to get going. Here are a few other handy entry points: