Machine Learning Community (25)
Gapandey lays out a practical, end-to-end MLOps template on Azure: train a scikit-learn model from data in Azure Blob Storage, package it as a self-contained pickle bundle, register it in an Azure ML Registry with auto-versioning, and deploy it to an Azure ML Managed Online Endpoint via an Azure DevOps multi-stage pipeline.
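The "self-contained pickle bundle" idea from this post can be illustrated with a minimal, stdlib-only sketch. The `ThresholdModel` class and the bundle layout below are hypothetical stand-ins (the original uses a trained scikit-learn estimator and Azure ML registration, which are omitted here); the point is packaging the model object together with its own metadata so the artifact describes itself.

```python
import pickle

class ThresholdModel:
    """Hypothetical stand-in for a trained scikit-learn estimator."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, xs):
        # Label 1 when the input meets the threshold, else 0.
        return [1 if x >= self.threshold else 0 for x in xs]

# Package the model with metadata so the pickled artifact is self-describing;
# a registry (e.g. Azure ML) can then version this single file.
bundle = {
    "model": ThresholdModel(threshold=0.5),
    "metadata": {"framework": "scikit-learn", "bundle_version": "1"},
}

blob = pickle.dumps(bundle)          # serialize the whole bundle
restored = pickle.loads(blob)        # what the scoring endpoint would do

print(restored["model"].predict([0.2, 0.9]))  # -> [0, 1]
```

In the real template the bundle would be written to a file, uploaded, and registered with auto-versioning; only the serialization pattern is shown here.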
AnjaliSadhukhan argues that AI agents fail on enterprise questions mainly due to fragmented data and missing semantics, and outlines how Microsoft Fabric (OneLake, semantic models, Data Agents) and Azure AI Foundry can work together to provide governed, agent-ready access to business data.
ShivaniThadiyan explains how Azure SQL Managed Instance is evolving from a SQL Server-compatible PaaS into an AI-enabled platform, covering built-in operational intelligence, vector search, in-database Python/R machine learning, and Copilot-assisted diagnostics with security and governance considerations.
Vaibhav Pandey shares a production-oriented “Bring Your Own Model” (BYOM) pattern for Azure AI applications, showing how to package, register, and deploy a custom model on Azure Machine Learning with secure identity, networking, and scalable managed endpoints.
In this post, robece explains how to route Stripe events into Azure Event Grid to build scalable, real-time payment workflows, and how to extend those streams into Microsoft Fabric Real-Time Intelligence for live analytics.
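One concrete step in any Event Grid webhook integration like this is the subscription-validation handshake: before delivering events, Event Grid sends a `SubscriptionValidationEvent` and expects the endpoint to echo back the `validationCode`. A minimal sketch of that handler (the payload values are illustrative, not from a live subscription, and `handle_event_grid` is a hypothetical helper name):

```python
import json

# Payload shaped like Event Grid's subscription-validation handshake.
raw = json.dumps([{
    "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
    "data": {"validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"},
}])

def handle_event_grid(body):
    """Return the validation response for the handshake, or None so the
    caller processes ordinary events (e.g. forwarded Stripe events)."""
    for event in json.loads(body):
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return {"validationResponse": event["data"]["validationCode"]}
    return None

response = handle_event_grid(raw)
print(response)
```

The post's Stripe events would arrive through the same endpoint once validation succeeds; routing them onward to Fabric Real-Time Intelligence is configured in Azure, not in this handler.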
ashish-chhabria argues that Azure Event Hubs is the practical default for Kafka-style streaming on Azure, focusing on its Kafka-compatible endpoint, managed scaling, tier capabilities (Standard/Premium/Dedicated), and integrations like Capture to Azure Data Lake Storage and streaming into Microsoft Fabric for real-time analytics.
Connected-Seth shares March 2026 updates for Azure Event Grid MQTT Broker, covering protocol support (MQTT v3.1.1/v5, HTTP publish), security options (Entra ID/OAuth JWT, X.509, webhook auth, TLS 1.2+), scaling characteristics, and native routing into Azure services like Fabric Eventstreams, Azure Data Explorer, Event Hubs, Functions, and Logic Apps.
AnaviNahar walks through a near-real-time ingestion and transformation setup on Azure Databricks using Lakeflow (Connect, Spark Declarative Pipelines, and Jobs), covering CDC from SQL Server, streaming telemetry ingestion, Bronze/Silver/Gold modeling, Unity Catalog governance, and monitoring via system tables.
AbhishekTiwari (with Azure Networking leaders) explains how Azure Front Door improved recovery time objectives by hardening its local configuration cache, avoiding fleet-wide rebuilds, and introducing ML-driven lazy loading so recovery scales with active traffic rather than total tenants.
Coryskimming from Microsoft previews the packed lineup for Azure at KubeCon Europe 2026, spotlighting hands-on AKS labs, AI/ML workload sessions, security, cloud-native DevOps practices, and open-source solutions from Microsoft's top engineers.
damocelj offers a practical walkthrough on securely deploying LLM inferencing with vLLM and NVIDIA NIM microservices in air-gapped Azure Kubernetes Service clusters, tackling network isolation, GPU configuration, and model artifact challenges.
bobmital shares a hands-on playbook for optimizing enterprise LLM inference on Azure, guiding technical teams through architecture, hardware selection, quantization, and model serving best practices across AKS, Ray Serve, and vLLM.
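The quantization technique mentioned in this playbook can be sketched in a few lines. This is a generic symmetric int8 scheme in plain Python, not the specific method from the post: each weight is scaled into the signed 8-bit range and recovered approximately on dequantization, trading a small accuracy loss for a 4x memory reduction versus float32.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [qi * scale for qi in q]

weights = [0.12, -0.5, 0.33, 1.0, -0.98]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Round-trip error is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, max_err)
```

Production serving stacks (vLLM, Ray Serve) use more sophisticated per-channel and activation-aware schemes, but the scale-and-round core is the same.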
bobmital examines the architectural and economic challenges of large language model inference at enterprise scale, with a focus on Azure and Anyscale’s Ray integration for distributed AI workloads.
bobmital examines the unique challenges of enterprise-scale LLM inference, focusing on the interplay of accuracy, latency, and cost in Azure deployments using Anyscale Ray and AKS, with actionable guidance for architects and engineers deploying AI workloads in the cloud.
AnaviNahar introduces Azure Databricks Lakebase, now generally available, highlighting its serverless architecture and AI-native features for building real-time, intelligent applications on Azure.
bobmital presents a comprehensive and practical guide for deploying and optimizing large language model inference on Azure Kubernetes Service, focusing on engineering tradeoffs, GPU efficiency strategies, open-source model evaluation, and robust enterprise security architecture.
Chunlong Yu and co-authors present GenRec Direct Learning (DirL), a Microsoft-driven approach that transforms traditional ranking pipelines by leveraging end-to-end token-native sequence modeling, with experiments and production deployment on Azure Machine Learning.
Yongguang Zhang presents an in-depth view of Microsoft’s AI-powered RAN and intelligent edge strategy, showing how AI, Azure, and advanced platforms are set to revolutionize the future of telecom networks through automation, edge intelligence, and innovative new services.
Sally Dabbah explains how to orchestrate Azure Synapse Analytics pipelines for predictable execution on shared Spark pools, covering workload prioritization and adaptive orchestration strategies.
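The workload-prioritization idea behind this post can be sketched without any Synapse APIs: jobs contending for a shared pool are ordered by priority, with submission order breaking ties. The `JobQueue` class and job names below are hypothetical, a minimal illustration of the scheduling concept rather than Synapse's actual mechanism.

```python
import heapq

class JobQueue:
    """Hypothetical priority queue for pipeline runs on a shared Spark pool.
    Lower priority number = more urgent, so SLA-critical work runs first."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: preserves submission order within a priority

    def submit(self, priority, name):
        heapq.heappush(self._heap, (priority, self._seq, name))
        self._seq += 1

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = JobQueue()
q.submit(2, "ad-hoc-report")      # best-effort
q.submit(0, "sla-critical-etl")   # must run first
q.submit(1, "nightly-refresh")

order = [q.next_job() for _ in range(3)]
print(order)  # -> ['sla-critical-etl', 'nightly-refresh', 'ad-hoc-report']
```

An adaptive orchestrator would additionally adjust priorities at runtime (e.g. based on pool utilization or deadlines); that feedback loop is omitted here.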
AnaviNahar introduces the general availability of Serverless Workspaces in Azure Databricks, detailing their architecture and guidance for when to choose Serverless or Classic models.