Secure, Policy-Driven Automation
- Mark Thompson
SECURITY BRIEF
This brief explains how Fabrics integrates with Azure Monitor and Azure Resource Manager (ARM) to act on the telemetry Azure already collects, without touching production workloads, enforcing customer-defined optimization policies that automatically adjust resources in real time and maintain target performance and capacity at the lowest possible cost.
Executive Summary
Microsoft Azure collects resource‑level telemetry such as CPU, memory, disk I/O, network activity, and storage consumption from Azure services. This information is automatically captured within Azure Monitor, a time‑series metrics platform that stores and exposes these measurements through secure APIs, the Azure portal, and command‑line tools.
Fabrics integrates with Azure Monitor to leverage this data without ever connecting directly to production workloads. Rather than scraping or polling virtual machines or databases, Fabrics relies on the metrics that Azure already gathers.
From those metrics, Fabrics enforces dynamic optimization policies defined by the customer or their technical engineers. These policies establish precise performance targets and define the exact actions Fabrics should take when consumption metrics indicate that a resource has fallen outside its desired performance range.
In this way, Fabrics continuously maintains the capacity needed to meet real-time demand, delivering the desired performance at the lowest possible cost.
How Microsoft Captures and Stores Metrics
Azure Monitor acts as the platform's centralized telemetry layer. Each Azure resource emits platform metrics - numerical, time-stamped values that describe usage and performance. These metrics are collected, aggregated, and written into Azure's managed time-series store. The data is retained by default for approximately ninety-three days, though organizations can export it to Log Analytics workspaces, Azure Storage, or Event Hubs for longer retention or analytics.
Access to Azure Monitor is strictly governed through role‑based access control (RBAC). Engineers can query metrics using the Azure portal, the Azure CLI or PowerShell, or programmatically through the Azure Monitor Metrics REST API. Importantly, this means that anyone - or anything - reading metrics is confined to a read‑only view of telemetry data, entirely separate from the live compute or storage resources themselves.
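To illustrate what this read-only access looks like in practice, the short Python sketch below queries average CPU utilization for a virtual machine through the Azure Monitor Metrics API, using the azure-identity and azure-monitor-query packages. The resource ID and identity are placeholders; this is an illustration of the access pattern, not Fabrics' implementation.

# Illustrative sketch: reading platform metrics through Azure Monitor's
# read-only Metrics API. All names and IDs below are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# An identity assumed to hold only a read-scoped role
# (for example, Monitoring Reader) on the target resources.
credential = DefaultAzureCredential()
client = MetricsQueryClient(credential)

# Full ARM resource ID of the virtual machine being observed.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"
)

# Query the last hour of "Percentage CPU" as five-minute averages.
response = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)

Because the query goes through Azure Monitor rather than the virtual machine itself, the identity running it needs nothing more than a read-scoped role such as Monitoring Reader.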
How Fabrics Uses This Data
Fabrics functions as an automation engine rather than a recommendation system. It does not suggest or advise on potential changes - it acts automatically within the boundaries defined by its users. Each customer establishes detailed policies that describe expected performance targets and the responses Fabrics should execute when Azure Monitor metrics deviate from those targets.
When Azure indicates that CPU utilization, memory consumption, storage capacity, or another metric has drifted outside its threshold, Fabrics immediately applies the corrective actions specified in its policy. These actions might include scaling virtual machines up or down, resizing storage accounts or managed disks, scaling vCores in a database, scaling Power BI SKUs, or changing SKUs altogether for resources that demand significantly less horsepower during off-peak hours. In every case, the logic originates from the customer's configuration; Fabrics simply ensures that the desired state is continuously maintained.
Because Fabrics reads only from Azure Monitor, it never maintains persistent connections to resources and never needs in‑guest agents or credentials for telemetry collection. When action is required, a separate automation identity - granted minimal, scoped permissions - is used to perform the change via the Azure Resource Manager (ARM) management plane. Observation and execution are therefore isolated by design: Fabrics observes safely through read‑only APIs and acts securely through tightly controlled permissions.
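The same separation can be sketched on the execution side. In the illustrative example below, a dedicated automation identity (a hypothetical user-assigned managed identity) applies a policy-approved virtual machine resize through the ARM management plane using the azure-mgmt-compute package; the identity, resource names, and SKU are placeholders rather than Fabrics' internal code.

# Illustrative sketch: an ARM management-plane action performed by a
# separate, narrowly scoped automation identity. Placeholder values only.
from azure.identity import ManagedIdentityCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import HardwareProfile, VirtualMachineUpdate

# A user-assigned managed identity dedicated to executing approved changes;
# it is assumed to hold only the permissions the policy's actions require.
action_credential = ManagedIdentityCredential(
    client_id="<automation-identity-client-id>"
)
compute = ComputeManagementClient(action_credential, "<subscription-id>")

# Apply the SKU selected by policy evaluation, for example scaling the VM
# down to a smaller size during sustained low utilization.
poller = compute.virtual_machines.begin_update(
    "<resource-group>",
    "<vm-name>",
    VirtualMachineUpdate(
        hardware_profile=HardwareProfile(vm_size="Standard_D2s_v5")
    ),
)
updated_vm = poller.result()  # Blocks until ARM reports the update complete.
print("VM size is now:", updated_vm.hardware_profile.vm_size)

The identity performing the change holds only the permissions needed for the actions defined in policy, which keeps the observation and execution paths isolated as described above.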
Policy‑Driven Optimization in Practice
A policy in Fabrics can be thought of as a contract between performance expectations and cost objectives.
For example:
- A technical team might specify that a production web service should maintain CPU utilization below sixty percent.
- Another policy could define how storage accounts automatically expand when capacity thresholds are reached or transition to lower-cost tiers when utilization declines.
- Databases may scale up to meet latency or transaction-per-second goals, then scale down to conserve budget.
These policies are built from predictable triggers - metrics that Azure provides - and deterministic responses. Fabrics executes them consistently and verifiably. Every change it makes is logged through ARM activity logs and Fabrics’ own audit trail, documenting who authorized the policy, when the automation occurred, and why it was triggered. The result is an always‑on optimization cycle that adjusts capacity only when metrics justify it.
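As a concrete illustration of this contract, the sketch below expresses a policy as plain data (a metric trigger, a target range, and deterministic responses) and evaluates an observed value against it. The field names, actions, and thresholds are hypothetical and do not represent Fabrics' actual policy schema.

# Illustrative sketch: a policy as data, pairing an Azure Monitor metric
# trigger with deterministic responses. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Policy:
    resource_id: str        # ARM resource the policy governs
    metric_name: str        # Azure Monitor platform metric to watch
    upper_bound: float      # breach above this triggers scale_up_action
    lower_bound: float      # breach below this triggers scale_down_action
    scale_up_action: str
    scale_down_action: str

def evaluate(policy: Policy, observed_value: float) -> Optional[str]:
    """Return the action the policy dictates, or None if within range."""
    if observed_value > policy.upper_bound:
        return policy.scale_up_action
    if observed_value < policy.lower_bound:
        return policy.scale_down_action
    return None

# Example: keep a production web tier below sixty percent CPU and scale
# it back when utilization stays low.
web_policy = Policy(
    resource_id="/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
                "/providers/Microsoft.Compute/virtualMachines/web-01",
    metric_name="Percentage CPU",
    upper_bound=60.0,
    lower_bound=20.0,
    scale_up_action="resize:Standard_D4s_v5",
    scale_down_action="resize:Standard_D2s_v5",
)

print(evaluate(web_policy, observed_value=74.2))  # -> resize:Standard_D4s_v5

A definition like this is deterministic by construction: the same observed value always yields the same action, which is what makes each automated change verifiable after the fact.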
Security and Governance
From a security standpoint, Fabrics' architecture emphasizes least privilege and data minimization. Its monitoring function operates outside production workloads using Azure Monitor, while the optimization engine carries only the specific permissions required to execute authorized actions through the ARM management plane. All credentials are securely stored within Azure Key Vault, and every operation is auditable both in Azure and within Fabrics itself.
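As a simple illustration of that credential-handling pattern, the sketch below retrieves a secret from Azure Key Vault at run time with the azure-keyvault-secrets package rather than embedding it in code or configuration; the vault URL and secret name are placeholders.

# Illustrative sketch: fetching a credential from Azure Key Vault at run
# time instead of storing it in code or configuration. Placeholder names.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
secret_client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net", credential=credential
)

# The automation identity reads only the specific secret it is authorized
# to use; access is governed by Key Vault RBAC or access policies.
automation_secret = secret_client.get_secret("<secret-name>")
print("Retrieved secret:", automation_secret.name)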
By reading metrics exclusively from Azure Monitor, Fabrics eliminates the need for continuous data‑plane access. This drastically reduces potential attack vectors and simplifies compliance oversight. Customers retain full control over what metrics are exposed, how long they are retained, and which identities are permitted to query them. In short, Fabrics complements Azure’s security model rather than extending beyond it.
Business Value and Impact
This model delivers tangible business benefits. First, it ensures that organizations pay only for the capacity they truly require at any moment in time, eliminating the waste inherent in static provisioning. Second, it maintains consistent performance because automation reacts immediately when metrics move outside the approved range - without waiting for human intervention or scheduled review cycles. Finally, it upholds strong governance and security standards: no unnecessary credentials, no agents inside customer workloads, and complete visibility of every change.
For enterprises operating at scale, this approach transforms cloud operations from reactive to self-regulating. Fabrics becomes an extension of Azure's own control plane - dynamically aligning resource supply with real-time demand, all while operating within the customer's existing compliance and cost frameworks.
Conclusion
Fabrics’ use of Azure Monitor exemplifies secure and intelligent automation. Microsoft provides telemetry; Fabrics provides policy enforcement and orchestration. By separating observation from actuation and by grounding every decision in Azure’s own metrics, Fabrics ensures reliable performance, optimal cost, and uncompromising security. It is automation that behaves exactly as the customer designs it - no more, no less - creating a self‑managing environment that delivers maximum efficiency across the enterprise cloud.
CONTACT FABRICS
