TL;DR: When an Azure Local network intent gets stuck in a failed provisioning state, blocking updates or live migration, reset it with Set-NetIntentRetryState -Name <intent> -NodeName <node> per affected node and verify with Get-NetIntentStatus before retrying the operation.
Recommended action:
Check intent status across the cluster:

```powershell
$ClusterName = (Get-Cluster).Name
Get-NetIntentStatus -ClusterName $ClusterName |
    Format-Table IntentName, Host, Error, ConfigurationStatus, ProvisioningStatus
```

Look for any row where `ConfigurationStatus = Failed`, `Error = ProvisioningFailed`, or `RetryCount > 0`.
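To surface only the problem rows, you can filter on the same properties; a minimal sketch, using the property names from the table above:

```powershell
# Show only rows that need attention (same properties as the table above)
Get-NetIntentStatus -ClusterName $ClusterName |
    Where-Object { $_.ConfigurationStatus -eq 'Failed' -or $_.RetryCount -gt 0 } |
    Format-Table IntentName, Host, Error, ConfigurationStatus, RetryCount
```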
Reset the failed intent on each affected node:

```powershell
Set-NetIntentRetryState -Name <intentName> -NodeName <nodeName>
```

Run once per node where the intent is failed. Example for a 3-node cluster with a stuck `storage` intent:

```powershell
Set-NetIntentRetryState -Name storage -NodeName node01
Set-NetIntentRetryState -Name storage -NodeName node02
Set-NetIntentRetryState -Name storage -NodeName node03
```
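On larger clusters you can drive the reset from the status output instead of typing each node; a sketch that reuses `$ClusterName` from the first step and assumes the `Host` and `IntentName` properties shown there:

```powershell
# Reset every (intent, node) pair currently reporting Failed
Get-NetIntentStatus -ClusterName $ClusterName |
    Where-Object { $_.ConfigurationStatus -eq 'Failed' } |
    ForEach-Object { Set-NetIntentRetryState -Name $_.IntentName -NodeName $_.Host }
```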
Wait a few minutes, then re-run `Get-NetIntentStatus`. The intent should report `ConfigurationStatus: Success` and `ProvisioningStatus: Completed`, with `RetryCount: 0` and an empty `Error` field. Only then retry the original operation (cumulative update, SBE update, etc.).
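If you would rather script the wait than watch the clock, a simple polling loop works. This sketch reuses `$ClusterName` from the first step, assumes `Get-NetIntentStatus` accepts the intent name via `-Name`, and checks for the status values described above:

```powershell
# Poll the example 'storage' intent until every node reports Success,
# giving up after ~10 minutes so a hard failure doesn't loop forever
$deadline = (Get-Date).AddMinutes(10)
do {
    Start-Sleep -Seconds 60
    $status = Get-NetIntentStatus -ClusterName $ClusterName -Name storage
    $status | Format-Table Host, ConfigurationStatus, ProvisioningStatus, RetryCount
    $pending = @($status | Where-Object { $_.ConfigurationStatus -ne 'Success' })
} while ($pending.Count -gt 0 -and (Get-Date) -lt $deadline)
```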
Why:
Azure Local uses Network ATC to apply intents (Management, Compute, Storage, or combinations) as a goal state per node. When provisioning fails — usually because of a transient adapter, driver, or firmware-related condition — the intent stays in a RetryRequired state and Network ATC stops attempting it. Many higher-level operations, including cumulative updates and live migrations, depend on a healthy intent state and will fail or behave erratically until it's resolved. Set-NetIntentRetryState clears the failure flag so Network ATC will attempt provisioning again from a clean slate.
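For reference, `Get-NetIntent` (from the same Network ATC module) shows which intents are defined and which traffic types each carries. The output property names below are an assumption and may vary by module version:

```powershell
# List defined intents and their traffic types (property names assumed;
# run Get-NetIntent | Format-List * to see what your version exposes)
Get-NetIntent -ClusterName $ClusterName |
    Format-Table IntentName, IsManagementIntentSet, IsComputeIntentSet, IsStorageIntentSet
```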
Going forward:
Check intent health before starting any cumulative update or SBE update — Get-NetIntentStatus is a one-line pre-flight that prevents most of these failure modes from surfacing mid-update. After an SBE update in particular, watch for previously-disabled or unused NICs being re-enabled; that condition can put a converged intent into a failed state because the intent now sees adapters it doesn't expect. If that happens, disable the spurious NICs and reset the intent.
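A minimal pre-flight and cleanup sketch, reusing the properties above; the adapter, intent, and node names in the second half are hypothetical:

```powershell
# Pre-flight: any row returned here warrants investigation before updating
Get-NetIntentStatus -ClusterName (Get-Cluster).Name |
    Where-Object { $_.ConfigurationStatus -ne 'Success' }

# If an SBE update re-enabled an unused NIC, disable it on the affected
# node and reset the intent there (adapter/intent/node names hypothetical)
Disable-NetAdapter -Name 'Embedded NIC 3' -Confirm:$false
Set-NetIntentRetryState -Name converged -NodeName node01
```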
Optional details:
- `Set-NetIntentRetryState` is per-node: running it on a single node only resets that node's local goal state. For cluster-wide intents, run it on every node where the intent shows `Failed`.
- The cmdlet supports `-Wait` to block until the operation completes, and a `-ClusterName`/`-NodeName` parameter set for running against a remote cluster; see the sketch after this list.
- If `Get-NetIntentStatus` still shows `ProvisioningFailed` after a reset and a wait period, the underlying cause is not transient: collect logs (`CollectHCI` or `Get-NetIntentLogs`) and investigate adapter, driver, or VLAN configuration before retrying further.
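For example, a blocking reset against a remote cluster might look like this (cluster and node names are hypothetical; `-Wait` and `-ClusterName` support is as described in the list above):

```powershell
# Hypothetical names; blocks until the retry completes per the notes above
Set-NetIntentRetryState -ClusterName cluster01 -Name storage -NodeName node02 -Wait
```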