TL;DR: When an Azure Local network intent gets stuck in a failed provisioning state, blocking updates or live migration, reset it with Set-NetIntentRetryState -Name <intent> -NodeName <node> per affected node and verify with Get-NetIntentStatus before retrying the operation.

Recommended action:

  1. Check intent status across the cluster:

    $ClusterName = (Get-Cluster).Name
    Get-NetIntentStatus -ClusterName $ClusterName |
        Format-Table IntentName, Host, Error, ConfigurationStatus, ProvisioningStatus

    Look for any row where ConfigurationStatus is Failed, the Error column reads ProvisioningFailed, or RetryCount is greater than 0.

  2. Reset the failed intent on each affected node:

    Set-NetIntentRetryState -Name <intentName> -NodeName <nodeName>

    Run it once on each node where the intent shows Failed. For example, on a 3-node cluster with a stuck storage intent:

    Set-NetIntentRetryState -Name storage -NodeName node01
    Set-NetIntentRetryState -Name storage -NodeName node02
    Set-NetIntentRetryState -Name storage -NodeName node03

  3. Wait a few minutes, then re-run Get-NetIntentStatus. The intent should report ConfigurationStatus: Success and ProvisioningStatus: Completed, with RetryCount: 0 and an empty Error field.

  4. Only then retry the original operation (cumulative update, SBE update, etc.).
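
The check in steps 1 and 3 can be collapsed into a single health query that prints only unhealthy rows; empty output means the cluster is clear to retry. This is a sketch using the same property names shown in step 1:

    # Print only intent rows that are not fully healthy; no output = all clear
    Get-NetIntentStatus -ClusterName (Get-Cluster).Name |
        Where-Object {
            $_.ConfigurationStatus -ne 'Success' -or
            $_.ProvisioningStatus  -ne 'Completed' -or
            $_.RetryCount -gt 0
        } |
        Format-Table IntentName, Host, ConfigurationStatus, ProvisioningStatus, RetryCount, Error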

Why:

Azure Local uses Network ATC to apply intents (Management, Compute, Storage, or combinations) as a goal state per node. When provisioning fails — usually because of a transient adapter, driver, or firmware-related condition — the intent stays in a RetryRequired state and Network ATC stops attempting it. Many higher-level operations, including cumulative updates and live migrations, depend on a healthy intent state and will fail or behave erratically until it's resolved. Set-NetIntentRetryState clears the failure flag so Network ATC will attempt provisioning again from a clean slate.
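
To see which intents Network ATC is managing on a cluster and which traffic types each one carries, Get-NetIntent lists the configured goal states. A sketch — the Is*IntentSet property names here are an assumption; check the actual output of Get-NetIntent on your cluster:

    # List configured intents and the traffic types each one carries
    Get-NetIntent -ClusterName (Get-Cluster).Name |
        Format-Table IntentName, IsManagementIntentSet, IsComputeIntentSet, IsStorageIntentSet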

Going forward:

Check intent health before starting any cumulative update or SBE update; Get-NetIntentStatus is a one-line pre-flight that catches most of these failure modes before they surface mid-update. After an SBE update in particular, watch for previously disabled or unused NICs being re-enabled; that can put a converged intent into a failed state because the intent now sees adapters it does not expect. If that happens, disable the spurious NICs and reset the intent.
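
As a sketch, that pre-flight can be wrapped in a gate that refuses to start an update workflow while any intent is unhealthy; the Disable-NetAdapter line illustrates the spurious-NIC fix, with 'NIC4' as a placeholder adapter name:

    # Pre-flight gate: stop the update workflow while any intent is unhealthy
    $unhealthy = Get-NetIntentStatus -ClusterName (Get-Cluster).Name |
        Where-Object { $_.ConfigurationStatus -ne 'Success' }
    if ($unhealthy) {
        $unhealthy | Format-Table IntentName, Host, ConfigurationStatus, Error
        throw 'Resolve intent health before starting the update.'
    }

    # If an SBE update re-enabled an unused NIC, disable it, then reset the intent
    # ('NIC4' is a placeholder adapter name)
    Disable-NetAdapter -Name 'NIC4' -Confirm:$false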

Optional details:

  • Set-NetIntentRetryState is per-node — running it on a single node only resets that node's local goal state. For cluster-wide intents, run it on every node where the intent shows Failed.

  • The cmdlet supports -Wait to block until the operation completes, and a -ClusterName/-NodeName parameter set for running against a remote cluster.

  • If Get-NetIntentStatus still shows ProvisioningFailed after a reset and a wait period, the underlying cause is not transient — collect logs (CollectHCI or Get-NetIntentLogs) and investigate adapter, driver, or VLAN configuration before retrying further.
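
The per-node reset in the first bullet can be scripted so the intent is reset on exactly the nodes where it shows Failed, rather than typed out node by node. A sketch, assuming the Host column of Get-NetIntentStatus holds the node name (as in step 1):

    # Reset the 'storage' intent on every node where it currently shows Failed
    Get-NetIntentStatus -ClusterName (Get-Cluster).Name |
        Where-Object { $_.IntentName -eq 'storage' -and $_.ConfigurationStatus -eq 'Failed' } |
        ForEach-Object { Set-NetIntentRetryState -Name $_.IntentName -NodeName $_.Host }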
