TL;DR: When a Windows host or Azure Local node is running out of TCP ports, this PowerShell one-liner groups all current connections by state and owning process, so you can quickly see which process is holding the most sockets.

Recommended action:

  1. Open an elevated PowerShell session on the affected host.

  2. Run:

    Get-NetTCPConnection | Group-Object -Property State, OwningProcess | Select-Object -Property Count, Name, @{Name="ProcessName";Expression={(Get-Process -PID ($_.Name.Split(',')[-1].Trim(' '))).Name}}, Group | Sort-Object Count -Descending

  3. Read the output top-down. Each row is a unique (State, OwningProcess) pair with a Count; the calculated ProcessName column resolves the PID to a name. The rows that matter most:

    • High Count in TimeWait state — usually indicates a process churning through outbound connections faster than the OS can recycle them.
    • High Count in Established — a process holding many concurrent live connections (could be legitimate for a busy service, or a leak).
    • High Count in CloseWait — the remote peer has closed its side, but the local process never closed its socket. Often points to an application-level socket leak.

  4. Cross-reference the offending ProcessName against expected behavior for that service. If it's unexpected, that process is your investigation target.
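In step 4, a high-count owner frequently turns out to be svchost.exe, which hosts multiple services, so resolving the PID to the actual service narrows the target. A minimal sketch, where 1234 is a placeholder PID:

```powershell
# Resolve a suspect PID to its process path and, for service hosts like
# svchost.exe, the Windows service(s) running inside it. 1234 is a placeholder.
$suspectPid = 1234
Get-Process -Id $suspectPid | Select-Object Name, Path, StartTime
Get-CimInstance Win32_Service -Filter "ProcessId = $suspectPid" |
    Select-Object Name, DisplayName, State
```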

Why:

Windows has a finite pool of ephemeral TCP ports (default range 49152–65535, i.e. 16,384 ports). When that pool is exhausted, new outbound connections fail with errors like WSAEADDRINUSE, or with symptoms like RPC failures, agent registration timeouts, or cluster comms instability. Grouping by state and process collapses thousands of raw Get-NetTCPConnection rows into a short list that points directly at the noisy process.
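
The arithmetic can be checked on the live system: compare ports currently in use against the configured dynamic range for a rough pressure gauge. A sketch, assuming the "Internet" TCP setting profile applies:

```powershell
# Rough ephemeral-port pressure check (assumes the 'Internet' setting profile).
$s = Get-NetTCPSetting -SettingName Internet
$rangeStart = $s.DynamicPortRangeStartPort
$rangeSize  = $s.DynamicPortRangeNumberOfPorts   # default 16384 (49152-65535)
$inUse = (Get-NetTCPConnection |
    Where-Object { $_.LocalPort -ge $rangeStart } |
    Select-Object -ExpandProperty LocalPort -Unique).Count
'{0} of {1} dynamic ports in use' -f $inUse, $rangeSize
```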

Going forward:

Re-run the command periodically (or capture it on a schedule) when investigating intermittent connectivity issues — port exhaustion is often transient and can clear before traditional troubleshooting starts. If you find a consistent offender, check the application's vendor documentation for connection-pool tuning or known socket-leak fixes before assuming an OS-level problem.
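
One way to capture it on a schedule is a simple timestamped snapshot loop. The output path and five-minute interval below are arbitrary placeholder choices:

```powershell
# Append a timestamped per-(State, PID) connection count to a CSV.
# C:\Temp\tcp-snapshots.csv and the 300-second interval are placeholders.
while ($true) {
    Get-NetTCPConnection |
        Group-Object -Property State, OwningProcess |
        Select-Object @{Name='Timestamp';Expression={Get-Date -Format o}}, Count, Name |
        Export-Csv -Path 'C:\Temp\tcp-snapshots.csv' -Append -NoTypeInformation
    Start-Sleep -Seconds 300
}
```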

Optional details:

The current ephemeral port range and dynamic allocation behavior can be inspected with:

Get-NetTCPSetting | Select-Object SettingName, DynamicPortRangeStartPort, DynamicPortRangeNumberOfPorts

For deeper analysis of a specific PID's socket usage, pair the diagnostic above with Get-NetTCPConnection -OwningProcess <PID> to list every individual connection that process holds.
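
Building on that, grouping a single PID's connections by remote endpoint often shows whether the sockets all point at one dependency. A sketch, where 1234 is again a placeholder PID:

```powershell
# Top remote endpoints held by one process; 1234 is a placeholder PID.
Get-NetTCPConnection -OwningProcess 1234 |
    Group-Object -Property RemoteAddress, RemotePort |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name
```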