

All Activity


  1. Yesterday
  2. Last week
  3. Introduction
Last month I posted about my CMG (Cloud Management Gateway) going AWOL (absent without leave) and staying broken. That was in response to a tweet from Panu, and I documented the sorry story here. I tried many things, including the hotfix that was available at the time of posting, but nothing helped. My CMG remained broken and stayed in a disconnected state. I had planned on removing the CMG entirely and recreating it, but time got the better of me.

Today, however, I got a notification from LinkedIn that someone had responded to one of my posts about that problem, so I took a look. Steven mentioned a hotfix rollup (HFRU), and it was a new one: https://learn.microsoft.com/en-us/mem/configmgr/hotfix/2409/30385346

The issues fixed by this hotfix rollup are pasted from that article below, with the important CMG bits highlighted in bold italic:

Issues that are fixed

- Internet based clients using the alternate content provider are unable to download content from a cloud management gateway or cloud distribution point.
- Deployment or auto upgrade of cloud management gateways can fail due to an incorrect content download link.
- Internet based clients can fail to communicate with a management point. The failure happens if the SMS Agent Host service (ccmexec.exe) on the management point terminates unexpectedly. Errors similar to the following are recorded in the LocationServices.log file on the clients:
  [CCMHTTP] ERROR INFO: StatusCode=500 StatusText=CMGConnector_InternalServerError
- The Configuration Manager console displays an exception when you check the properties of a Machine Orchestration Group (MOG). Membership of the MOG can't be modified; it must be deleted and recreated. The exception happens when the only computer added to a MOG doesn't have the Configuration Manager client installed.
- Hardware inventory collection on a client gets stuck in a loop if the SMS_Processor WMI class is enabled and the processor has more than 128 logical processors per core.
- If a maintenance window is configured with an Offset (days) value, it will fail to run on clients if the run date happens on the next month. Errors similar to the following are recorded in the UpdatesDeployment.log file:
  Failed to populate Servicewindow instance {GUID}, error 0x80070057
- The spCleanupSideTable stored procedure fails to run and generates exceptions on Configuration Manager sites using SQL Server 2019 when recent SQL cumulative updates are applied. The dbo.Logs table contains the following error:
  "ERROR 6522, Level 16, State 1, Procedure spCleanupSideTable, Line 0, Message: A .NET Framework error occurred during execution of user-defined routine or aggregate "spCleanupSideTable": System.FormatException: Input string was not in a correct format. System.FormatException: at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal) at System.Number.ParseInt64(String value, NumberStyles options, NumberFormatInfo numfmt) at Microsoft.SystemsManagementServer.SQLCLR.ChangeTracking.CleanupSideTable(String tableToClean, SqlInt64& rowsDeleted) ."
- Multiple URLs are updated to handle a back-end change to the content delivery network used for downloading Configuration Manager components and updates.
- The Configuration Manager console can terminate unexpectedly if a dialog contains the search field.

This gave me some hope, so I powered up that lab and took a look. As you can see, the hotfix rollup is ready to install.

Note: Yes, I'm aware one of my app secrets is about to expire; that's not part of this blog post or problem, so I'll ignore it for now.

Before installing the hotfix rollup, however, I checked the status of my CMG. And to my great surprise, after weeks (a month, even) of being broken and disconnected, it was now………. connected. Uh… what? Ok, that's weird. I ran the connection analyzer as well, just for giggles, and for the first time in a long time it passed with flying colours (for a quicker sanity check, a log scan like the sketch at the end of this post also works). As the server log files have unfortunately rolled over, I cannot see when it fixed itself, or whether that happened on the backend (the CMG in Azure) or on the CM server itself. Just as a reminder, my now-working CMG reached this state without me doing anything further after my original blog post, so it fixed itself after many weeks of being broken.

Installing the hotfix rollup
To wrap things up, I decided to install the hotfix rollup anyway, to see what, if anything, it would do. After some time it was done; however, as you can see, I still have two Configuration Manager 2409 entries listed (one was the early ring upgrade).

Well, that's it for this blog post. Thanks Steven for the heads up on the hotfix rollup; it didn't resolve my issue, which seemed to solve itself prior to the rollup.
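For anyone wanting to do a similar quick health check on their own CMG connection point, here's a minimal sketch of the log scan mentioned above. It assumes a default Configuration Manager installation path on the site server, so adjust the log location for your environment.

# Minimal sketch: scan the CMG connector log on the site server for recent errors.
# The path below assumes a default Configuration Manager install location.
$logPath = 'C:\Program Files\Microsoft Configuration Manager\Logs\SMS_CLOUD_PROXYCONNECTOR.log'

# List the most recent ERROR entries so you can see whether errors such as
# "Failed to build WebSocket connection" are still occurring.
Select-String -Path $logPath -Pattern 'ERROR' |
    Select-Object -Last 20 |
    ForEach-Object { $_.Line }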
  4. Earlier
  5. have you verified that the cert in your dp is updated also ? see step 5 here
  6. In our SCCM environment we use HTTPS only for communication. Two weeks ago I replaced the existing certificate for our IIS server with a new one, since the existing one was set to expire on 3/2. On Monday, clients started getting the following error message when trying to PXE boot for an image (see 1st image; I have removed the IP address information). When I looked at the SMSPXE.LOG I found the following (see 2nd image). I have verified that the HTTPS binding for the default website is using the correct certificate, and that certificate does not expire until 2026. The certificate was replaced before the old one was set to expire. I also ran NETSH HTTP SHOW SSLCERT IPPORT=0.0.0.0:443 and verified that the certificate hash matched the certificate thumbprint of the SSL certificate in the site binding. I have removed the old certificate and restarted the server, but clients are still getting that error message and the same WINHTTP_CALLBACK_STATUS error. I have spent the last two days reviewing different articles and suggestions regarding this error and I am not sure how to proceed. I would be very grateful for any suggestions. Thanks!
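For anyone comparing the binding and the certificate the way described above, a small script can take the guesswork out of it. This is only a sketch that automates the same netsh comparison already done manually; the expected thumbprint below is a placeholder.

# Minimal sketch: compare the certificate bound to 0.0.0.0:443 with the
# certificate in the local machine store. The expected thumbprint below is
# a placeholder value - replace it with the thumbprint of your new certificate.
$expectedThumbprint = '0123456789ABCDEF0123456789ABCDEF01234567'

# Read the hash currently bound to port 443 (same data as NETSH HTTP SHOW SSLCERT).
$binding   = netsh http show sslcert ipport=0.0.0.0:443
$boundHash = ($binding | Select-String 'Certificate Hash').ToString().Split(':')[-1].Trim()

# Confirm that certificate exists in LocalMachine\My and check its expiry date.
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object Thumbprint -eq $boundHash
"Bound hash: $boundHash"
"Expires:    $($cert.NotAfter)"
"Matches expected thumbprint: $($boundHash -eq $expectedThumbprint)"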
  7. 5 years later and this bug is still alive. I am on version 2309, and I still had to use the tip above to get the task sequence to display. So the 2012 thing can be put to bed; it's still a bug.
  8. ah great to hear it and thanks @Cerberus24 for posting your findings, i'm sure it'll help someone !
  9. @anyweb, thank you for your availability and troubleshooting. I have figured out the problem. The issue stemmed from the fact that I was primarily working with headless computers in my environment. Even though I was running MSTSC with the /admin or /console switch, I never properly checked if the established session was, in fact, a console session. This is why the policy would fail, and we would see errors in the logs when the policy attempted to launch the MBAM UI. I didn’t realize this until I revisited my VMs and accessed them through the console, instead of connecting via MSTSC. On existing, imaged devices, I was able to resolve the issue by manually interacting with a few devices using SCCM's remote control viewer, which establishes a console session. After logging in via SCCM’s remote viewer, the BitLocker policy executed and encrypted the drives without any issues. For anyone else in your community facing a similar problem, I addressed the headless computer issue by creating a task sequence during OS deployment. This ensures that devices are imaged with the appropriate BitLocker settings that align with my BitLocker policy. This way, all imaged devices are compliant from the start, and SCCM can still report compliance for these devices, since the policy settings are consistent and encryption is not required. Since I will mostly be accessing the headless computers via MSTSC, this solution works well for my environment.
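As a side note for anyone else working with headless machines: a quick way to check whether the session you're in really is a console session is shown below. This is just a general sketch and isn't tied to the BitLocker policy itself.

# Minimal sketch: check whether the current interactive session is the console session.
# SESSIONNAME is 'Console' for a console logon and 'RDP-Tcp#<n>' for a normal RDP session.
if ($env:SESSIONNAME -eq 'Console') {
    'This is a console session.'
} else {
    "This is NOT a console session (SESSIONNAME = $($env:SESSIONNAME))."
}

# Alternatively, list all sessions on the machine; the console session is marked as such.
query session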
  10. OK, I have added the domain to the hosts file and it started working. So, finally: so far, so good. Now the hard part: to replicate the issue from the prod ConfigMgr environment, where driver updates from the Lenovo catalog do not install at all on devices. The HP driver updates install with no issue, though. 😐
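For reference, the hosts-file change can be scripted as well. This is only a sketch: the IP address is a placeholder and the host name is just an example, since the exact domain isn't named above.

# Minimal sketch: append a hosts file entry (run from an elevated PowerShell prompt).
# The IP address and host name below are placeholder values - use your lab's values.
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.168.2.10`tcorp.contoso.com"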
  11. Hi Niall, I can ping from the host to the laptop and the other way around successfully. I can ping to and from the DC and CM VMs and the laptop, using the 192.168 IPs. The VMs additionally have the 10.x addressing on the private switch. When trying to join the domain I get an error: An AD DS domain controller for the domain corp.contoso.com could not be contacted.
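A domain-join failure like that with working ping is usually DNS rather than raw connectivity. A quick check along these lines can narrow it down; this is only a sketch, and the DC address used here is an example value.

# Minimal sketch: verify the client can resolve the domain and reach the DC's
# AD ports. The DC IP address below is an example - use your lab's DC address.
Resolve-DnsName corp.contoso.com

# LDAP (389) and Kerberos (88) reachability against the domain controller.
Test-NetConnection -ComputerName 192.168.2.10 -Port 389
Test-NetConnection -ComputerName 192.168.2.10 -Port 88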
  12. Be aware that if you install WDS on the C: drive, then want to move it to another drive, the smspxe.dll file will remain registered on the C: drive, and RemoteInstall will not have a folder called SMSBoot. I had a server where Config Mgr had been deployed to the C: drive but was supposed to be on E:. The roles were removed and "NO_SMS_ON_DRIVE.SMS" was put onto the C: drive, then Config Mgr was redeployed. There did not seem to be a way to get WDS to install and put SMSBoot into the RemoteInstall folder. Digging around in the registry, I found entries for it on the C: drive and edited them to point to the E: drive (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSPXE\Providers\SMSPXE). This resolved the issue. I suspect it will help those who are looking for a solution to an empty SMSBoot folder, too.
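If you run into the same thing, the provider entries can be inspected and corrected from PowerShell too. This is only a sketch of the registry edit described above: the value name and the new DLL path shown here are assumptions, so check what the inspection actually returns on your server (and export the key first) before changing anything.

# Minimal sketch: inspect and correct the SMSPXE provider path used by WDS.
# Back up / export this key before changing it.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSPXE\Providers\SMSPXE'

# Show the current values - one of them holds the full path to smspxe.dll.
Get-ItemProperty -Path $key

# Example correction: the value name 'ProviderDLL' and the E:\ path are assumptions;
# adjust both to match what the inspection above shows on your server.
Set-ItemProperty -Path $key -Name 'ProviderDLL' -Value 'E:\SMS_DP$\sms\bin\smspxe.dll'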
  13. hi @Martinez, in my #11 lab (domain controller) I have a DHCP server running, so any device that connects into that lab will receive a valid IP address. I hope that helps, cheers niall
  14. Hi Niall, As always, great guide! I am trying this on a laptop with the MECM lab from Microsoft [Windows 11 and Office 365 Deployment Lab Kit]. I have connected the lab and client device the same way. How did you address the devices: the same 192.168 subnet on both, as in the screenshot, and type External, right?
  15. Introduction
Panu Sakku posted the following tweet recently, asking if anyone had noticed their CMG (Cloud Management Gateway) was broken after it got a recent update. I checked my lab, and sure enough, it was also dead in the water and could not start. After checking the logs I replied to Panu.

The errors in the SMS_CLOUD_PROXYCONNECTOR.log file (shown in red) were many, and here's a paste of some of them to help others find out how to resolve this problem.

ERROR: Web socket: Failed to online with Proxy server CLOUDATTACHCMG.AZURENOOB.COM:443. System.AggregateException: One or more errors occurred. —> System.Net.WebSockets.WebSocketException: Unable to connect to the remote server —> System.Net.WebException: Unable to connect to the remote server —> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 20.126.223.196:443~~ at System.Net.Sockets.Socket.InternalEndConnect(IAsyncResult asyncResult)~~ at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)~~ at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)~~ — End of inner exception stack trace —~~ at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)~~ at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)~~— End of stack trace from previous location where exception was thrown —~~ at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()~~ at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)~~ at System.Net.WebSockets.ClientWebSocket.<ConnectAsyncCore>d__21.MoveNext()~~ — End of inner exception stack trace —~~ at System.Net.WebSockets.ClientWebSocket.<ConnectAsyncCore>d__21.MoveNext()~~ — End of inner exception stack trace —~~ at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)~~ at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.WebSocketConnection.Online()~~—> (Inner Exception #0) System.Net.WebSockets.WebSocketException (0x80004005): Unable to connect to the remote server —> System.Net.WebException: Unable to connect to the remote server —> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 20.126.223.196:443~~ at System.Net.Sockets.Socket.InternalEndConnect(IAsyncResult asyncResult)~~ at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)~~ at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)~~ — End of inner exception stack trace —~~ at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)~~ at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction,

and

ERROR: Failed to build WebSocket connection 1800a2f4-5e7c-4aa7-9c5d-0b4027ab939d with server CLOUDATTACHCMG.AZURENOOB.COM:443.
Exception: System.Net.WebException: Failed to online~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.WebSocketConnection.Online()~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionBase.Start()~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionManager.MaintainConnections()

and

ERROR: Failed to build HttpV2 connection 1800a2f4-5e7c-4aa7-9c5d-0b4027ab939d with server CLOUDATTACHCMG.AZURENOOB.COM:443. Exception: System.Net.WebException: Unable to connect to the remote server —> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 20.126.223.196:443~~ at System.Net.Sockets.Socket.InternalEndConnect(IAsyncResult asyncResult)~~ at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)~~ at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)~~ — End of inner exception stack trace —~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.HttpConnectionV2.SendInternal(HttpMethod method, String path, String payload, Int32& statusCode, Byte[]& responsePayload)~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.HttpConnectionV2.SendInternal(HttpMethod method, String path, Byte[] payload)~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.HttpConnectionV2.Online()~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionBase.Start()~~ at Microsoft.ConfigurationManager.CloudConnection.ProxyConnector.ConnectionManager.MaintainConnections()

Shortly after I replied, Johnny Radeck posted an update: he solved it by uninstalling an extension and then making a change to the CMG properties. But let's see why he did that.

If you go to the Azure portal and locate your CMG, you'll see it has a Failed status (1). If you click Restart (2), after a few minutes it will be in a failed state again, but you'll get a notification (3) explaining what failed.

Failed to restart virtual machine scale set
Failed to restart virtual machine scale set ‘cloudattachcmg’. Error: VM has reported a user failure when processing extension ‘InstallCMG’. Please correct the error and try again. (publisher ‘Microsoft.Compute’ and type ‘CustomScriptExtension’). Error code: ‘2’. Error message: ‘Command execution finished, but failed because it returned a non-zero exit code of: ‘1”. Detailed error: ”. More information on troubleshooting is available at https://aka.ms/VMExtensionCSEWindowsTroubleshoot.

So it's clear that Azure has problems starting the CMG due to "VM has reported a user failure when processing extension 'InstallCMG'." I wonder what the 'user failure' means? Let's try Johnny's advice then.

Fixing the problem ?
Click on Settings, select Extensions + applications and then place a checkmark next to InstallCMG; that brings up its properties and you can now select Uninstall (a scripted alternative is sketched at the end of this post). The settings in that extension are listed here, just to see if they change after the fix.
{
  "commandToExecute": "powershell.exe -File cmgsetup.ps1 -storageAccountName cloudattachcmg -storageEndpointSuffix core.windows.net -serviceName cloudattachcmg -serviceCName cloudattachcmg.azurenoob.com -certStoreName My -certThumbprint 2D2F89A0F44335C0D57678DA5AC80663660B0250 -crlAction enable -tls12Enforced True -nodeName localhost -bDisabledSharedKey True",
  "fileUris": [ "https://cloudattachcmg.blob.core.windows.net/stageartifacts/cmgsetup.ps1" ]
}

After a while it will be uninstalled and you'll get a notification telling you that it's done. Next, change the Client revocation setting, and change the maintenance window to be in the future (otherwise you'll get an error), before clicking Apply. A quick look at the CloudMgr.log reveals it's updating the CMG, and the status of the CMG in SCCM changes to Upgrading, while in Azure the CMG has a status of Updating. After a while, everything should hopefully be fixed.

Note: If it works for you, don't forget to turn the client revocation option back on again.

Oops
In my case, however, no matter how many times I tried, my CMG remained well and truly broken. It's still broken. I'll update this post if/when I come up with a solution that works for me, but for now this is just where I'm at with this problem, and I'm blogging it because I've spent so many hours on it already.
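For anyone who prefers scripting the extension removal instead of clicking through the portal, here is a sketch using the Az PowerShell module. This is my own alternative to the portal steps above, not part of Johnny's fix; the resource group name is a placeholder, and the scale set name simply matches the CMG name used in this post.

# Minimal sketch: remove the InstallCMG extension from the CMG's virtual machine
# scale set using Az PowerShell. Requires the Az.Compute module and a
# Connect-AzAccount session.
$resourceGroup = 'cloudattachcmg-rg'      # placeholder - use your CMG's resource group
$vmssName      = 'cloudattachcmg'

# Get the scale set, drop the extension from its model, then push the change.
$vmss = Get-AzVmss -ResourceGroupName $resourceGroup -VMScaleSetName $vmssName
Remove-AzVmssExtension -VirtualMachineScaleSet $vmss -Name 'InstallCMG' | Out-Null
Update-AzVmss -ResourceGroupName $resourceGroup -VMScaleSetName $vmssName -VirtualMachineScaleSet $vmss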
  16. if you have access to teams we can do a session to talk about this, ping me there niall AT windowsnoob DOT com, i'm based in Europe.
  17. I replicated the same process on my test VM (decrypting with the policy and then re-encrypting using the policy), but for some reason I'm encountering the same error message as I did on the other devices. Initially, I deployed the BitLocker policies in a smaller, controlled environment with VMs, and everything worked fine. I'm not sure what might be causing this issue now. The only difference I can think of is that the previous environment was running with self-signed certificates, whereas now it's running with proper certificates.
- Device status before the decryption policy is deployed.
- Device status after the decryption policy was deployed and enforced.
- Trying to re-encrypt after a successful decryption.
- BitlockerManagementHandler.log file after the encryption policy is deployed.
- After rebooting the VM, the next console login via the MSTSC.exe client shows the following:
It's important to mention that I was able to successfully encrypt and decrypt this same VM while the system was running under self-signed certificates.
  18. The endpoints are running Windows 11 24H2. The server infrastructure for the MP and DP is running Windows Server 2019 (1809).
  19. "Apologies for not seeing your reply sooner. This doesn’t apply to the current environment, as there was no previous MBAM infrastructure. However, I did decrypt the devices using the decryption policy, since the machine was imaged and BitLocker had been enabled with a weaker encryption algorithm. The decryption policy worked flawlessly, but as previously mentioned, the encryption policy is not functioning as expected. I have made sure that the targetted devices are no longer targetted by the decryption policy.
  20. @Cerberus24 what client OS are you testing this on, as a matter of interest? I'm happy to do a remote session to compare my lab to yours but it would be good to get more info about your setup
  21. and it's encrypting without any interaction from me
  22. before getting the bitlocker policy: i added the device to my bitlocker policy collection, and the client has determined it is 'non compliant' for Encryption
  23. ok imaging done, device is NOT encrypted (as I wanted), next up, i'll add it to a collection targeted by BitLocker Encryption policy and see what happens
  24. i'm imaging a VM now and will let it complete, once done i'll drop it un-encrypted into a collection targeted with BitLocker policy, i'll share my results here once done
  25. i'll double check in my https 2409 lab this evening and report back. have you verified that these devices are not targeted by any gpo from your 'old' mbam infrastructure ?
