Everything posted by Rocket Man
-
Cannot connect to site SCCM 2012 R2 - Server 2008 R2
Rocket Man replied to OliAdams's topic in Configuration Manager 2012
Also check Event Viewer to see if it throws up any more info. Check the smsexec.log and sitecomp.log logs for clues, and make sure that all CM services are running as well.
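For a quick check of those services from an elevated command prompt on the site server, something like the following should do it. SMS_EXECUTIVE and SMS_SITE_COMPONENT_MANAGER are the Windows service names behind smsexec and sitecomp; both should report a state of RUNNING:

```
sc query SMS_EXECUTIVE
sc query SMS_SITE_COMPONENT_MANAGER
```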
Cannot connect to site SCCM 2012 R2 - Server 2008 R2
Rocket Man replied to OliAdams's topic in Configuration Manager 2012
Also: maybe you have done this already, but the very first thing to check when your console won't fire up is that the SQL service is actually running; if it is stopped, the console will not connect. If it is running, here is a link describing problems similar to yours. It has a troubleshooting section, and the first step is easy to carry out as it just requires a WBEMTEST against the site server. -
WDSNBP started using DHCP referral. Are you working across VLANs? Have you configured DHCP options (unsupported by Microsoft)? If either of the above is true, you will have to configure IP helpers / DHCP relays on your switches. Also, I see it is asking you to press F12 for network boot; I assume you have pressed F12 at this stage?
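For the cross-VLAN case, the usual fix is an IP helper on the routed interface for the client VLAN, forwarding DHCP broadcasts to both the DHCP server and the PXE-enabled SCCM DP. A Cisco IOS sketch, where the VLAN number and addresses are placeholders for your own environment:

```
interface Vlan10
 ip helper-address 10.0.0.10   ! DHCP server
 ip helper-address 10.0.0.20   ! SCCM DP running WDS/PXE
```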
-
TBH the quickest way to check this is to see which public DNS servers are specified as forwarders in DNS. These should be set to your ISP's DNS servers (best practice). In some cases where this problem occurred for me, sites were using OpenDNS as their forwarders; OpenDNS has a content-filtering management portal, and Microsoft activation was blocked because of a category specified in the disallowed list.
-
Have you tried creating a new domain-join account? I know it connects the first time round, but just to be sure, test whether you can actually log into a system on the domain with this account (maybe the password has expired, or it is prompting for a new password after expiry). Hopefully that is the problem. One other thing: it is normal behaviour that when you edit a task sequence after the initial domain verify, it does not connect the second time; if you re-enter the password in the password field it should connect. I just tested this and got the same symptoms, so I don't think that part has anything to do with the problem you're having.
-
How to Push a Package with Windows Compatibility Mode
Rocket Man replied to julius's topic in Configuration Manager 2012
Is it that it has to be installed using compatibility mode, or that the shortcut has to be configured to run in compatibility mode after it is installed? If it is the latter, you could simply deploy your app as normal, then copy a shortcut that has already been configured to run in compatibility mode out to all users' desktops, overriding the original shortcut. - 2 replies
-
Tagged with: SCCM 2012 R2, package (and 1 more)
-
I too use VL activation with a run command line. The only times I ever experienced something like this were when the key had expired, and when some remote sites in the hierarchy used OpenDNS as their public DNS for content filtering and had blocked a category in the content-filtering system that prevented the clients from communicating with Microsoft activation. Once the content filter was lifted to allow this, everything was good again.
-
Just reporting back... the included collections seem to be working fine. When a user is added to, say, the staff included collection, cloudusersync.log states "Total received users to add from SCCM = 1, Total successfully added users to cloud = 1". Just one query: when you add users to this collection and they sync up as users allowed to enrol devices, are they supposed to have the Windows Intune tick box ticked in their accounts in the Intune management portal? The reason I ask is that 2 users were manually licensed for Intune using the Intune portal before integration took place, and those 2 objects have the tick box ticked. The user I added using SCCM does not have the box ticked, and the Intune trial licence count still shows only 2 used. Maybe it takes time? Thanks
-
Thanks Niall. I would like to think it would work just like device collections do with included collections, unless there is some weirdness with the Intune connector not liking it. I will report back early next week when implementation takes place... and if it doesn't work, I guess I can populate the global collection with users directly and remove the included collections, I hope. In the meantime, if anyone else has tried this, or has any thoughts on the proposed collection design as to why it may or may not work, please do leave a comment. Thanks
-
Hi Niall. Quick question in relation to the Intune user collection that is needed for Intune-ready users. Is it possible to create a global collection (Windows Intune users), then create 2 more collections, i.e. staff and students, include these in the global collection, and populate the included collections accordingly with staff and students? I ask simply for cosmetic management from within the CM console. Thanks.
-
You don't delete these default unknown computer objects. Have you searched the All Systems collection for unknown objects (they normally show up with failed deployments)? If you are not naming systems during OSD and they complete successfully, they will be named something like MININTXXXXX. If you find any and it is safe to delete them, do so; then you can PXE boot them again as unknown systems. Generally, "PXE Boot aborted, booting to next device" means you have to clear the PXE flag, as SCCMentor mentioned. But judging by the workspace in your snippets, I think all your deployments so far are unknown-computer deployments; you haven't actually deployed a TS out to a known (already OSD'd) collection via PXE, as there are no custom collections designed yet, so until then you will not get the PXE flag set.
-
It states he has obviously deployed it out to some sort of collection, as he has specified the media-and-PXE-only option, so you won't get this until you actually deploy the TS. It sounds very similar to your problem, but as you say, you have dependencies rather than supersedence. They do essentially the same overall job, upgrading versions, so maybe this applies to dependencies also? Is this a bug that needs to be addressed? Even if it is not a bug, it still needs addressing, as I am sure a lot of SCCM users deploy task sequences with application steps in similar scenarios (supersedence/dependencies). What version of SCCM are you running? Maybe this has been sorted in R2. I am running that version but do not use apps in OSD; I just use the standard, reliable package model, and only use the app model for the self-service application portal. I had way too many unsuccessful deployments when trying the app model with OSD on CM2012 non-service-pack, and I haven't tried it with R2 since, so I can't be sure whether there has been any improvement.
-
This should be the normal behaviour. Is this happening on all systems contained within the collection you have advertised the TS to, or only on systems that have been recently OSD'd (or OSD'd at any stage) using this TS? If it is the latter, then maybe the application is stuck and continually trying to run. Check Task Manager on the systems to see if there is any sign of the app running; reboot them and check Task Manager again to see if it comes back. If so, the application is not working as it should during OSD and maybe does not know it has finished properly.
-
Not sure of your asset design, but you could create collections based on roles, for example: IIS servers, Print servers, DHCP servers, and so on. If your systems are bare-metal systems, you could use the import computer information method and specify which collection to add them to.

At the collection level you could then create collection variables. For example: the IIS servers collection gets a collection variable like InstallWebServerRole set to True; the Print servers collection gets a collection variable like InstallPrintManagement set to True; and so on.

Then, in your task sequence, create groups, and at the root of each group put a condition on a task sequence variable matching the collection variable. For example, create a group in your TS called Deploy IIS Role, with the steps needed to actually configure this role inside it, and on the root (the group folder itself) add a condition that the task sequence variable InstallWebServerRole equals True. When you deploy this task sequence out to the collection, it will evaluate the conditions, see that the imported system is indeed a member of the IIS servers collection and that the conditions match, and the IIS role will get installed on that system.

The beauty of this is that all the groups (roles to be installed) can be attached to the one TS, and the collection and task sequence variables look after what gets installed. There is probably a more automated way of achieving what you are looking for, but if you don't mind importing bare-metal systems into the desired collections, then this should work a treat and give you the outcome you want.

TBH I am not sure that keying off the computer name will actually work, as I think SCCM still considers the system being OSD'd to have the generic MININTxxxxx name right up until the end of the deployment, by which point it is too late!
This is of course for bare-metal deployments anyway!
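The collection-variable / TS-variable matching described above can be sketched as follows. This is purely an illustration of the matching logic, not ConfigMgr API code; the collection names and variable names (InstallWebServerRole, InstallPrintManagement, InstallDHCPRole) are just the examples from this post:

```python
# Collection variables set at the collection level (collection -> variables).
collection_variables = {
    "IIS servers":   {"InstallWebServerRole": "True"},
    "Print servers": {"InstallPrintManagement": "True"},
    "DHCP servers":  {"InstallDHCPRole": "True"},
}

# Task sequence groups, each gated by a condition on one TS variable at its root.
ts_groups = [
    ("Deploy IIS role",   "InstallWebServerRole"),
    ("Deploy Print role", "InstallPrintManagement"),
    ("Deploy DHCP role",  "InstallDHCPRole"),
]

def groups_to_run(machine_collections):
    """Return the TS groups whose root condition evaluates true for a machine,
    given the collections the imported system is a member of."""
    merged = {}
    for coll in machine_collections:
        merged.update(collection_variables.get(coll, {}))
    return [name for name, var in ts_groups if merged.get(var) == "True"]

# An imported system placed only in the IIS servers collection
# matches only the IIS group of the shared task sequence.
print(groups_to_run(["IIS servers"]))  # ['Deploy IIS role']
```

The point of the sketch is the design choice: one task sequence carries every role group, and membership of a role collection (via its collection variable) decides which groups actually execute.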
- 7 replies
-
Tagged with: Deploy Server, SCCM 2012 R2 (and 1 more)
-
Do you mean you captured the image with this folder already present before the capture process? If so, how did you build the image initially, via SCCM or manually using an ISO? If via SCCM, this is not the way to do it when using only the capture process; it is fine to do it via SCCM if it is a build-and-capture process, as that will strip the client, among other things, before the capture. If it was initially created manually, left on a workgroup, etc., and then captured using the capture media process, then this folder could not have been present. I wouldn't personally go changing any keys in the registry and re-capturing. Either create a new image manually and use capture media, or use build and capture to automate it, to get your golden image for mass deployment.

By any chance, did you activate the initial image prior to capture and then, when deploying back out via task sequence, specify a product key again in the task sequence? If so, don't activate the initial image prior to capturing; let the task sequence embed the key in the image, and then use a run command line to activate it anywhere in the task sequence after the WinPE environment.

Also, can you post a snippet of your actual task sequence? Are there any customisations in it: conditions, task sequence variables, etc.? You say the task sequence never bombs out with an error and everything seems fine apart from this folder. Is the agent reporting back properly and fully provisioned?
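For that activation step, a common pattern is a Run Command Line task placed after the full OS is laid down (i.e. after the WinPE phase) that calls slmgr. A sketch, assuming the key is already embedded by the task sequence and KMS/MAK activation is reachable from the client:

```
cscript.exe //B %windir%\system32\slmgr.vbs /ato
```

The //B switch runs the script silently so no dialog interrupts the task sequence.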
-
SCCM 2012 R2 - Software Update folders deleting
Rocket Man replied to Kazi's topic in Configuration Manager 2012
Have you tried creating a new updates source folder, perhaps on a different partition, or at least outside the one you are using now? Just to rule out anything malicious running at the root of your current source directory that could be removing these directories. I have never seen this before, but here is something kind of similar... Link. EDIT: just seen that you have already seen this link. -
It may seem like it is successful, but the fact that the SMSTaskSequence folder still exists on the OS partition means it did not emit the success message ID, which is 11171. The failure message ID is 11170. You could check the deployment in the compliance/monitoring node to see which message ID it is actually emitting. Also, a snippet of your task sequence would be beneficial.
-
I had to do this some time back when DHCP was not active. As far as I remember, Capture Network Settings will gather the NIC's info and Apply Network Settings will re-apply the configuration.
-
Strange PXE behaviour after upgrade to SP1
Rocket Man replied to Rocket Man's topic in Configuration Manager 2012
Okay, just an update on this if anybody else runs into a similar problem. After some study of the RemoteInstall folder, in particular the SMSBoot folder and its x86 and x64 directories, it became apparent something was not right. Inside these folders are the required files, i.e. pxeboot.com, wdsnbp.com, etc. The pxeboot.com, pxeboot.n12 and wdsnbp.com files had a completely different time stamp from the rest of the files in the directories (light bulb time). I removed the boot.wim files from the DP and cut all the files from the SMSBoot/x86 and SMSBoot/x64 directories out to a temp folder (just in case). I then restarted the WDS service and redistributed the boot WIM files back out. After this procedure, all files in these directories had the same time stamp, the correct time stamp! Now the strange behaviour is gone and all is good again, though with a few less hairs on the head. Apexes, thanks for the help anyway; your idea is something I may indeed need for future reference. -
Nice pic lol. Good to hear it's sorted. How did you get the image across, standard distribution or a prestaged content file? Also, I see you are downloading everything first. If you specify in the properties of each package associated with your task sequence to copy it to the content share, you will get an option in the deployment to access content directly from the DP, which will make the overall deployment considerably less time-consuming and also less taxing on the network. Doing this also answers your question: "If you could tell me where to look to confirm where the CM00005.wim is; I see ini files and what look like hashes for it, but not the whole 8GB wim." You will then find all the distributed files in their entirety in the SMSPKGC$ directory on the DPs, where C is the drive the content folders are installed on. If the shares are on an E drive partition, the share will be named SMSPKGE$. Hope this answers your questions.
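That share-naming convention is mechanical enough to express as a one-liner. Illustrative only; the drive letter is simply whichever drive the DP content folders were installed on:

```python
def dp_package_share(drive_letter):
    """Default ConfigMgr package share name for a DP content drive,
    e.g. 'C' -> 'SMSPKGC$', 'E' -> 'SMSPKGE$'."""
    return "SMSPKG{}$".format(drive_letter.upper())

print(dp_package_share("C"))  # SMSPKGC$
print(dp_package_share("e"))  # SMSPKGE$
```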
-
This is the correct way. Yes, again this is correct. The packages will be in the content share at the remote DP, so any client looking for content at that boundary will look to its local DP for the content.

Not sure if I am getting you correctly here, but the UNC path for the image file (the data source) should always be the sources folder at the central site, and from there you distribute it out as a package to your remote content shares, which are created during the DP installation/configuration. How do you specify in the TS to point to a remote DP for content? You can specify not to look for content at another DP if the local DP has not got the content, which I think is what is happening with yourself at the moment. A single TS can do many remote sites, as long as the content attached to the individual tasks of the sequence is present at the remote DPs.

No, you should never move packages manually, only distribute them, as there are hash entries associated with each distribution and recorded for SCCM's reference. Initially, when I was only starting to use SCCM, I had a few remote DPs and distributed a small standard image out to them successfully, which created a folder named after its package ID in the content folder of the DPs. I then had a huge golden image, approximately 16GB in size, so what I did was manually copy this image into the smaller image's content folder at all DPs and change the image file in the TS to point to this imported larger image. I thought this would trick SCCM, and in a way it did, and yes it worked, but after some research I found that this is not a good thing to do, because of those recorded hash entries. We now have sufficient links between sites, so distributing large images is not a problem any more.

Yes it will. There are some blogs on the net about it if you look for them. I never had to do this, or even tried it, but from what I understand you choose the package on the site server and create a prestaged content file for it. The content file will then contain the larger part of the package. You then get this content file to the remote DP by whatever means (robocopy would do if the link can carry it) and run a command that extracts the binaries of the package, after which it links back up to the central site (don't hold me to this, but it is more or less along those lines).

Have you specified on this package to copy to a content share? If so, does this CCM00055 folder exist, and does it also contain the file(s) associated with it, on the DP content share at this location?
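For reference, the extraction step on the remote DP uses the ExtractContent tool that ships with the distribution point. From memory of the SCCM 2012 documentation, the usage is roughly as below; verify the switches against your own documentation before relying on it, and the .pkgx path is a placeholder:

```
ExtractContent.exe /P:D:\PrestagedContent\content.pkgx /S
```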