Automating Linux in Azure

Automation is one of my major areas of work, and most of my automation revolves around System Center Orchestrator. I also do a fair amount of work in Azure and thought it was time to dust off my Automation account and do something entertaining.

The image below is a PowerShell Workflow inside an Azure Automation Runbook that is connecting to a Linux server (in Azure) and reading the contents of a file.

[Image: AzureAutomation-Linux]
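
If you're curious what the guts of something like that look like, here is a minimal sketch, assuming the Posh-SSH module has been imported into the Automation account and a credential asset exists for the Linux account (the asset name and the file path below are made up, not what I actually used):

    workflow Get-LinuxFileContent
    {
        param
        (
            [Parameter(Mandatory = $true)]
            [string]$ComputerName,

            [Parameter(Mandatory = $true)]
            [string]$FilePath
        )

        # Pull the stored credential asset for the Linux account (asset name is made up)
        $Credential = Get-AutomationPSCredential -Name 'LinuxCred'

        # Workflows can't run every cmdlet natively, so do the SSH work in an InlineScript
        InlineScript
        {
            # Open an SSH session to the Linux VM
            $Session = New-SSHSession -ComputerName $Using:ComputerName -Credential $Using:Credential -AcceptKey

            # Read the file and write its contents to the Runbook output stream
            $Result = Invoke-SSHCommand -SessionId $Session.SessionId -Command "cat $Using:FilePath"
            $Result.Output

            # Clean up the session
            Remove-SSHSession -SessionId $Session.SessionId | Out-Null
        }
    }

Import that as a Runbook, publish it, and you can kick it off with just the server name and file path as parameters.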

Oh the possibilities…

DSC + WinRM + GPO

Ok, so I’m working on Desired State Configuration at work, and a long while ago I created a GPO to manage our WinRM settings. That GPO controls how WinRM behaves, and it was needed to make PowerShell remoting just work on our systems.

Fast forward to today: I’m joining some new servers to the domain, I copy my configuration down, run Start-DscConfiguration, and receive a nasty error:

The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol.

That’s no good; it appears DSC is unhappy with WinRM, so I run through the usual set of commands.

Enable-PsRemoting -Force

Still get the error

Disable-PsRemoting -Force; Enable-PsRemoting -Force

Still get the error

WinRM quickconfig

Still get the error. See, WinRM is configured; it turns out that while it’s configured just enough for PowerShell remoting to work, it’s not configured enough for DSC to work. I found this thread on TechNet:

https://social.technet.microsoft.com/Forums/systemcenter/en-US/d3286893-3d3c-4991-a7ba-a9fd07e58288/scvmm-2008-r2-install-error-2927-0x80338113?forum=virtualmachingmgrsetup

The context is Virtual Machine Manager, but the errors are the same. It linked to this TechNet blog article:

http://blogs.technet.com/b/scvmm/archive/2011/09/23/vmm-2012-rc-understanding-the-hyper-v-host-addition-operation-if-window-remote-management-winrm-is-configured-using-group-policy-gpo-settings.aspx

The good part is at the bottom, under supported configurations. In my GPO I had only the HTTPS listener enabled, so I enabled the legacy listener as well. Additionally, I did NOT have the IPv6 filter set to ‘*’, so I fixed that too.

The really confusing thing for me was where to find the IPv6 setting, “Allow automatic configuration of listeners”; it appeared that I didn’t have that setting in my GPO. Another quick search turned up this TechNet thread:

https://social.technet.microsoft.com/Forums/en-US/e4aa3b95-608f-46c3-af06-06f57b02b455/why-dont-i-have-the-allow-automatic-configuration-of-listeners-group-policy-option-for-winrm?forum=winserverGP

I don’t have it because it was renamed to ‘Allow remote server management through WinRM’. I tried to comment on the article, but I think it’s too old, so I’m writing it up here since I will most likely run into this again at some point.
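
For what it’s worth, a quick way to see what listeners the GPO actually produced (none of this is DSC-specific, it just shows whether the HTTP endpoint DSC wants is really there):

    # List the WinRM listeners the GPO created
    winrm enumerate winrm/config/listener

    # The same information through the WSMan provider
    Get-ChildItem -Path WSMan:\localhost\Listener | Format-Table -AutoSize

    # Confirm the local endpoint answers (WinRM HTTP defaults to port 5985)
    Test-WSMan -ComputerName localhost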

So, here we are, another blog posting down.

DISM…’because reasons’

I don’t know why this is a thing, it shouldn’t be a thing. I’m going to post a link to the page on TechNet, and then just paste in the content.

https://technet.microsoft.com/en-us/library/dn482069.aspx

You can use the Deployment Image Servicing and Management (DISM) command-line tool to create a modified image to deploy .NET Framework 3.5.

Important
For images that will support more than one language, you must add .NET Framework 3.5 binaries before adding any language packs. This order ensures that .NET Framework 3.5 language resources are installed correctly in the reference image and available to users and applications.

In this topic: installing .NET Framework 3.5 from Windows Update, adding it to an offline image, and installing it from installation media without Internet access.

To install .NET Framework 3.5 from Windows Update:

  1. Open a command prompt with administrator user rights (Run as Administrator) in Windows 8 or Windows Server 2012.
  2. To install .NET Framework 3.5 feature files from Windows Update, use the following command:
    DISM /Online /Enable-Feature /FeatureName:NetFx3 /All

    Use /All to enable all parent features of the specified feature. For more information on DISM arguments, see Enable or Disable Windows Features Using DISM.

  3. On Windows 8 PCs, after installation .NET Framework 3.5 is displayed as enabled in Turn Windows features on or off in Control Panel. For Windows Server 2012 systems, feature installation state can be viewed in Server Manager.

 

To add .NET Framework 3.5 to an offline image:

  1. Run the following DISM command (image mounted to the c:\test\offline folder and the installation media in the D:\ drive) to install .NET 3.5:
    DISM /Image:C:\test\offline /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs

    Use /All to enable all parent features of the specified feature.

    Use /LimitAccess to prevent DISM from contacting Windows Update/WSUS.

    Use /Source to specify the location of the files that are needed to restore the feature.

    To use DISM from an installation of the Windows ADK, locate the Windows ADK servicing folder and navigate to this directory. By default, DISM is installed at C:\Program Files (x86)\Windows Kits\8.0\Assessment and Deployment Kit\Deployment Tools\. You can install DISM and other deployment and imaging tools, such as Windows System Image Manager (Windows SIM), on another supported operating system from the Windows ADK. For information about DISM-supported platforms, see DISM Supported Platforms.

  2. Run the following command to look up the status of .NET Framework 3.5 (offline image mounted to c:\test\offline):
    DISM /Image:c:\test\offline /Get-Features /Format:Table

    A status of Enable Pending indicates that the image must be brought online to complete the installation.

You can use DISM to add .NET Framework 3.5 and provide access to the \sources\SxS folder on the installation media to an installation of Windows® that is not connected to the Internet.

  1. Open a command prompt with administrator user rights (Run as Administrator) in Windows 8 or Windows Server 2012.
  2. To install .NET Framework 3.5 from installation media located on the D: drive, use the following command:
    DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:d:\sources\sxs

    Use /All to enable all parent features of the specified feature.

    Use /LimitAccess to prevent DISM from contacting Windows Update/WSUS.

    Use /Source to specify the location of the files that are needed to restore the feature.

    For more information on DISM arguments, see Enable or Disable Windows Features Using DISM.

On Windows 8 PCs, after installation, .NET Framework 3.5 is displayed as enabled in Turn Windows features on or off in Control Panel.
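
One thing I’ll add that the article doesn’t mention: on Windows 8 and Windows Server 2012 the same installs can be done from PowerShell, which is handy if you’re scripting builds. A quick sketch, using the same D:\sources\sxs source as above:

    # Windows Server 2012: Server Manager's cmdlet, pointing -Source at the SxS folder
    Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs

    # Windows 8 / Server 2012: the DISM PowerShell module equivalent of the commands above
    Enable-WindowsOptionalFeature -Online -FeatureName NetFx3 -All -LimitAccess -Source D:\sources\sxs

    # Check the state afterwards
    Get-WindowsOptionalFeature -Online -FeatureName NetFx3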

Bulk URL Monitoring

How did I not know about this before? I’m working on creating a Management Pack for Advanced Group Policy Management and was hunting for the utility to seal an MP when I found this tool buried in the Tools folder. I’m going to link the article I found on using it here, and also scrape the text in case it goes away.

Original Article

Bulk URL Manager

The Bulk URL Editor was introduced in SCOM 2007 R2.  I don’t often see this tool used, as most customers don’t even know it exists or don’t understand the benefits of it.  The first benefit of the Bulk URL Editor is that it scales to thousands of URLs.  If you were to try to create hundreds of URLs with the Web Application Templates it won’t work.  I have tried this in the past, and there are so many workflows running at the same time that the agent fails and you end up not monitoring anything.   The second benefit of the tool is that you can add a bunch of websites in a few minutes.

The Bulk URL Editor is not very intuitive, but once you understand how to use it the process is pretty easy.  If you haven’t used the tool I highly recommend giving it a try.

TechNet has some good documentation here. http://technet.microsoft.com/en-us/library/dd788987.aspx

To use the Bulk URL Editor I copy the tool from the installation media.  The file is stored in the “SupportTools\AMD64” directory

On my computer that has the SCOM console installed, I copy the “BulkUrlManager.exe” file to “C:\Program Files\System Center 2012\Operations Manager\Console” (If you copy it anywhere else it won’t work)

I double click on the “BulkUrlManager.exe” file.

On the Connect to Server dialog box I type in the name of my Management Server and click connect

I click the New Icon

I then type in a name for my website template.  I choose “Standard URL Monitoring”

Now I click Create a new Management Pack.

I then give the management pack a name. I choose “BUE Website Monitoring” and click OK

I click OK on the Add New Template Screen

On the next dialog box I click Yes, then OK

Under Templates I click the template I created called “Standard URL Monitoring”

Now I click Add

Now I simply add the URLs that I want to monitor (Note: You need to add http:// or https:// or it will fail)
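
(A quick aside from me, not the original article: if you’re pasting in a long list, a bit of PowerShell can make sure every entry already has a scheme on it first. The file names here are made up.)

    # Read a list of URLs, one per line, and prefix anything missing a scheme
    Get-Content -Path .\urls.txt |
        ForEach-Object {
            if ($_ -notmatch '^https?://') { "http://$_" } else { $_ }
        } |
        Set-Content -Path .\urls-normalized.txt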

I click OK and I see all of my URLs are attached to my Standard URL Monitoring Template

Now I simply hit save. I click yes to save the changes to the selected web template.

I am done in the Bulk URL Editor for now. But I am not finished setting up my URL monitoring.  I need to select where I want the URLs to be monitored from.

I launch the Operations Manager Console and go to the Authoring screen.

I expand out Web Application and right click to refresh the screen.

Now I see the website I created using the Bulk URL editor

Under the Actions pane, Custom Actions I click Edit web application settings

My website opens and all I see is a string of text that looks like an ugly variable.  (Don’t panic, this is how it is supposed to look.)

Now under the Actions Pane, under Web Application I click Configure settings

I click the Watcher Node tab and select the server I want to run the website monitoring from.  I choose my Management Server.

I click OK and then click Apply at the bottom of the Web Application Editor screen

I then close the screen with the red X at the top right

Now I go back into the Bulk URL Editor.

I select the Template I have been working with and hit Synchronize (you may need to refresh before the Synchronize button lights up)

I click Yes.

I close the Bulk URL Editor as I am done with it.

Now I open the SCOM Monitoring Console and look for our Web Application, Standard URL Monitoring Instances.

I can see all my websites are now being monitored.

As you can see, each website is its own object.  This is nice for putting them into maintenance mode or into groups.

If I go to Groups, I can see that the Bulk URL Editor also created a group.

I have an addiction problem

At first I thought I had a drinking problem. As evidenced by the picture below, I drink A LOT of coffee!

That's a lot of stoppers!

Then as I’m driving in to work I’m listening to NPR. They are talking about the health rankings for Kansas; apparently it’s an annual report on the state of the health system for a given state. As I’m listening to this I find myself thinking: where did they pull their data from? I wonder if I could get at that data? I could easily write a Management Pack to monitor… holy crap!

Yes, I found myself roughing out how I would write a Management Pack in Operations Manager to report on the state of the health system.

Hello, my name is Jeff. I see the world in varying states of health Green (Happy), Yellow (Unhappy) and Red (Angry).

Week In Review : 07/27/2014

Another week coding the app. The nice thing is that the MVC re-write is pretty much done. In fact, the idea of using the DB to store information is now in place, something I wound up having to do. I decided to store a snapshot of the relevant data in some tables at login. The actual data collection and table insert takes roughly 20 seconds, so it’s pretty quick. This speeds everything up quite a bit; the entire site feels a lot more responsive.

I’m also having to cron an insert for the Proteus data every 24 hours. There isn’t a simple way to query the underlying properties of a given ip4Network, so I’m just going to collect them all. There are roughly 4,000 ip4Networks defined, and it takes about 2 minutes to enumerate each one and write the underlying properties to a table. Since that data is much more static than the VMware data, an insert every 24 hours is sufficient.

I’m currently making much better use of my classes, and passing data back and forth between views with them. On two of them I have overridden the ToString() method so that I can just call ToString() on the class to get my confirmation email; it looks really slick in code. The added benefit of how I’m doing this now is that I only store the friendly names in the class, so displaying the information back out is super simple. Then it’s just a couple of simple queries to return the information I need.

I have tweaked a few things on some of the projects that are used.

  • I have written a slightly better way of getting to the cloned network adapter than what I was doing in the past.
  • A lot of my custom validation code has slimmed down; since these return a bool, I don’t need to define an object to hold the result and check it, I simply check the call.

As the rest of the code is already done, I figure I have a couple more days and then we can call it done. The only major thing left for me is to build the method to create a new virtual machine, and from the little I’ve looked into it, it appears to be nearly identical to how you clone a VM.

See you next week, if not sooner!

Week In Review : 07/20/2014

I had intended these to be weekly, and I got off track, sorry about that. So I missed the week of Jun 29, Jul 6 and Jul 13. I can tell you that the week of the 29th ended on July 5th, and I was pretty wiped out from all the fireworks and stuff. The week of the 6th we were in Florida on vacation and the last couple of weeks (Jul 13 and this week Jul 20) were about the same.

This week has been all about the code. My boss asked me to focus on getting the revised provisioning web app up and running, so I’m happy to report that it is on par with the previous version. The previous version was just a basic web app, nothing fancy, it had some pages that worked like a form and it was all driven by click events and such. It worked really well, but it lacked flexibility.

See, everything was written on a single page; the various “pages” were in reality ASP panels separated by divs. In order to add a feature I would have to figure out where to place it, then adjust all the code to handle it. Some of that was my own doing; I should have broken things out into separate functions earlier. That may have made it easier, but I doubt it.

The new version uses MVC; each page has a model behind it to represent the data, so each page is strongly typed. For most things I use the built-in validation that provides; for a few things I wrote custom validation, for example to make sure someone chooses a valid option from a drop-down and not the text “–Select an item–”. There are a few places where that style of custom validation just wasn’t appropriate, so I created a couple of functions to handle those. I must say it is a lot easier to work in this new format.

The way I have it now, I just pass the models around from one page to the next, updating as we click along. I can pass those models to my custom validation and return a simple error message quite easily. Displaying the validation errors previously was a pain, so I really enjoy the way this works.

Another nice feature is that I can re-use a lot of what I’ve done to provide new features. For example, I want to provide a way for admins to create a bare VM. I can use all the models I already have, as well as just about all the logic behind how that is wired up. If I want to add a new feature, it’s as simple as writing up a model, building a page for it, and then writing a controller to handle it all, or updating an existing controller.

One thought I’m having for the next version is to use the database. Currently I’m not doing that at all; I’m querying VMware directly, which introduces some lag that the previous version didn’t have. I think some of that can be resolved by working through my code to get it optimized, but a lot of it is just inherent in how it all works.

So the thought was a separate job that would at regular intervals poll VMware for clusters, datastores, vm’s and so forth. Then the app becomes almost instantaneous, since instead of asking VMware we’ll ask the database. Then before we provision we will validate everything against VMware, which may introduce some entertaining issues, but as our change process is fairly slow I think this will work nicely.

The nice thing about storing this data is that I can add tables to give me logic I previously had to code for. I will be able to access some data without calling VMware, since it will be stored in the database, and some of the code I wrote to pull data from VMware can be dumped since it won’t be needed.

So what all code am I using?

mod-VMware : allows users to connect to, query and clone (soon create) virtual machines
mod-proteus : allows users to connect to, query and add host records to Proteus and retrieve IP information for new hosts
mod-servicenow : allows users to connect to, query and create service tickets and configuration items
mod-Zenoss : allows users to connect to, query and add devices to Zenoss monitoring
mod-ads : allows users to connect to, query and create objects in Active Directory

I feel that I’m getting closer to my idea of being modular, where we can plug in the various things we want. I still haven’t worked that out yet, but in my head it works ;-)

enjoy!

Week In Review : 06-22-2014

Not a terribly eventful week. I’ve been working on tuning Ops down, clearing out the errors, and improving the signal-to-noise ratio. One of the hardest ones has been the domain trust monitor in the AD MP. We use the firewall to isolate VLANs from one another, and I don’t know that I totally agree with that, but one thing at a time I suppose. Our DCs couldn’t resolve another domain to which we have a trust established; you read that right. So, I tried several different requests to make that go. I created their zone as a secondary in my DNS, and had to allow TCP/UDP 53 through for that to work; still no go. Then I noticed that LDAP was being dropped on one of the firewalls, so I had to allow that through; still no go. Finally I took out the big hammer and had those DCs added to the same firewall group as mine. That worked, sort of. Yesterday I noticed I’m still getting some errors, so I’ve decided that Monday I’m going to work on getting that, and as much of everything else, sorted out as I can.

I did get some code written this week. I have a proof of concept wired up that will allow us to copy profile information from one system over to SharePoint Online. I would feel better about it if we weren’t forced to use a deprecated web service, which I need to bring up in our next meeting about this. The other code I worked on was the ARIN Whois-RWS interface. I didn’t realize this, but ARIN has a REST interface that allows you to query their Whois database.

So I’ve been working on that and wrote up a very nice little piece of code that is suitably generic for me, which I like!

So, if I want to start using the .NET HTTP stuff on Windows Phone and tablets, it looks like I have to start using the async stuff. Also, I think a lot of their HTTP request stuff is moving to that anyway, so I’ll really need to keep that in mind as I progress in code writing. At any rate, I really like this GetResource function. A lot of the code I write I try to make as reusable as possible; I hate having to write code that more or less does something another piece of code already does. So we send in a string URL and cast the response against the XML classes we created to return objects that we can easily work with. The disappointing part for me was that I should be able to say as part of the request that I want a JSON response, but the server seemed to ignore my ContentType request and kept spitting back XML.
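
The idea, sketched in PowerShell rather than the actual C# (the function name and the example lookup are just for illustration): hand in a resource URL, say what format you want, and get objects back. From what I can tell, Whois-RWS keys the response format off the Accept header (or a .json/.xml suffix on the URL) rather than the Content-Type, which may be why my JSON request was ignored.

    function Get-ArinResource
    {
        param
        (
            [Parameter(Mandatory = $true)]
            [string]$Url,

            [ValidateSet('application/json', 'application/xml')]
            [string]$Accept = 'application/xml'
        )

        # Whois-RWS negotiates the response format on the Accept header
        Invoke-RestMethod -Uri $Url -Headers @{ Accept = $Accept }
    }

    # Example: look up an IP address
    Get-ArinResource -Url 'http://whois.arin.net/rest/ip/8.8.8.8' -Accept 'application/json'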

As I was working through this I started thinking about the other code that I’ve written for VMware, Hyper-V, ServiceNow and so on. I think I’m going to re-write my provisioning web app so that I can be a little more pluggable than I currently am. For the VMware stuff I think this will be easier, as I’ve already re-written that code to use the ManagedObjectReference for everything, and since at its root that’s just a string, this may be pretty simple. I’ll just need to create a couple of simple interfaces that I can pass strings or arrays into, as well as some functions to take the VMware objects and pass them up as strings or arrays.

Also, I was notified by Microsoft that they are doing away with domains.live.com and that if you use that service for mail and so on, it will stop working very soon. So, I decided I would not wait around and moved my wife and daughter over to my Office 365 subscription. Getting their email to work was simple, but getting my account to forward to the patton-tech.com one… not so much. Hopefully I can figure this out, because just adding that address to my account was not an option.

Week In Review : 06-15-2014

It’s time for another exciting edition of WIR! This week was filled with updates! Rolled updates to our Domain Controllers, and one of them took nearly two hours to come back from a reboot! Normally not a big deal, but when you’re 30 miles away… a little stressful! I also rebuilt my work laptop this week; earlier this year I had done something stupid with an external drive and wound up with Windows installed on partition 2, on a disk with just one partition! Needless to say, rebooting my laptop didn’t happen all that often at all!

Speaking of Active Directory domains, we are moving ever closer to having just one domain on campus. The internal private Edwards domain went away this week! It’s always just a little nerve-wracking when running through dcpromo to remove stuff, but it went well. It didn’t appear to leave any unsightly metadata floating around AD!

Also spent a fair amount of time talking with the guys at Edwards to go over how they image machines. They routinely call us to have a workstation DNS entry removed, and needless to say it’s a little annoying. They ought to be able to do this themselves, but since it’s not their DNS they don’t have rights. Not to mention the way they do their imaging is a little different.

This is how it goes: a user is up for a new computer. In an effort to minimize the inconvenience this can sometimes be, they image the new computer, load their software, and finally join it to the domain. This last part is what gets them; they tack a “-1” onto the new workstation name. Normally not a big deal, but the next part is where it gets hairy.

The new workstation is delivered to the user, the old workstation is unjoined from the domain, the new computer is renamed to the old computer name… and boom. Sometimes this works (they say), but I can’t imagine how. So, the first comment was: hey, how about using service tags or MAC addresses to identify these machines uniquely? Then you will never get hit with this issue. Nope, they like usernames as computer names; it makes it easy to correlate user to workstation. Apparently it’s too difficult to track that down in SCCM? Not likely, but oh well.

So, what to do? Well, we could just have them call every time, but that’s a hassle, not to mention there’s no code involved! My solution: create an Orchestrator Runbook that is provided a computer name. With that information it scrubs AD and removes the DNS entry as well. This Runbook would run in the context of a service account that has rights to do this. They would simply log in to it with their admin account; we would use their group information to verify that the computer they want removed lives in their OU, and then remove it and the DNS entry. If it doesn’t live in their OU, it fails. Sounds elegant to me ;)
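
The guts of the script step would look something like this (a sketch only; I’m assuming the ActiveDirectory and DnsServer modules, and the zone, DNS server, and OU values are placeholders):

    param
    (
        [Parameter(Mandatory = $true)]
        [string]$ComputerName,

        [Parameter(Mandatory = $true)]
        [string]$CallerOu    # the OU the calling admin is allowed to manage
    )

    Import-Module ActiveDirectory
    Import-Module DnsServer

    # Find the computer object and confirm it lives in the caller's OU
    $Computer = Get-ADComputer -Identity $ComputerName
    if ($Computer.DistinguishedName -notlike "*$CallerOu")
    {
        throw "$ComputerName is not in $CallerOu, refusing to remove it."
    }

    # Remove the computer account
    Remove-ADComputer -Identity $Computer -Confirm:$false

    # Remove the matching A record (zone and DNS server names are placeholders)
    Get-DnsServerResourceRecord -ZoneName 'example.com' -ComputerName 'dns01' -Name $ComputerName -RRType A |
        Remove-DnsServerResourceRecord -ZoneName 'example.com' -ComputerName 'dns01' -Force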

A final solution, which will take much longer to implement, will be an appliance from BlueCat that sits between AD DNS and Proteus DNS. This appliance will use the Proteus web service and the MS RPC to translate information between AD and DNS. This will get us to a very similar place as my Runbook idea, but the one advantage is this will also get us to a place where we can pull our AD DNS out of the public facing DNS, effectively hiding thousands of servers and workstations.

Another fun one that happened: you can’t push the Ops client to a Domain Controller using SCCM Client Push. If someone tells you they can, they are lying to your face! I’m going to write up a post, but the short of it is that Client Push relies on a local administrator account to work; how do you do that on a Domain Controller?

OH! I also polished off my SQL PowerShell, so I’ll write about that as well. It works pretty well; I created some new functions to let me more accurately find SQL instances. I still don’t have a good way to talk to the WID, but it’s kicking around in the back of my head.
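
For a taste of what the instance-finding looks like, this is roughly the approach (a simplified sketch, not the module itself): read the instance map out of the registry and cross-check it against the SQL services.

    # SQL Server writes an instance-name-to-ID map here during setup
    $RegPath = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL'

    if (Test-Path -Path $RegPath)
    {
        (Get-ItemProperty -Path $RegPath).PSObject.Properties |
            Where-Object { $_.Name -notmatch '^PS' } |
            ForEach-Object {
                [pscustomobject]@{
                    Instance   = $_.Name    # e.g. MSSQLSERVER or a named instance
                    InstanceId = $_.Value   # e.g. MSSQL11.MSSQLSERVER
                }
            }
    }

    # Cross-check against the SQL services actually present on the box
    Get-Service -Name 'MSSQL*' | Select-Object -Property Name, Status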

I also broke Active Directory Certificate Services..see you next week!

Oh, I suppose we should talk about that? So, I’ve been slowly pulling servers out of the old Ops servers and bringing them over to the new. Doing pretty well, 230+ servers in the new and growing, and under 50 in the old. The Domain Controllers got pulled in this week as well as the Certificate servers.

I’m working through the alerts, tuning Ops so I only hear what I need to, and I started getting alerts about ADCS (Active Directory Certificate Services), so I started working on that issue. I was seeing errors about the CRL Distribution Point being offline.

As part of the troubleshooting I had already decided to stand up a vhost to hold CRLs, among other things. So I reconfigured the CA to use that, and after restarting the service as prompted by Windows, Certificate Services failed to start. The net result was that the CRLs were out of date and just needed to be published and then copied to the web location.

The only bit left here is to automate both the publishing and the copying of the files over to the web server. Of course this seems well suited to creating a PowerShell solution, check back later for that!
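
Something like this is probably where I’ll start (the destination share is a placeholder):

    # Publish a fresh CRL from the CA
    certutil -crl

    # Give the CA a few seconds to write the files out
    Start-Sleep -Seconds 10

    # Copy the CRLs from CertEnroll to the web server hosting the CRL Distribution Point
    Copy-Item -Path "$env:SystemRoot\System32\CertSrv\CertEnroll\*.crl" -Destination '\\webserver\crl$' -Force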

See you next week!

Week In Review : 06/08/2014

Still a lot of programming this week, but like I said before, I think that’s more the norm than not anymore. We did some interesting Active Directory stuff this week. We had a handful of servers get their AD objects deleted at some point, and we found out about it at the beginning of this week. My guess is these were deleted roughly three months ago, and they either rebooted recently or attempted to change their machine password recently.

About a year or more ago we changed our audit policy and started using Advanced Auditing. We were concerned about user account and group account management, but it turns out we should have included computer account management as well. When a computer object is deleted, event 4743 is logged in the Security log of the domain controller. We searched and couldn’t find that entry anywhere; it was while researching that event that I found you need to tick the boxes for computer account management.

Along those lines we had a similar issue: our admin accounts in our QA domain were disabled. Since we do very little auditing at all in there, I enabled the same features so we can see when that happens. When a user account is disabled, event 4725 is logged. To go along with both of these events, I’m going to update our reporting in Ops on things like this.
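
For reference, the auditing and the event lookup boil down to something like this. In the domain the setting belongs in the Advanced Audit Policy section of the GPO, but auditpol shows the local equivalent, and the DC name below is a placeholder:

    # Enable the subcategories that generate 4743 (computer deleted) and 4725 (user disabled)
    auditpol /set /subcategory:"Computer Account Management" /success:enable /failure:enable
    auditpol /set /subcategory:"User Account Management" /success:enable /failure:enable

    # Pull the events off a domain controller
    Get-WinEvent -ComputerName 'dc01' -FilterHashtable @{ LogName = 'Security'; Id = 4743, 4725 } |
        Select-Object -Property TimeCreated, Id, Message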

While doing all this I found a very nice support article listing out the various event IDs and what they mean.

All of the servers that are supposed to report in to System Center Advisor are now doing so. I feel rather stupid about the original issue. My first problem was that I wasn’t patched up to where I needed to be in order to even use the preview, so that was step 1. The next part is where it gets a little fuzzy: I don’t actually recall patching the clients on any of the agents reporting in, yet all 3 domain controllers reported an updated agent. Coincidentally, all 3 domain controllers were the only servers showing up. After some investigating with the SCA guys from Microsoft, they quickly realized I had not patched my agents. So, I must have patched the DCs, I just don’t recall doing it, hence the stupid.

The result of all this is a working program in SCCM that will patch outdated clients, which is good, as my next step in this whole saga is to patch production. It’s either patch or move over to R2, and currently I’m leaning towards patching. So now in QA, when a server gets discovered, the Ops client gets pushed down to it and it will also get patched. Then the only manual part of this process left is to add the servers to the Advisor management pack.

It’s been lots of fun talking with these guys about stuff, I’ve been invited to participate in an SCA board to go over new features and talk about how things work. My recent experiences dealing with some of the internal folks with Microsoft really make me want to work there more.

I’ve done some fun things with PowerShell this week. A new SQL module has been fleshed out and validated against just about all instances of SQL. I’m still having a hard time working with a connection string for the Windows Internal Database, but it will come. I’ll most likely write about this module after I’m done with this WIR.
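
For my own notes, the trick with the WID seems to be that it only listens on a named pipe, so the connection string has to point at the pipe rather than a server\instance name. An untested sketch (and you have to be a local administrator on the box):

    # Server 2012 uses MICROSOFT##WID; older Windows Internal Database builds used MSSQL$MICROSOFT##SSEE
    $ConnectionString = 'Server=\\.\pipe\MICROSOFT##WID\tsql\query;Database=master;Integrated Security=True'

    $Connection = New-Object -TypeName System.Data.SqlClient.SqlConnection -ArgumentList $ConnectionString
    $Connection.Open()

    $Command = $Connection.CreateCommand()
    $Command.CommandText = 'SELECT name FROM sys.databases'
    $Reader = $Command.ExecuteReader()
    while ($Reader.Read()) { $Reader['name'] }

    $Connection.Close()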

I’ve updated the Orchestrator module. The Start-scoRunbook function worked incredibly well if you only ever had one parameter; as soon as you throw more than one at it, it freaks out. How I originally handled it was dumb, so now the function requires a hashtable object, and it compares each key (the property name) against what the Parameter object returns. This worked out extremely well, and it’s probably a topic for a whole blog post as well.
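
The heart of it is just matching hashtable keys to the Runbook’s published parameters and building the payload. A stripped-down sketch; the parameter object shape (Name and Id) and the XML format are from memory, so treat them as approximations:

    function ConvertTo-RunbookParameterXml
    {
        param
        (
            [Parameter(Mandatory = $true)]
            $RunbookParameters,          # objects with Name and Id properties (assumed shape)

            [Parameter(Mandatory = $true)]
            [hashtable]$InputParameters  # what the caller wants to pass in
        )

        $Fragments = foreach ($Key in $InputParameters.Keys)
        {
            # Match the hashtable key against the Runbook's published parameter names
            $Match = $RunbookParameters | Where-Object { $_.Name -eq $Key }
            if (-not $Match)
            {
                throw "Runbook has no parameter named '$Key'"
            }
            "<Parameter><ID>$($Match.Id)</ID><Value>$($InputParameters[$Key])</Value></Parameter>"
        }

        "<Data>$($Fragments -join '')</Data>"
    }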

One last pure PowerShell item is a function that writes functions. It’s not too terribly complicated, and I *WILL* post about this later, but basically the idea is that Orchestrator contains Runbooks that perform some action; my module reads those Runbooks in, gets their parameters, and allows you, the admin, to run them. What if we could have a function that would build cmdlets based on that information on the fly…
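
The rough shape of the generator; Get-scoRunbook stands in for however the module reads Runbooks back, and the Start-scoRunbook parameter names are my guess, so this is a sketch rather than the real thing:

    foreach ($Runbook in Get-scoRunbook)
    {
        # Derive a function name from the Runbook name, e.g. 'Invoke-NewVM'
        $FunctionName = 'Invoke-{0}' -f ($Runbook.Name -replace '[^\w]', '')

        # Build a body that forwards a hashtable of parameters to Start-scoRunbook;
        # a fuller version would emit one named parameter per Runbook parameter
        $Body = [scriptblock]::Create(
            "param([hashtable]`$Parameters); Start-scoRunbook -Runbook '$($Runbook.Name)' -Parameters `$Parameters"
        )

        # Publish it into the global function: drive so it's available in the session
        New-Item -Path "function:Global:$FunctionName" -Value $Body -Force | Out-Null
    }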

SharePoint Online! How much fun is it working with UserProfiles in SPO? Well, let me tell you, in order to do anything meaningful it appears you have to access a 10yr old web service that must be ripe for deprecation but has been forgotten about! I’d really like to get some more information direct from Microsoft about that. At any rate, I’ve got some POC code that will allow me to programmatically populate a SharePoint user’s profile with information that we glean from another source. The next step down this rabbit hole is using a 7yr old SDK (Office Server 2007) to see if I can create UserProfile subtypes! I’ve got some examples of how this works, but I’ve not written anything up yet to see if it will go, fun times ahead!

Keeping in line with the SharePoint Online topic, creating admin cloud accounts. So we have an Azure subscription that allows us to get into Azure AD for our tenant, which isn’t anything special. If you have an Office365 subscription, you can create an Azure account, hook the two together and boom…Azure AD! So I created an admin account for me, and one of the other guys on the project. After that I enabled the Multi-Factor Authentication on these two accounts. Now, when I login with my admin account, I receive a txt message with a verification code. So we have looked at this as THE way to secure access to these accounts as we begin to think about the cloud.

With that out of the way, I can talk about the Orchestration. I’ve created a Runbook that will connect to our tenant and provision a user. This came out of the Provisioning project for the larger SPO project. This code takes a single parameter, samaccountname, and then provisions that user in o365 with the appropriate licensing. There are two differences between an o365 user and a cloud admin. The first is licensing: a cloud admin gets none by default (our design). The second is the all-important UPN: user@tenant.onmicrosoft.com. The idea is these accounts live solely in the cloud and are used specifically for administering cloud things. I have a couple of modifications in mind: first I need to populate the AlternateAddresses field, as well as the MobilePhone field; then I need to see if I can enable MFA in Azure for these accounts automatically.
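
The core of the cloud-admin flavour looks roughly like this with the MSOnline cmdlets; the tenant name, licence SKU, and contact values are all placeholders:

    Import-Module MSOnline
    Connect-MsolService -Credential (Get-Credential)

    $SamAccountName = 'jdoe'                                   # the single input parameter
    $Upn = "$SamAccountName@tenant.onmicrosoft.com"            # cloud-only UPN (tenant is a placeholder)

    # Create the account; cloud admins get no licence by default (our design)
    New-MsolUser -UserPrincipalName $Upn -DisplayName "$SamAccountName (cloud admin)" -UsageLocation 'US'

    # The follow-up pieces mentioned above: alternate address and mobile number (placeholders)
    Set-MsolUser -UserPrincipalName $Upn -AlternateEmailAddresses 'someone@example.com' -MobilePhone '555-555-0100'

    # For a regular o365 user the licence assignment would look like this instead
    # Set-MsolUserLicense -UserPrincipalName $Upn -AddLicenses 'tenant:ENTERPRISEPACK'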

Lots of Orchestrator this week, but now that I’m ready and the network is ready, it’s time to start working on orchestrating Windows Updates. I’ve started a rough draft of that at the moment:

  1. basically get a list of servers (or service)
  2. for each server start maintenance mode (ops and Zenoss)
  3. get the applicable updates (SCCM perhaps)
  4. apply the updates
  5. reboot if needed
  6. make sure the server is back online
  7. check if required services are running
  8. leave maintenance mode
  9. and move on to the next server

If one server in a group fails, then we need to stop the update process and throw an alert in Ops and Zenoss. This will prevent an entire service from going offline if the updates cause an issue. A rough sketch of what that loop might look like is below.
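
Every helper here that isn’t a stock cmdlet (Start-MaintMode, Install-PendingUpdates, Stop-MaintMode, New-OpsAlert) is a placeholder for a Runbook activity or module function that doesn’t exist yet, and the service names are just examples:

    $Servers = Get-Content -Path .\servers.txt                     # 1. the list of servers in the service

    foreach ($Server in $Servers)
    {
        try
        {
            Start-MaintMode -ComputerName $Server                  # 2. maintenance mode in Ops and Zenoss (placeholder)
            Install-PendingUpdates -ComputerName $Server           # 3-4. get and apply updates, likely via SCCM (placeholder)

            # 5-6. reboot and wait for the server to come back online
            Restart-Computer -ComputerName $Server -Wait -For WinRM -Timeout 1800 -Force

            # 7. confirm the services we care about are running (example service names)
            $Stopped = Get-Service -ComputerName $Server -Name 'W3SVC', 'MSSQLSERVER' |
                       Where-Object { $_.Status -ne 'Running' }
            if ($Stopped) { throw "Services not running on $Server : $($Stopped.Name -join ', ')" }

            Stop-MaintMode -ComputerName $Server                   # 8. leave maintenance mode (placeholder)
        }
        catch
        {
            # If any server in the group fails, raise an alert in Ops/Zenoss and stop the run
            New-OpsAlert -ComputerName $Server -Message $_.Exception.Message    # placeholder
            break
        }
    }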