Windows has several built-in local security groups that are designed to manage permissions and access rights on a computer. These groups are predefined by Windows, and each group has specific rights and permissions. The exact groups available can vary depending on the version of Windows you’re using or the features that are enabled, but here’s a general overview of the most commonly found built-in local security groups in Windows systems:
In an enterprise environment, usually only the following groups are used:
Users – This group is intended for regular users who do not need administrative privileges. Members can run installed applications and perform basic tasks but cannot make significant changes to the system settings or the security configuration.
When the device is joined to an Active Directory domain or Entra ID, the user's account is automatically added to this group during their first login.
Administrators – Members of this group have full control of the computer and can make any changes, including adding other users to the Administrators group, changing security settings, installing software, and accessing all files on the computer.
When the device is joined to Active Directory, only the Domain Admins group is added to the local Administrators group. When the device is joined to Entra ID, the following security principals are added to the local Administrators group:
For more details read: How to manage the local administrators group on Microsoft Entra joined devices
Remote Desktop Users – This group is for users who need to access the computer using Remote Desktop. Members can log on remotely but do not have administrative rights unless explicitly granted.
In a managed enterprise environment, you want to have control over who has privileged access on your devices. When possible, avoid granting users permanent administrative rights by adding their account to the local Administrators group. Instead, for ad-hoc activities that require elevated permissions, consider using Windows LAPS, Microsoft Intune Endpoint Privilege Management (EPM), or managing the local Administrators group membership with Group Policy Preferences so that you have central control over these permissions.
The risks of users with local administrative and/or remote access are obvious, but let me summarize some of them.
Assuming you have processes in place for managing access to local security groups, you also want to monitor what is going on in your environment to identify real threat actors or internal staff that, let's put this nicely, bypass your security controls.
With Microsoft Defender for Endpoint deployed, we can use advanced hunting to detect when a user was added to a security-enabled local group by looking for the ActionType UserAccountAddedToLocalGroup, which in simple words is a translation of Event 4732, "A member was added to a security-enabled local group".
DeviceEvents
| where ActionType == "UserAccountAddedToLocalGroup"
When looking at the event details, we see the following:
Now if you run the above script in your lab or a small environment, you might recognize the account names, and maybe even the SIDs 😊 But what if you run this in a real enterprise environment?
Also keep in mind, that there are quite a few scenarios for adding users to a local group, provided the user has the permission to do so.
You will end up with something like shown in the example below.
As mentioned above, we don't get the friendly name of the user that was added to the group, only their AccountSID. When you look closely, you'll notice different SID patterns; this is because, as mentioned previously, there are several possible scenarios, i.e. whether the added user is a local user, an Active Directory user or an Entra ID user.
I wanted to have something that is easier to read, so I started working on a KQL query that enriches the information accordingly.
Let me walk you through the query in detail.
Important: The below query example is for use in Microsoft Sentinel, you will find the link to both queries for Defender XDR and Sentinel at the end of the post.
Retrieve all Identities from the IdentityInfo table and store them in a variable, we use this information later to join it with the results. Note that you must have Defender for Identity enabled in Defender XDR and when using the query in Microsoft Sentinel, you must configure the synchronization within the UEBA options.
Here we're trying to determine the Active Directory domain identifiers; we use these later to find out whether the account is an AD-based account.
Here we’re retrieving information about local accounts that were created so that we can later enrich the SIDs that relate to local accounts, since we don’t have information about them in the IdentityInfo table.
Here we define the SIDs and Group Names of the Windows built-in local security groups.
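For illustration, such a definition can be sketched as a KQL datatable of the well-known SIDs (the names shown are the English defaults, and the published query covers more groups than this excerpt):

```kusto
let BuiltinLocalGroups = datatable(GroupSid:string, GroupName:string)[
    "S-1-5-32-544", "Administrators",
    "S-1-5-32-545", "Users",
    "S-1-5-32-546", "Guests",
    "S-1-5-32-555", "Remote Desktop Users"
];
```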
Now we are getting all events where any of the defined groups was changed. We exclude any actions that originate from the SID S-1-5-18, so we avoid the noise from local group membership changes originating from Windows LAPS or Group Policy.
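A minimal sketch of that filter (the full query in the linked repository is more elaborate):

```kusto
DeviceEvents
| where ActionType == "UserAccountAddedToLocalGroup"
// drop changes performed by the local SYSTEM account (S-1-5-18),
// e.g. membership changes driven by Windows LAPS or Group Policy
| where InitiatingProcessAccountSid != "S-1-5-18"
```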
And finally, we add some other attributes that should help to provide context whether the added account is a local, domain or Entra Account and the source of the account who performed the action.
Let’s take a look at the results.
In the first record, where the AccountSource is Entra ID, we can't see the name of the user that was added. This is because the event only stores the SID, and we can't find that SID in the IdentityInfo table, so the only way to identify the user is to convert the user's Entra ID SID to the Object ID. Since we can't do this in KQL, we have to do it elsewhere, for example with https://erikengberg.com/azure-ad-sid-to-object-id/
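If you prefer to do the conversion yourself, the mapping is mechanical: an Entra ID user SID has the form S-1-12-1-d1-d2-d3-d4, and the four trailing decimal values are the four little-endian 32-bit blocks of the object ID GUID. A PowerShell sketch:

```powershell
function Convert-EntraSidToObjectId {
    param([Parameter(Mandatory)][string]$Sid)
    # The four trailing decimal parts of S-1-12-1-... are the GUID bytes,
    # each part being one little-endian 32-bit block
    $parts = ($Sid -split '-')[4..7] | ForEach-Object { [uint32]$_ }
    $bytes = New-Object byte[] 16
    for ($i = 0; $i -lt 4; $i++) {
        [BitConverter]::GetBytes($parts[$i]).CopyTo($bytes, $i * 4)
    }
    [guid]::new($bytes)
}
```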
When we have the Object ID, we can do a further search to find the user's friendly name.
When looking at the records where the AccountSource is Local, we see one record with a username and one without. For the record without the name, we were unable to retrieve the information from historical user creation events (unless we increased the lookback period, which would consume a lot of query resources). In this case you will have to search for the account with the corresponding SID locally on the device. This can be done by collecting an investigation package or running a live response session in Microsoft Defender for Endpoint.
For Active Directory accounts it's usually quite simple to correlate the SID with the actual user, provided the account information is visible within the IdentityInfo table.
To enrich the Actor information (so the Identity that added the user), we basically do the same as described above.
I hope this will help you to monitor or proactively hunt for Windows built-in local security group changes. In an upcoming post, we’ll investigate monitoring Active Directory and Entra ID group changes.
You can find the queries in my GitHub repository here.
Additional References
Security identifiers | Microsoft Learn
Local Accounts – Windows Security | Microsoft Learn
Security guidance for remote desktop adoption | Microsoft Security Blog
Azure AD SID to Object ID Converter – ErikEngberg.com
Identify internet-facing devices in Microsoft Defender for Endpoint | Microsoft Learn
In this blog post we look at a new setting within the Azure AD portal. “Users can create Azure AD tenants“. Unfortunately, the setting is enabled by default. Not sure why, but I guess most organizations will want to turn this off. You can find the setting within the Azure AD portal, Settings / Users / User settings / Tenant creation.
‘Yes’ allows default users to create Azure AD tenants. ‘No’ allows only users with the global administrator or tenant creator roles to create Azure AD tenants. Anyone who creates a tenant will become the global administrator for that tenant.
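If you want to switch the setting off programmatically, the same value can be set on the tenant-wide authorization policy with the Microsoft Graph PowerShell SDK; a sketch (cmdlet and parameter names can differ slightly between SDK versions):

```powershell
# Requires the Microsoft.Graph.Identity.SignIns module and the
# Policy.ReadWrite.Authorization permission
Connect-MgGraph -Scopes 'Policy.ReadWrite.Authorization'
Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
    AllowedToCreateTenants = $false
}
```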
Let's look at what a standard user can do when the setting is enabled and when they have access to the Azure AD portal, because there's another setting that allows you to Restrict access to the Azure AD administration portal.
Select Manage tenants
Then select Create
Select a tenant type
And finally enter the name of the tenant
And after a few minutes Sam has his own tenant.
We also get an audit log for this activity with the activity type ‘Create Company‘
And at least we also get the Tenant ID that was created.
If you haven’t disabled the setting yet, here’s a KQL query to find out whether someone in your organization has already created a tenant.
And here’s another query to find out who enabled the feature again, after you had disabled it.
If you use Microsoft Sentinel, you can create Analytic rules for both activities.
Below are the KQL queries.
// New Azure AD Tenant created
AuditLogs
| where OperationName == "Create Company"
| extend InitiatedByUser = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend InitiatedByIP = tostring(parse_json(tostring(InitiatedBy.user)).ipAddress)
| extend TenantId = tostring(TargetResources[0].id)
| project TimeGenerated, OperationName, TenantId, InitiatedByUser, InitiatedByIP
// AzureAD - Allow users to create tenants - enabled
AuditLogs
| where OperationName == "Update authorization policy"
| extend Settings = parse_json(tostring(TargetResources[0].modifiedProperties))
| mv-expand Settings
| where Settings.displayName == "DefaultUserRolePermissions.AllowedToCreateTenants"
| extend Setting = tostring(Settings.displayName)
| extend newValue = tostring(Settings.newValue)
| extend oldValue = tostring(Settings.oldValue)
| extend InitiatedByUser = tostring(parse_json(tostring(InitiatedBy.user)).userPrincipalName)
| extend InitiatedByIP = tostring(parse_json(tostring(InitiatedBy.user)).ipAddress)
| project TimeGenerated, OperationName, Setting, newValue, oldValue, InitiatedByUser, InitiatedByIP, SourceSystem
| where newValue == "true"
In July 2021 Microsoft announced that starting with MDI version 2.156 they included the OEM version of the Npcap executable in the sensor deployment package. The reason for doing so is that WinPcap is no longer supported and, since it's no longer being developed, the driver can no longer be optimized for the Defender for Identity sensor. Additionally, if there is an issue in the future with the WinPcap driver, there are no options for a fix. More details can be found here.
Since version 2.184, released on July 10th 2022, the Defender for Identity installation package installs the Npcap component instead of the WinPcap drivers.
Although the MDI Sensor does update itself, you will need to plan for this change and act yourself. If you haven’t installed the Npcap driver already, you will notice that within the Microsoft Defender for Identity portal, sensors that use WinPcap show up as ‘Not healthy’.
When opening the status page, you’ll see the following information.
You can use this advanced hunting query to get a quick overview of your domain controllers that have the WinPcap driver installed.
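If you don't have the query at hand, a simple variant based on the software inventory might look like this (the name matching may need adjusting for your environment):

```kusto
DeviceTvmSoftwareInventory
| where SoftwareName contains "winpcap"
| project DeviceName, SoftwareName, SoftwareVendor, SoftwareVersion
```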
Okay, now that you have identified the domain controllers that require an update, here’s what you need to do after you have received an internal approval for the change.
If you already installed the sensor with WinPcap and need to update to use Npcap:
For other scenarios see: How do I download and install or upgrade the Npcap driver?
Have a great day
Alex
A browser extension is a small software module for customizing a web browser. An extension improves a user’s browsing experience. It usually provides a niche function that is important to a target audience.
Well known browser extensions are used for:
When you install a browser extension, the browser stores the content in the following locations:
Below is an example of the content of the NordVPN extension
The permissions a browser extension uses are defined within the manifest.json file. Below are the permissions defined for the LastPass browser extension.
Detailed information about the chrome API permissions can be found here: Declare API permissions in extension manifests
If you are responsible for the security of your company you should pay attention to the permissions a browser extension is using, I recommend reading the whitepaper Understand the risks of permissions for Chrome extensions.
Extensions for Edge Chromium and Google Chrome are packed in a file that ends with .crx, hence we can use advanced hunting in Microsoft Defender for Endpoint to identify devices that download extensions.
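A simple starting point for such a hunt could be a file-event query on the .crx extension:

```kusto
DeviceFileEvents
| where FileName endswith ".crx"
| project Timestamp, DeviceName, FileName, FolderPath, InitiatingProcessFileName
| order by Timestamp desc
```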
Last month Microsoft announced the public preview of Microsoft Defender Vulnerability Management. Defender Vulnerability Management’s browser extensions inventory provides detailed information on the permissions requested by each extension and identifies those with the highest associated risk levels.
Let’s take a look at the LastPass extension permissions, within the manifest.json file the permissions are defined as following:
Microsoft Defender Vulnerability Management nicely translates these permissions as shown below.
We can use advanced hunting to query the browser extension data.
Let’s take a look at all the extensions that have the proxy permission
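A sketch of such a query, joining the extension inventory with its permissions (verify the column names against the advanced hunting schema reference in your tenant):

```kusto
DeviceTvmBrowserExtensionsPermissions
| where PermissionName == "proxy"
| join kind=inner (DeviceTvmBrowserExtensions) on ExtensionId
| project DeviceName, BrowserName, ExtensionName, ExtensionVersion, PermissionName
```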
Now that we have an overview of the extensions in use, you might want to start taking control over which extensions you allow and which you want to block. We can use Active Directory Group Policy or Microsoft Endpoint Manager (Intune) configuration profiles to control the use of browser extensions.
Here, by default we do not allow the user to install any extension, except for those that are explicitly defined.
For more details see: Use group policies to manage Microsoft Edge extensions
Browser extensions can be very useful. If you don’t allow users to install software themselves, i.e. don’t grant them local administrative rights, you should also consider actively managing the use of browser extensions.
I hope you find this post useful, as always I welcome your feedback.
In the past months I have deployed a number of Microsoft Sentinel instances, and in many cases the root cause for reaching the daily cap was related to data ingested into the AADNonInteractiveUserSignInLogs table. When analyzing the data, we often found an individual user that created an unusually high number of events. This can happen for various reasons, such as:
Okay, let’s start at the beginning
Data Cap
To avoid a bill shock, we set a daily cap.
Analytics Rule
If we want to get alerted, we can setup an analytics rule within Microsoft Sentinel as shown in the example below.
The Alert
With the analytics rule in place, we get an alert as shown below when the daily data cap is reached.
Now that we have an alert, we have to investigate what caused the high data volume. Log on to the Azure Portal and navigate to the Usage and estimated costs blade within the Microsoft Sentinel Log Analytics workspace. Here we can already identify which solution caused the data ingestion increase. Select the Open chart in analytics button.
Log Analytics is opened with a predefined query that shows the usage. Here we see that LogManagement had an increase in data ingestion. Remove the start date and set the time range to 24 hours.
Usage
| where IsBillable == true
| summarize TotalVolumeGB = sum(Quantity) / 1000 by bin(StartTime, 1d), Solution
| render columnchart
Change the query to display DataType instead of Solution, then re-run the query
Usage
| where IsBillable == true
| summarize TotalVolumeGB = sum(Quantity) / 1000 by bin(StartTime, 1d), DataType
| render columnchart
Next, remove the | render instruction from the query to see the details.
Usage
| where IsBillable == true
| summarize TotalVolumeGB = sum(Quantity) / 1000 by bin(StartTime, 1d), DataType
Now let’s find the user(s) that cause the high event volume.
AADNonInteractiveUserSignInLogs
| summarize count() by UserPrincipalName
Next we drill down into the events just for the user that triggers the most events.
AADNonInteractiveUserSignInLogs
| where UserPrincipalName == "john.doe@foocorp.com"
| summarize count() by UserPrincipalName, ClientAppUsed, AppDisplayName
Here we see that we have a lot of Windows Sign In events. Next let’s drill into the details to identify the device.
AADNonInteractiveUserSignInLogs
| where UserPrincipalName == "john.doe@foocorp.com"
| where AppDisplayName == "Windows Sign In"
| extend DeviceName = tostring(parse_json(DeviceDetail).displayName)
| extend trustType = tostring(parse_json(DeviceDetail).trustType)
| extend deviceId_ = tostring(parse_json(DeviceDetail).deviceId)
| extend operatingSystem = tostring(parse_json(DeviceDetail).operatingSystem)
Next let’s see how many devices are involved and add the following KQL line.
| summarize count() by DeviceName
That’s it for today, I hope you found this useful. I’m currently working on an early detection for when logs start to grow unusually, so that IT operations or security teams can take immediate action and prevent the daily cap from being reached.
Bye
Alex
These days everyone is trying to identify devices that are vulnerable to the Log4Shell Vulnerability (CVE-2021-44228). If your only systems management tool is Microsoft Endpoint Configuration Manager this blog is for you.
You can of course create device collections based on installed programs, however log4j-core.jar files can be found in several locations in and outside the Program files folder. So in order to identify these files, we have to search for them on the entire disk. Here’s the script I prepared for that.
Note that I have intentionally limited the drive letters to a-e, adjust this if you know of systems with more drive letters.
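For reference, the core of the approach can be sketched in a few lines of PowerShell (the published script adds output formatting and error handling on top of this):

```powershell
# Search a limited set of drive letters for log4j-core JAR files
foreach ($drive in 'a','b','c','d','e') {
    if (Test-Path "${drive}:\") {
        Get-ChildItem -Path "${drive}:\" -Filter 'log4j-core*.jar' -Recurse -Force -ErrorAction SilentlyContinue |
            Select-Object FullName, Length, LastWriteTime
    }
}
```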
You can find the script here: https://gist.github.com/alexverboon/0a7a32b8f1267f4a9ac34b5e1c5b1ba5
The script produces the following output.
Next, import the script into the Microsoft Endpoint Configuration Manager Script library. Then select a device collection and run the script.
Next, we are going to extract the Run Script results with PowerShell. I wrote about this method earlier in this blog post Extract ConfigMgr Script Status Results with PowerShell – Anything about IT (verboon.info)
Open PowerShell from the ConfigMgr console and then load the Export-CMScriptResults function that you copied from the blog post mentioned above or from here: Export-CMScriptResults (github.com)
We now have all the results in our PowerShell variable $log4 so we can further review the data
And as a little bonus, let’s compare the identified files with some log4j-core.jar file hash references available on GitHub
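The idea behind the comparison, sketched with hypothetical names ($HashListUrl stands for the raw URL of the SHA-256 list on GitHub, and the FullName property is an assumption about the shape of the collected results):

```powershell
# Download the reference list of known-vulnerable log4j-core SHA-256 hashes
$reference = (Invoke-RestMethod -Uri $HashListUrl) -split '\r?\n' | Where-Object { $_ }
# $log4 contains the Run Script results collected earlier; adjust the property
# that holds the file path to match your output
$log4 |
    ForEach-Object { Get-FileHash -Path $_.FullName -Algorithm SHA256 -ErrorAction SilentlyContinue } |
    Where-Object { $reference -contains $_.Hash }
```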
The above code snippets can be found here: https://gist.github.com/alexverboon/13a5defd8ebfac491ab9313491d995a4
If you have a match, it will show the output as following:
I hope you enjoyed this blog post, have a great day and good luck with identifying vulnerable devices.
Credits / References
SCCM scan for Log4J : SCCM (reddit.com)
Log4Shell: RCE 0-day exploit found in log4j 2, a popular Java logging package | LunaSec
mubix/CVE-2021-44228-Log4Shell-Hashes: Hashes for vulnerable LOG4J versions (github.com)
In my previous post (Part 1) I provided an overview of the new Microsoft Defender for Endpoint unified solution for Windows Server 2012 R2 and 2016 and how to deploy the solution manually to a newly provisioned server. In this blog post I would like to walk you through the process of migrating a Windows Server 2016 server to the new unified solution using Microsoft Endpoint Configuration Manager.
For this we will be using the upgrade script that Microsoft provides. But let’s go through this step by step.
Within Microsoft Endpoint Configuration Manager we need a package or application that deploys the new unified solution to servers. In my lab I created a package. The package content is as follows:
Install.ps1 – The script provided by Microsoft hosted here on GitHub https://github.com/microsoft/mdefordownlevelserver/blob/main/Install.ps1 The script can be used for various scenarios, but in our case it will do the following:
You can find more details about the installer script here: https://github.com/microsoft/mdefordownlevelserver
Md4ws.msi and WindowsDefenderATPOnboardingScript.cmd – Both files can be downloaded from the Microsoft 365 Security portal.
Within the Microsoft 365 Security portal, select Settings / Endpoints / Device Management / Onboarding, then select Windows Server 2012 R2 and 2016 Preview and for the deployment method select Microsoft Endpoint Configuration Manager.
Next, leave the operating system selection as it is, but now select the deployment option that mentions ‘using Microsoft Monitoring Agent’.
Then note down the workspace ID; we’re going to use it later.
Here’s the configuration of my MDE upgrade package in Microsoft Endpoint Configuration Manager.
Next let’s look at the Program properties of the package.
The command line is as follows, please replace <ADD YOUR WORKSPACE ID HERE> with the workspace ID that you noted down previously.
"%Windir%\sysnative\WindowsPowerShell\v1.0\powershell.exe" -ExecutionPolicy Bypass -Command .\Install.ps1 -RemoveMMA <ADD YOUR WORKSPACE ID HERE> -log -etl -OnboardingScript ".\WindowsDefenderATPOnboardingScript.CMD"
Great, now that you have prepared the package, let’s deploy it. But do not forget to distribute the content of the package to your distribution points (back in the days when I used to support ConfigMgr, that would have been my first question when people called about a package not installing).
Now where to deploy? I guess for your initial deployment you know exactly which systems you want to upgrade. But before moving on with the deployment, let me show you a handy tip for identifying systems that have the MMA agent deployed with the MDE workspace ID configured.
Run CMPivot on a device collection that includes your existing MDE-onboarded servers and then add the workspace ID to the query as shown below.
In this example, Server2016-03 was identified as having the MMA agent pointing to the MDE workspace. This is important to know because the MMA agent can point to multiple workspaces; for example, you might also be using the agent to collect Windows security event logs or performance data. Knowing your current MMA configuration will help you identify the systems where you can completely remove the MMA agent later or leave it running.
Okay, now let’s deploy the upgrade package. For this I created a collection within Microsoft Endpoint Configuration Manager and added the server to the collection.
Note that I have set the Deployment to ‘Available’ for demo purposes, to run this automatically in production, set this to ‘Required’.
Here’s our system before the upgrade.
Open the Microsoft Endpoint Configuration Manager Software Center and install the package.
Remember we added the -log and -etl command line options to the install.ps1 script? You will find the log files within the ccmcache folder where the package was downloaded.
Here’s our system after the upgrade
That’s it for today, thanks for reading my blog
Alex
Onboard Windows servers to the Microsoft Defender for Endpoint service | Microsoft Docs
Server migration scenarios for the new version of Microsoft Defender for Endpoint | Microsoft Docs
Just in case you missed this: earlier in October, Microsoft announced the public preview of the Microsoft Defender for Endpoint unified solution for Windows Server 2012 R2 and 2016, which enables additional protection features and brings a high level of parity with Microsoft Defender for Endpoint on Windows Server 2019. The unified solution also provides a much simpler onboarding experience.
Before taking a closer look at the new unified solution, let’s briefly look at how things worked until now. Onboarding Windows 10 and Windows Server 2019 is simple: all you need to do is run an onboarding script that enables the Microsoft Defender for Endpoint component that is already built into the operating system, i.e. there’s no need to deploy and install any additional software. Things are different with Windows Server 2012 R2 and Windows Server 2016 though.
As you can see, the onboarding experience for Server 2012-R2 and Server 2016 was a bit complex but with the new unified solution this complexity is removed. Let’s try this out.
When you select the onboarding options for Servers within the Microsoft Defender for Endpoint portal, you will now see two options.
Today we will look at the local script option (other options will be discussed in a future post).
The md4ws.msi installation package includes all the components you need to run Microsoft Defender for Endpoint on Server 2012-R2 and Server 2016. Now let’s install this on a Windows Server 2012-R2 device.
Once completed, Windows Defender and Defender for Endpoint are installed.
Now that we have ‘component’ parity with Windows 10 and Windows Server 2019, all we need to do to activate Microsoft Defender for Endpoint is run the onboarding script.
While mssense.exe was running as a process when using the Log Analytics agent to deliver Defender for Endpoint, with the unified solution it now runs as a service.
The new unified solution also enables the following protection capabilities for Server 2012-R2 and Server 2016.
When looking at the device actions, you will notice that the unified solution enables additional capabilities.
That’s it for today, in the next blog post we will look at migrating servers currently running the SCEP/Log Analytics agent to use the new unified solution.
In today’s blog post I want to share with you an advanced hunting query to detect audit policy modifications using Microsoft 365 Defender advanced hunting. Following the MITRE ATT&CK framework this would be T1484.001 Domain Policy Modification: Group Policy Modification.
Microsoft Defender for Endpoint can help us detect audit policy modifications by running the following query:
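A reduced sketch of such a query is shown here; the field names inside AdditionalFields can be verified against a sample event in your tenant:

```kusto
DeviceEvents
| where ActionType == "AuditPolicyModification"
| extend af = parse_json(AdditionalFields)
| extend AuditPolicyChanges = tostring(af.AuditPolicyChanges),
         CategoryId = tostring(af.CategoryId),
         SubcategoryGuid = tostring(af.SubcategoryGuid)
| project Timestamp, DeviceName, InitiatingProcessAccountName,
          AuditPolicyChanges, CategoryId, SubcategoryGuid
```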
Detailed information about the audit policy changes is displayed in the AdditionalFields data. Now all we need to do is to translate these values into human readable data.
AuditPolicyChanges – This field describes the changes that were made. Within the query I first removed the % and blanks, then used the following case statements to translate the values.
These relate to when you configure auditing settings as shown in the example below.
CategoryId – the ID of the auditing category whose subcategory was changed. The values are translated as follows:
SubcategoryGuid – the unique subcategory GUID. A complete list of the GUIDs can be found here: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-gpac/77878370-0712-47cd-997d-b07053429f6d or you can also run the following command:
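The command referenced here is presumably auditpol, which can list every audit subcategory together with its GUID when run from an elevated prompt:

```
auditpol /list /subcategory:* /v
```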
Within the query, the values are translated as follows:
Great, so now that we have done all the translation work, let’s run the query:
Now, this query by itself will return a lot of results; what you want to look for are audit policy changes where Success and/or Failure auditing is removed.
Here’s another query: assuming that you have also onboarded your domain controllers into Defender for Endpoint, you can use the following advanced hunting query to find audit policy changes by searching for the audit.csv file where the audit policy settings are stored.
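A sketch of that hunt, looking for changes to the audit.csv that Group Policy stores under SYSVOL:

```kusto
DeviceFileEvents
| where FileName =~ "audit.csv"
| where FolderPath has "sysvol"
| project Timestamp, DeviceName, ActionType, FolderPath,
          InitiatingProcessAccountName, InitiatingProcessFileName
```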
And lastly, we can use the following query to look for any changes to the audit.csv file on clients.
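On clients the effective policy is cached under %systemroot%\security\audit, so a similar query scoped to that path can be used:

```kusto
DeviceFileEvents
| where FileName =~ "audit.csv"
| where FolderPath has @"\security\audit"
| project Timestamp, DeviceName, ActionType, FolderPath, InitiatingProcessFileName
```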
I hope you enjoyed this blog post. You can find all the advanced hunting queries here on my GitHub.
In this blog post I am going to show you how you can quickly (in 5 minutes) deploy Windows 11 in Hyper-V using the AutomatedLab PowerShell module. In fact the process is no different from deploying other Windows operating systems, but just in case you haven’t heard of AutomatedLab yet and plan to install Windows 11 in a VM, this might be a good opportunity to get familiar with it.
I am just going to assume that you have the Hyper-V role already enabled on your Windows 10 device. Follow the next steps to install the AutomatedLab PowerShell module, download the ISO and deploy your first VM.
Open Windows PowerShell as Administrator and run the following command to install AutomatedLab.
Install-Module AutomatedLab -AllowClobber
Next run the following command to create the Lab sources folder
New-LabSourcesFolder -Drive C
Now we have to download the Windows 11 ISO file and save it in the lab sources\ISO folder as shown below. Note that to access the Windows Insider download page, you must be a member of the Windows Insider program.
Now, because the generic product key isn’t known yet and AutomatedLab looks for product keys in “C:\ProgramData\AutomatedLab\Assets\ProductKeys.xml”, we have to tweak one script within the AutomatedLab module to skip the product key check. Depending on when you read this blog post, this step might no longer be necessary.
Open “C:\Program Files\WindowsPowerShell\Modules\AutomatedLab\5.39.0\AutomatedLabDisks.psm1” and comment out lines 28 to 34 as shown below.
Okay, now we’re good to go. Here’s the Windows 11 installation script that I created from the sample script “C:\LabSources\SampleScripts\HyperV\Single 10 Client.ps1”. Note the value of the -OperatingSystem parameter.
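For reference, a minimal version of such a script might look like this (the lab and machine names are examples, and the operating system string must match an edition available in your ISO folder):

```powershell
# Define a Hyper-V lab with a single Windows 11 client and deploy it
New-LabDefinition -Name 'Win11Lab' -DefaultVirtualizationEngine HyperV

Add-LabMachineDefinition -Name 'Win11Client01' -Memory 4GB `
    -OperatingSystem 'Windows 11 Enterprise'

Install-Lab
Show-LabDeploymentSummary
```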
To check what operating systems you can deploy, simply run the following command which will list all the OS versions and editions available in the AutomatedLab ISO source folder.
Get-LabAvailableOperatingSystem
Now let’s run our script and the deployment of Windows 11 starts
Note: when you run AutomatedLab for the first time, you will see some prompts related to PowerShell remoting.
Also, the very first time you install a certain version of Windows, AutomatedLab will create a base image; this can take a while but speeds up future installations. See below: the second deployment only took 5 minutes and 10 seconds.
Next connect to the VM, you’ll notice that the user Install is already logged on.
Before we can use the client for further testing, we have to reconfigure a few settings that were used for the AutomatedLab deployment.
Reboot the device and continue using Windows 11 as you like. I hope I could demonstrate how easy it is to deploy Windows 11 or any other Windows OS into a VM within just a few minutes.
If you want to learn more about AutomatedLab, I suggest checking out the following sites:
https://automatedlab.org/en/latest/
https://github.com/AutomatedLab/AutomatedLab
https://sysmansquad.com/2020/06/15/getting-started-with-automatedlab/
Enjoy Windows 11!
Now, in my opinion it must be the IT infrastructure operations team’s responsibility to ensure that devices get their patches installed and Defender gets its platform and definition updates. But sometimes the reason devices don’t get updates is that the platform used to manage the deployment of these updates has an issue, be it on the backend or the client side.
The good news is that if you have Microsoft Defender for Endpoint deployed, we can monitor the health of Microsoft Defender (and more) through the information it collects. We can easily identify devices with outdated Defender definition updates by using the Threat and Vulnerability portal or by using advanced hunting.
When opening the Threat and Vulnerability portal within Microsoft Defender for Endpoint, select the Recommendations blade and search for ‘Update Microsoft Defender’. You will see the recommendation as shown in the example below.
When selecting the exposed devices tab, you get a list of all the devices where definitions are outdated.
Now while you can see the devices, we do not see the date of the currently installed definition update. Are the definitions 2 weeks old, 4 weeks or did the system never install definition updates at all?
KQL to the rescue! Through advanced hunting we can gather additional information. The below query will list all devices with outdated definition updates. The results are enriched with information about the Defender engine and platform versions, as well as when the assessment was last conducted and when the device was last seen.
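The heart of such a query is the secure configuration assessment for Defender definitions; a reduced sketch (the configuration id scid-2011 is commonly used for "Update Microsoft Defender antimalware definitions", but verify it in your tenant):

```kusto
DeviceTvmSecureConfigurationAssessment
| where ConfigurationId == "scid-2011" and IsApplicable == 1 and IsCompliant == 0
| project DeviceName, OSPlatform, Timestamp
| order by Timestamp desc
```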
The second query allows you to search for devices where the last signature update happened within a certain time period.
You can find both advanced hunting queries in my GitHub repository here: https://github.com/alexverboon/MDATP/blob/master/AdvancedHunting/MDE%20-%20Outdated%20Defender%20Signatures.md
Credits! I would like to thank Jan Geisbauer (@janvonkirchheim) for the inspiration; Jan shared the initial KQL query that served as the basis for the further development on this topic.
]]>In my earlier post Use Microsoft Endpoint Configuration Manager to stop the Windows Print Spooler Service – Anything about IT (verboon.info)
I explained how to stop the Print Spooler service using Microsoft Endpoint Configuration Manager, leveraging CMPivot to identify servers where the Print Spooler is running and the Run Script function to stop and disable the service. This method was intended as a first-response action; however, as new servers get deployed, we want to make sure the Print Spooler remains disabled, so we need a more permanent solution.
In this blog post I will explain how we can use Microsoft Endpoint Configuration Manager and a Configuration Baseline to ensure the Print Spooler is stopped and disabled. And yes, this blog post is intended for those who for whatever reason cannot or do not want to use AD Group Policy.
First download the scripts from my GitHub repo https://github.com/alexverboon/PowerShellCode/tree/main/PrintSpooler/MEMCMBaseLine and save them locally as shown in the example below.
Next open the Microsoft Endpoint Configuration Manager console and launch PowerShell ISE from the console.
Next load the function that is included in New-CMCIPrintSpoolerService.ps1 and then run the function that creates the Configuration Item in Microsoft Endpoint Configuration Manager.
. .\New-CMCIPrintSpoolerService.ps1
New-CMCIPrintSpoolerService -SiteCode P01 -SiteServer cm01.corp.net -Verbose
When all goes well, you now have a new Configuration Item.
The CI has both the discovery script and remediation script embedded.
Next create a configuration baseline and include the newly created configuration item.
And finally deploy the configuration baseline to a device collection that includes all servers where the print spooler must be disabled. As soon as the device picks up the configuration baseline, you can verify the status on the device.
Test the configuration baseline by setting the print spooler to automatic and/or start it, and then run the evaluation again.
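To simulate configuration drift for this test, you can flip the service back manually on a test device (assuming local administrator rights) before triggering the evaluation:

```powershell
# Re-enable and start the Print Spooler to simulate configuration drift
Set-Service -Name Spooler -StartupType Automatic
Start-Service -Name Spooler
# Verify the service is running again before re-running the baseline evaluation
(Get-Service -Name Spooler).Status
```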
If all works as expected, the service is stopped and set to disabled.
You can find the scripts mentioned in this blog post here on GitHub: https://github.com/alexverboon/PowerShellCode/tree/main/PrintSpooler/MEMCMBaseLine
I would also like to refer to another blog post from Thijs Lecomte, where he describes how to use MEM to deploy Print Spooler patches and configuration through Microsoft Intune.
Have a great day
Alex
]]>I guess by now, everyone has heard of the Windows Print Spooler Remote Code Execution Vulnerability (CVE-2021-34527). At this time Microsoft recommends disabling the Print Spooler service on domain controllers and on servers where it is not needed or to Disable inbound remote printing through Group Policy. In this short blog post I will demonstrate how you can use Microsoft Endpoint Configuration Manager to identify systems where the print spooler service is running and how to stop and disable the service.
Disclaimer! I have only tested this in my lab so far.
We can leverage CMPivot to find systems where the Print Spooler service is running and configured to start automatically by running the following query:
Services | where Name == 'spooler' | project Device,Startmode,State,Name
Import the following script into the Script library
<#
.Synopsis
   Disable Print Spooler Service
.DESCRIPTION
   Disable Print Spooler Service to mitigate the Windows Print Spooler Remote Code Execution Vulnerability
   https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-34527
.NOTES
   03.07.2021, v1.0.0, alex verboon
#>
Begin{
    $PrintSpoolerState  = (Get-Service -Name Spooler).Status
    $PrintSpoolStartMode = (Get-Service -Name Spooler).StartType
}
Process{
    If ($PrintSpoolerState -ne "Stopped"){
        Write-Host "Print Spooler is not stopped, stopping it now"
        Stop-Service -Name Spooler -Force
    }
    If ($PrintSpoolStartMode -ne "Disabled"){
        Write-Host "Print Spooler is not disabled, disabling it now"
        Set-Service -Name Spooler -StartupType Disabled
    }
}
End{}
Now that we have our script within the script library, we can execute it on the device.
Once executed, when we run the query in CMPivot again, we see that the Print Spooler service is now stopped and its startup type is disabled.
Okay, we always need a rollback plan, so just in case something stops working and you need to revert the change, here’s how to set the start mode back to Automatic and start the Print Spooler service. You might want to import this script into the script library as well.
<#
.Synopsis
   Enable Print Spooler Service
.DESCRIPTION
   Enable Print Spooler Service
.NOTES
   03.07.2021, v1.0.0, alex verboon
#>
Begin{
    $PrintSpoolerState  = (Get-Service -Name Spooler).Status
    $PrintSpoolStartMode = (Get-Service -Name Spooler).StartType
}
Process{
    If ($PrintSpoolStartMode -ne "Automatic"){
        Write-Host "Print Spooler is not set to autostart, configuring that now"
        Set-Service -Name Spooler -StartupType Automatic
    }
    If ($PrintSpoolerState -ne "Running"){
        Write-Host "Print Spooler is stopped, starting it now"
        Start-Service -Name Spooler
    }
}
End{}
Hope this helps you with your mitigation actions.
Alex
]]>Should you ever run into an issue with onboarding devices, I recommend checking the guidance provided here: Troubleshoot Microsoft Defender for Endpoint onboarding issues. Now if you have just a couple of devices to manage, you will most likely spot any missing device within the Defender for Endpoint management portal. But what if you have several hundred or even thousands of devices: how would you find out that a particular device, Computer0073 in Building D1 on the 6th floor, isn’t correctly onboarded?
If we take security seriously and apply good IT infrastructure hygiene, we must ensure that every managed device on the network is properly onboarded in Defender for Endpoint.
In this blog post I will share a solution that we put together recently to remediate onboarding issues on devices that are managed by Microsoft Endpoint Configuration Manager.
When managing devices with Microsoft Endpoint Configuration Manager, you are most likely using a Microsoft Defender for Endpoint policy to onboard devices into Microsoft Defender for Endpoint.
Microsoft Endpoint Configuration Manager then pushes down the onboarding policy just like any other configuration baseline, and when executed, the device is onboarded into Defender for Endpoint. You can verify the state on a client as shown in the example below.
Another way to check the onboarding state is to use CMPivot; run the following query to retrieve the MDE onboarding state.
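The query itself was shown as a screenshot in the original post; a CMPivot registry query along these lines should return the onboarding state (the registry path is Microsoft’s documented Defender for Endpoint status key, while the exact projection is a sketch):

```kql
// OnboardingState = 1 means the device is onboarded into Defender for Endpoint
Registry('hklm:\\SOFTWARE\\Microsoft\\Windows Advanced Threat Protection\\Status')
| where Property == 'OnboardingState'
| project Device, Property, Value
```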
You also want to check the state of the services.
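A quick way to do that with CMPivot, using the same Services entity shown earlier (Sense is the Defender for Endpoint sensor service, WinDefend the antivirus engine):

```kql
// Both services should be running on a healthy, onboarded device
Services
| where Name == 'Sense' or Name == 'WinDefend'
| project Device, Name, State, StartMode
```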
Now when it comes to onboarding issues, I have seen a couple of situations:
On the troubleshooting page mentioned previously, Microsoft describes that this can happen when:
Sometimes just restarting the service works; another option is to rerun the ConfigMgr compliance evaluation on the client, either locally or by invoking the compliance evaluation remotely. But I have also seen devices where the onboarding policy on the device was broken.
When all of the above does not work, the final action that in most cases solves these issues is to re-run the onboarding script manually. But again, with hundreds or thousands of clients to manage, you do not want to rely on a manual task; what we need is automation.
With Microsoft Endpoint Configuration Manager, you have several options to identify systems that are not onboarded into Defender for Endpoint. When using manually created collections, you will need two collections: one that contains all the devices where the onboarding state value is set to 1, and another that excludes the first. This is because when a device is not onboarded, there is no onboarding state attribute in the device’s inventory.
Below is the collection query for devices that are onboarded
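A sketch of what such a WQL collection query can look like; it assumes you have extended hardware inventory with the OnboardingState registry value into a custom inventory class (named SMS_G_System_DEFENDERATP here purely for illustration — use your own class name):

```sql
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
inner join SMS_G_System_DEFENDERATP
    on SMS_G_System_DEFENDERATP.ResourceId = SMS_R_System.ResourceId
where SMS_G_System_DEFENDERATP.OnboardingState = 1
```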
Great, we now have visibility of devices that are not onboarded into Defender for Endpoint, so let us move on. To re-run the onboarding script on devices that have onboarding issues, we leverage the compliance baseline capability of Microsoft Endpoint Configuration Manager.
My first idea was to simply embed the onboarding file, which is a batch script, into a configuration item, but that turned out to be a cumbersome approach, so my colleague Athi (@AKugaseelan) came up with the idea to convert the onboarding script into a Base64 string that we then embed into the remediation script.
To convert the onboarding file into the Base64 string, download the onboarding file from the Defender for Endpoint portal. Make sure to select the Group Policy version, because that script does not prompt to confirm the script execution. Once downloaded, extract the script from the ZIP file.
Next, adjust the $onbaordingScript variable in the helper script and then run it.
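The helper’s core logic is roughly the following sketch; the file name and output path are assumptions, and the actual helper script is in the linked repository:

```powershell
# Read the extracted onboarding batch script and emit it as a Base64 string
$onboardingScript = ".\WindowsDefenderATPOnboardingScript.cmd"   # path assumption
$bytes = [System.IO.File]::ReadAllBytes((Resolve-Path $onboardingScript).Path)
[System.Convert]::ToBase64String($bytes) | Out-File -FilePath .\mdeonboardbase64.txt
```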
Open the generated mdeonboardbase64.txt and copy the content into the clipboard
Next, open the script CI_DefenderOnboarding_Remediation.ps1
And then copy the previously generated base64 string into the script.
Now that we have the remediation script ready for our configuration item, we need to get it into Microsoft Endpoint Configuration Manager. You can create the CI manually and import the script, or use the New-CMCIDefenderOnboarding_Remediation.ps1 script that I include in the source code, which will create the CI for you.
The CI has two scripts embedded.
The CI_DefenderOnboarding_Discovery.ps1 script simply checks the onboarding status by querying the appropriate registry key.
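The discovery check itself is small; a sketch of the registry lookup (the status key path is Microsoft’s documented location for the onboarding state, and the returned strings are illustrative):

```powershell
# Return the Defender for Endpoint onboarding state (1 = onboarded)
$key   = "HKLM:\SOFTWARE\Microsoft\Windows Advanced Threat Protection\Status"
$state = (Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).OnboardingState
if ($state -eq 1) { Write-Output "Onboarded" } else { Write-Output "NotOnboarded" }
```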
The CI_DefenderOnboarding_Remediation.ps1 script does the following:
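The list of steps was shown as a screenshot in the original post; conceptually, the remediation script decodes the embedded Base64 string back into the onboarding batch file and executes it, roughly like this sketch (variable names and the temp-file approach are assumptions, not the repository script verbatim):

```powershell
# Decode the embedded onboarding script and run it silently
$b64   = "<paste the generated Base64 string here>"
$bytes = [System.Convert]::FromBase64String($b64)
$path  = Join-Path $env:TEMP "MDEOnboard.cmd"
[System.IO.File]::WriteAllBytes($path, $bytes)
Start-Process -FilePath $path -Wait -WindowStyle Hidden
Remove-Item -Path $path -Force   # clean up the decoded script afterwards
```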
The CI is now in the Console so we can continue creating the configuration baseline.
When created, we deploy the configuration baseline to our collection that contains devices ‘not onboarded’ into defender for endpoint.
On the client we see that the device is not onboarded and the configuration baseline hasn’t run yet.
And as soon as the CI is triggered the device is successfully onboarded.
And after a while we have our client back under control.
That is it for today; I hope you found this useful and that it helps you get devices successfully onboarded into Defender for Endpoint. You can find all the scripts referenced in this blog post in my GitHub repository here: https://github.com/alexverboon/PowerShellCode/tree/main/DefenderforEndpoint/Onboarding
]]>So, heading over to Microsoft Graph, we can grab all the authentication methods for users as shown in the example below.
So, I created Get-AzureADUserAuthMethodInventory.ps1, the script first retrieves all users in AzureAD and then retrieves the registered authentication methods for each user.
If you have not done so yet, install the Microsoft Graph PowerShell modules
Find-Module -Name "Microsoft.Graph" | Install-Module -Scope CurrentUser
Find-Module -Name Microsoft.Graph.Identity.AuthenticationMethods | Install-Module -Scope CurrentUser
Then run the following command
Connect-Graph -Scopes @("UserAuthenticationMethod.Read.All", "User.Read.All")
Follow the instructions and grant consent
And finally run the script
$AuthInfo = .\Get-AzureADUserAuthMethodInventory.ps1
For each user found in Azure AD, the following information is collected.
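The script’s core loop can be sketched as follows, assuming the Microsoft Graph PowerShell SDK cmdlets Get-MgUser and Get-MgUserAuthenticationMethod; the properties collected here are a minimal illustration, and the full set is in the repository script:

```powershell
# For every Azure AD user, collect the registered authentication methods
$users = Get-MgUser -All
$AuthInfo = foreach ($user in $users) {
    $methods = Get-MgUserAuthenticationMethod -UserId $user.Id
    foreach ($method in $methods) {
        [PSCustomObject]@{
            UserPrincipalName = $user.UserPrincipalName
            # The OData type identifies the method, e.g. microsoftAuthenticatorAuthenticationMethod
            MethodType        = $method.AdditionalProperties.'@odata.type'
            MethodId          = $method.Id
        }
    }
}
```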
Filter the results as needed.
The script and instructions can be found on GitHub here: https://github.com/alexverboon/PowerShellCode/tree/main/AzureAD/MFA/MfaAuthMethodsAnalysisV2
Hope you liked this blog post, as always feedback is welcome
Alex
]]>