diff --git a/prowler/compliance/aws/cis_1.4_aws.json b/prowler/compliance/aws/cis_1.4_aws.json index 3f21bda2..05ccb761 100644 --- a/prowler/compliance/aws/cis_1.4_aws.json +++ b/prowler/compliance/aws/cis_1.4_aws.json @@ -15,11 +15,11 @@ "Section": "1. Identity and Access Management", "Profile": "Level 1", "AssessmentStatus": "Manual", - "Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization.\n\nAn AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy or indicative of likely security compromise is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system.", + "Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization. An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy or indicative of likely security compromise is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system.", "RationaleStatement": "If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question, so it is in both the customers' and AWS' best interests that prompt contact can be established. This is best achieved by setting AWS account contact details to point to resources which have multiple individuals as recipients, such as email aliases and PABX hunt groups.", "ImpactStatement": "", - "RemediationProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing ).\n\n1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/.\n2. On the navigation bar, choose your account name, and then choose `My Account`.\n3. On the `Account Settings` page, next to `Account Settings`, choose `Edit`.\n4. Next to the field that you need to update, choose `Edit`.\n5. After you have entered your changes, choose `Save changes`.\n6. After you have made your changes, choose `Done`.\n7. To edit your contact information, under `Contact Information`, choose `Edit`.\n8. 
For the fields that you want to change, type your updated information, and then choose `Update`.", - "AuditProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing )\n\n1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/.\n2. On the navigation bar, choose your account name, and then choose `My Account`.\n3. On the `Account Settings` page, review and verify the current details.\n4. Under `Contact Information`, review and verify the current details.", + "RemediationProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing ). 1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose `My Account`. 3. On the `Account Settings` page, next to `Account Settings`, choose `Edit`. 4. Next to the field that you need to update, choose `Edit`. 5. After you have entered your changes, choose `Save changes`. 6. After you have made your changes, choose `Done`. 7. To edit your contact information, under `Contact Information`, choose `Edit`. 8. For the fields that you want to change, type your updated information, and then choose `Update`.", + "AuditProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing ) 1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose `My Account`. 3. On the `Account Settings` page, review and verify the current details. 4. Under `Contact Information`, review and verify the current details.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-account-payment.html#contact-info" } @@ -39,9 +39,9 @@ "Description": "Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional credentials. With MFA enabled, when a user signs in to the AWS Console, they will be prompted for their user name and password as well as for an authentication code from their physical or virtual MFA token. It is recommended that MFA be enabled for all accounts that have a console password.", "RationaleStatement": "Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that displays a time-sensitive key and have knowledge of a credential.", "ImpactStatement": "AWS will soon end support for SMS multi-factor authentication (MFA). New customers are not allowed to use this feature. We recommend that existing customers switch to one of the following alternative methods of MFA.", - "RemediationProcedure": "Perform the following to enable MFA:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/'\n2. In the left pane, select `Users`.\n3. In the `User Name` list, choose the name of the intended MFA user.\n4. Choose the `Security Credentials` tab, and then choose `Manage MFA Device`.\n5. 
In the `Manage MFA Device wizard`, choose `Virtual MFA` device, and then choose `Continue`.\n\n IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.\n\n6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).\n7. Determine whether the MFA app supports QR codes, and then do one of the following:\n\n - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.\n - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.\n\n When you are finished, the virtual MFA device starts generating one-time passwords.\n\n8. In the `Manage MFA Device wizard`, in the `MFA Code 1 box`, type the `one-time password` that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second `one-time password` into the `MFA Code 2 box`.\n\n9. Click `Assign MFA`.", - "AuditProcedure": "Perform the following to determine if a MFA device is enabled for all IAM users having a console password:\n\n**From Console:**\n\n1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the left pane, select `Users` \n3. If the `MFA` or `Password age` columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click `Close`.\n4. Ensure that for each user where the `Password age` column shows a password age, the `MFA` column shows `Virtual`, `U2F Security Key`, or `Hardware`.\n\n**From Command Line:**\n\n1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status:\n```\n aws iam generate-credential-report\n```\n```\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8 \n```\n2. The output of this command will produce a table similar to the following:\n```\n user,password_enabled,mfa_active\n elise,false,false\n brandon,true,true\n rakesh,false,false\n helene,false,false\n paras,true,true\n anitha,false,false \n```\n3. For any column having `password_enabled` set to `true` , ensure `mfa_active` is also set to `true.`", - "AdditionalInformation": "**Forced IAM User Self-Service Remediation**\n\nAmazon has published a pattern that forces users to self-service setup MFA before they have access to their complete permissions set. Until they complete this step, they cannot access their full permissions. This pattern can be used on new AWS accounts. It can also be used on existing accounts - it is recommended users are given instructions and a grace period to accomplish MFA enrollment before active enforcement on existing AWS accounts.", + "RemediationProcedure": "Perform the following to enable MFA: **From Console:** 1. Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/' 2. 
In the left pane, select `Users`. 3. In the `User Name` list, choose the name of the intended MFA user. 4. Choose the `Security Credentials` tab, and then choose `Manage MFA Device`. 5. In the `Manage MFA Device wizard`, choose `Virtual MFA` device, and then choose `Continue`. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following: - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code. - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application. When you are finished, the virtual MFA device starts generating one-time passwords. 8. In the `Manage MFA Device wizard`, in the `MFA Code 1 box`, type the `one-time password` that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second `one-time password` into the `MFA Code 2 box`. 9. Click `Assign MFA`.", + "AuditProcedure": "Perform the following to determine if a MFA device is enabled for all IAM users having a console password: **From Console:** 1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the left pane, select `Users` 3. If the `MFA` or `Password age` columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click `Close`. 4. Ensure that for each user where the `Password age` column shows a password age, the `MFA` column shows `Virtual`, `U2F Security Key`, or `Hardware`. **From Command Line:** 1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status: ``` aws iam generate-credential-report ``` ``` aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8 ``` 2. The output of this command will produce a table similar to the following: ``` user,password_enabled,mfa_active elise,false,false brandon,true,true rakesh,false,false helene,false,false paras,true,true anitha,false,false ``` 3. For any column having `password_enabled` set to `true` , ensure `mfa_active` is also set to `true.`", + "AdditionalInformation": "**Forced IAM User Self-Service Remediation** Amazon has published a pattern that forces users to self-service setup MFA before they have access to their complete permissions set. Until they complete this step, they cannot access their full permissions. This pattern can be used on new AWS accounts. 
It can also be used on existing accounts - it is recommended users are given instructions and a grace period to accomplish MFA enrollment before active enforcement on existing AWS accounts.", "References": "https://tools.ietf.org/html/rfc6238:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#enable-mfa-for-privileged-users:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html:https://blogs.aws.amazon.com/security/post/Tx2SJJYE082KBUK/How-to-Delegate-Management-of-Multi-Factor-Authentication-to-AWS-IAM-Users" } ] @@ -57,11 +57,11 @@ "Section": "1. Identity and Access Management", "Profile": "Level 1", "AssessmentStatus": "Automated", - "Description": "AWS console defaults to no check boxes selected when creating a new IAM user. When cerating the IAM User credentials you have to determine what type of access they require. \n\nProgrammatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user. \n\nAWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.", - "RationaleStatement": "Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization.\n\n**Note**: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation.", + "Description": "AWS console defaults to no check boxes selected when creating a new IAM user. When cerating the IAM User credentials you have to determine what type of access they require. Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user. AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.", + "RationaleStatement": "Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization. **Note**: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to delete access keys that do not pass the audit:\n\n**From Console:**\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. As an Administrator \n - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used.\n7. 
As an IAM User\n - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used.\n\n**From Command Line:**\n```\naws iam delete-access-key --access-key-id --user-name \n```", - "AuditProcedure": "Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on a User where column `Password age` and `Access key age` is not set to `None`\n5. Click on `Security credentials` Tab\n6. Compare the user 'Creation time` to the Access Key `Created` date.\n6. For any that match, the key was created during initial user setup.\n\n- Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below.\n\n**From Command Line:**\n\n1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access keys utilization:\n```\n aws iam generate-credential-report\n```\n```\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16\n```\n2. The output of this command will produce a table similar to the following:\n```\nuser,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date\n elise,false,true,2015-04-16T15:14:00+00:00,false,N/A\n brandon,true,true,N/A,false,N/A\n rakesh,false,false,N/A,false,N/A\n helene,false,true,2015-11-18T17:47:00+00:00,false,N/A\n paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00\n anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A \n```\n3. For any user having `password_enabled` set to `true` AND `access_key_last_used_date` set to `N/A` refer to the remediation below.", + "RemediationProcedure": "Perform the following to delete access keys that do not pass the audit: **From Console:** 1. Login to the AWS Management Console: 2. Click `Services` 3. Click `IAM` 4. Click on `Users` 5. Click on `Security Credentials` 6. As an Administrator - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used. 7. As an IAM User - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used. **From Command Line:** ``` aws iam delete-access-key --access-key-id --user-name ```", + "AuditProcedure": "Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed: **From Console:** 1. Login to the AWS Management Console 2. Click `Services` 3. Click `IAM` 4. Click on a User where column `Password age` and `Access key age` is not set to `None` 5. Click on `Security credentials` Tab 6. Compare the user 'Creation time` to the Access Key `Created` date. 6. For any that match, the key was created during initial user setup. - Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below. **From Command Line:** 1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access keys utilization: ``` aws iam generate-credential-report ``` ``` aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16 ``` 2. 
The output of this command will produce a table similar to the following: ``` user,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date elise,false,true,2015-04-16T15:14:00+00:00,false,N/A brandon,true,true,N/A,false,N/A rakesh,false,false,N/A,false,N/A helene,false,true,2015-11-18T17:47:00+00:00,false,N/A paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00 anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A ``` 3. For any user having `password_enabled` set to `true` AND `access_key_last_used_date` set to `N/A` refer to the remediation below.", "AdditionalInformation": "Credential report does not appear to contain \"Key Creation Date\"", "References": "https://docs.aws.amazon.com/cli/latest/reference/iam/delete-access-key.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html" } @@ -82,8 +82,8 @@ "Description": "AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. It is recommended that all credentials that have been unused in 45 or greater days be deactivated or removed.", "RationaleStatement": "Disabling or removing unnecessary credentials will reduce the window of opportunity for credentials associated with a compromised or abandoned account to be used.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\nPerform the following to manage Unused Password (IAM user console access)\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. Select user whose `Console last sign-in` is greater than 45 days\n7. Click `Security credentials`\n8. In section `Sign-in credentials`, `Console password` click `Manage` \n9. Under Console Access select `Disable`\n10.Click `Apply`\n\nPerform the following to deactivate Access Keys:\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. Select any access keys that are over 45 days old and that have been used and \n - Click on `Make Inactive`\n7. Select any access keys that are over 45 days old and that have not been used and \n - Click the X to `Delete`", - "AuditProcedure": "Perform the following to determine if unused credentials exist:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM`\n4. Click on `Users`\n5. Click the `Settings` (gear) icon.\n6. Select `Console last sign-in`, `Access key last used`, and `Access Key Id`\n7. Click on `Close` \n8. Check and ensure that `Console last sign-in` is less than 45 days ago.\n\n**Note** - `Never` means the user has never logged in.\n\n9. Check and ensure that `Access key age` is less than 45 days and that `Access key last used` does not say `None`\n\nIf the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old refer to the remediation.\n\n**From Command Line:**\n\n**Download Credential Report:**\n\n1. Run the following commands:\n```\n aws iam generate-credential-report\n\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^'\n```\n\n**Ensure unused credentials do not exist:**\n\n2. 
For each user having `password_enabled` set to `TRUE` , ensure `password_last_used_date` is less than `45` days ago.\n\n- When `password_enabled` is set to `TRUE` and `password_last_used` is set to `No_Information` , ensure `password_last_changed` is less than 45 days ago.\n\n3. For each user having an `access_key_1_active` or `access_key_2_active` to `TRUE` , ensure the corresponding `access_key_n_last_used_date` is less than `45` days ago.\n\n- When a user having an `access_key_x_active` (where x is 1 or 2) to `TRUE` and corresponding access_key_x_last_used_date is set to `N/A', ensure `access_key_x_last_rotated` is less than 45 days ago.", + "RemediationProcedure": "**From Console:** Perform the following to manage Unused Password (IAM user console access) 1. Login to the AWS Management Console: 2. Click `Services` 3. Click `IAM` 4. Click on `Users` 5. Click on `Security Credentials` 6. Select user whose `Console last sign-in` is greater than 45 days 7. Click `Security credentials` 8. In section `Sign-in credentials`, `Console password` click `Manage` 9. Under Console Access select `Disable` 10.Click `Apply` Perform the following to deactivate Access Keys: 1. Login to the AWS Management Console: 2. Click `Services` 3. Click `IAM` 4. Click on `Users` 5. Click on `Security Credentials` 6. Select any access keys that are over 45 days old and that have been used and - Click on `Make Inactive` 7. Select any access keys that are over 45 days old and that have not been used and - Click the X to `Delete`", + "AuditProcedure": "Perform the following to determine if unused credentials exist: **From Console:** 1. Login to the AWS Management Console 2. Click `Services` 3. Click `IAM` 4. Click on `Users` 5. Click the `Settings` (gear) icon. 6. Select `Console last sign-in`, `Access key last used`, and `Access Key Id` 7. Click on `Close` 8. Check and ensure that `Console last sign-in` is less than 45 days ago. **Note** - `Never` means the user has never logged in. 9. Check and ensure that `Access key age` is less than 45 days and that `Access key last used` does not say `None` If the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old refer to the remediation. **From Command Line:** **Download Credential Report:** 1. Run the following commands: ``` aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^' ``` **Ensure unused credentials do not exist:** 2. For each user having `password_enabled` set to `TRUE` , ensure `password_last_used_date` is less than `45` days ago. - When `password_enabled` is set to `TRUE` and `password_last_used` is set to `No_Information` , ensure `password_last_changed` is less than 45 days ago. 3. For each user having an `access_key_1_active` or `access_key_2_active` to `TRUE` , ensure the corresponding `access_key_n_last_used_date` is less than `45` days ago. 
- When a user having an `access_key_x_active` (where x is 1 or 2) to `TRUE` and corresponding access_key_x_last_used_date is set to `N/A', ensure `access_key_x_last_rotated` is less than 45 days ago.", "AdditionalInformation": " is excluded in the audit since the root account should not be used for day to day business and would likely be unused for more than 45 days.", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_admin-change-user.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html" } @@ -103,8 +103,8 @@ "Description": "Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK)", "RationaleStatement": "Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API. One of the best ways to protect your account is to not allow users to have multiple access keys.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`.\n2. In the left navigation panel, choose `Users`.\n3. Click on the IAM user name that you want to examine.\n4. On the IAM user configuration page, select `Security Credentials` tab.\n5. In `Access Keys` section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.\n6. In the same `Access Keys` section, identify your non-operational access keys (other than the chosen one) and deactivate it by clicking the `Make Inactive` link.\n7. If you receive the `Change Key Status` confirmation box, click `Deactivate` to switch off the selected key.\n8. Repeat steps no. 3 – 7 for each IAM user in your AWS account.\n\n**From Command Line:**\n\n1. Using the IAM user and access key information provided in the `Audit CLI`, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.\n\n2. Run the `update-access-key` command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user\n\n**Note** - the command does not return any output:\n```\naws iam update-access-key --access-key-id --status Inactive --user-name \n```\n3. To confirm that the selected access key pair has been successfully `deactivated` run the `list-access-keys` audit command again for that IAM User:\n```\naws iam list-access-keys --user-name \n```\n- The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) `Status` is set to `Inactive`, the key has been successfully deactivated and the IAM user access configuration adheres now to this recommendation.\n\n4. Repeat steps no. 1 – 3 for each IAM user in your AWS account.", - "AuditProcedure": "**From Console:**\n\n1. 
Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`.\n2. In the left navigation panel, choose `Users`.\n3. Click on the IAM user name that you want to examine.\n4. On the IAM user configuration page, select `Security Credentials` tab.\n5. Under `Access Keys` section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated then the users access configuration does not adhere to security best practices and the risk of accidental exposures increases.\n- Repeat steps no. 3 – 5 for each IAM user in your AWS account.\n\n**From Command Line:**\n\n1. Run `list-users` command to list all IAM users within your account:\n```\naws iam list-users --query \"Users[*].UserName\"\n```\nThe command output should return an array that contains all your IAM user names.\n\n2. Run `list-access-keys` command using the IAM user name list to return the current status of each access key associated with the selected IAM user:\n```\naws iam list-access-keys --user-name \n```\nThe command output should expose the metadata `(\"Username\", \"AccessKeyId\", \"Status\", \"CreateDate\")` for each access key on that user account.\n\n3. Check the `Status` property value for each key returned to determine each keys current state. If the `Status` property value for more than one IAM access key is set to `Active`, the user access configuration does not adhere to this recommendation, refer to the remediation below.\n\n- Repeat steps no. 2 and 3 for each IAM user in your AWS account.", + "RemediationProcedure": "**From Console:** 1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`. 2. In the left navigation panel, choose `Users`. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select `Security Credentials` tab. 5. In `Access Keys` section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 6. In the same `Access Keys` section, identify your non-operational access keys (other than the chosen one) and deactivate it by clicking the `Make Inactive` link. 7. If you receive the `Change Key Status` confirmation box, click `Deactivate` to switch off the selected key. 8. Repeat steps no. 3 – 7 for each IAM user in your AWS account. **From Command Line:** 1. Using the IAM user and access key information provided in the `Audit CLI`, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 2. Run the `update-access-key` command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user **Note** - the command does not return any output: ``` aws iam update-access-key --access-key-id --status Inactive --user-name ``` 3. 
To confirm that the selected access key pair has been successfully `deactivated` run the `list-access-keys` audit command again for that IAM User: ``` aws iam list-access-keys --user-name ``` - The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) `Status` is set to `Inactive`, the key has been successfully deactivated and the IAM user access configuration adheres now to this recommendation. 4. Repeat steps no. 1 – 3 for each IAM user in your AWS account.", + "AuditProcedure": "**From Console:** 1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`. 2. In the left navigation panel, choose `Users`. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select `Security Credentials` tab. 5. Under `Access Keys` section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated then the users access configuration does not adhere to security best practices and the risk of accidental exposures increases. - Repeat steps no. 3 – 5 for each IAM user in your AWS account. **From Command Line:** 1. Run `list-users` command to list all IAM users within your account: ``` aws iam list-users --query \"Users[*].UserName\" ``` The command output should return an array that contains all your IAM user names. 2. Run `list-access-keys` command using the IAM user name list to return the current status of each access key associated with the selected IAM user: ``` aws iam list-access-keys --user-name ``` The command output should expose the metadata `(\"Username\", \"AccessKeyId\", \"Status\", \"CreateDate\")` for each access key on that user account. 3. Check the `Status` property value for each key returned to determine each keys current state. If the `Status` property value for more than one IAM access key is set to `Active`, the user access configuration does not adhere to this recommendation, refer to the remediation below. - Repeat steps no. 2 and 3 for each IAM user in your AWS account.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html" } @@ -122,10 +122,10 @@ "Profile": "Level 1", "AssessmentStatus": "Automated", "Description": "Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated.", - "RationaleStatement": "Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.\n\nAccess keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen.", + "RationaleStatement": "Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. 
Access keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to rotate access keys:\n\n**From Console:**\n\n1. Go to Management Console (https://console.aws.amazon.com/iam)\n2. Click on `Users`\n3. Click on `Security Credentials` \n4. As an Administrator \n - Click on `Make Inactive` for keys that have not been rotated in `90` Days\n5. As an IAM User\n - Click on `Make Inactive` or `Delete` for keys which have not been rotated or used in `90` Days\n6. Click on `Create Access Key` \n7. Update programmatic call with new Access Key credentials\n\n**From Command Line:**\n\n1. While the first access key is still active, create a second access key, which is active by default. Run the following command:\n```\naws iam create-access-key\n```\n\nAt this point, the user has two active access keys.\n\n2. Update all applications and tools to use the new access key.\n3. Determine whether the first access key is still in use by using this command:\n```\naws iam get-access-key-last-used\n```\n4. One approach is to wait several days and then check the old access key for any use before proceeding.\n\nEven if step Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command:\n```\naws iam update-access-key\n```\n5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to step Step 2 and update this application to use the new key.\n\n6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command:\n```\naws iam delete-access-key\n```", - "AuditProcedure": "Perform the following to determine if access keys are rotated as prescribed:\n\n**From Console:**\n\n1. Go to Management Console (https://console.aws.amazon.com/iam)\n2. Click on `Users`\n3. Click `setting` icon\n4. Select `Console last sign-in`\n5. Click `Close`\n6. Ensure that `Access key age` is less than 90 days ago. note) `None` in the `Access key age` means the user has not used the access key.\n\n**From Command Line:**\n\n```\naws iam generate-credential-report\naws iam get-credential-report --query 'Content' --output text | base64 -d\n```\nThe `access_key_1_last_rotated` field in this file notes The date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable).", + "RemediationProcedure": "Perform the following to rotate access keys: **From Console:** 1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on `Users` 3. Click on `Security Credentials` 4. As an Administrator - Click on `Make Inactive` for keys that have not been rotated in `90` Days 5. As an IAM User - Click on `Make Inactive` or `Delete` for keys which have not been rotated or used in `90` Days 6. Click on `Create Access Key` 7. Update programmatic call with new Access Key credentials **From Command Line:** 1. 
While the first access key is still active, create a second access key, which is active by default. Run the following command: ``` aws iam create-access-key ``` At this point, the user has two active access keys. 2. Update all applications and tools to use the new access key. 3. Determine whether the first access key is still in use by using this command: ``` aws iam get-access-key-last-used ``` 4. One approach is to wait several days and then check the old access key for any use before proceeding. Even if step Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command: ``` aws iam update-access-key ``` 5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to step Step 2 and update this application to use the new key. 6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command: ``` aws iam delete-access-key ```", + "AuditProcedure": "Perform the following to determine if access keys are rotated as prescribed: **From Console:** 1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on `Users` 3. Click `setting` icon 4. Select `Console last sign-in` 5. Click `Close` 6. Ensure that `Access key age` is less than 90 days ago. note) `None` in the `Access key age` means the user has not used the access key. **From Command Line:** ``` aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d ``` The `access_key_1_last_rotated` field in this file notes The date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable).", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#rotate-credentials:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html:https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html" } @@ -142,11 +142,11 @@ "Section": "1. Identity and Access Management", "Profile": "Level 1", "AssessmentStatus": "Automated", - "Description": "IAM users are granted access to services, functions, and data through IAM policies. There are three ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy. \n\nOnly the third implementation is recommended.", + "Description": "IAM users are granted access to services, functions, and data through IAM policies. There are three ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy. 
Only the third implementation is recommended.", "RationaleStatement": "Assigning IAM policy only through groups unifies permissions management to a single, flexible layer consistent with organizational functional roles. By unifying permissions management, the likelihood of excessive permissions is reduced.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to create an IAM group and assign a policy to it:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click `Groups` and then click `Create New Group` .\n3. In the `Group Name` box, type the name of the group and then click `Next Step` .\n4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click `Next Step` .\n5. Click `Create Group` \n\nPerform the following to add a user to a given group:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click `Groups` \n3. Select the group to add a user to\n4. Click `Add Users To Group` \n5. Select the users to be added to the group\n6. Click `Add Users` \n\nPerform the following to remove a direct association between a user and policy:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the left navigation pane, click on Users\n3. For each user:\n - Select the user\n - Click on the `Permissions` tab\n - Expand `Permissions policies` \n - Click `X` for each policy; then click Detach or Remove (depending on policy type)", - "AuditProcedure": "Perform the following to determine if an inline policy is set or a policy is directly attached to users:\n\n1. Run the following to get a list of IAM users:\n```\n aws iam list-users --query 'Users[*].UserName' --output text \n```\n2. For each user returned, run the following command to determine if any policies are attached to them:\n```\n aws iam list-attached-user-policies --user-name \n aws iam list-user-policies --user-name \n```\n3. If any policies are returned, the user has an inline policy or direct policy attachment.", + "RemediationProcedure": "Perform the following to create an IAM group and assign a policy to it: 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click `Groups` and then click `Create New Group` . 3. In the `Group Name` box, type the name of the group and then click `Next Step` . 4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click `Next Step` . 5. Click `Create Group` Perform the following to add a user to a given group: 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click `Groups` 3. Select the group to add a user to 4. Click `Add Users To Group` 5. Select the users to be added to the group 6. Click `Add Users` Perform the following to remove a direct association between a user and policy: 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the left navigation pane, click on Users 3. 
For each user: - Select the user - Click on the `Permissions` tab - Expand `Permissions policies` - Click `X` for each policy; then click Detach or Remove (depending on policy type)", + "AuditProcedure": "Perform the following to determine if an inline policy is set or a policy is directly attached to users: 1. Run the following to get a list of IAM users: ``` aws iam list-users --query 'Users[*].UserName' --output text ``` 2. For each user returned, run the following command to determine if any policies are attached to them: ``` aws iam list-attached-user-policies --user-name aws iam list-user-policies --user-name ``` 3. If any policies are returned, the user has an inline policy or direct policy attachment.", "AdditionalInformation": "", "References": "http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html" } @@ -165,10 +165,10 @@ "Profile": "Level 1", "AssessmentStatus": "Automated", "Description": "IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended and considered a standard security advice to grant _least privilege_ -that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform _only_ those tasks, instead of allowing full administrative privileges.", - "RationaleStatement": "It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later.\n\nProviding full administrative privileges instead of restricting to the minimum set of permissions that the user is required to do exposes the resources to potentially unwanted actions.\n\nIAM policies that have a statement with \"Effect\": \"Allow\" with \"Action\": \"\\*\" over \"Resource\": \"\\*\" should be removed.", + "RationaleStatement": "It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later. Providing full administrative privileges instead of restricting to the minimum set of permissions that the user is required to do exposes the resources to potentially unwanted actions. IAM policies that have a statement with \"Effect\": \"Allow\" with \"Action\": \"\\*\" over \"Resource\": \"\\*\" should be removed.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\nPerform the following to detach the policy that has full administrative privileges:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click Policies and then search for the policy name found in the audit step.\n3. Select the policy that needs to be deleted.\n4. In the policy action menu, select first `Detach` \n5. Select all Users, Groups, Roles that have this policy attached\n6. Click `Detach Policy` \n7. In the policy action menu, select `Detach` \n\n**From Command Line:**\n\nPerform the following to detach the policy that has full administrative privileges as found in the audit step:\n\n1. Lists all IAM users, groups, and roles that the specified managed policy is attached to.\n\n```\n aws iam list-entities-for-policy --policy-arn \n```\n2. 
Detach the policy from all IAM Users:\n```\n aws iam detach-user-policy --user-name --policy-arn \n```\n3. Detach the policy from all IAM Groups:\n```\n aws iam detach-group-policy --group-name --policy-arn \n```\n4. Detach the policy from all IAM Roles:\n```\n aws iam detach-role-policy --role-name --policy-arn \n```", - "AuditProcedure": "Perform the following to determine what policies are created:\n\n**From Command Line:**\n\n1. Run the following to get a list of IAM policies:\n```\n aws iam list-policies --only-attached --output text\n```\n2. For each policy returned, run the following command to determine if any policies is allowing full administrative privileges on the account:\n```\n aws iam get-policy-version --policy-arn --version-id \n```\n3. In output ensure policy should not have any Statement block with `\"Effect\": \"Allow\"` and `Action` set to `\"*\"` and `Resource` set to `\"*\"`", + "RemediationProcedure": "**From Console:** Perform the following to detach the policy that has full administrative privileges: 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click Policies and then search for the policy name found in the audit step. 3. Select the policy that needs to be deleted. 4. In the policy action menu, select first `Detach` 5. Select all Users, Groups, Roles that have this policy attached 6. Click `Detach Policy` 7. In the policy action menu, select `Detach` **From Command Line:** Perform the following to detach the policy that has full administrative privileges as found in the audit step: 1. Lists all IAM users, groups, and roles that the specified managed policy is attached to. ``` aws iam list-entities-for-policy --policy-arn ``` 2. Detach the policy from all IAM Users: ``` aws iam detach-user-policy --user-name --policy-arn ``` 3. Detach the policy from all IAM Groups: ``` aws iam detach-group-policy --group-name --policy-arn ``` 4. Detach the policy from all IAM Roles: ``` aws iam detach-role-policy --role-name --policy-arn ```", + "AuditProcedure": "Perform the following to determine what policies are created: **From Command Line:** 1. Run the following to get a list of IAM policies: ``` aws iam list-policies --only-attached --output text ``` 2. For each policy returned, run the following command to determine if any policies is allowing full administrative privileges on the account: ``` aws iam get-policy-version --policy-arn --version-id ``` 3. In output ensure policy should not have any Statement block with `\"Effect\": \"Allow\"` and `Action` set to `\"*\"` and `Resource` set to `\"*\"`", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html:https://docs.aws.amazon.com/cli/latest/reference/iam/index.html#cli-aws-iam" } @@ -188,8 +188,8 @@ "Description": "AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services. 
Create an IAM Role to allow authorized users to manage incidents with AWS Support.", "RationaleStatement": "By implementing least privilege for access control, an IAM Role will require an appropriate IAM Policy to allow Support Center Access in order to manage Incidents with AWS Support.", "ImpactStatement": "All AWS Support plans include an unlimited number of account and billing support cases, with no long-term contracts. Support billing calculations are performed on a per-account basis for all plans. Enterprise Support plan customers have the option to include multiple enabled accounts in an aggregated monthly billing calculation. Monthly charges for the Business and Enterprise support plans are based on each month's AWS usage charges, subject to a monthly minimum, billed in advance.", - "RemediationProcedure": "**From Command Line:**\n\n1. Create an IAM role for managing incidents with AWS:\n - Create a trust relationship policy document that allows to manage AWS incidents, and save it locally as /tmp/TrustPolicy.json:\n```\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"\"\n },\n \"Action\": \"sts:AssumeRole\"\n }\n ]\n }\n```\n2. Create the IAM role using the above trust policy:\n```\naws iam create-role --role-name --assume-role-policy-document file:///tmp/TrustPolicy.json\n```\n3. Attach 'AWSSupportAccess' managed policy to the created IAM role:\n```\naws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name \n```", - "AuditProcedure": "**From Command Line:**\n\n1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the \"Arn\" element value:\n```\naws iam list-policies --query \"Policies[?PolicyName == 'AWSSupportAccess']\"\n```\n2. Check if the 'AWSSupportAccess' policy is attached to any role:\n\n```\naws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess\n```\n\n3. In Output, Ensure `PolicyRoles` does not return empty. 'Example: Example: PolicyRoles: [ ]'\n\nIf it returns empty refer to the remediation below.", + "RemediationProcedure": "**From Command Line:** 1. Create an IAM role for managing incidents with AWS: - Create a trust relationship policy document that allows to manage AWS incidents, and save it locally as /tmp/TrustPolicy.json: ``` { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"\" }, \"Action\": \"sts:AssumeRole\" } ] } ``` 2. Create the IAM role using the above trust policy: ``` aws iam create-role --role-name --assume-role-policy-document file:///tmp/TrustPolicy.json ``` 3. Attach 'AWSSupportAccess' managed policy to the created IAM role: ``` aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name ```", + "AuditProcedure": "**From Command Line:** 1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the \"Arn\" element value: ``` aws iam list-policies --query \"Policies[?PolicyName == 'AWSSupportAccess']\" ``` 2. Check if the 'AWSSupportAccess' policy is attached to any role: ``` aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess ``` 3. In Output, Ensure `PolicyRoles` does not return empty. 'Example: Example: PolicyRoles: [ ]' If it returns empty refer to the remediation below.", "AdditionalInformation": "AWSSupportAccess policy is a global AWS resource. 
It has same ARN as `arn:aws:iam::aws:policy/AWSSupportAccess` for every account.", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html:https://aws.amazon.com/premiumsupport/pricing/:https://docs.aws.amazon.com/cli/latest/reference/iam/list-policies.html:https://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html:https://docs.aws.amazon.com/cli/latest/reference/iam/list-entities-for-policy.html" } @@ -207,10 +207,10 @@ "Profile": "Level 2", "AssessmentStatus": "Manual", "Description": "AWS access from within AWS instances can be done by either encoding AWS keys into AWS API calls or by assigning the instance to a role which has an appropriate permissions policy for the required access. \"AWS Access\" means accessing the APIs of AWS in order to access AWS resources or manage AWS account resources.", - "RationaleStatement": "AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it.\n\nAdditionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials.", + "RationaleStatement": "AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it. Additionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials.", "ImpactStatement": "", - "RemediationProcedure": "IAM roles can only be associated at the launch of an instance. To remediate an instance to add it to a role you must create a new instance.\n\nIf the instance has no external dependencies on its current private ip or public addresses are elastic IPs:\n\n1. In AWS IAM create a new role. Assign a permissions policy if needed permissions are already known.\n2. In the AWS console launch a new instance with identical settings to the existing instance, and ensure that the newly created role is selected.\n3. Shutdown both the existing instance and the new instance.\n4. Detach disks from both instances.\n5. Attach the existing instance disks to the new instance.\n6. 
Boot the new instance and you should have the same machine, but with the associated role.\n\n**Note:** if your environment has dependencies on a dynamically assigned PRIVATE IP address you can create an AMI from the existing instance, destroy the old one and then when launching from the AMI, manually assign the previous private IP address.\n\n**Note: **if your environment has dependencies on a dynamically assigned PUBLIC IP address there is not a way ensure the address is retained and assign an instance role. Dependencies on dynamically assigned public IP addresses are a bad practice and, if possible, you may wish to rebuild the instance with a new elastic IP address and make the investment to remediate affected systems while assigning the system to a role.", - "AuditProcedure": "Where an instance is associated with a Role:\n\nFor instances that are known to perform AWS actions, ensure that they belong to an instance role that has the necessary permissions:\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Open the EC2 Dashboard and choose \"Instances\"\n3. Click the EC2 instance that performs AWS actions, in the lower pane details find \"IAM Role\"\n4. If the Role is blank, the instance is not assigned to one.\n5. If the Role is filled in, it does not mean the instance might not \\*also\\* have credentials encoded on it for some activities.\n\nWhere an Instance Contains Embedded Credentials:\n\n- On the instance that is known to perform AWS actions, audit all scripts and environment variables to ensure that none of them contain AWS credentials.\n\nWhere an Instance Application Contains Embedded Credentials:\n\n- Applications that run on an instance may also have credentials embedded. This is a bad practice, but even worse if the source code is stored in a public code repository such as github. When an application contains credentials can be determined by eliminating all other sources of credentials and if the application can still access AWS resources - it likely contains embedded credentials. Another method is to examine all source code and configuration files of the application.", + "RemediationProcedure": "IAM roles can only be associated at the launch of an instance. To remediate an instance to add it to a role you must create a new instance. If the instance has no external dependencies on its current private ip or public addresses are elastic IPs: 1. In AWS IAM create a new role. Assign a permissions policy if needed permissions are already known. 2. In the AWS console launch a new instance with identical settings to the existing instance, and ensure that the newly created role is selected. 3. Shutdown both the existing instance and the new instance. 4. Detach disks from both instances. 5. Attach the existing instance disks to the new instance. 6. Boot the new instance and you should have the same machine, but with the associated role. **Note:** if your environment has dependencies on a dynamically assigned PRIVATE IP address you can create an AMI from the existing instance, destroy the old one and then when launching from the AMI, manually assign the previous private IP address. **Note: **if your environment has dependencies on a dynamically assigned PUBLIC IP address there is not a way ensure the address is retained and assign an instance role. 
Dependencies on dynamically assigned public IP addresses are a bad practice and, if possible, you may wish to rebuild the instance with a new elastic IP address and make the investment to remediate affected systems while assigning the system to a role.", + "AuditProcedure": "Where an instance is associated with a Role: For instances that are known to perform AWS actions, ensure that they belong to an instance role that has the necessary permissions: 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Open the EC2 Dashboard and choose \"Instances\" 3. Click the EC2 instance that performs AWS actions, in the lower pane details find \"IAM Role\" 4. If the Role is blank, the instance is not assigned to one. 5. If the Role is filled in, it does not mean the instance might not \\*also\\* have credentials encoded on it for some activities. Where an Instance Contains Embedded Credentials: - On the instance that is known to perform AWS actions, audit all scripts and environment variables to ensure that none of them contain AWS credentials. Where an Instance Application Contains Embedded Credentials: - Applications that run on an instance may also have credentials embedded. This is a bad practice, but even worse if the source code is stored in a public code repository such as github. When an application contains credentials can be determined by eliminating all other sources of credentials and if the application can still access AWS resources - it likely contains embedded credentials. Another method is to examine all source code and configuration files of the application.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html" } @@ -227,11 +227,11 @@ "Section": "1. Identity and Access Management", "Profile": "Level 1", "AssessmentStatus": "Automated", - "Description": "To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. \nUse IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console.", + "Description": "To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. 
Additionally, you cannot manage your certificates from the IAM Console.", "RationaleStatement": "Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. As a best practice, it is recommended to delete expired certificates.", - "ImpactStatement": "Deleting the certificate could have implications for your application if you are using an expired server certificate with Elastic Load Balancing, CloudFront, etc.\nOne has to make configurations at respective services to ensure there is no interruption in application functionality.", - "RemediationProcedure": "**From Console:**\n\nRemoving expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\n**From Command Line:**\n\nTo delete Expired Certificate run following command by replacing with the name of the certificate to delete:\n\n```\naws iam delete-server-certificate --server-certificate-name \n```\n\nWhen the preceding command is successful, it does not return any output.", - "AuditProcedure": "**From Console:**\n\nGetting the certificates expiration information via AWS Management Console is not currently supported. \nTo request information about the SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\n**From Command Line:**\n\nRun list-server-certificates command to list all the IAM-stored server certificates:\n\n```\naws iam list-server-certificates\n```\n\nThe command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc):\n\n```\n{\n \"ServerCertificateMetadataList\": [\n {\n \"ServerCertificateId\": \"EHDGFRW7EJFYTE88D\",\n \"ServerCertificateName\": \"MyServerCertificate\",\n \"Expiration\": \"2018-07-10T23:59:59Z\",\n \"Path\": \"/\",\n \"Arn\": \"arn:aws:iam::012345678910:server-certificate/MySSLCertificate\",\n \"UploadDate\": \"2018-06-10T11:56:08Z\"\n }\n ]\n}\n```\n\nVerify the `ServerCertificateName` and `Expiration` parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them.\n\nIf this command returns:\n```\n{ { \"ServerCertificateMetadataList\": [] }\n```\nThis means that there are no expired certificates, It DOES NOT mean that no certificates exist.", + "ImpactStatement": "Deleting the certificate could have implications for your application if you are using an expired server certificate with Elastic Load Balancing, CloudFront, etc. One has to make configurations at respective services to ensure there is no interruption in application functionality.", + "RemediationProcedure": "**From Console:** Removing expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI). 
**From Command Line:** To delete Expired Certificate run following command by replacing with the name of the certificate to delete: ``` aws iam delete-server-certificate --server-certificate-name ``` When the preceding command is successful, it does not return any output.", + "AuditProcedure": "**From Console:** Getting the certificates expiration information via AWS Management Console is not currently supported. To request information about the SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI). **From Command Line:** Run list-server-certificates command to list all the IAM-stored server certificates: ``` aws iam list-server-certificates ``` The command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc): ``` { \"ServerCertificateMetadataList\": [ { \"ServerCertificateId\": \"EHDGFRW7EJFYTE88D\", \"ServerCertificateName\": \"MyServerCertificate\", \"Expiration\": \"2018-07-10T23:59:59Z\", \"Path\": \"/\", \"Arn\": \"arn:aws:iam::012345678910:server-certificate/MySSLCertificate\", \"UploadDate\": \"2018-06-10T11:56:08Z\" } ] } ``` Verify the `ServerCertificateName` and `Expiration` parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them. If this command returns: ``` { { \"ServerCertificateMetadataList\": [] } ``` This means that there are no expired certificates, It DOES NOT mean that no certificates exist.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html:https://docs.aws.amazon.com/cli/latest/reference/iam/delete-server-certificate.html" } @@ -251,8 +251,8 @@ "Description": "AWS provides customers with the option of specifying the contact information for account's security team. It is recommended that this information be provided.", "RationaleStatement": "Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to establish security contact information:\n\n**From Console:**\n\n1. Click on your account name at the top right corner of the console.\n2. From the drop-down menu Click `My Account` \n3. Scroll down to the `Alternate Contacts` section\n4. Enter contact information in the `Security` section\n\n**Note:** Consider specifying an internal email distribution list to ensure emails are regularly monitored by more than one individual.", - "AuditProcedure": "Perform the following to determine if security contact information is present:\n\n**From Console:**\n\n1. Click on your account name at the top right corner of the console\n2. From the drop-down menu Click `My Account` \n3. Scroll down to the `Alternate Contacts` section\n4. Ensure contact information is specified in the `Security` section", + "RemediationProcedure": "Perform the following to establish security contact information: **From Console:** 1. Click on your account name at the top right corner of the console. 2. From the drop-down menu Click `My Account` 3. Scroll down to the `Alternate Contacts` section 4. 
Enter contact information in the `Security` section **Note:** Consider specifying an internal email distribution list to ensure emails are regularly monitored by more than one individual.", + "AuditProcedure": "Perform the following to determine if security contact information is present: **From Console:** 1. Click on your account name at the top right corner of the console 2. From the drop-down menu Click `My Account` 3. Scroll down to the `Alternate Contacts` section 4. Ensure contact information is specified in the `Security` section", "AdditionalInformation": "", "References": "" } @@ -269,11 +269,11 @@ "Section": "1. Identity and Access Management", "Profile": "Level 1", "AssessmentStatus": "Automated", - "Description": "Enable IAM Access analyzer for IAM policies about all resources in each region.\n\nIAM Access Analyzer is a technology introduced at AWS reinvent 2019. After the Analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles. So the results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least privileges access.\nAccess Analyzer analyzes only policies that are applied to resources in the same AWS Region.", + "Description": "Enable IAM Access analyzer for IAM policies about all resources in each region. IAM Access Analyzer is a technology introduced at AWS reinvent 2019. After the Analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles. So the results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least privileges access. Access Analyzer analyzes only policies that are applied to resources in the same AWS Region.", "RationaleStatement": "AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data. Access Analyzer identifies resources that are shared with external principals by using logic-based reasoning to analyze the resource-based policies in your AWS environment. IAM Access Analyzer continuously monitors all policies for S3 bucket, IAM roles, KMS(Key Management Service) keys, AWS Lambda functions, and Amazon SQS(Simple Queue Service) queues.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\nPerform the following to enable IAM Access analyzer for IAM policies:\n\n1. Open the IAM console at `https://console.aws.amazon.com/iam/.`\n2. Choose `Access analyzer`.\n3. Choose `Create analyzer`.\n4. On the `Create analyzer` page, confirm that the `Region` displayed is the Region where you want to enable Access Analyzer.\n5. Enter a name for the analyzer. `Optional as it will generate a name for you automatically`.\n6. Add any tags that you want to apply to the analyzer. `Optional`. \n7. Choose `Create Analyzer`.\n8. 
Repeat these step for each active region\n\n**From Command Line:**\n\nRun the following command:\n```\naws accessanalyzer create-analyzer --analyzer-name --type \n```\nRepeat this command above for each active region.\n\n**Note:** The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions.", - "AuditProcedure": "**From Console:**\n\n1. Open the IAM console at `https://console.aws.amazon.com/iam/`\n2. Choose `Access analyzer`\n3. Click 'Analyzers'\n4. Ensure that at least one analyzer is present\n5. Ensure that the `STATUS` is set to `Active`\n6. Repeat these step for each active region\n\n**From Command Line:**\n\n1. Run the following command:\n```\naws accessanalyzer list-analyzers | grep status\n```\n2. Ensure that at least one Analyzer the `status` is set to `ACTIVE`\n\n3. Repeat the steps above for each active region.\n\nIf an Access analyzer is not listed for each region or the status is not set to active refer to the remediation procedure below.", + "RemediationProcedure": "**From Console:** Perform the following to enable IAM Access analyzer for IAM policies: 1. Open the IAM console at `https://console.aws.amazon.com/iam/.` 2. Choose `Access analyzer`. 3. Choose `Create analyzer`. 4. On the `Create analyzer` page, confirm that the `Region` displayed is the Region where you want to enable Access Analyzer. 5. Enter a name for the analyzer. `Optional as it will generate a name for you automatically`. 6. Add any tags that you want to apply to the analyzer. `Optional`. 7. Choose `Create Analyzer`. 8. Repeat these steps for each active region. **From Command Line:** Run the following command: ``` aws accessanalyzer create-analyzer --analyzer-name --type ``` Repeat the command above for each active region. **Note:** The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions.", + "AuditProcedure": "**From Console:** 1. Open the IAM console at `https://console.aws.amazon.com/iam/` 2. Choose `Access analyzer` 3. Click 'Analyzers' 4. Ensure that at least one analyzer is present 5. Ensure that the `STATUS` is set to `Active` 6. Repeat these steps for each active region. **From Command Line:** 1. Run the following command: ``` aws accessanalyzer list-analyzers | grep status ``` 2. Ensure that the `status` of at least one Analyzer is set to `ACTIVE` 3. Repeat the steps above for each active region. 
If an Access analyzer is not listed for each region or the status is not set to active refer to the remediation procedure below.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html:https://docs.aws.amazon.com/cli/latest/reference/accessanalyzer/get-analyzer.html:https://docs.aws.amazon.com/cli/latest/reference/accessanalyzer/create-analyzer.html" } @@ -294,7 +294,7 @@ "RationaleStatement": "Centralizing IAM user management to a single identity store reduces complexity and thus the likelihood of access management errors.", "ImpactStatement": "", "RemediationProcedure": "The remediation procedure will vary based on the individual organization's implementation of identity federation and/or AWS Organizations with the acceptance criteria that no non-service IAM users, and non-root accounts, are present outside the account providing centralized IAM user management.", - "AuditProcedure": "For multi-account AWS environments with an external identity provider... \n\n1. Determine the master account for identity federation or IAM user management\n2. Login to that account through the AWS Management Console\n3. Click `Services` \n4. Click `IAM` \n5. Click `Identity providers`\n6. Verify the configuration\n\nThen..., determine all accounts that should not have local users present. For each account...\n\n1. Determine all accounts that should not have local users present\n2. Log into the AWS Management Console\n3. Switch role into each identified account\n4. Click `Services` \n5. Click `IAM` \n6. Click `Users`\n7. Confirm that no IAM users representing individuals are present\n\nFor multi-account AWS environments implementing AWS Organizations without an external identity provider... \n\n1. Determine all accounts that should not have local users present\n2. Log into the AWS Management Console\n3. Switch role into each identified account\n4. Click `Services` \n5. Click `IAM` \n6. Click `Users`\n7. Confirm that no IAM users representing individuals are present", + "AuditProcedure": "For multi-account AWS environments with an external identity provider... 1. Determine the master account for identity federation or IAM user management 2. Login to that account through the AWS Management Console 3. Click `Services` 4. Click `IAM` 5. Click `Identity providers` 6. Verify the configuration Then..., determine all accounts that should not have local users present. For each account... 1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click `Services` 5. Click `IAM` 6. Click `Users` 7. Confirm that no IAM users representing individuals are present For multi-account AWS environments implementing AWS Organizations without an external identity provider... 1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click `Services` 5. Click `IAM` 6. Click `Users` 7. Confirm that no IAM users representing individuals are present", "AdditionalInformation": "", "References": "" } @@ -314,8 +314,8 @@ "Description": "The AWS support portal allows account owners to establish security questions that can be used to authenticate individuals calling AWS customer service for support. 
It is recommended that security questions be established.", "RationaleStatement": "When creating a new AWS account, a default super user is automatically created. This account is referred to as the 'root user' or 'root' account. It is recommended that the use of this account be limited and highly controlled. During events in which the 'root' password is no longer accessible or the MFA token associated with 'root' is lost/destroyed it is possible, through authentication using secret questions and associated answers, to recover 'root' user login access.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Account as the 'root' user\n2. Click on the __ from the top right of the console\n3. From the drop-down menu Click _My Account_\n4. Scroll down to the `Configure Security Questions` section\n5. Click on `Edit` \n6. Click on each `Question` \n - From the drop-down select an appropriate question\n - Click on the `Answer` section\n - Enter an appropriate answer \n - Follow process for all 3 questions\n7. Click `Update` when complete\n8. Save Questions and Answers and place in a secure physical location", - "AuditProcedure": "**From Console:**\n\n1. Login to the AWS account as the 'root' user\n2. On the top right you will see the __\n3. Click on the __\n4. From the drop-down menu Click `My Account` \n5. In the `Configure Security Challenge Questions` section on the `Personal Information` page, configure three security challenge questions.\n6. Click `Save questions` .", + "RemediationProcedure": "**From Console:** 1. Login to the AWS Account as the 'root' user 2. Click on the __ from the top right of the console 3. From the drop-down menu Click _My Account_ 4. Scroll down to the `Configure Security Questions` section 5. Click on `Edit` 6. Click on each `Question` - From the drop-down select an appropriate question - Click on the `Answer` section - Enter an appropriate answer - Follow process for all 3 questions 7. Click `Update` when complete 8. Save Questions and Answers and place in a secure physical location", + "AuditProcedure": "**From Console:** 1. Login to the AWS account as the 'root' user 2. On the top right you will see the __ 3. Click on the __ 4. From the drop-down menu Click `My Account` 5. In the `Configure Security Challenge Questions` section on the `Personal Information` page, configure three security challenge questions. 6. Click `Save questions` .", "AdditionalInformation": "", "References": "" } @@ -335,8 +335,8 @@ "Description": "The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be removed.", "RationaleStatement": "Removing access keys associated with the 'root' user account limits vectors by which the account can be compromised. Additionally, removing the 'root' access keys encourages the creation and use of role based accounts that are least privileged.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to delete or disable active 'root' user access keys\n\n**From Console:**\n\n1. Sign in to the AWS Management Console as 'root' and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. Click on __ at the top right and select `My Security Credentials` from the drop down list\n3. On the pop out screen Click on `Continue to Security Credentials` \n4. 
Click on `Access Keys` _(Access Key ID and Secret Access Key)_\n5. Under the `Status` column if there are any Keys which are Active\n - Click on `Make Inactive` - (Temporarily disable Key - may be needed again)\n - Click `Delete` - (Deleted keys cannot be recovered)", - "AuditProcedure": "Perform the following to determine if the 'root' user account has access keys:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Credential Report` \n5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file\n6. For the `` user, ensure the `access_key_1_active` and `access_key_2_active` fields are set to `FALSE` .\n\n**From Command Line:**\n\nRun the following command:\n```\n aws iam get-account-summary | grep \"AccountAccessKeysPresent\" \n```\nIf no 'root' access keys exist the output will show \"AccountAccessKeysPresent\": 0,. \n\nIf the output shows a \"1\" than 'root' keys exist, refer to the remediation procedure below.", + "RemediationProcedure": "Perform the following to delete or disable active 'root' user access keys **From Console:** 1. Sign in to the AWS Management Console as 'root' and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. Click on __ at the top right and select `My Security Credentials` from the drop down list 3. On the pop out screen Click on `Continue to Security Credentials` 4. Click on `Access Keys` _(Access Key ID and Secret Access Key)_ 5. Under the `Status` column if there are any Keys which are Active - Click on `Make Inactive` - (Temporarily disable Key - may be needed again) - Click `Delete` - (Deleted keys cannot be recovered)", + "AuditProcedure": "Perform the following to determine if the 'root' user account has access keys: **From Console:** 1. Login to the AWS Management Console 2. Click `Services` 3. Click `IAM` 4. Click on `Credential Report` 5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the `` user, ensure the `access_key_1_active` and `access_key_2_active` fields are set to `FALSE` . **From Command Line:** Run the following command: ``` aws iam get-account-summary | grep \"AccountAccessKeysPresent\" ``` If no 'root' access keys exist the output will show \"AccountAccessKeysPresent\": 0,. If the output shows a \"1\" than 'root' keys exist, refer to the remediation procedure below.", "AdditionalInformation": "IAM User account \"root\" for us-gov cloud regions is not enabled by default. However, on request to AWS support enables 'root' access only through access-keys (CLI, API methods) for us-gov cloud region.", "References": "http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html:http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html:http://docs.aws.amazon.com/IAM/latest/APIReference/API_GetAccountSummary.html:https://aws.amazon.com/blogs/security/an-easier-way-to-determine-the-presence-of-aws-account-access-keys/" } @@ -353,11 +353,11 @@ "Section": "1. Identity and Access Management", "Profile": "Level 1", "AssessmentStatus": "Automated", - "Description": "The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. 
With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.\n\n**Note:** When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. (\"non-personal virtual MFA\") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company.", + "Description": "The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device. **Note:** When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. (\"non-personal virtual MFA\") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company.", "RationaleStatement": "Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to establish MFA for the 'root' user account:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n\n Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.\n\n2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account.\n3. Choose `Activate MFA` \n4. In the wizard, choose `A virtual MFA` device and then choose `Next Step` .\n5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.\n6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see [Virtual MFA Applications](http://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications).) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).\n7. Determine whether the MFA app supports QR codes, and then do one of the following:\n\n - Use the app to scan the QR code. 
For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.\n - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.\n\nWhen you are finished, the virtual MFA device starts generating one-time passwords.\n\nIn the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA.", - "AuditProcedure": "Perform the following to determine if the 'root' user account has MFA setup:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Credential Report` \n5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file\n6. For the `` user, ensure the `mfa_active` field is set to `TRUE` .\n\n**From Command Line:**\n\n1. Run the following command:\n```\n aws iam get-account-summary | grep \"AccountMFAEnabled\"\n```\n2. Ensure the AccountMFAEnabled property is set to 1", + "RemediationProcedure": "Perform the following to establish MFA for the 'root' user account: 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account. 3. Choose `Activate MFA` 4. In the wizard, choose `A virtual MFA` device and then choose `Next Step` . 5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see [Virtual MFA Applications](http://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications).) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following: - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code. - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application. When you are finished, the virtual MFA device starts generating one-time passwords. In the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA.", + "AuditProcedure": "Perform the following to determine if the 'root' user account has MFA setup: **From Console:** 1. 
Login to the AWS Management Console 2. Click `Services` 3. Click `IAM` 4. Click on `Credential Report` 5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the `` user, ensure the `mfa_active` field is set to `TRUE` . **From Command Line:** 1. Run the following command: ``` aws iam get-account-summary | grep \"AccountMFAEnabled\" ``` 2. Ensure the AccountMFAEnabled property is set to 1", "AdditionalInformation": "IAM User account \"root\" for us-gov cloud regions does not have console access. This recommendation is not applicable for us-gov cloud regions.", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#id_root-user_manage_mfa:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root" } @@ -375,10 +375,10 @@ "Profile": "Level 2", "AssessmentStatus": "Automated", "Description": "The 'root' user account is the most privileged user in an AWS account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. For Level 2, it is recommended that the 'root' user account be protected with a hardware MFA.", - "RationaleStatement": "A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides.\n\n**Note**: Using hardware MFA for many, many AWS accounts may create a logistical device management issue. If this is the case, consider implementing this Level 2 recommendation selectively to the highest security AWS accounts and the Level 1 recommendation applied to the remaining accounts.", + "RationaleStatement": "A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides. **Note**: Using hardware MFA for many, many AWS accounts may create a logistical device management issue. If this is the case, consider implementing this Level 2 recommendation selectively to the highest security AWS accounts and the Level 1 recommendation applied to the remaining accounts.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to establish a hardware MFA for the 'root' user account:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\nNote: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.\n2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account.\n3. Choose `Activate MFA` \n4. In the wizard, choose `A hardware MFA` device and then choose `Next Step` .\n5. In the `Serial Number` box, enter the serial number that is found on the back of the MFA device.\n6. In the `Authentication Code 1` box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number.\n7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the `Authentication Code 2` box. 
You might need to press the button on the front of the device again to display the second number.\n8. Choose `Next Step` . The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device.\n\nRemediation for this recommendation is not available through AWS CLI.", - "AuditProcedure": "Perform the following to determine if the 'root' user account has a hardware MFA setup:\n\n1. Run the following command to determine if the 'root' account has MFA setup:\n```\n aws iam get-account-summary | grep \"AccountMFAEnabled\"\n```\n\nThe `AccountMFAEnabled` property is set to `1` will ensure that the 'root' user account has MFA (Virtual or Hardware) Enabled.\nIf `AccountMFAEnabled` property is set to `0` the account is not compliant with this recommendation.\n\n2. If `AccountMFAEnabled` property is set to `1`, determine 'root' account has Hardware MFA enabled.\nRun the following command to list all virtual MFA devices:\n```\n aws iam list-virtual-mfa-devices \n```\nIf the output contains one MFA with the following Serial Number, it means the MFA is virtual, not hardware and the account is not compliant with this recommendation:\n\n `\"SerialNumber\": \"arn:aws:iam::__:mfa/root-account-mfa-device\"`", + "RemediationProcedure": "Perform the following to establish a hardware MFA for the 'root' user account: 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). Note: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account. 3. Choose `Activate MFA` 4. In the wizard, choose `A hardware MFA` device and then choose `Next Step` . 5. In the `Serial Number` box, enter the serial number that is found on the back of the MFA device. 6. In the `Authentication Code 1` box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number. 7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the `Authentication Code 2` box. You might need to press the button on the front of the device again to display the second number. 8. Choose `Next Step` . The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device. Remediation for this recommendation is not available through AWS CLI.", + "AuditProcedure": "Perform the following to determine if the 'root' user account has a hardware MFA setup: 1. Run the following command to determine if the 'root' account has MFA setup: ``` aws iam get-account-summary | grep \"AccountMFAEnabled\" ``` The `AccountMFAEnabled` property is set to `1` will ensure that the 'root' user account has MFA (Virtual or Hardware) Enabled. If `AccountMFAEnabled` property is set to `0` the account is not compliant with this recommendation. 2. If `AccountMFAEnabled` property is set to `1`, determine 'root' account has Hardware MFA enabled. 
Run the following command to list all virtual MFA devices: ``` aws iam list-virtual-mfa-devices ``` If the output contains one MFA with the following Serial Number, it means the MFA is virtual, not hardware and the account is not compliant with this recommendation: `\"SerialNumber\": \"arn:aws:iam::__:mfa/root-account-mfa-device\"`", "AdditionalInformation": "IAM User account 'root' for us-gov cloud regions does not have console access. This control is not applicable for us-gov cloud regions.", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_physical.html#enable-hw-mfa-for-root" } @@ -398,9 +398,9 @@ "Description": "With the creation of an AWS account, a 'root user' is created that cannot be disabled or deleted. That user has unrestricted access to and control over all resources in the AWS account. It is highly recommended that the use of this account be avoided for everyday tasks.", "RationaleStatement": "The 'root user' has unrestricted access to and control over all account resources. Use of it is inconsistent with the principles of least privilege and separation of duties, and can lead to unnecessary harm due to error or account compromise.", "ImpactStatement": "", - "RemediationProcedure": "If you find that the 'root' user account is being used for daily activity to include administrative tasks that do not require the 'root' user:\n\n1. Change the 'root' user password.\n2. Deactivate or delete any access keys associate with the 'root' user.\n\n**Remember, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information.", - "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console at `https://console.aws.amazon.com/iam/`\n2. In the left pane, click `Credential Report`\n3. Click on `Download Report`\n4. Open of Save the file locally\n5. Locate the `` under the user column\n6. Review `password_last_used, access_key_1_last_used_date, access_key_2_last_used_date` to determine when the 'root user' was last used.\n\n**From Command Line:**\n\nRun the following CLI commands to provide a credential report for determining the last time the 'root user' was used:\n```\naws iam generate-credential-report\n```\n```\naws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 ''\n```\n\nReview `password_last_used`, `access_key_1_last_used_date`, `access_key_2_last_used_date` to determine when the _root user_ was last used.\n\n**Note:** There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user.", - "AdditionalInformation": "The 'root' user for us-gov cloud regions is not enabled by default. However, on request to AWS support, they can enable the 'root' user and grant access only through access-keys (CLI, API methods) for us-gov cloud region. If the 'root' user for us-gov cloud regions is enabled, this recommendation is applicable.\n\nMonitoring usage of the 'root' user can be accomplished by implementing recommendation 3.3 Ensure a log metric filter and alarm exist for usage of the 'root' user.", + "RemediationProcedure": "If you find that the 'root' user account is being used for daily activity to include administrative tasks that do not require the 'root' user: 1. 
Change the 'root' user password. 2. Deactivate or delete any access keys associated with the 'root' user. **Remember, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information.", + "AuditProcedure": "**From Console:** 1. Login to the AWS Management Console at `https://console.aws.amazon.com/iam/` 2. In the left pane, click `Credential Report` 3. Click on `Download Report` 4. Open or Save the file locally 5. Locate the `` under the user column 6. Review `password_last_used, access_key_1_last_used_date, access_key_2_last_used_date` to determine when the 'root user' was last used. **From Command Line:** Run the following CLI commands to provide a credential report for determining the last time the 'root user' was used: ``` aws iam generate-credential-report ``` ``` aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 '' ``` Review `password_last_used`, `access_key_1_last_used_date`, `access_key_2_last_used_date` to determine when the _root user_ was last used. **Note:** There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user.", + "AdditionalInformation": "The 'root' user for us-gov cloud regions is not enabled by default. However, on request to AWS support, they can enable the 'root' user and grant access only through access-keys (CLI, API methods) for us-gov cloud region. If the 'root' user for us-gov cloud regions is enabled, this recommendation is applicable. Monitoring usage of the 'root' user can be accomplished by implementing recommendation 3.3 Ensure a log metric filter and alarm exist for usage of the 'root' user.", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html:https://docs.aws.amazon.com/general/latest/gr/aws_tasks-that-require-root.html" } ] @@ -419,8 +419,8 @@ "Description": "Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure password are at least a given length. It is recommended that the password policy require a minimum password length 14.", "RationaleStatement": "Setting a password complexity policy increases account resiliency against brute force login attempts.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to set the password policy as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Set \"Minimum password length\" to `14` or greater.\n5. Click \"Apply password policy\"\n\n**From Command Line:**\n```\n aws iam update-account-password-policy --minimum-password-length 14\n```\nNote: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.", - "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. 
Ensure \"Minimum password length\" is set to 14 or greater.\n\n**From Command Line:**\n```\naws iam get-account-password-policy\n```\nEnsure the output of the above command includes \"MinimumPasswordLength\": 14 (or higher)", + "RemediationProcedure": "Perform the following to set the password policy as prescribed: **From Console:** 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Set \"Minimum password length\" to `14` or greater. 5. Click \"Apply password policy\" **From Command Line:** ``` aws iam update-account-password-policy --minimum-password-length 14 ``` Note: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.", + "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed: **From Console:** 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure \"Minimum password length\" is set to 14 or greater. **From Command Line:** ``` aws iam get-account-password-policy ``` Ensure the output of the above command includes \"MinimumPasswordLength\": 14 (or higher)", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#configure-strong-password-policy" } @@ -440,8 +440,8 @@ "Description": "IAM password policies can prevent the reuse of a given password by the same user. It is recommended that the password policy prevent the reuse of passwords.", "RationaleStatement": "Preventing password reuse increases account resiliency against brute force login attempts.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to set the password policy as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Check \"Prevent password reuse\"\n5. Set \"Number of passwords to remember\" is set to `24` \n\n**From Command Line:**\n```\n aws iam update-account-password-policy --password-reuse-prevention 24\n```\nNote: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.", - "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Ensure \"Prevent password reuse\" is checked\n5. Ensure \"Number of passwords to remember\" is set to 24\n\n**From Command Line:**\n```\naws iam get-account-password-policy \n```\nEnsure the output of the above command includes \"PasswordReusePrevention\": 24", + "RemediationProcedure": "Perform the following to set the password policy as prescribed: **From Console:** 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Check \"Prevent password reuse\" 5. 
Set \"Number of passwords to remember\" is set to `24` **From Command Line:** ``` aws iam update-account-password-policy --password-reuse-prevention 24 ``` Note: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.", + "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed: **From Console:** 1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure \"Prevent password reuse\" is checked 5. Ensure \"Number of passwords to remember\" is set to 24 **From Command Line:** ``` aws iam get-account-password-policy ``` Ensure the output of the above command includes \"PasswordReusePrevention\": 24", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#configure-strong-password-policy" } @@ -461,8 +461,8 @@ "Description": "Amazon S3 provides a variety of no, or low, cost encryption options to protect data at rest.", "RationaleStatement": "Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.", "ImpactStatement": "Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as destination buckets for Amazon S3 server access logging. Only SSE-S3 default encryption is supported for server access log destination buckets.", - "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select a Bucket.\n3. Click on 'Properties'.\n4. Click edit on `Default Encryption`.\n5. Select either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`.\n6. Click `Save`\n7. Repeat for all the buckets in your AWS account lacking encryption.\n\n**From Command Line:**\n\nRun either \n```\naws s3api put-bucket-encryption --bucket --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}'\n```\n or \n```\naws s3api put-bucket-encryption --bucket --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"aws:kms\",\"KMSMasterKeyID\": \"aws/s3\"}}]}'\n```\n\n**Note:** the KMSMasterKeyID can be set to the master key of your choosing; aws/s3 is an AWS preconfigured default.", - "AuditProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select a Bucket.\n3. Click on 'Properties'.\n4. Verify that `Default Encryption` is enabled, and displays either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`.\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. Run command to list buckets\n```\naws s3 ls\n```\n2. For each bucket, run \n```\naws s3api get-bucket-encryption --bucket \n```\n3. Verify that either \n```\n\"SSEAlgorithm\": \"AES256\"\n```\n or \n```\n\"SSEAlgorithm\": \"aws:kms\"```\n is displayed.", + "RemediationProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select a Bucket. 3. Click on 'Properties'. 4. Click edit on `Default Encryption`. 5. Select either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 
6. Click `Save` 7. Repeat for all the buckets in your AWS account lacking encryption. **From Command Line:** Run either ``` aws s3api put-bucket-encryption --bucket --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}' ``` or ``` aws s3api put-bucket-encryption --bucket --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"aws:kms\",\"KMSMasterKeyID\": \"aws/s3\"}}]}' ``` **Note:** the KMSMasterKeyID can be set to the master key of your choosing; aws/s3 is an AWS preconfigured default.", + "AuditProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select a Bucket. 3. Click on 'Properties'. 4. Verify that `Default Encryption` is enabled, and displays either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 5. Repeat for all the buckets in your AWS account. **From Command Line:** 1. Run command to list buckets ``` aws s3 ls ``` 2. For each bucket, run ``` aws s3api get-bucket-encryption --bucket ``` 3. Verify that either ``` \"SSEAlgorithm\": \"AES256\" ``` or ``` \"SSEAlgorithm\": \"aws:kms\"``` is displayed.", "AdditionalInformation": "S3 bucket encryption only applies to objects as they are placed in the bucket. Enabling S3 bucket encryption does **not** encrypt objects previously stored within the bucket.", "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/default-bucket-encryption.html:https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-related-resources" } @@ -482,8 +482,8 @@ "Description": "At the Amazon S3 bucket level, you can configure permissions through a bucket policy making the objects accessible only through HTTPS.", "RationaleStatement": "By default, Amazon S3 allows both HTTP and HTTPS requests. To achieve only allowing access to Amazon S3 objects through HTTPS you also have to explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests will not comply with this recommendation.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/\n2. Select the Check box next to the Bucket.\n3. Click on 'Permissions'.\n4. Click 'Bucket Policy'\n5. Add this to the existing policy filling in the required information\n```\n{\n \"Sid\": \",\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }\n }\n }\n```\n6. Save\n7. Repeat for all the buckets in your AWS account that contain sensitive data.\n\n**From Console** \n\nusing AWS Policy Generator:\n\n1. Repeat steps 1-4 above.\n2. Click on `Policy Generator` at the bottom of the Bucket Policy Editor\n3. Select Policy Type\n`S3 Bucket Policy`\n4. Add Statements\n- `Effect` = Deny\n- `Principal` = *\n- `AWS Service` = Amazon S3\n- `Actions` = *\n- `Amazon Resource Name` = \n5. Generate Policy\n6. Copy the text and add it to the Bucket Policy.\n\n**From Command Line:**\n\n1. Export the bucket policy to a json file.\n```\naws s3api get-bucket-policy --bucket --query Policy --output text > policy.json\n```\n\n2. 
Modify the policy.json file by adding in this statement:\n```\n{\n \"Sid\": \",\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }\n }\n }\n```\n3. Apply this modified policy back to the S3 bucket:\n```\naws s3api put-bucket-policy --bucket --policy file://policy.json\n```", - "AuditProcedure": "To allow access to HTTPS you can use a condition that checks for the key `\"aws:SecureTransport: true\"`. This means that the request is sent through HTTPS but that HTTP can still be used. So to make sure you do not allow HTTP access confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key \"aws:SecureTransport\": \"false\".\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/\n2. Select the Check box next to the Bucket.\n3. Click on 'Permissions', then Click on `Bucket Policy`.\n4. Ensure that a policy is listed that matches:\n```\n'{\n \"Sid\": ,\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }'\n```\n`` and `` will be specific to your account\n\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets \n```\naws s3 ls\n```\n2. Using the list of buckets run this command on each of them:\n```\naws s3api get-bucket-policy --bucket | grep aws:SecureTransport\n```\n3. Confirm that `aws:SecureTransport` is set to false `aws:SecureTransport:false`\n4. Confirm that the policy line has Effect set to Deny 'Effect:Deny'", + "RemediationProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions'. 4. Click 'Bucket Policy' 5. Add this to the existing policy filling in the required information ``` { \"Sid\": \", \"Effect\": \"Deny\", \"Principal\": \"*\", \"Action\": \"s3:*\", \"Resource\": \"arn:aws:s3:::/*\", \"Condition\": { \"Bool\": { \"aws:SecureTransport\": \"false\" } } } ``` 6. Save 7. Repeat for all the buckets in your AWS account that contain sensitive data. **From Console** using AWS Policy Generator: 1. Repeat steps 1-4 above. 2. Click on `Policy Generator` at the bottom of the Bucket Policy Editor 3. Select Policy Type `S3 Bucket Policy` 4. Add Statements - `Effect` = Deny - `Principal` = * - `AWS Service` = Amazon S3 - `Actions` = * - `Amazon Resource Name` = 5. Generate Policy 6. Copy the text and add it to the Bucket Policy. **From Command Line:** 1. Export the bucket policy to a json file. ``` aws s3api get-bucket-policy --bucket --query Policy --output text > policy.json ``` 2. Modify the policy.json file by adding in this statement: ``` { \"Sid\": \", \"Effect\": \"Deny\", \"Principal\": \"*\", \"Action\": \"s3:*\", \"Resource\": \"arn:aws:s3:::/*\", \"Condition\": { \"Bool\": { \"aws:SecureTransport\": \"false\" } } } ``` 3. Apply this modified policy back to the S3 bucket: ``` aws s3api put-bucket-policy --bucket --policy file://policy.json ```", + "AuditProcedure": "To allow access to HTTPS you can use a condition that checks for the key `\"aws:SecureTransport: true\"`. This means that the request is sent through HTTPS but that HTTP can still be used. 
So to make sure you do not allow HTTP access confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key \"aws:SecureTransport\": \"false\". **From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions', then Click on `Bucket Policy`. 4. Ensure that a policy is listed that matches: ``` '{ \"Sid\": , \"Effect\": \"Deny\", \"Principal\": \"*\", \"Action\": \"s3:*\", \"Resource\": \"arn:aws:s3:::/*\", \"Condition\": { \"Bool\": { \"aws:SecureTransport\": \"false\" }' ``` `` and `` will be specific to your account 5. Repeat for all the buckets in your AWS account. **From Command Line:** 1. List all of the S3 Buckets ``` aws s3 ls ``` 2. Using the list of buckets run this command on each of them: ``` aws s3api get-bucket-policy --bucket | grep aws:SecureTransport ``` 3. Confirm that `aws:SecureTransport` is set to false `aws:SecureTransport:false` 4. Confirm that the policy line has Effect set to Deny 'Effect:Deny'", "AdditionalInformation": "", "References": "https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/:https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/:https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-policy.html" } @@ -503,8 +503,8 @@ "Description": "Once MFA Delete is enabled on your sensitive and classified S3 bucket it requires the user to have two forms of authentication.", "RationaleStatement": "Adding MFA delete to an S3 bucket, requires additional authentication when you change the version state of your bucket or you delete and object version adding another layer of security in the event your security credentials are compromised or unauthorized access is granted.", "ImpactStatement": "", - "RemediationProcedure": "Perform the steps below to enable MFA delete on an S3 bucket.\n\nNote:\n-You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API.\n-You must use your 'root' account to enable MFA Delete on S3 buckets.\n\n**From Command line:**\n\n1. Run the s3api put-bucket-versioning command\n\n```\naws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa “arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode”\n```", - "AuditProcedure": "Perform the steps below to confirm MFA delete is configured on an S3 Bucket\n\n**From Console:**\n\n1. Login to the S3 console at `https://console.aws.amazon.com/s3/`\n\n2. Click the `Check` box next to the Bucket name you want to confirm\n\n3. In the window under `Properties`\n\n4. Confirm that Versioning is `Enabled`\n\n5. Confirm that MFA Delete is `Enabled`\n\n**From Command Line:**\n\n1. Run the `get-bucket-versioning`\n```\naws s3api get-bucket-versioning --bucket my-bucket\n```\n\nOutput example:\n```\n \n Enabled\n Enabled \n\n```\n\nIf the Console or the CLI output does not show Versioning and MFA Delete `enabled` refer to the remediation below.", + "RemediationProcedure": "Perform the steps below to enable MFA delete on an S3 bucket. Note: -You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API. -You must use your 'root' account to enable MFA Delete on S3 buckets. **From Command line:** 1. 
Run the s3api put-bucket-versioning command ``` aws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa \"arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode\" ```",
+ "AuditProcedure": "Perform the steps below to confirm MFA delete is configured on an S3 Bucket **From Console:** 1. Login to the S3 console at `https://console.aws.amazon.com/s3/` 2. Click the `Check` box next to the Bucket name you want to confirm 3. In the window under `Properties` 4. Confirm that Versioning is `Enabled` 5. Confirm that MFA Delete is `Enabled` **From Command Line:** 1. Run the `get-bucket-versioning` ``` aws s3api get-bucket-versioning --bucket my-bucket ``` Output example: ``` Enabled Enabled ``` If the Console or the CLI output does not show Versioning and MFA Delete `enabled` refer to the remediation below.",
 "AdditionalInformation": "",
 "References": "https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete:https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html:https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-broken.html"
 }
 @@ -522,10 +522,10 @@
 "Profile": "Level 2",
 "AssessmentStatus": "Manual",
 "Description": "Amazon S3 buckets can contain sensitive data, that for security purposes should be discovered, monitored, classified and protected. Macie along with other 3rd party tools can automatically provide an inventory of Amazon S3 buckets.",
- "RationaleStatement": "Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information.\n\nAmazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.",
+ "RationaleStatement": "Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.",
 "ImpactStatement": "There is a cost associated with using Amazon Macie. There is also typically a cost associated with 3rd Party tools that perform similar processes and protection.",
- "RemediationProcedure": "Perform the steps below to enable and configure Amazon Macie\n\n**From Console:**\n\n1. Log on to the Macie console at `https://console.aws.amazon.com/macie/`\n\n2. Click `Get started`.\n\n3. Click `Enable Macie`.\n\nSetup a repository for sensitive data discovery results\n\n1. In the Left pane, under Settings, click `Discovery results`.\n\n2. Make sure `Create bucket` is selected.\n\n3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. In addition, the name must start with a lowercase letter or a number.\n\n4. Click on `Advanced`.\n\n5. Block all public access, make sure `Yes` is selected.\n\n6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket.\n\n7. 
Click on `Save`\n\nCreate a job to discover sensitive data\n\n1. In the left pane, click `S3 buckets`. Macie displays a list of all the S3 buckets for your account.\n\n2. Select the `check box` for each bucket that you want Macie to analyze as part of the job\n\n3. Click `Create job`.\n\n3. Click `Quick create`.\n\n4. For the Name and description step, enter a name and, optionally, a description of the job.\n\n5. Then click `Next`.\n\n6. For the Review and create step, click `Submit`.\n\nReview your findings\n\n1. In the left pane, click `Findings`.\n\n2. To view the details of a specific finding, choose any field other than the check box for the finding.\n\nIf you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool.",
- "AuditProcedure": "Perform the following steps to determine if Macie is running:\n\n**From Console:**\n\n 1. Login to the Macie console at https://console.aws.amazon.com/macie/\n\n 2. In the left hand pane click on By job under findings.\n\n 3. Confirm that you have a Job setup for your S3 Buckets\n\nWhen you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below.\n\nIf you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation.",
+ "RemediationProcedure": "Perform the steps below to enable and configure Amazon Macie **From Console:** 1. Log on to the Macie console at `https://console.aws.amazon.com/macie/` 2. Click `Get started`. 3. Click `Enable Macie`. Setup a repository for sensitive data discovery results 1. In the Left pane, under Settings, click `Discovery results`. 2. Make sure `Create bucket` is selected. 3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. In addition, the name must start with a lowercase letter or a number. 4. Click on `Advanced`. 5. Block all public access, make sure `Yes` is selected. 6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket. 7. Click on `Save` Create a job to discover sensitive data 1. In the left pane, click `S3 buckets`. Macie displays a list of all the S3 buckets for your account. 2. Select the `check box` for each bucket that you want Macie to analyze as part of the job 3. Click `Create job`. 4. Click `Quick create`. 5. For the Name and description step, enter a name and, optionally, a description of the job. 6. Then click `Next`. 7. For the Review and create step, click `Submit`. Review your findings 1. In the left pane, click `Findings`. 2. To view the details of a specific finding, choose any field other than the check box for the finding. If you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool.",
+ "AuditProcedure": "Perform the following steps to determine if Macie is running: **From Console:** 1. Login to the Macie console at https://console.aws.amazon.com/macie/ 2. In the left hand pane click on By job under findings. 3. Confirm that you have a Job setup for your S3 Buckets When you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below. 
If you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation.", "AdditionalInformation": "", "References": "https://aws.amazon.com/macie/getting-started/:https://docs.aws.amazon.com/workspaces/latest/adminguide/data-protection.html:https://docs.aws.amazon.com/macie/latest/user/data-classification.html" } @@ -544,10 +544,10 @@ "Profile": "Level 1", "AssessmentStatus": "Automated", "Description": "Amazon S3 provides `Block public access (bucket settings)` and `Block public access (account settings)` to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principal with sufficient S3 permissions can enable public access at the bucket and/or object level. While enabled, `Block public access (bucket settings)` prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, `Block public access (account settings)` prevents all buckets, and contained objects, from becoming publicly accessible across the entire account.", - "RationaleStatement": "Amazon S3 `Block public access (bucket settings)` prevents the accidental or malicious public exposure of data contained within the respective bucket(s). \n\nAmazon S3 `Block public access (account settings)` prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account.\n\nWhether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case.", + "RationaleStatement": "Amazon S3 `Block public access (bucket settings)` prevents the accidental or malicious public exposure of data contained within the respective bucket(s). Amazon S3 `Block public access (account settings)` prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account. Whether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case.", "ImpactStatement": "When you apply Block Public Access settings to an account, the settings apply to all AWS Regions globally. The settings might not take effect in all Regions immediately or simultaneously, but they eventually propagate to all Regions.", - "RemediationProcedure": "**If utilizing Block Public Access (bucket settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select the Check box next to the Bucket.\n3. Click on 'Edit public access settings'.\n4. Click 'Block all public access'\n5. Repeat for all the buckets in your AWS account that contain sensitive data.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets\n```\naws s3 ls\n```\n2. Set the Block Public Access to true on that bucket\n```\naws s3api put-public-access-block --bucket --public-access-block-configuration \"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true\"\n```\n\n**If utilizing Block Public Access (account settings)**\n\n**From Console:**\n\nIf the output reads `true` for the separate configuration settings then it is set on the account.\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Choose `Block Public Access (account settings)`\n3. 
Choose `Edit` to change the block public access settings for all the buckets in your AWS account\n4. Choose the settings you want to change, and then choose `Save`. For details about each setting, pause on the `i` icons.\n5. When you're asked for confirmation, enter `confirm`. Then Click `Confirm` to save your changes.\n\n**From Command Line:**\n\nTo set Block Public access settings for this account, run the following command:\n```\naws s3control put-public-access-block\n--public-access-block-configuration BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true\n--account-id \n```", - "AuditProcedure": "**If utilizing Block Public Access (bucket settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select the Check box next to the Bucket.\n3. Click on 'Edit public access settings'.\n4. Ensure that block public access settings are set appropriately for this bucket\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets\n```\naws s3 ls\n```\n2. Find the public access setting on that bucket\n```\naws s3api get-public-access-block --bucket \n```\nOutput if Block Public access is enabled:\n\n```\n{\n \"PublicAccessBlockConfiguration\": {\n \"BlockPublicAcls\": true,\n \"IgnorePublicAcls\": true,\n \"BlockPublicPolicy\": true,\n \"RestrictPublicBuckets\": true\n }\n}\n```\n\nIf the output reads `false` for the separate configuration settings then proceed to the remediation.\n\n**If utilizing Block Public Access (account settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Choose `Block public access (account settings)`\n3. Ensure that block public access settings are set appropriately for your AWS account.\n\n**From Command Line:**\n\nTo check Public access settings for this account status, run the following command,\n`aws s3control get-public-access-block --account-id --region `\n\nOutput if Block Public access is enabled:\n\n```\n{\n \"PublicAccessBlockConfiguration\": {\n \"IgnorePublicAcls\": true, \n \"BlockPublicPolicy\": true, \n \"BlockPublicAcls\": true, \n \"RestrictPublicBuckets\": true\n }\n}\n```\n\nIf the output reads `false` for the separate configuration settings then proceed to the remediation.", + "RemediationProcedure": "**If utilizing Block Public Access (bucket settings)** **From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Click 'Block all public access' 5. Repeat for all the buckets in your AWS account that contain sensitive data. **From Command Line:** 1. List all of the S3 Buckets ``` aws s3 ls ``` 2. Set the Block Public Access to true on that bucket ``` aws s3api put-public-access-block --bucket --public-access-block-configuration \"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true\" ``` **If utilizing Block Public Access (account settings)** **From Console:** If the output reads `true` for the separate configuration settings then it is set on the account. 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Choose `Block Public Access (account settings)` 3. 
Choose `Edit` to change the block public access settings for all the buckets in your AWS account 4. Choose the settings you want to change, and then choose `Save`. For details about each setting, pause on the `i` icons. 5. When you're asked for confirmation, enter `confirm`. Then Click `Confirm` to save your changes. **From Command Line:** To set Block Public access settings for this account, run the following command: ``` aws s3control put-public-access-block --public-access-block-configuration BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true --account-id ```", + "AuditProcedure": "**If utilizing Block Public Access (bucket settings)** **From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Ensure that block public access settings are set appropriately for this bucket 5. Repeat for all the buckets in your AWS account. **From Command Line:** 1. List all of the S3 Buckets ``` aws s3 ls ``` 2. Find the public access setting on that bucket ``` aws s3api get-public-access-block --bucket ``` Output if Block Public access is enabled: ``` { \"PublicAccessBlockConfiguration\": { \"BlockPublicAcls\": true, \"IgnorePublicAcls\": true, \"BlockPublicPolicy\": true, \"RestrictPublicBuckets\": true } } ``` If the output reads `false` for the separate configuration settings then proceed to the remediation. **If utilizing Block Public Access (account settings)** **From Console:** 1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Choose `Block public access (account settings)` 3. Ensure that block public access settings are set appropriately for your AWS account. **From Command Line:** To check Public access settings for this account status, run the following command, `aws s3control get-public-access-block --account-id --region ` Output if Block Public access is enabled: ``` { \"PublicAccessBlockConfiguration\": { \"IgnorePublicAcls\": true, \"BlockPublicPolicy\": true, \"BlockPublicAcls\": true, \"RestrictPublicBuckets\": true } } ``` If the output reads `false` for the separate configuration settings then proceed to the remediation.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-account.html" } @@ -567,8 +567,8 @@ "Description": "Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported.", "RationaleStatement": "Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.", "ImpactStatement": "Losing access or removing the KMS key in use by the EBS volumes will result in no longer being able to access the volumes.", - "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ \n2. Under `Account attributes`, click `EBS encryption`.\n3. Click `Manage`.\n4. Click the `Enable` checkbox.\n5. Click `Update EBS encryption`\n6. Repeat for every region requiring the change.\n\n**Note:** EBS volume encryption is configured per region.\n\n**From Command Line:**\n\n1. Run \n```\naws --region ec2 enable-ebs-encryption-by-default\n```\n2. 
Verify that `\"EbsEncryptionByDefault\": true` is displayed.\n3. Repeat every region requiring the change.\n\n**Note:** EBS volume encryption is configured per region.", - "AuditProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ \n2. Under `Account attributes`, click `EBS encryption`.\n3. Verify `Always encrypt new EBS volumes` displays `Enabled`.\n4. Review every region in-use.\n\n**Note:** EBS volume encryption is configured per region.\n\n**From Command Line:**\n\n1. Run \n```\naws --region ec2 get-ebs-encryption-by-default\n```\n2. Verify that `\"EbsEncryptionByDefault\": true` is displayed.\n3. Review every region in-use.\n\n**Note:** EBS volume encryption is configured per region.", + "RemediationProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under `Account attributes`, click `EBS encryption`. 3. Click `Manage`. 4. Click the `Enable` checkbox. 5. Click `Update EBS encryption` 6. Repeat for every region requiring the change. **Note:** EBS volume encryption is configured per region. **From Command Line:** 1. Run ``` aws --region ec2 enable-ebs-encryption-by-default ``` 2. Verify that `\"EbsEncryptionByDefault\": true` is displayed. 3. Repeat every region requiring the change. **Note:** EBS volume encryption is configured per region.", + "AuditProcedure": "**From Console:** 1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under `Account attributes`, click `EBS encryption`. 3. Verify `Always encrypt new EBS volumes` displays `Enabled`. 4. Review every region in-use. **Note:** EBS volume encryption is configured per region. **From Command Line:** 1. Run ``` aws --region ec2 get-ebs-encryption-by-default ``` 2. Verify that `\"EbsEncryptionByDefault\": true` is displayed. 3. Review every region in-use. **Note:** EBS volume encryption is configured per region.", "AdditionalInformation": "Default EBS volume encryption only applies to newly created EBS volumes. Existing EBS volumes are **not** converted automatically.", "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html:https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/" } @@ -588,8 +588,8 @@ "Description": "Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance.", "RationaleStatement": "Databases are likely to hold sensitive and critical data, it is highly recommended to implement encryption in order to protect your data from unauthorized access or disclosure. With RDS encryption enabled, the data stored on the instance's underlying storage, the automated backups, read replicas, and snapshots, are all encrypted.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. In the left navigation panel, click on `Databases`\n3. Select the Database instance that needs to be encrypted.\n4. Click on `Actions` button placed at the top right and select `Take Snapshot`.\n5. 
On the Take Snapshot page, enter a database name of which you want to take a snapshot in the `Snapshot Name` field and click on `Take Snapshot`.\n6. Select the newly created snapshot and click on the `Action` button placed at the top right and select `Copy snapshot` from the Action menu.\n7. On the Make Copy of DB Snapshot page, perform the following:\n\n- In the New DB Snapshot Identifier field, Enter a name for the `new snapshot`.\n- Check `Copy Tags`, New snapshot must have the same tags as the source snapshot.\n- Select `Yes` from the `Enable Encryption` dropdown list to enable encryption, You can choose to use the AWS default encryption key or custom key from Master Key dropdown list.\n\n8. Click `Copy Snapshot` to create an encrypted copy of the selected instance snapshot.\n9. Select the new Snapshot Encrypted Copy and click on the `Action` button placed at the top right and select `Restore Snapshot` button from the Action menu, This will restore the encrypted snapshot to a new database instance.\n10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field.\n11. Review the instance configuration details and click `Restore DB Instance`.\n12. As the new instance provisioning process is completed can update application configuration to refer to the endpoint of the new Encrypted database instance Once the database endpoint is changed at the application level, can remove the unencrypted instance.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names available in the selected AWS region, The command output should return the database instance identifier.\n```\naws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. Run `create-db-snapshot` command to create a snapshot for the selected database instance, The command output will return the `new snapshot` with name DB Snapshot Name.\n```\naws rds create-db-snapshot --region --db-snapshot-identifier --db-instance-identifier \n```\n3. Now run `list-aliases` command to list the KMS keys aliases available in a specified region, The command output should return each `key alias currently available`. For our RDS encryption activation process, locate the ID of the AWS default KMS key.\n```\naws kms list-aliases --region \n```\n4. Run `copy-db-snapshot` command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot, The command output will return the `encrypted instance snapshot configuration`.\n```\naws rds copy-db-snapshot --region --source-db-snapshot-identifier --target-db-snapshot-identifier --copy-tags --kms-key-id \n```\n5. Run `restore-db-instance-from-db-snapshot` command to restore the encrypted snapshot created at the previous step to a new database instance, If successful, the command output should return the new encrypted database instance configuration.\n```\naws rds restore-db-instance-from-db-snapshot --region --db-instance-identifier --db-snapshot-identifier \n```\n6. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region, Output will return database instance identifier name Select encrypted database name that we just created DB-Name-Encrypted.\n```\naws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier'\n```\n7. 
Run again `describe-db-instances` command using the RDS instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True`.\n```\naws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].StorageEncrypted'\n```", - "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/\n2. In the navigation pane, under RDS dashboard, click `Databases`.\n3. Select the RDS Instance that you want to examine\n4. Click `Instance Name` to see details, then click on `Configuration` tab.\n5. Under Configuration Details section, In Storage pane search for the `Encryption Enabled` Status.\n6. If the current status is set to `Disabled`, Encryption is not enabled for the selected RDS Instance database instance.\n7. Repeat steps 3 to 7 to verify encryption status of other RDS Instance in same region.\n8. Change region from the top of the navigation bar and repeat audit for other regions.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS Instance database names, available in the selected AWS region, Output will return each Instance database identifier-name.\n ```\naws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. Run again `describe-db-instances` command using the RDS Instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True` Or `False`.\n```\naws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].StorageEncrypted'\n```\n3. If the StorageEncrypted parameter value is `False`, Encryption is not enabled for the selected RDS database instance.\n4. Repeat steps 1 to 3 for auditing each RDS Instance and change Region to verify for other regions", + "RemediationProcedure": "**From Console:** 1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on `Databases` 3. Select the Database instance that needs to be encrypted. 4. Click on `Actions` button placed at the top right and select `Take Snapshot`. 5. On the Take Snapshot page, enter a database name of which you want to take a snapshot in the `Snapshot Name` field and click on `Take Snapshot`. 6. Select the newly created snapshot and click on the `Action` button placed at the top right and select `Copy snapshot` from the Action menu. 7. On the Make Copy of DB Snapshot page, perform the following: - In the New DB Snapshot Identifier field, Enter a name for the `new snapshot`. - Check `Copy Tags`, New snapshot must have the same tags as the source snapshot. - Select `Yes` from the `Enable Encryption` dropdown list to enable encryption, You can choose to use the AWS default encryption key or custom key from Master Key dropdown list. 8. Click `Copy Snapshot` to create an encrypted copy of the selected instance snapshot. 9. Select the new Snapshot Encrypted Copy and click on the `Action` button placed at the top right and select `Restore Snapshot` button from the Action menu, This will restore the encrypted snapshot to a new database instance. 10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field. 11. Review the instance configuration details and click `Restore DB Instance`. 12. 
After the new instance provisioning process is completed, update your application configuration to refer to the endpoint of the new encrypted database instance. Once the database endpoint is changed at the application level, you can remove the unencrypted instance. **From Command Line:** 1. Run `describe-db-instances` command to list all RDS database names available in the selected AWS region, The command output should return the database instance identifier. ``` aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. Run `create-db-snapshot` command to create a snapshot for the selected database instance, The command output will return the `new snapshot` with name DB Snapshot Name. ``` aws rds create-db-snapshot --region --db-snapshot-identifier --db-instance-identifier ``` 3. Now run `list-aliases` command to list the KMS keys aliases available in a specified region, The command output should return each `key alias currently available`. For our RDS encryption activation process, locate the ID of the AWS default KMS key. ``` aws kms list-aliases --region ``` 4. Run `copy-db-snapshot` command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot, The command output will return the `encrypted instance snapshot configuration`. ``` aws rds copy-db-snapshot --region --source-db-snapshot-identifier --target-db-snapshot-identifier --copy-tags --kms-key-id ``` 5. Run `restore-db-instance-from-db-snapshot` command to restore the encrypted snapshot created at the previous step to a new database instance, If successful, the command output should return the new encrypted database instance configuration. ``` aws rds restore-db-instance-from-db-snapshot --region --db-instance-identifier --db-snapshot-identifier ``` 6. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region, Output will return database instance identifier name Select encrypted database name that we just created DB-Name-Encrypted. ``` aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' ``` 7. Run again `describe-db-instances` command using the RDS instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True`. ``` aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].StorageEncrypted' ```",
+ "AuditProcedure": "**From Console:** 1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/ 2. In the navigation pane, under RDS dashboard, click `Databases`. 3. Select the RDS Instance that you want to examine 4. Click `Instance Name` to see details, then click on `Configuration` tab. 5. Under Configuration Details section, In Storage pane search for the `Encryption Enabled` Status. 6. If the current status is set to `Disabled`, Encryption is not enabled for the selected RDS Instance database instance. 7. Repeat steps 3 to 7 to verify encryption status of other RDS Instance in same region. 8. Change region from the top of the navigation bar and repeat audit for other regions. **From Command Line:** 1. Run `describe-db-instances` command to list all RDS Instance database names, available in the selected AWS region, Output will return each Instance database identifier-name. ``` aws rds describe-db-instances --region --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. 
Run again `describe-db-instances` command using the RDS Instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True` Or `False`. ``` aws rds describe-db-instances --region --db-instance-identifier --query 'DBInstances[*].StorageEncrypted' ``` 3. If the StorageEncrypted parameter value is `False`, Encryption is not enabled for the selected RDS database instance. 4. Repeat steps 1 to 3 for auditing each RDS Instance and change Region to verify for other regions", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html:https://aws.amazon.com/blogs/database/selecting-the-right-encryption-options-for-amazon-rds-and-amazon-aurora-database-engines/#:~:text=With%20RDS%2Dencrypted%20resources%2C%20data,transparent%20to%20your%20database%20engine.:https://aws.amazon.com/rds/features/security/" } @@ -607,10 +607,10 @@ "Profile": "Level 1", "AssessmentStatus": "Automated", "Description": "AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services (such as CloudFormation).", - "RationaleStatement": "The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally, \n\n- ensuring that a multi-regions trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected\n\n- ensuring that a multi-regions trail exists will ensure that `Global Service Logging` is enabled for a trail by default to capture recording of events generated on \nAWS global services\n\n- for a multi-regions trail, ensuring that management events configured for all type of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account", - "ImpactStatement": "S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features:\n\n1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html", - "RemediationProcedure": "Perform the following to enable global (Multi-region) CloudTrail logging:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on _Trails_ on the left navigation pane\n3. Click `Get Started Now` , if presented\n - Click `Add new trail` \n - Enter a trail name in the `Trail name` box\n - Set the `Apply trail to all regions` option to `Yes` \n - Specify an S3 bucket name in the `S3 bucket` box\n - Click `Create` \n4. If 1 or more trails already exist, select the target trail to enable for global logging\n5. Click the edit icon (pencil) next to `Apply trail to all regions` , Click `Yes` and Click `Save`.\n6. 
Click the edit icon (pencil) next to `Management Events` click `All` for setting `Read/Write Events` and Click `Save`.\n\n**From Command Line:**\n```\naws cloudtrail create-trail --name --bucket-name --is-multi-region-trail \naws cloudtrail update-trail --name --is-multi-region-trail\n```\n\nNote: Creating CloudTrail via CLI without providing any overriding options configures `Management Events` to set `All` type of `Read/Writes` by default.", - "AuditProcedure": "Perform the following to determine if CloudTrail is enabled for all regions:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n - You will be presented with a list of trails across all regions\n3. Ensure at least one Trail has `All` specified in the `Region` column\n4. Click on a trail via the link in the _Name_ column\n5. Ensure `Logging` is set to `ON` \n6. Ensure `Apply trail to all regions` is set to `Yes`\n7. In section `Management Events` ensure `Read/Write Events` set to `ALL`\n\n**From Command Line:**\n```\n aws cloudtrail describe-trails\n```\nEnsure `IsMultiRegionTrail` is set to `true` \n```\naws cloudtrail get-trail-status --name \n```\nEnsure `IsLogging` is set to `true`\n```\naws cloudtrail get-event-selectors --trail-name \n```\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`", + "RationaleStatement": "The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally, - ensuring that a multi-regions trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected - ensuring that a multi-regions trail exists will ensure that `Global Service Logging` is enabled for a trail by default to capture recording of events generated on AWS global services - for a multi-regions trail, ensuring that management events configured for all type of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account", + "ImpactStatement": "S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features: 1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html", + "RemediationProcedure": "Perform the following to enable global (Multi-region) CloudTrail logging: **From Console:** 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on _Trails_ on the left navigation pane 3. Click `Get Started Now` , if presented - Click `Add new trail` - Enter a trail name in the `Trail name` box - Set the `Apply trail to all regions` option to `Yes` - Specify an S3 bucket name in the `S3 bucket` box - Click `Create` 4. If 1 or more trails already exist, select the target trail to enable for global logging 5. Click the edit icon (pencil) next to `Apply trail to all regions` , Click `Yes` and Click `Save`. 6. Click the edit icon (pencil) next to `Management Events` click `All` for setting `Read/Write Events` and Click `Save`. 
**From Command Line:** ``` aws cloudtrail create-trail --name --bucket-name --is-multi-region-trail aws cloudtrail update-trail --name --is-multi-region-trail ``` Note: Creating CloudTrail via CLI without providing any overriding options configures `Management Events` to set `All` type of `Read/Writes` by default.", + "AuditProcedure": "Perform the following to determine if CloudTrail is enabled for all regions: **From Console:** 1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane - You will be presented with a list of trails across all regions 3. Ensure at least one Trail has `All` specified in the `Region` column 4. Click on a trail via the link in the _Name_ column 5. Ensure `Logging` is set to `ON` 6. Ensure `Apply trail to all regions` is set to `Yes` 7. In section `Management Events` ensure `Read/Write Events` set to `ALL` **From Command Line:** ``` aws cloudtrail describe-trails ``` Ensure `IsMultiRegionTrail` is set to `true` ``` aws cloudtrail get-trail-status --name ``` Ensure `IsLogging` is set to `true` ``` aws cloudtrail get-event-selectors --trail-name ``` Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-management-events:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html?icmpid=docs_cloudtrail_console#logging-management-events:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-supported-services.html#cloud-trail-supported-services-data-events" } @@ -630,8 +630,8 @@ "Description": "S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets.", "RationaleStatement": "Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity within your S3 Buckets using Amazon CloudWatch Events.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/`\n5. Once the Cloudtrail is selected, check the `Write` event checkbox, so that `object-level` logging for Write events is enabled.\n6. Repeat steps 2 to 5 to enable object-level logging of write events for other S3 buckets.\n\n**From Command Line:**\n\n1. 
To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier:\n```\naws cloudtrail put-event-selectors --region --trail-name --event-selectors '[{ \"ReadWriteType\": \"WriteOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]'\n```\n2. The command output will be `object-level` event trail configuration.\n3. If you want to enable it for all buckets at once then change Values parameter to `[\"arn:aws:s3\"]` in command given above.\n4. Repeat step 1 for each s3 bucket to update `object-level` logging of write events.\n5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.", - "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to CloudTrail dashboard at `https://console.aws.amazon.com/cloudtrail/`\n2. In the left panel, click `Trails` and then click on the CloudTrail Name that you want to examine.\n3. Review `General details`\n4. Confirm that `Multi-region trail` is set to `Yes`\n5. Scroll down to `Data events`\n6. Confirm that it reads:\nData events: S3\nBucket Name: All current and future S3 buckets\nRead: Enabled\nWrite: Enabled\n7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets in CloudTrail.\nIf the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below.\n\n**From Command Line:**\n\n1. Run `list-trails` command to list the names of all Amazon CloudTrail trails currently available in all AWS regions:\n```\naws cloudtrail list-trails\n```\n2. The command output will be a list of all the trail names to include.\n\"TrailARN\": \"arn:aws:cloudtrail:::trail/\",\n\"Name\": \"\",\n\"HomeRegion\": \"\"\n3. Next run 'get-trail- command to determine Multi-region.\n```\naws cloudtrail get-trail --name --region \n```\n4. The command output should include:\n\"IsMultiRegionTrail\": true,\n5. Next run `get-event-selectors` command using the `Name` of the trail and the `region` returned in step 2 to determine if Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets:\n```\naws cloudtrail get-event-selectors --region --trail-name --query EventSelectors[*].DataResources[]\n```\n6. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector.\n\"Type\": \"AWS::S3::Object\",\n \"Values\": [\n \"arn:aws:s3\"\n7. If the `get-event-selectors` command returns an empty array '[]', the Data events are not included in the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded.\n8. Repeat steps 1 to 5 for auditing each CloudTrail to determine if Data events for S3 are covered.\nIf Multi-region is not set to true and the Data events does not show S3 defined as shown refer to the remediation procedure below.", + "RemediationProcedure": "**From Console:** 1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. 
Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/` 5. Once the Cloudtrail is selected, check the `Write` event checkbox, so that `object-level` logging for Write events is enabled. 6. Repeat steps 2 to 5 to enable object-level logging of write events for other S3 buckets. **From Command Line:** 1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier: ``` aws cloudtrail put-event-selectors --region --trail-name --event-selectors '[{ \"ReadWriteType\": \"WriteOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]' ``` 2. The command output will be `object-level` event trail configuration. 3. If you want to enable it for all buckets at once then change Values parameter to `[\"arn:aws:s3\"]` in command given above. 4. Repeat step 1 for each s3 bucket to update `object-level` logging of write events. 5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.", + "AuditProcedure": "**From Console:** 1. Login to the AWS Management Console and navigate to CloudTrail dashboard at `https://console.aws.amazon.com/cloudtrail/` 2. In the left panel, click `Trails` and then click on the CloudTrail Name that you want to examine. 3. Review `General details` 4. Confirm that `Multi-region trail` is set to `Yes` 5. Scroll down to `Data events` 6. Confirm that it reads: Data events: S3 Bucket Name: All current and future S3 buckets Read: Enabled Write: Enabled 7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets in CloudTrail. If the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below. **From Command Line:** 1. Run `list-trails` command to list the names of all Amazon CloudTrail trails currently available in all AWS regions: ``` aws cloudtrail list-trails ``` 2. The command output will be a list of all the trail names to include. \"TrailARN\": \"arn:aws:cloudtrail:::trail/\", \"Name\": \"\", \"HomeRegion\": \"\" 3. Next run 'get-trail- command to determine Multi-region. ``` aws cloudtrail get-trail --name --region ``` 4. The command output should include: \"IsMultiRegionTrail\": true, 5. Next run `get-event-selectors` command using the `Name` of the trail and the `region` returned in step 2 to determine if Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets: ``` aws cloudtrail get-event-selectors --region --trail-name --query EventSelectors[*].DataResources[] ``` 6. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector. \"Type\": \"AWS::S3::Object\", \"Values\": [ \"arn:aws:s3\" 7. If the `get-event-selectors` command returns an empty array '[]', the Data events are not included in the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 8. Repeat steps 1 to 5 for auditing each CloudTrail to determine if Data events for S3 are covered. 
If Multi-region is not set to true and the Data events does not show S3 defined as shown refer to the remediation procedure below.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html" } @@ -651,8 +651,8 @@ "Description": "S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets.", "RationaleStatement": "Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity using Amazon CloudWatch Events.", "ImpactStatement": "", - "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/`\n5. Once the Cloudtrail is selected, check the Read event checkbox, so that `object-level` logging for `Read` events is enabled.\n6. Repeat steps 2 to 5 to enable `object-level` logging of read events for other S3 buckets.\n\n**From Command Line:**\n1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier:\n```\naws cloudtrail put-event-selectors --region --trail-name --event-selectors '[{ \"ReadWriteType\": \"ReadOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]'\n```\n2. The command output will be `object-level` event trail configuration.\n3. If you want to enable it for all buckets at ones then change Values parameter to `[\"arn:aws:s3\"]` in command given above.\n4. Repeat step 1 for each s3 bucket to update `object-level` logging of read events.\n5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.", - "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. If the current status for `Object-level` logging is set to `Disabled`, then object-level logging of read events for the selected s3 bucket is not set.\n5. If the current status for `Object-level` logging is set to `Enabled`, but the Read event check-box is unchecked, then object-level logging of read events for the selected s3 bucket is not set.\n6. Repeat steps 2 to 5 to verify `object-level` logging for `read` events of your other S3 buckets.\n\n**From Command Line:**\n1. 
Run `describe-trails` command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region:\n```\naws cloudtrail describe-trails --region --output table --query trailList[*].Name\n```\n2. The command output will be table of the requested trail names.\n3. Run `get-event-selectors` command using the name of the trail returned at the previous step and custom query filters to determine if Data events logging feature is enabled within the selected CloudTrail trail configuration for s3 bucket resources:\n```\naws cloudtrail get-event-selectors --region --trail-name --query EventSelectors[*].DataResources[]\n```\n4. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector.\n5. If the `get-event-selectors` command returns an empty array, the Data events are not included into the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded.\n6. Repeat steps 1 to 5 for auditing each s3 bucket to identify other trails that are missing the capability to log Data events.\n7. Change the AWS region by updating the `--region` command parameter and perform the audit process for other regions.", + "RemediationProcedure": "**From Console:** 1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/` 5. Once the Cloudtrail is selected, check the Read event checkbox, so that `object-level` logging for `Read` events is enabled. 6. Repeat steps 2 to 5 to enable `object-level` logging of read events for other S3 buckets. **From Command Line:** 1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier: ``` aws cloudtrail put-event-selectors --region --trail-name --event-selectors '[{ \"ReadWriteType\": \"ReadOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]' ``` 2. The command output will be `object-level` event trail configuration. 3. If you want to enable it for all buckets at ones then change Values parameter to `[\"arn:aws:s3\"]` in command given above. 4. Repeat step 1 for each s3 bucket to update `object-level` logging of read events. 5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.", + "AuditProcedure": "**From Console:** 1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. If the current status for `Object-level` logging is set to `Disabled`, then object-level logging of read events for the selected s3 bucket is not set. 5. 
If the current status for `Object-level` logging is set to `Enabled`, but the Read event check-box is unchecked, then object-level logging of read events for the selected s3 bucket is not set. 6. Repeat steps 2 to 5 to verify `object-level` logging for `read` events of your other S3 buckets. **From Command Line:** 1. Run `describe-trails` command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region: ``` aws cloudtrail describe-trails --region --output table --query trailList[*].Name ``` 2. The command output will be table of the requested trail names. 3. Run `get-event-selectors` command using the name of the trail returned at the previous step and custom query filters to determine if Data events logging feature is enabled within the selected CloudTrail trail configuration for s3 bucket resources: ``` aws cloudtrail get-event-selectors --region --trail-name --query EventSelectors[*].DataResources[] ``` 4. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector. 5. If the `get-event-selectors` command returns an empty array, the Data events are not included into the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 6. Repeat steps 1 to 5 for auditing each s3 bucket to identify other trails that are missing the capability to log Data events. 7. Change the AWS region by updating the `--region` command parameter and perform the audit process for other regions.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html" } @@ -672,8 +672,8 @@ "Description": "CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails.", "RationaleStatement": "Enabling log file validation will provide additional integrity checking of CloudTrail logs.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to enable log file validation on a given trail:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n3. Click on target trail\n4. Within the `General details` section click `edit`\n5. Under the `Advanced settings` section\n6. Check the enable box under `Log file validation` \n7. Click `Save changes` \n\n**From Command Line:**\n```\naws cloudtrail update-trail --name --enable-log-file-validation\n```\nNote that periodic validation of logs using these digests can be performed by running the following command:\n```\naws cloudtrail validate-logs --trail-arn --start-time --end-time \n```", - "AuditProcedure": "Perform the following on each trail to determine if log file validation is enabled:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n3. 
For Every Trail:\n- Click on a trail via the link in the _Name_ column\n- Under the `General details` section, ensure `Log file validation` is set to `Enabled` \n\n**From Command Line:**\n```\naws cloudtrail describe-trails\n```\nEnsure `LogFileValidationEnabled` is set to `true` for each trail", + "RemediationProcedure": "Perform the following to enable log file validation on a given trail: **From Console:** 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane 3. Click on target trail 4. Within the `General details` section click `edit` 5. Under the `Advanced settings` section 6. Check the enable box under `Log file validation` 7. Click `Save changes` **From Command Line:** ``` aws cloudtrail update-trail --name --enable-log-file-validation ``` Note that periodic validation of logs using these digests can be performed by running the following command: ``` aws cloudtrail validate-logs --trail-arn --start-time --end-time ```", + "AuditProcedure": "Perform the following on each trail to determine if log file validation is enabled: **From Console:** 1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane 3. For Every Trail: - Click on a trail via the link in the _Name_ column - Under the `General details` section, ensure `Log file validation` is set to `Enabled` **From Command Line:** ``` aws cloudtrail describe-trails ``` Ensure `LogFileValidationEnabled` is set to `true` for each trail", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-enabling.html" } @@ -693,8 +693,8 @@ "Description": "CloudTrail logs a record of every API call made in your AWS account. These logs file are stored in an S3 bucket. It is recommended that the bucket policy or access control list (ACL) applied to the S3 bucket that CloudTrail logs to prevent public access to the CloudTrail logs.", "RationaleStatement": "Allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected account's use or configuration.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to remove any public access that has been granted to the bucket via an ACL or S3 bucket policy:\n\n1. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home)\n2. Right-click on the bucket and click Properties\n3. In the `Properties` pane, click the `Permissions` tab.\n4. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted.\n5. Select the row that grants permission to `Everyone` or `Any Authenticated User` \n6. Uncheck all the permissions granted to `Everyone` or `Any Authenticated User` (click `x` to delete the row).\n7. Click `Save` to save the ACL.\n8. If the `Edit bucket policy` button is present, click it.\n9. Remove any `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}.", - "AuditProcedure": "Perform the following to determine if any public access is granted to an S3 bucket via an ACL or S3 bucket policy:\n\n**From Console:**\n\n1. 
Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home)\n2. In the `API activity history` pane on the left, click `Trails` \n3. In the `Trails` pane, note the bucket names in the `S3 bucket` column\n4. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home)\n5. For each bucket noted in step 3, right-click on the bucket and click `Properties` \n6. In the `Properties` pane, click the `Permissions` tab.\n7. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted.\n8. Ensure no rows exists that have the `Grantee` set to `Everyone` or the `Grantee` set to `Any Authenticated User.` \n9. If the `Edit bucket policy` button is present, click it to review the bucket policy.\n10. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n```\n aws cloudtrail describe-trails --query 'trailList[*].S3BucketName'\n```\n2. Ensure the `AllUsers` principal is not granted privileges to that `` :\n```\n aws s3api get-bucket-acl --bucket --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/AllUsers` ]'\n```\n3. Ensure the `AuthenticatedUsers` principal is not granted privileges to that ``:\n```\n aws s3api get-bucket-acl --bucket --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/Authenticated Users` ]'\n```\n4. Get the S3 Bucket Policy\n```\n aws s3api get-bucket-policy --bucket \n```\n5. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}\n\n**Note:** Principal set to \"\\*\" or {\"AWS\" : \"\\*\"} allows anonymous access.", + "RemediationProcedure": "Perform the following to remove any public access that has been granted to the bucket via an ACL or S3 bucket policy: 1. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home) 2. Right-click on the bucket and click Properties 3. In the `Properties` pane, click the `Permissions` tab. 4. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted. 5. Select the row that grants permission to `Everyone` or `Any Authenticated User` 6. Uncheck all the permissions granted to `Everyone` or `Any Authenticated User` (click `x` to delete the row). 7. Click `Save` to save the ACL. 8. If the `Edit bucket policy` button is present, click it. 9. Remove any `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}.", + "AuditProcedure": "Perform the following to determine if any public access is granted to an S3 bucket via an ACL or S3 bucket policy: **From Console:** 1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home) 2. In the `API activity history` pane on the left, click `Trails` 3. In the `Trails` pane, note the bucket names in the `S3 bucket` column 4. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home) 5. For each bucket noted in step 3, right-click on the bucket and click `Properties` 6. In the `Properties` pane, click the `Permissions` tab. 7. 
The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted. 8. Ensure no rows exists that have the `Grantee` set to `Everyone` or the `Grantee` set to `Any Authenticated User.` 9. If the `Edit bucket policy` button is present, click it to review the bucket policy. 10. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"} **From Command Line:** 1. Get the name of the S3 bucket that CloudTrail is logging to: ``` aws cloudtrail describe-trails --query 'trailList[*].S3BucketName' ``` 2. Ensure the `AllUsers` principal is not granted privileges to that `` : ``` aws s3api get-bucket-acl --bucket --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/AllUsers` ]' ``` 3. Ensure the `AuthenticatedUsers` principal is not granted privileges to that ``: ``` aws s3api get-bucket-acl --bucket --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/Authenticated Users` ]' ``` 4. Get the S3 Bucket Policy ``` aws s3api get-bucket-policy --bucket ``` 5. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"} **Note:** Principal set to \"\\*\" or {\"AWS\" : \"\\*\"} allows anonymous access.", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html" } @@ -711,11 +711,11 @@ "Section": "3. Logging", "Profile": "Level 1", "AssessmentStatus": "Automated", - "Description": "AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long term analysis, realtime analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs.\n\nNote: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. CloudWatch Logs is a native way to accomplish this using AWS services but does not preclude the use of an alternate solution.", + "Description": "AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long term analysis, realtime analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs. 
Note: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. CloudWatch Logs is a native way to accomplish this using AWS services but does not preclude the use of an alternate solution.", "RationaleStatement": "Sending CloudTrail logs to CloudWatch Logs will facilitate real-time and historic activity logging based on user, API, resource, and IP address, and provides opportunity to establish alarms and notifications for anomalous or sensitivity account activity.", - "ImpactStatement": "Note: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:\n\n1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html", - "RemediationProcedure": "Perform the following to establish the prescribed state:\n\n**From Console:**\n\n1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/`\n2. Select the `Trail` the needs to be updated.\n3. Scroll down to `CloudWatch Logs`\n4. Click `Edit`\n5. Under `CloudWatch Logs` click the box `Enabled`\n6. Under `Log Group` pick new or select an existing log group\n7. Edit the `Log group name` to match the CloudTrail or pick the existing CloudWatch Group.\n8. Under `IAM Role` pick new or select an existing.\n9. Edit the `Role name` to match the CloudTrail or pick the existing IAM Role.\n10. Click `Save changes.\n\n**From Command Line:**\n```\naws cloudtrail update-trail --name --cloudwatch-logs-log-group-arn --cloudwatch-logs-role-arn \n```", - "AuditProcedure": "Perform the following to ensure CloudTrail is configured as prescribed:\n\n**From Console:**\n\n1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/`\n2. Under `Trails` , click on the CloudTrail you wish to evaluate\n3. Under the `CloudWatch Logs` section.\n4. Ensure a `CloudWatch Logs` log group is configured and listed.\n5. Under `General details` confirm `Last log file delivered` has a recent (~one day old) timestamp.\n\n**From Command Line:**\n\n1. Run the following command to get a listing of existing trails:\n```\n aws cloudtrail describe-trails\n```\n2. Ensure `CloudWatchLogsLogGroupArn` is not empty and note the value of the `Name` property.\n3. Using the noted value of the `Name` property, run the following command:\n```\n aws cloudtrail get-trail-status --name \n```\n4. Ensure the `LatestcloudwatchLogdDeliveryTime` property is set to a recent (~one day old) timestamp.\n\nIf the `CloudWatch Logs` log group is not setup and the delivery time is not recent refer to the remediation below.", + "ImpactStatement": "Note: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). 
Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:  1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html", +          "RemediationProcedure": "Perform the following to establish the prescribed state: **From Console:** 1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/` 2. Select the `Trail` that needs to be updated. 3. Scroll down to `CloudWatch Logs` 4. Click `Edit` 5. Under `CloudWatch Logs` click the box `Enabled` 6. Under `Log Group` pick new or select an existing log group 7. Edit the `Log group name` to match the CloudTrail or pick the existing CloudWatch Group. 8. Under `IAM Role` pick new or select an existing. 9. Edit the `Role name` to match the CloudTrail or pick the existing IAM Role. 10. Click `Save changes`. **From Command Line:** ``` aws cloudtrail update-trail --name --cloudwatch-logs-log-group-arn --cloudwatch-logs-role-arn ```", +          "AuditProcedure": "Perform the following to ensure CloudTrail is configured as prescribed: **From Console:** 1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/` 2. Under `Trails` , click on the CloudTrail you wish to evaluate 3. Under the `CloudWatch Logs` section. 4. Ensure a `CloudWatch Logs` log group is configured and listed. 5. Under `General details` confirm `Last log file delivered` has a recent (~one day old) timestamp. **From Command Line:** 1. Run the following command to get a listing of existing trails: ``` aws cloudtrail describe-trails ``` 2. Ensure `CloudWatchLogsLogGroupArn` is not empty and note the value of the `Name` property. 3. Using the noted value of the `Name` property, run the following command: ``` aws cloudtrail get-trail-status --name ``` 4. Ensure the `LatestCloudWatchLogsDeliveryTime` property is set to a recent (~one day old) timestamp. If the `CloudWatch Logs` log group is not set up and the delivery time is not recent refer to the remediation below.",            "AdditionalInformation": "",            "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html"        }      @@ -735,8 +735,8 @@            "Description": "AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), any configuration changes between resources. It is recommended AWS Config be enabled in all regions.",          "RationaleStatement": "The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing.",          "ImpactStatement": "It is recommended AWS Config be enabled in all regions.",  -          "RemediationProcedure": "To implement AWS Config configuration:\n\n**From Console:**\n\n1. Select the region you want to focus on in the top right of the console\n2. Click `Services` \n3. Click `Config` \n4. Define which resources you want to record in the selected region\n5. Choose to include global resources (IAM resources)\n6. 
Specify an S3 bucket in the same account or in another managed AWS account\n7. Create an SNS Topic from the same AWS account or another managed AWS account\n\n**From Command Line:**\n\n1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the [AWS Config Service prerequisites](http://docs.aws.amazon.com/config/latest/developerguide/gs-cli-prereq.html).\n2. Run this command to set up the configuration recorder\n```\naws configservice subscribe --s3-bucket my-config-bucket --sns-topic arn:aws:sns:us-east-1:012345678912:my-config-notice --iam-role arn:aws:iam::012345678912:role/myConfigRole\n```\n3. Run this command to start the configuration recorder:\n```\nstart-configuration-recorder --configuration-recorder-name \n```", - "AuditProcedure": "Process to evaluate AWS Config configuration per region\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the AWS Config console at [https://console.aws.amazon.com/config/](https://console.aws.amazon.com/config/).\n2. On the top right of the console select target Region.\n3. If presented with Setup AWS Config - follow remediation procedure:\n4. On the Resource inventory page, Click on edit (the gear icon). The Set Up AWS Config page appears.\n5. Ensure 1 or both check-boxes under \"All Resources\" is checked.\n - Include global resources related to IAM resources - which needs to be enabled in 1 region only\n6. Ensure the correct S3 bucket has been defined.\n7. Ensure the correct SNS topic has been defined.\n8. Repeat steps 2 to 7 for each region.\n\n**From Command Line:**\n\n1. Run this command to show all AWS Config recorders and their properties:\n```\naws configservice describe-configuration-recorders\n```\n2. Evaluate the output to ensure that there's at least one recorder for which `recordingGroup` object includes `\"allSupported\": true` AND `\"includeGlobalResourceTypes\": true`\n\nNote: There is one more parameter \"ResourceTypes\" in recordingGroup object. We don't need to check the same as whenever we set \"allSupported\": true, AWS enforces resource types to be empty (\"ResourceTypes\":[])\n\nSample Output:\n\n```\n{\n \"ConfigurationRecorders\": [\n {\n \"recordingGroup\": {\n \"allSupported\": true,\n \"resourceTypes\": [],\n \"includeGlobalResourceTypes\": true\n },\n \"roleARN\": \"arn:aws:iam:::role/service-role/\",\n \"name\": \"default\"\n }\n ]\n}\n```\n\n3. Run this command to show the status for all AWS Config recorders:\n```\naws configservice describe-configuration-recorder-status\n```\n4. In the output, find recorders with `name` key matching the recorders that met criteria in step 2. Ensure that at least one of them includes `\"recording\": true` and `\"lastStatus\": \"SUCCESS\"`", + "RemediationProcedure": "To implement AWS Config configuration: **From Console:** 1. Select the region you want to focus on in the top right of the console 2. Click `Services` 3. Click `Config` 4. Define which resources you want to record in the selected region 5. Choose to include global resources (IAM resources) 6. Specify an S3 bucket in the same account or in another managed AWS account 7. Create an SNS Topic from the same AWS account or another managed AWS account **From Command Line:** 1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the [AWS Config Service prerequisites](http://docs.aws.amazon.com/config/latest/developerguide/gs-cli-prereq.html). 2. 
Run this command to set up the configuration recorder ``` aws configservice subscribe --s3-bucket my-config-bucket --sns-topic arn:aws:sns:us-east-1:012345678912:my-config-notice --iam-role arn:aws:iam::012345678912:role/myConfigRole ``` 3. Run this command to start the configuration recorder: ``` start-configuration-recorder --configuration-recorder-name ```", + "AuditProcedure": "Process to evaluate AWS Config configuration per region **From Console:** 1. Sign in to the AWS Management Console and open the AWS Config console at [https://console.aws.amazon.com/config/](https://console.aws.amazon.com/config/). 2. On the top right of the console select target Region. 3. If presented with Setup AWS Config - follow remediation procedure: 4. On the Resource inventory page, Click on edit (the gear icon). The Set Up AWS Config page appears. 5. Ensure 1 or both check-boxes under \"All Resources\" is checked. - Include global resources related to IAM resources - which needs to be enabled in 1 region only 6. Ensure the correct S3 bucket has been defined. 7. Ensure the correct SNS topic has been defined. 8. Repeat steps 2 to 7 for each region. **From Command Line:** 1. Run this command to show all AWS Config recorders and their properties: ``` aws configservice describe-configuration-recorders ``` 2. Evaluate the output to ensure that there's at least one recorder for which `recordingGroup` object includes `\"allSupported\": true` AND `\"includeGlobalResourceTypes\": true` Note: There is one more parameter \"ResourceTypes\" in recordingGroup object. We don't need to check the same as whenever we set \"allSupported\": true, AWS enforces resource types to be empty (\"ResourceTypes\":[]) Sample Output: ``` { \"ConfigurationRecorders\": [ { \"recordingGroup\": { \"allSupported\": true, \"resourceTypes\": [], \"includeGlobalResourceTypes\": true }, \"roleARN\": \"arn:aws:iam:::role/service-role/\", \"name\": \"default\" } ] } ``` 3. Run this command to show the status for all AWS Config recorders: ``` aws configservice describe-configuration-recorder-status ``` 4. In the output, find recorders with `name` key matching the recorders that met criteria in step 2. Ensure that at least one of them includes `\"recording\": true` and `\"lastStatus\": \"SUCCESS\"`", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/cli/latest/reference/configservice/describe-configuration-recorder-status.html" } @@ -756,8 +756,8 @@ "Description": "S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request worked, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket.", "RationaleStatement": "By enabling S3 bucket logging on target S3 buckets, it is possible to capture all events which may affect objects within any target buckets. Configuring logs to be placed in a separate bucket allows access to log information which can be useful in security and incident response workflows.", "ImpactStatement": "", - "RemediationProcedure": "Perform the following to enable S3 bucket logging:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3).\n2. Under `All Buckets` click on the target S3 bucket\n3. Click on `Properties` in the top right of the console\n4. 
Under `Bucket:` click on `Logging` \n5. Configure bucket logging\n - Click on the `Enabled` checkbox\n - Select Target Bucket from list\n - Enter a Target Prefix\n6. Click `Save`.\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n```\naws cloudtrail describe-trails --region --query trailList[*].S3BucketName\n```\n2. Copy and add target bucket name at ``, Prefix for logfile at `` and optionally add an email address in the following template and save it as ``:\n```\n{\n \"LoggingEnabled\": {\n \"TargetBucket\": \"\",\n \"TargetPrefix\": \"\",\n \"TargetGrants\": [\n {\n \"Grantee\": {\n \"Type\": \"AmazonCustomerByEmail\",\n \"EmailAddress\": \"\"\n },\n \"Permission\": \"FULL_CONTROL\"\n }\n ]\n } \n}\n```\n3. Run the `put-bucket-logging` command with bucket name and `` as input, for more information refer at [put-bucket-logging](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-logging.html):\n```\naws s3api put-bucket-logging --bucket --bucket-logging-status file://\n```", - "AuditProcedure": "Perform the following ensure the CloudTrail S3 bucket has access logging is enabled:\n\n**From Console:**\n\n1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home)\n2. In the API activity history pane on the left, click Trails\n3. In the Trails pane, note the bucket names in the S3 bucket column\n4. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3).\n5. Under `All Buckets` click on a target S3 bucket\n6. Click on `Properties` in the top right of the console\n7. Under `Bucket:` _ `` _ click on `Logging` \n8. Ensure `Enabled` is checked.\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n``` \naws cloudtrail describe-trails --query 'trailList[*].S3BucketName' \n```\n2. Ensure Bucket Logging is enabled:\n```\naws s3api get-bucket-logging --bucket \n```\nEnsure command does not returns empty output.\n\nSample Output for a bucket with logging enabled:\n\n```\n{\n \"LoggingEnabled\": {\n \"TargetPrefix\": \"\",\n \"TargetBucket\": \"\"\n }\n}\n```", + "RemediationProcedure": "Perform the following to enable S3 bucket logging: **From Console:** 1. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3). 2. Under `All Buckets` click on the target S3 bucket 3. Click on `Properties` in the top right of the console 4. Under `Bucket:` click on `Logging` 5. Configure bucket logging - Click on the `Enabled` checkbox - Select Target Bucket from list - Enter a Target Prefix 6. Click `Save`. **From Command Line:** 1. Get the name of the S3 bucket that CloudTrail is logging to: ``` aws cloudtrail describe-trails --region --query trailList[*].S3BucketName ``` 2. Copy and add target bucket name at ``, Prefix for logfile at `` and optionally add an email address in the following template and save it as ``: ``` { \"LoggingEnabled\": { \"TargetBucket\": \"\", \"TargetPrefix\": \"\", \"TargetGrants\": [ { \"Grantee\": { \"Type\": \"AmazonCustomerByEmail\", \"EmailAddress\": \"\" }, \"Permission\": \"FULL_CONTROL\" } ] } } ``` 3. 
Run the `put-bucket-logging` command with bucket name and `` as input, for more information refer at [put-bucket-logging](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-logging.html): ``` aws s3api put-bucket-logging --bucket --bucket-logging-status file:// ```", + "AuditProcedure": "Perform the following ensure the CloudTrail S3 bucket has access logging is enabled: **From Console:** 1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home) 2. In the API activity history pane on the left, click Trails 3. In the Trails pane, note the bucket names in the S3 bucket column 4. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3). 5. Under `All Buckets` click on a target S3 bucket 6. Click on `Properties` in the top right of the console 7. Under `Bucket:` _ `` _ click on `Logging` 8. Ensure `Enabled` is checked. **From Command Line:** 1. Get the name of the S3 bucket that CloudTrail is logging to: ``` aws cloudtrail describe-trails --query 'trailList[*].S3BucketName' ``` 2. Ensure Bucket Logging is enabled: ``` aws s3api get-bucket-logging --bucket ``` Ensure command does not returns empty output. Sample Output for a bucket with logging enabled: ``` { \"LoggingEnabled\": { \"TargetPrefix\": \"\", \"TargetBucket\": \"\" } } ```", "AdditionalInformation": "", "References": "https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html" } @@ -777,9 +777,9 @@ "Description": "AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS.", "RationaleStatement": "Configuring CloudTrail to use SSE-KMS provides additional confidentiality controls on log data as a given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy.", "ImpactStatement": "Customer created keys incur an additional cost. See https://aws.amazon.com/kms/pricing/ for more information.", - "RemediationProcedure": "Perform the following to configure CloudTrail to use SSE-KMS:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. In the left navigation pane, choose `Trails` .\n3. Click on a Trail\n4. Under the `S3` section click on the edit button (pencil icon)\n5. Click `Advanced` \n6. Select an existing CMK from the `KMS key Id` drop-down menu\n - Note: Ensure the CMK is located in the same region as the S3 bucket\n - Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided [here](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-kms-key-policy-for-cloudtrail.html) for editing the selected CMK Key policy\n7. Click `Save` \n8. 
You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files.\n9. Click `Yes` \n\n**From Command Line:**\n```\naws cloudtrail update-trail --name --kms-id \naws kms put-key-policy --key-id --policy \n```", - "AuditProcedure": "Perform the following to determine if CloudTrail is configured to use SSE-KMS:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. In the left navigation pane, choose `Trails` .\n3. Select a Trail\n4. Under the `S3` section, ensure `Encrypt log files` is set to `Yes` and a KMS key ID is specified in the `KSM Key Id` field.\n\n**From Command Line:**\n\n1. Run the following command:\n```\n aws cloudtrail describe-trails \n```\n2. For each trail listed, SSE-KMS is enabled if the trail has a `KmsKeyId` property defined.", - "AdditionalInformation": "3 statements which need to be added to the CMK policy:\n\n1\\. Enable Cloudtrail to describe CMK properties\n```\n
{\n \"Sid\": \"Allow CloudTrail access\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"cloudtrail.amazonaws.com\"\n },\n \"Action\": \"kms:DescribeKey\",\n \"Resource\": \"*\"\n}\n```\n2\\. Granting encrypt permissions\n```\n
{\n \"Sid\": \"Allow CloudTrail to encrypt logs\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"cloudtrail.amazonaws.com\"\n },\n \"Action\": \"kms:GenerateDataKey*\",\n \"Resource\": \"*\",\n \"Condition\": {\n \"StringLike\": {\n \"kms:EncryptionContext:aws:cloudtrail:arn\": [\n \"arn:aws:cloudtrail:*:aws-account-id:trail/*\"\n ]\n }\n }\n}\n```\n3\\. Granting decrypt permissions\n```\n
{\n \"Sid\": \"Enable CloudTrail log decrypt permissions\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"arn:aws:iam::aws-account-id:user/username\"\n },\n \"Action\": \"kms:Decrypt\",\n \"Resource\": \"*\",\n \"Condition\": {\n \"Null\": {\n \"kms:EncryptionContext:aws:cloudtrail:arn\": \"false\"\n }\n }\n}\n```",
+          "RemediationProcedure": "Perform the following to configure CloudTrail to use SSE-KMS:  **From Console:**  1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. In the left navigation pane, choose `Trails` . 3. Click on a Trail 4. Under the `S3` section click on the edit button (pencil icon) 5. Click `Advanced`  6. Select an existing CMK from the `KMS key Id` drop-down menu  - Note: Ensure the CMK is located in the same region as the S3 bucket  - Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided [here](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-kms-key-policy-for-cloudtrail.html) for editing the selected CMK Key policy 7. Click `Save`  8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files. 9. Click `Yes`   **From Command Line:** ``` aws cloudtrail update-trail --name  --kms-id  aws kms put-key-policy --key-id  --policy  ```",
+          "AuditProcedure": "Perform the following to determine if CloudTrail is configured to use SSE-KMS:  **From Console:**  1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. In the left navigation pane, choose `Trails` . 3. Select a Trail 4. Under the `S3` section, ensure `Encrypt log files` is set to `Yes` and a KMS key ID is specified in the `KSM Key Id` field.  **From Command Line:**  1. Run the following command: ```  aws cloudtrail describe-trails  ``` 2. For each trail listed, SSE-KMS is enabled if the trail has a `KmsKeyId` property defined.",
+          "AdditionalInformation": "3 statements which need to be added to the CMK policy:  1\\. Enable Cloudtrail to describe CMK properties ``` 
{  \"Sid\": \"Allow CloudTrail access\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"cloudtrail.amazonaws.com\"  },  \"Action\": \"kms:DescribeKey\",  \"Resource\": \"*\" } ``` 2\\. Granting encrypt permissions ``` 
{  \"Sid\": \"Allow CloudTrail to encrypt logs\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"cloudtrail.amazonaws.com\"  },  \"Action\": \"kms:GenerateDataKey*\",  \"Resource\": \"*\",  \"Condition\": {  \"StringLike\": {  \"kms:EncryptionContext:aws:cloudtrail:arn\": [  \"arn:aws:cloudtrail:*:aws-account-id:trail/*\"  ]  }  } } ``` 3\\. Granting decrypt permissions ``` 
{  \"Sid\": \"Enable CloudTrail log decrypt permissions\",  \"Effect\": \"Allow\",  \"Principal\": {  \"AWS\": \"arn:aws:iam::aws-account-id:user/username\"  },  \"Action\": \"kms:Decrypt\",  \"Resource\": \"*\",  \"Condition\": {  \"Null\": {  \"kms:EncryptionContext:aws:cloudtrail:arn\": \"false\"  }  } } ```",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html:https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html"
         }
       ]
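The CLI audit steps above check log file validation, CloudWatch Logs delivery, and SSE-KMS encryption one trail and one property at a time. As a minimal sketch (not part of the benchmark text), the same settings can be gathered per trail in one pass; it assumes AWS CLI v2 with credentials and a default region already configured, and the output fields are the ones named in the audit procedures above.

```
# Illustrative only: per-trail summary of the settings audited above
# (log file validation, CloudWatch Logs log group, SSE-KMS key, multi-region).
for trail in $(aws cloudtrail describe-trails --query 'trailList[*].Name' --output text); do
  echo "Trail: ${trail}"
  aws cloudtrail describe-trails --trail-name-list "${trail}" \
    --query 'trailList[0].{Validation:LogFileValidationEnabled,CloudWatchLogs:CloudWatchLogsLogGroupArn,KmsKey:KmsKeyId,MultiRegion:IsMultiRegionTrail}' \
    --output table
  # Last CloudWatch Logs delivery; should be a recent (~one day old) timestamp.
  aws cloudtrail get-trail-status --name "${trail}" \
    --query 'LatestCloudWatchLogsDeliveryTime' --output text
done
```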
@@ -796,10 +796,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "AWS Key Management Service (KMS) allows customers to rotate the backing key which is key material stored within the KMS which is tied to the key ID of the Customer Created customer master key (CMK). It is the backing key that is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It is recommended that CMK key rotation be enabled for symmetric keys. Key rotation can not be enabled for any asymmetric CMK.",
-          "RationaleStatement": "Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed.\nKeys should be rotated every year, or upon event that would result in the compromise of that key.",
+          "RationaleStatement": "Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed. Keys should be rotated every year, or upon event that would result in the compromise of that key.",
           "ImpactStatement": "Creation, management, and storage of CMKs may require additional time from and administrator.",
-          "RemediationProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam).\n2. In the left navigation pane, choose `Customer managed keys` .\n3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT`\n4. Underneath the \"General configuration\" panel open the tab \"Key rotation\"\n5. Check the \"Automatically rotate this KMS key every year.\" checkbox\n\n**From Command Line:**\n\n1. Run the following command to enable key rotation:\n```\n aws kms enable-key-rotation --key-id \n```",
-          "AuditProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam).\n2. In the left navigation pane, choose `Customer managed keys`\n3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT`\n4. Underneath the `General configuration` panel open the tab `Key rotation`\n5. Ensure that the checkbox `Automatically rotate this KMS key every year.` is activated\n6. Repeat steps 3 - 5 for all customer managed CMKs where \"Key spec = SYMMETRIC_DEFAULT\"\n\n**From Command Line:**\n\n1. Run the following command to get a list of all keys and their associated `KeyIds` \n```\n aws kms list-keys\n```\n2. For each key, note the KeyId and run the following command\n```\ndescribe-key --key-id \n```\n3. If the response contains \"KeySpec = SYMMETRIC_DEFAULT\" run the following command\n```\n aws kms get-key-rotation-status --key-id \n```\n4. Ensure `KeyRotationEnabled` is set to `true`\n5. Repeat steps 2 - 4 for all remaining CMKs",
+          "RemediationProcedure": "**From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam). 2. In the left navigation pane, choose `Customer managed keys` . 3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT` 4. Underneath the \"General configuration\" panel open the tab \"Key rotation\" 5. Check the \"Automatically rotate this KMS key every year.\" checkbox  **From Command Line:**  1. Run the following command to enable key rotation: ```  aws kms enable-key-rotation --key-id  ```",
+          "AuditProcedure": "**From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam). 2. In the left navigation pane, choose `Customer managed keys` 3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT` 4. Underneath the `General configuration` panel open the tab `Key rotation` 5. Ensure that the checkbox `Automatically rotate this KMS key every year.` is activated 6. Repeat steps 3 - 5 for all customer managed CMKs where \"Key spec = SYMMETRIC_DEFAULT\"  **From Command Line:**  1. Run the following command to get a list of all keys and their associated `KeyIds`  ```  aws kms list-keys ``` 2. For each key, note the KeyId and run the following command ``` describe-key --key-id  ``` 3. If the response contains \"KeySpec = SYMMETRIC_DEFAULT\" run the following command ```  aws kms get-key-rotation-status --key-id  ``` 4. Ensure `KeyRotationEnabled` is set to `true` 5. Repeat steps 2 - 4 for all remaining CMKs",
           "AdditionalInformation": "",
           "References": "https://aws.amazon.com/kms/pricing/:https://csrc.nist.gov/publications/detail/sp/800-57-part-1/rev-5/final"
         }
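The Command Line audit above walks `list-keys`, `describe-key`, and `get-key-rotation-status` by hand for each key. A minimal sketch of the same loop, restricted to customer-managed symmetric keys (rotation cannot be enabled for asymmetric CMKs) and assuming AWS CLI v2 with configured credentials:

```
# Illustrative only: report KeyRotationEnabled for customer-managed symmetric keys.
for key in $(aws kms list-keys --query 'Keys[*].KeyId' --output text); do
  # KeyManager separates CUSTOMER keys from AWS managed keys; KeySpec identifies symmetric keys.
  read -r manager spec <<< "$(aws kms describe-key --key-id "${key}" \
    --query 'KeyMetadata.[KeyManager,KeySpec]' --output text)"
  if [ "${manager}" = "CUSTOMER" ] && [ "${spec}" = "SYMMETRIC_DEFAULT" ]; then
    echo -n "${key}: "
    aws kms get-key-rotation-status --key-id "${key}" --query 'KeyRotationEnabled' --output text
  fi
done
```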
@@ -818,9 +818,9 @@
           "AssessmentStatus": "Automated",
           "Description": "VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet \"Rejects\" for VPCs.",
           "RationaleStatement": "VPC Flow Logs provide visibility into network traffic that traverses the VPC and can be used to detect anomalous traffic or insight during security workflows.",
-          "ImpactStatement": "By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:\n\n1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
-          "RemediationProcedure": "Perform the following to determine if VPC Flow logs is enabled:\n\n**From Console:**\n\n1. Sign into the management console\n2. Select `Services` then `VPC` \n3. In the left navigation pane, select `Your VPCs` \n4. Select a VPC\n5. In the right pane, select the `Flow Logs` tab.\n6. If no Flow Log exists, click `Create Flow Log` \n7. For Filter, select `Reject`\n8. Enter in a `Role` and `Destination Log Group` \n9. Click `Create Log Flow` \n10. Click on `CloudWatch Logs Group` \n\n**Note:** Setting the filter to \"Reject\" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting this the filter to \"All\" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment.\n\n**From Command Line:**\n\n1. Create a policy document and name it as `role_policy_document.json` and paste the following content:\n```\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"test\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"ec2.amazonaws.com\"\n },\n \"Action\": \"sts:AssumeRole\"\n }\n ]\n}\n```\n2. Create another policy document and name it as `iam_policy.json` and paste the following content:\n```\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\":[\n \"logs:CreateLogGroup\",\n \"logs:CreateLogStream\",\n \"logs:DescribeLogGroups\",\n \"logs:DescribeLogStreams\",\n \"logs:PutLogEvents\",\n \"logs:GetLogEvents\",\n \"logs:FilterLogEvents\"\n ],\n \"Resource\": \"*\"\n }\n ]\n}\n```\n3. Run the below command to create an IAM role:\n```\naws iam create-role --role-name  --assume-role-policy-document file://role_policy_document.json \n```\n4. Run the below command to create an IAM policy:\n```\naws iam create-policy --policy-name  --policy-document file://iam-policy.json\n```\n5. Run `attach-group-policy` command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned):\n```\naws iam attach-group-policy --policy-arn arn:aws:iam:::policy/ --group-name \n```\n6. Run `describe-vpcs` to get the VpcId available in the selected region:\n```\naws ec2 describe-vpcs --region \n```\n7. The command output should return the VPC Id available in the selected region.\n8. Run `create-flow-logs` to create a flow log for the vpc:\n```\naws ec2 create-flow-logs --resource-type VPC --resource-ids  --traffic-type REJECT --log-group-name  --deliver-logs-permission-arn \n```\n9. Repeat step 8 for other vpcs available in the selected region.\n10. Change the region by updating --region and repeat remediation procedure for other vpcs.",
-          "AuditProcedure": "Perform the following to determine if VPC Flow logs are enabled:\n\n**From Console:**\n\n1. Sign into the management console\n2. Select `Services` then `VPC` \n3. In the left navigation pane, select `Your VPCs` \n4. Select a VPC\n5. In the right pane, select the `Flow Logs` tab.\n6. Ensure a Log Flow exists that has `Active` in the `Status` column.\n\n**From Command Line:**\n\n1. Run `describe-vpcs` command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region:\n```\naws ec2 describe-vpcs --region  --query Vpcs[].VpcId\n```\n2. The command output returns the `VpcId` available in the selected region.\n3. Run `describe-flow-logs` command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled:\n```\naws ec2 describe-flow-logs --filter \"Name=resource-id,Values=\"\n```\n4. If there are no Flow Logs created for the selected VPC, the command output will return an `empty list []`.\n5. Repeat step 3 for other VPCs available in the same region.\n6. Change the region by updating `--region` and repeat steps 1 - 5 for all the VPCs.",
+          "ImpactStatement": "By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:  1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
+          "RemediationProcedure": "Perform the following to determine if VPC Flow logs is enabled:  **From Console:**  1. Sign into the management console 2. Select `Services` then `VPC`  3. In the left navigation pane, select `Your VPCs`  4. Select a VPC 5. In the right pane, select the `Flow Logs` tab. 6. If no Flow Log exists, click `Create Flow Log`  7. For Filter, select `Reject` 8. Enter in a `Role` and `Destination Log Group`  9. Click `Create Log Flow`  10. Click on `CloudWatch Logs Group`   **Note:** Setting the filter to \"Reject\" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting this the filter to \"All\" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment.  **From Command Line:**  1. Create a policy document and name it as `role_policy_document.json` and paste the following content: ``` {  \"Version\": \"2012-10-17\",  \"Statement\": [  {  \"Sid\": \"test\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"ec2.amazonaws.com\"  },  \"Action\": \"sts:AssumeRole\"  }  ] } ``` 2. Create another policy document and name it as `iam_policy.json` and paste the following content: ``` {  \"Version\": \"2012-10-17\",  \"Statement\": [  {  \"Effect\": \"Allow\",  \"Action\":[  \"logs:CreateLogGroup\",  \"logs:CreateLogStream\",  \"logs:DescribeLogGroups\",  \"logs:DescribeLogStreams\",  \"logs:PutLogEvents\",  \"logs:GetLogEvents\",  \"logs:FilterLogEvents\"  ],  \"Resource\": \"*\"  }  ] } ``` 3. Run the below command to create an IAM role: ``` aws iam create-role --role-name  --assume-role-policy-document file://role_policy_document.json  ``` 4. Run the below command to create an IAM policy: ``` aws iam create-policy --policy-name  --policy-document file://iam-policy.json ``` 5. Run `attach-group-policy` command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned): ``` aws iam attach-group-policy --policy-arn arn:aws:iam:::policy/ --group-name  ``` 6. Run `describe-vpcs` to get the VpcId available in the selected region: ``` aws ec2 describe-vpcs --region  ``` 7. The command output should return the VPC Id available in the selected region. 8. Run `create-flow-logs` to create a flow log for the vpc: ``` aws ec2 create-flow-logs --resource-type VPC --resource-ids  --traffic-type REJECT --log-group-name  --deliver-logs-permission-arn  ``` 9. Repeat step 8 for other vpcs available in the selected region. 10. Change the region by updating --region and repeat remediation procedure for other vpcs.",
+          "AuditProcedure": "Perform the following to determine if VPC Flow logs are enabled:  **From Console:**  1. Sign into the management console 2. Select `Services` then `VPC`  3. In the left navigation pane, select `Your VPCs`  4. Select a VPC 5. In the right pane, select the `Flow Logs` tab. 6. Ensure a Log Flow exists that has `Active` in the `Status` column.  **From Command Line:**  1. Run `describe-vpcs` command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region: ``` aws ec2 describe-vpcs --region  --query Vpcs[].VpcId ``` 2. The command output returns the `VpcId` available in the selected region. 3. Run `describe-flow-logs` command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled: ``` aws ec2 describe-flow-logs --filter \"Name=resource-id,Values=\" ``` 4. If there are no Flow Logs created for the selected VPC, the command output will return an `empty list []`. 5. Repeat step 3 for other VPCs available in the same region. 6. Change the region by updating `--region` and repeat steps 1 - 5 for all the VPCs.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html"
         }
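For reference, a minimal command-line sketch of the remediation above, combining the Flow Log creation from this entry with the 365-day log-group retention suggested in the impact statement. The region, role, and log-group names are hypothetical placeholders, and the IAM role and policy are assumed to already exist (steps 1-5 of the procedure).

```
#!/usr/bin/env bash
# Minimal sketch (hypothetical names): enable REJECT-filtered VPC Flow Logs
# delivered to CloudWatch Logs, then set a 365-day retention on the log group.
set -euo pipefail

REGION="eu-west-1"            # hypothetical region
LOG_GROUP="vpc-flow-logs"     # hypothetical destination log group
ROLE_NAME="flow-logs-role"    # hypothetical role created in steps 1-5 of the procedure

# Resolve the delivery role ARN (the role is assumed to exist already).
ROLE_ARN=$(aws iam get-role --role-name "$ROLE_NAME" --query Role.Arn --output text)

# Create the destination log group (fails if it already exists) and keep its logs for 365 days.
aws logs create-log-group --region "$REGION" --log-group-name "$LOG_GROUP"
aws logs put-retention-policy --region "$REGION" --log-group-name "$LOG_GROUP" --retention-in-days 365

# Enable a REJECT flow log on every VPC in the region (steps 6-9 of the procedure).
for VPC_ID in $(aws ec2 describe-vpcs --region "$REGION" --query 'Vpcs[].VpcId' --output text); do
  aws ec2 create-flow-logs \
    --region "$REGION" \
    --resource-type VPC \
    --resource-ids "$VPC_ID" \
    --traffic-type REJECT \
    --log-group-name "$LOG_GROUP" \
    --deliver-logs-permission-arn "$ROLE_ARN"
done
```

Looping over the `describe-vpcs` output covers step 9 (repeating the flow-log creation for every VPC in the region); running it per region covers step 10.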
@@ -839,10 +839,10 @@
           "AssessmentStatus": "Automated",
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for unauthorized API calls.",
           "RationaleStatement": "Monitoring unauthorized API calls will help reveal application errors and may reduce time to detect malicious activity.",
-          "ImpactStatement": "This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information, but gracefully fail if they don't have permissions.\n\nIf an excessive number of alerts are being generated then an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts.\n\nIn some cases doing this may allow the users to actually view some areas of the system - any additional access given should be reviewed for alignment with the original limited IAM user intent.",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for unauthorized API calls and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name \"cloudtrail_log_group_name\" --filter-name \"\" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern \"{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") || ($.sourceIPAddress!=\"delivery.logs.amazonaws.com\") || ($.eventName!=\"HeadBucket\") }\"\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n**Note**: Capture the TopicArn displayed when creating the SNS Topic in Step 2.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name \"unauthorized_api_calls_alarm\" --metric-name \"unauthorized_api_calls_metric\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with \"Name\":` note ``\n\n- From value associated with \"CloudWatchLogsLogGroupArn\" note \n\nExample: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name <\"Name\" as shown in describe-trails>`\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this `` that you captured in step 1:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.errorCode = *UnauthorizedOperation) || ($.errorCode = AccessDenied*) || ($.sourceIPAddress!=delivery.logs.amazonaws.com) || ($.eventName!=HeadBucket) }\",\n```\n\n4. Note the \"filterName\" `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]\"\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "ImpactStatement": "This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information, but gracefully fail if they don't have permissions.  If an excessive number of alerts are being generated then an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts.  In some cases doing this may allow the users to actually view some areas of the system - any additional access given should be reviewed for alignment with the original limited IAM user intent.",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for unauthorized API calls and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name \"cloudtrail_log_group_name\" --filter-name \"\" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern \"{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") || ($.sourceIPAddress!=\"delivery.logs.amazonaws.com\") || ($.eventName!=\"HeadBucket\") }\" ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms. **Note**: Capture the TopicArn displayed when creating the SNS Topic in Step 2.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name \"unauthorized_api_calls_alarm\" --metric-name \"unauthorized_api_calls_metric\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with \"Name\":` note ``  - From value associated with \"CloudWatchLogsLogGroupArn\" note   Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name <\"Name\" as shown in describe-trails>`  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this `` that you captured in step 1:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.errorCode = *UnauthorizedOperation) || ($.errorCode = AccessDenied*) || ($.sourceIPAddress!=delivery.logs.amazonaws.com) || ($.eventName!=HeadBucket) }\", ```  4. Note the \"filterName\" `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]\" ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://aws.amazon.com/sns/:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
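As a worked illustration of the remediation procedure in the entry above, the sketch below wires the unauthorized-API-calls metric filter to an SNS topic and a CloudWatch alarm. The log group name, filter name, topic name, and e-mail endpoint are hypothetical and stand in for the empty placeholders in the benchmark text.

```
#!/usr/bin/env bash
# Sketch (hypothetical names): metric filter -> SNS topic -> subscription -> alarm.
set -euo pipefail

LOG_GROUP="CloudTrail/DefaultLogGroup"   # hypothetical CloudTrail log group from audit step 1
TOPIC_NAME="cis-benchmark-alarms"        # hypothetical SNS topic
ALERT_EMAIL="security@example.com"       # hypothetical notification endpoint

# 1. Metric filter for unauthorized API calls on the CloudTrail log group.
aws logs put-metric-filter \
  --log-group-name "$LOG_GROUP" \
  --filter-name unauthorized_api_calls_filter \
  --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") || ($.sourceIPAddress!="delivery.logs.amazonaws.com") || ($.eventName!="HeadBucket") }'

# 2-3. SNS topic plus an e-mail subscription (re-usable for all monitoring alarms).
TOPIC_ARN=$(aws sns create-topic --name "$TOPIC_NAME" --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email --notification-endpoint "$ALERT_EMAIL"

# 4. Alarm on the filter's metric, notifying the topic.
aws cloudwatch put-metric-alarm \
  --alarm-name unauthorized_api_calls_alarm \
  --metric-name unauthorized_api_calls_metric \
  --namespace CISBenchmark \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions "$TOPIC_ARN"
```

The same topic and subscription can be reused for the remaining monitoring controls in this benchmark; only the filter pattern, metric name, and alarm name change.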
@@ -861,9 +861,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Security Groups are a stateful packet filter that controls ingress and egress traffic within a VPC. It is recommended that a metric filter and alarm be established for detecting changes to Security Groups.",
           "RationaleStatement": "Monitoring changes to security group will help ensure that resources and services are not unintentionally exposed.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for security groups changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name \"\" --filter-name \"\" --metric-transformations metricName= \"\" ,metricNamespace=\"CISBenchmark\",metricValue=1 --filter-pattern \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\"\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \"\"\n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn \"\" --protocol  --notification-endpoint \"\"\n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name \"\" --metric-name \"\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \"\"\n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName== '']\"\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for security groups changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name \"\" --filter-name \"\" --metric-transformations metricName= \"\" ,metricNamespace=\"CISBenchmark\",metricValue=1 --filter-pattern \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\" ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name \"\" ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn \"\" --protocol  --notification-endpoint \"\" ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name \"\" --metric-name \"\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \"\" ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName== '']\" ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -882,9 +882,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. NACLs are used as a stateless packet filter to control ingress and egress traffic for subnets within a VPC. It is recommended that a metric filter and alarm be established for changes made to NACLs.",
           "RationaleStatement": "Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for NACL changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for NACL changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
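The audit procedure is essentially the same for each of these monitoring controls; a condensed sketch for the NACL-changes filter is shown below, with hypothetical log group and metric names in place of the empty placeholders. It checks that the expected filter pattern is present, finds the alarm bound to the filter's metric, and lists the topic's subscriptions so a confirmed `SubscriptionArn` can be verified.

```
#!/usr/bin/env bash
# Audit sketch (hypothetical names): filter present? alarm wired? topic subscribed?
# With `set -e`, a missing filter pattern makes the script exit non-zero (audit failure).
set -euo pipefail

LOG_GROUP="CloudTrail/DefaultLogGroup"   # hypothetical CloudTrail log group from step 1
METRIC_NAME="nacl_changes_metric"        # hypothetical metric name noted in step 4

# Steps 2-3: confirm a metric filter on the log group matches the expected NACL events.
aws logs describe-metric-filters --log-group-name "$LOG_GROUP" \
  --query 'metricFilters[].filterPattern' --output text \
  | grep "DeleteNetworkAcl"

# Steps 5-6: find the alarm on that metric and note its AlarmActions (the SNS topic ARN).
TOPIC_ARN=$(aws cloudwatch describe-alarms \
  --query "MetricAlarms[?MetricName=='$METRIC_NAME'].AlarmActions[0]" --output text)

# Step 7: list subscriptions; at least one should show a real ARN, not PendingConfirmation.
aws sns list-subscriptions-by-topic --topic-arn "$TOPIC_ARN" \
  --query 'Subscriptions[].SubscriptionArn' --output text
```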
@@ -903,9 +903,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Network gateways are required to send/receive traffic to a destination outside of a VPC. It is recommended that a metric filter and alarm be established for changes to network gateways.",
           "RationaleStatement": "Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for network gateways changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for network gateways changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -924,9 +924,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Routing tables are used to route network traffic between subnets and to network gateways. It is recommended that a metric filter and alarm be established for changes to route tables.",
           "RationaleStatement": "Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for route table changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for route table changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -945,9 +945,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is possible to have more than 1 VPC within an account, in addition it is also possible to create a peer connection between 2 VPCs enabling network traffic to route between VPCs. It is recommended that a metric filter and alarm be established for changes made to VPCs.",
           "RationaleStatement": "Monitoring changes to VPC will help ensure VPC traffic flow is not getting impacted.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for VPC changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for VPC changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
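
All of the RemediationProcedure strings in this section chain the same four CLI calls: a metric filter on the CloudTrail log group, an SNS topic, a subscription to that topic, and a CloudWatch alarm on the metric. The following is only an illustrative sketch of that chain; the log group, filter/metric/topic names, and e-mail endpoint are hypothetical placeholders, and the filter pattern is shortened — substitute the log group identified in the audit step and the full pattern quoted for the control being remediated.

```bash
# Illustrative only -- every name below is a hypothetical placeholder.
LOG_GROUP="CloudTrail/DefaultLogGroup"   # CloudTrail log group identified in the audit step
FILTER_NAME="vpc-changes-filter"
METRIC_NAME="VpcChangeEventCount"
TOPIC_NAME="cis-benchmark-alarms"

# 1. Metric filter on the CloudTrail log group (pattern shortened here; use the full
#    pattern quoted in the RemediationProcedure for the control you are remediating)
aws logs put-metric-filter \
  --log-group-name "$LOG_GROUP" \
  --filter-name "$FILTER_NAME" \
  --metric-transformations metricName="$METRIC_NAME",metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) }'

# 2. SNS topic the alarm will notify (capture the returned topic ARN)
TOPIC_ARN=$(aws sns create-topic --name "$TOPIC_NAME" --query TopicArn --output text)

# 3. Subscribe at least one endpoint (e-mail shown here) to the topic
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email \
  --notification-endpoint security@example.com

# 4. Alarm on the metric from step 1, notifying the topic from step 2
aws cloudwatch put-metric-alarm \
  --alarm-name "${METRIC_NAME}-alarm" \
  --metric-name "$METRIC_NAME" \
  --namespace CISBenchmark \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions "$TOPIC_ARN"
```
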
@@ -966,8 +966,8 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for AWS Organizations changes made in the master AWS Account.",
           "RationaleStatement": "Monitoring AWS Organizations changes can help you prevent any unwanted, accidental or intentional modifications that may lead to unauthorized access or other security breaches. This monitoring technique helps you to ensure that any unexpected changes performed within your AWS Organizations can be investigated and any unwanted changes can be rolled back.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Organizations changes and the `` taken from audit step 1:\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }'\n```\n**Note:** You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify:\n```\naws sns create-topic --name \n```\n**Note:** you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2:\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note:** you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2:\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "1. Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n- Identify the log group name configured for use with active multi-region CloudTrail:\n- List all CloudTrails: \n```\naws cloudtrail describe-trails\n```\n- Identify Multi region Cloudtrails, Trails with `\"IsMultiRegionTrail\"` set to true\n- From value associated with CloudWatchLogsLogGroupArn note \n **Example:** for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup\n\n- Ensure Identified Multi region CloudTrail is active:\n```\naws cloudtrail get-trail-status --name \n```\nEnsure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events:\n```\naws cloudtrail get-event-selectors --trail-name \n```\n- Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to true and `ReadWriteType` set to `All`.\n\n2. Get a list of all associated metric filters for this :\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }\"\n```\n4. Note the `` value associated with the filterPattern found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4:\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the AlarmActions value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic:\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\nExample of valid \"SubscriptionArn\": \n```\n\"arn:aws:sns::::\"\n```",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Organizations changes and the `` taken from audit step 1: ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }' ``` **Note:** You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify: ``` aws sns create-topic --name  ``` **Note:** you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2: ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note:** you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2: ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "1. Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: - Identify the log group name configured for use with active multi-region CloudTrail: - List all CloudTrails:  ``` aws cloudtrail describe-trails ``` - Identify Multi region Cloudtrails, Trails with `\"IsMultiRegionTrail\"` set to true - From value associated with CloudWatchLogsLogGroupArn note   **Example:** for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup  - Ensure Identified Multi region CloudTrail is active: ``` aws cloudtrail get-trail-status --name  ``` Ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events: ``` aws cloudtrail get-event-selectors --trail-name  ``` - Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to true and `ReadWriteType` set to `All`.  2. Get a list of all associated metric filters for this : ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }\" ``` 4. Note the `` value associated with the filterPattern found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4: ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the AlarmActions value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic: ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. Example of valid \"SubscriptionArn\":  ``` \"arn:aws:sns::::\" ```",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/organizations/latest/userguide/orgs_security_incident-response.html"
         }
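
Each AuditProcedure follows the same pipeline: locate the multi-region trail and its CloudWatch Logs group, confirm the trail is logging and captures all management events, then walk from metric filter to alarm to SNS subscription. A condensed sketch of that walk-through, with hypothetical trail, log-group, metric-name, and topic-ARN values:

```bash
# Illustrative only -- trail name, log group, metric name, and topic ARN are hypothetical.
TRAIL_NAME="management-events-trail"
LOG_GROUP="CloudTrail/DefaultLogGroup"

# 1. Find multi-region trails and the CloudWatch Logs group each one writes to
aws cloudtrail describe-trails \
  --query 'trailList[?IsMultiRegionTrail==`true`].[Name,CloudWatchLogsLogGroupArn]' \
  --output table

# Confirm the trail is logging and records all management events
aws cloudtrail get-trail-status --name "$TRAIL_NAME" --query IsLogging
aws cloudtrail get-event-selectors --trail-name "$TRAIL_NAME"

# 2./3. List the metric filters on the log group and inspect their filterPattern values
aws logs describe-metric-filters --log-group-name "$LOG_GROUP"

# 5. Find the alarm wired to the metric name noted from the filter output
aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName==`OrganizationsChangeCount`]'

# 7. Verify the alarm's SNS topic has at least one confirmed subscription
aws sns list-subscriptions-by-topic \
  --topic-arn arn:aws:sns:us-east-1:111122223333:cis-benchmark-alarms
```
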
@@ -987,9 +987,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for console logins that are not protected by multi-factor authentication (MFA).",
           "RationaleStatement": "Monitoring for single-factor console logins will increase visibility into accounts that are not protected by MFA.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Management Console sign-in without MFA and the `` taken from audit step 1.\n\nUse Command: \n\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }'\n```\n\nOr (To reduce false positives incase Single Sign-On (SSO) is used in organization):\n\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all `CloudTrails`:\n\n```\naws cloudtrail describe-trails\n```\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region `CloudTrail` is active\n\n```\naws cloudtrail get-trail-status --name \n```\n\nEnsure in the output that `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region 'Cloudtrail' captures all Management Events\n\n```\naws cloudtrail get-event-selectors --trail-name \n```\n\nEnsure in the output there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }\"\n```\n\nOr (To reduce false positives incase Single Sign-On (SSO) is used in organization):\n\n```\n\"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored\n-Filter pattern set to `{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\"}` reduces false alarms raised when user logs in via SSO account.",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Management Console sign-in without MFA and the `` taken from audit step 1.  Use Command:   ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }' ```  Or (To reduce false positives incase Single Sign-On (SSO) is used in organization):  ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all `CloudTrails`:  ``` aws cloudtrail describe-trails ```  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region `CloudTrail` is active  ``` aws cloudtrail get-trail-status --name  ```  Ensure in the output that `IsLogging` is set to `TRUE`  - Ensure identified Multi-region 'Cloudtrail' captures all Management Events  ``` aws cloudtrail get-event-selectors --trail-name  ```  Ensure in the output there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }\" ```  Or (To reduce false positives incase Single Sign-On (SSO) is used in organization):  ``` \"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored -Filter pattern set to `{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\"}` reduces false alarms raised when user logs in via SSO account.",
           "References": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/viewing_metrics_with_cloudwatch.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1008,9 +1008,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for 'root' login attempts.",
           "RationaleStatement": "Monitoring for 'root' account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce the use of it.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for 'Root' account usage and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails:\n\n`aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "**Configuring log metric filter and alarm on Multi-region (global) CloudTrail**\n\n- ensures that activities from all regions (used as well as unused) are monitored\n\n- ensures that activities on all supported global services are monitored\n\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for 'Root' account usage and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails:  `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "**Configuring log metric filter and alarm on Multi-region (global) CloudTrail**  - ensures that activities from all regions (used as well as unused) are monitored  - ensures that activities on all supported global services are monitored  - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1029,9 +1029,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established changes made to Identity and Access Management (IAM) policies.",
           "RationaleStatement": "Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for IAM policy changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails:\n\n`aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for IAM policy changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails:  `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1050,9 +1050,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations.",
           "RationaleStatement": "Monitoring changes to CloudTrail's configuration will help ensure sustained visibility to activities performed in the AWS account.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for cloudtrail configuration changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for cloudtrail configuration changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1071,9 +1071,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for failed console authentication attempts.",
           "RationaleStatement": "Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP, that can be used in other event correlation.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS management Console Login Failures and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }'\n```\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS management Console Login Failures and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }' ``` **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
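
The four remediation commands in the hunk above (metric filter, SNS topic, subscription, alarm) chain together naturally. Below is a minimal, illustrative sketch only: the log group, metric, namespace, topic name, and e-mail endpoint are hypothetical placeholders standing in for the values the benchmark leaves to the operator, and it assumes credentials with CloudWatch Logs, CloudWatch, and SNS permissions.

```bash
#!/usr/bin/env bash
# Illustrative only -- substitute real values for every placeholder below.
LOG_GROUP="CloudTrail/DefaultLogGroup"    # log group the multi-region trail delivers to (hypothetical)
METRIC_NAME="ConsoleSigninFailureCount"   # hypothetical metric name
NAMESPACE="CISBenchmark"
TOPIC_NAME="cis-monitoring-alarms"        # hypothetical SNS topic
ENDPOINT="security-team@example.com"      # hypothetical subscriber

# 1. Metric filter for failed console sign-ins
aws logs put-metric-filter \
  --log-group-name "$LOG_GROUP" \
  --filter-name ConsoleSigninFailures \
  --metric-transformations metricName="$METRIC_NAME",metricNamespace="$NAMESPACE",metricValue=1 \
  --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }'

# 2. SNS topic plus a subscription the alarm will notify
TOPIC_ARN=$(aws sns create-topic --name "$TOPIC_NAME" --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email --notification-endpoint "$ENDPOINT"

# 3. Alarm wired to the metric from step 1 and the topic from step 2
aws cloudwatch put-metric-alarm \
  --alarm-name "${METRIC_NAME}-alarm" \
  --metric-name "$METRIC_NAME" --namespace "$NAMESPACE" \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 \
  --alarm-actions "$TOPIC_ARN"
```

The same scaffold serves the remaining metric-filter recommendations in this file; only the filter pattern and the names change.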
@@ -1092,9 +1092,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for customer created CMKs which have changed state to disabled or scheduled deletion.",
           "RationaleStatement": "Data encrypted with disabled or deleted keys will no longer be accessible.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for disabled or scheduled for deletion CMK's and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }'\n```\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for disabled or scheduled for deletion CMK's and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }' ``` **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
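
Rather than repeating the full scaffold, here is a sketch of only the audit side (steps 2 and 5 above) for the CMK-state filter; the log group and metric names are hypothetical and should be replaced with the values discovered in audit step 1.

```bash
# Illustrative audit sketch -- "CloudTrail/DefaultLogGroup" is a hypothetical log group name.
LOG_GROUP="CloudTrail/DefaultLogGroup"

# Step 2: list the filters on the trail's log group and check for the KMS pattern
aws logs describe-metric-filters --log-group-name "$LOG_GROUP" \
  --query 'metricFilters[].{name:filterName,pattern:filterPattern,metric:metricTransformations[0].metricName}'

# Step 5: look up the alarm (and its SNS actions) for the metric name reported above
METRIC_NAME="DisableOrDeleteCMKCount"   # hypothetical; use the value returned by the previous command
aws cloudwatch describe-alarms \
  --query "MetricAlarms[?MetricName=='$METRIC_NAME'].{alarm:AlarmName,actions:AlarmActions}"
```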
@@ -1113,9 +1113,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies.",
           "RationaleStatement": "Monitoring changes to S3 bucket policies may reduce time to detect and correct permissive policies on sensitive S3 buckets.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for S3 bucket policy changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for S3 bucket policy changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
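
For this recommendation only the filter pattern differs from the scaffold shown earlier; a hedged one-liner for remediation step 1 follows, with the log group, filter, and metric names again hypothetical.

```bash
# Illustrative only -- replace the log group and names with your own values.
aws logs put-metric-filter \
  --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name S3BucketPolicyChanges \
  --metric-transformations metricName=S3BucketPolicyChangeCount,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }'
```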
@@ -1134,9 +1134,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations.",
           "RationaleStatement": "Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Configuration changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Configuration changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
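
All of these monitoring checks start from audit step 1, verifying an active multi-region trail that logs all management events. A compact sketch of that step, with a hypothetical trail name:

```bash
# Illustrative audit sketch for step 1; "management-events-trail" is a hypothetical trail name.
aws cloudtrail describe-trails \
  --query 'trailList[?IsMultiRegionTrail==`true`].{name:Name,logGroup:CloudWatchLogsLogGroupArn}'

TRAIL_NAME="management-events-trail"   # use a trail name returned above
aws cloudtrail get-trail-status --name "$TRAIL_NAME" --query IsLogging        # expect true
aws cloudtrail get-event-selectors --trail-name "$TRAIL_NAME" \
  --query 'EventSelectors[].{mgmt:IncludeManagementEvents,rw:ReadWriteType}'   # expect true / All
```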
@@ -1157,8 +1157,8 @@
           "Description": "The Network Access Control List (NACL) function provide stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH to port `22` and RDP to port `3389`.",
           "RationaleStatement": "Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following:\n1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home\n2. In the left pane, click `Network ACLs`\n3. For each network ACL to remediate, perform the following:\n - Select the network ACL\n - Click the `Inbound Rules` tab\n - Click `Edit inbound rules`\n - Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule\n - Click `Save`",
-          "AuditProcedure": "**From Console:**\n\nPerform the following to determine if the account is configured as prescribed:\n1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home\n2. In the left pane, click `Network ACLs`\n3. For each network ACL, perform the following:\n - Select the network ACL\n - Click the `Inbound Rules` tab\n - Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` and shows `ALLOW`\n\n**Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports",
+          "RemediationProcedure": "**From Console:**  Perform the following: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click `Network ACLs` 3. For each network ACL to remediate, perform the following:  - Select the network ACL  - Click the `Inbound Rules` tab  - Click `Edit inbound rules`  - Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule  - Click `Save`",
+          "AuditProcedure": "**From Console:**  Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click `Network ACLs` 3. For each network ACL, perform the following:  - Select the network ACL  - Click the `Inbound Rules` tab  - Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` and shows `ALLOW`  **Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html:https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html#VPC_Security_Comparison"
         }
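
The audit and remediation above are console-only; as a purely illustrative complement, a rough CLI spot-check can list NACL entries that allow traffic from 0.0.0.0/0. `describe-network-acls` is a standard EC2 call, but the query below is only a starting point and does not evaluate whether ports 22 or 3389 fall inside a listed port range.

```bash
# Illustrative spot-check; egress entries and port-range coverage still need the manual review above.
aws ec2 describe-network-acls \
  --query "NetworkAcls[].{acl:NetworkAclId,worldOpenAllow:Entries[?CidrBlock=='0.0.0.0/0' && RuleAction=='allow']}"
```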
@@ -1180,8 +1180,8 @@
           "Description": "Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port `22` and RDP to port `3389`.",
           "RationaleStatement": "Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.",
           "ImpactStatement": "When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the 0.0.0.0/0 inbound rule.",
-          "RemediationProcedure": "Perform the following to implement the prescribed state:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Click the `Edit inbound rules` button\n4. Identify the rules to be edited or removed\n5. Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule\n6. Click `Save rules`",
-          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` \n\n**Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
+          "RemediationProcedure": "Perform the following to implement the prescribed state:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Click the `Edit inbound rules` button 4. Identify the rules to be edited or removed 5. Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule 6. Click `Save rules`",
+          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0`   **Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#deleting-security-group-rule"
         }
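
Similarly, although this audit is written for the console, a hedged CLI approximation can surface candidate groups. `ip-permission.cidr` and `ip-permission.from-port` are standard describe-security-groups filters, but rules that allow ALL traffic or use wide port ranges still require the manual review described above.

```bash
# Illustrative spot-check for ingress rules that name ports 22 or 3389 and 0.0.0.0/0 directly.
for port in 22 3389; do
  aws ec2 describe-security-groups \
    --filters Name=ip-permission.cidr,Values=0.0.0.0/0 Name=ip-permission.from-port,Values=$port \
    --query 'SecurityGroups[].{id:GroupId,name:GroupName}'
done
```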
@@ -1198,11 +1198,11 @@
           "Section": "5. Networking",
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
-          "Description": "A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic.\n\nThe default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation.\n\n**NOTE:** When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups.",
+          "Description": "A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic.  The default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation.  **NOTE:** When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups.",
           "RationaleStatement": "Configuring all VPC default security groups to restrict all traffic will encourage least privilege security group development and mindful placement of AWS resources into security groups which will in-turn reduce the exposure of those resources.",
           "ImpactStatement": "Implementing this recommendation in an existing VPC containing operating resources requires extremely careful migration planning as the default security groups are likely to be enabling many ports that are unknown. Enabling VPC flow logging (of accepts) in an existing environment that is known to be breach free will reveal the current pattern of ports being used for each instance to communicate successfully.",
-          "RemediationProcedure": "Security Group Members\n\nPerform the following to implement the prescribed state:\n\n1. Identify AWS resources that exist within the default security group\n2. Create a set of least privilege security groups for those resources\n3. Place the resources in those security groups\n4. Remove the resources noted in #1 from the default security group\n\nSecurity Group State\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. For each default security group, perform the following:\n1. Select the `default` security group\n2. Click the `Inbound Rules` tab\n3. Remove any inbound rules\n4. Click the `Outbound Rules` tab\n5. Remove any Outbound rules\n\nRecommended:\n\nIAM groups allow you to edit the \"name\" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to \"DO NOT USE. DO NOT ADD RULES\"",
-          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:\n\nSecurity Group State\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. For each default security group, perform the following:\n1. Select the `default` security group\n2. Click the `Inbound Rules` tab\n3. Ensure no rule exist\n4. Click the `Outbound Rules` tab\n5. Ensure no rules exist\n\nSecurity Group Members\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. Copy the id of the default security group.\n5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home\n6. In the filter column type 'Security Group ID : < security group id from #4 >'",
+          "RemediationProcedure": "Security Group Members  Perform the following to implement the prescribed state:  1. Identify AWS resources that exist within the default security group 2. Create a set of least privilege security groups for those resources 3. Place the resources in those security groups 4. Remove the resources noted in #1 from the default security group  Security Group State  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. For each default security group, perform the following: 1. Select the `default` security group 2. Click the `Inbound Rules` tab 3. Remove any inbound rules 4. Click the `Outbound Rules` tab 5. Remove any Outbound rules  Recommended:  IAM groups allow you to edit the \"name\" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to \"DO NOT USE. DO NOT ADD RULES\"",
+          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:  Security Group State  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. For each default security group, perform the following: 1. Select the `default` security group 2. Click the `Inbound Rules` tab 3. Ensure no rule exist 4. Click the `Outbound Rules` tab 5. Ensure no rules exist  Security Group Members  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. Copy the id of the default security group. 5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home 6. In the filter column type 'Security Group ID : < security group id from #4 >'",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#default-security-group"
         }
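
The remediation above is console-based; the following is a rough, assumption-heavy CLI sketch of the same outcome for one default group. The group ID is a hypothetical placeholder, `--output json` is forced so the saved rule sets parse, and it assumes the group currently has at least one ingress and one egress rule to remove.

```bash
# Illustrative only -- sg-0123456789abcdef0 is a hypothetical default security group ID.
SG_ID="sg-0123456789abcdef0"

# Remove every ingress rule by replaying the group's current rule set into revoke
aws ec2 describe-security-groups --group-ids "$SG_ID" \
  --query 'SecurityGroups[0].IpPermissions' --output json > /tmp/default-sg-ingress.json
aws ec2 revoke-security-group-ingress --group-id "$SG_ID" \
  --ip-permissions file:///tmp/default-sg-ingress.json

# Then the same for egress
aws ec2 describe-security-groups --group-ids "$SG_ID" \
  --query 'SecurityGroups[0].IpPermissionsEgress' --output json > /tmp/default-sg-egress.json
aws ec2 revoke-security-group-egress --group-id "$SG_ID" \
  --ip-permissions file:///tmp/default-sg-egress.json
```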
@@ -1222,8 +1222,8 @@
           "Description": "Once a VPC peering connection is established, routing tables must be updated to establish any connections between the peered VPCs. These routes can be as specific as desired - even peering a VPC to only a single host on the other side of the connection.",
           "RationaleStatement": "Being highly selective in peering routing tables is a very effective way of minimizing the impact of breach as resources outside of these routes are inaccessible to the peered VPC.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Remove and add route table entries to ensure that the least number of subnets or hosts as is required to accomplish the purpose for peering are routable.\n\n**From Command Line:**\n\n1. For each __ containing routes non compliant with your routing policy (which grants more than desired \"least access\"), delete the non compliant route:\n```\naws ec2 delete-route --route-table-id  --destination-cidr-block \n```\n 2. Create a new compliant route:\n```\naws ec2 create-route --route-table-id  --destination-cidr-block  --vpc-peering-connection-id \n```",
-          "AuditProcedure": "Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs.\n\n**From Command Line:**\n\n1. List all the route tables from a VPC and check if \"GatewayId\" is pointing to a __ (e.g. pcx-1a2b3c4d) and if \"DestinationCidrBlock\" is as specific as desired.\n```\naws ec2 describe-route-tables --filter \"Name=vpc-id,Values=\" --query \"RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}\"\n```",
+          "RemediationProcedure": "Remove and add route table entries to ensure that the least number of subnets or hosts as is required to accomplish the purpose for peering are routable.  **From Command Line:**  1. For each __ containing routes non compliant with your routing policy (which grants more than desired \"least access\"), delete the non compliant route: ``` aws ec2 delete-route --route-table-id  --destination-cidr-block  ```  2. Create a new compliant route: ``` aws ec2 create-route --route-table-id  --destination-cidr-block  --vpc-peering-connection-id  ```",
+          "AuditProcedure": "Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs.  **From Command Line:**  1. List all the route tables from a VPC and check if \"GatewayId\" is pointing to a __ (e.g. pcx-1a2b3c4d) and if \"DestinationCidrBlock\" is as specific as desired. ``` aws ec2 describe-route-tables --filter \"Name=vpc-id,Values=\" --query \"RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}\" ```",
           "AdditionalInformation": "If an organization has AWS transit gateway implemented in their VPC architecture they should look to apply the recommendation above for \"least access\" routing architecture at the AWS transit gateway level in combination with what must be implemented at the standard VPC route table. More specifically, to route traffic between two or more VPCs via a transit gateway VPCs must have an attachment to a transit gateway route table as well as a route, therefore to avoid routing traffic between VPCs an attachment to the transit gateway route table should only be added where there is an intention to route traffic between the VPCs. As transit gateways are able to host multiple route tables it is possible to group VPCs by attaching them to a common route table.",
           "References": "https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-configurations-partial-access.html:https://docs.aws.amazon.com/cli/latest/reference/ec2/create-vpc-peering-connection.html"
         }
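
The delete-route / create-route pair above is easier to read with concrete identifiers filled in; every route table, peering connection, VPC ID, and CIDR block below is a made-up placeholder.

```bash
# Illustrative only -- all identifiers and CIDR blocks are hypothetical.
ROUTE_TABLE_ID="rtb-0a1b2c3d"
PEERING_ID="pcx-1a2b3c4d"

# Swap an overly broad peering route for one scoped to a single subnet
aws ec2 delete-route --route-table-id "$ROUTE_TABLE_ID" --destination-cidr-block 10.1.0.0/16
aws ec2 create-route --route-table-id "$ROUTE_TABLE_ID" --destination-cidr-block 10.1.42.0/24 \
  --vpc-peering-connection-id "$PEERING_ID"

# Re-run the audit query from the text to confirm the table now routes only what is intended
aws ec2 describe-route-tables --filter "Name=vpc-id,Values=vpc-0abc1234" \
  --query "RouteTables[*].{RouteTableId:RouteTableId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}"
```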
diff --git a/prowler/compliance/aws/cis_1.5_aws.json b/prowler/compliance/aws/cis_1.5_aws.json
index d127ba64..a74f6920 100644
--- a/prowler/compliance/aws/cis_1.5_aws.json
+++ b/prowler/compliance/aws/cis_1.5_aws.json
@@ -15,11 +15,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Manual",
-          "Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization.\n\nAn AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy or indicative of likely security compromise is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system.",
+          "Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization.  An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy or indicative of likely security compromise is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system.",
           "RationaleStatement": "If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question, so it is in both the customers' and AWS' best interests that prompt contact can be established. This is best achieved by setting AWS account contact details to point to resources which have multiple individuals as recipients, such as email aliases and PABX hunt groups.",
           "ImpactStatement": "",
-          "RemediationProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing ).\n\n1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/.\n2. On the navigation bar, choose your account name, and then choose `My Account`.\n3. On the `Account Settings` page, next to `Account Settings`, choose `Edit`.\n4. Next to the field that you need to update, choose `Edit`.\n5. After you have entered your changes, choose `Save changes`.\n6. After you have made your changes, choose `Done`.\n7. To edit your contact information, under `Contact Information`, choose `Edit`.\n8. For the fields that you want to change, type your updated information, and then choose `Update`.",
-          "AuditProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing )\n\n1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/.\n2. On the navigation bar, choose your account name, and then choose `My Account`.\n3. On the `Account Settings` page, review and verify the current details.\n4. Under `Contact Information`, review and verify the current details.",
+          "RemediationProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing ).  1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose `My Account`. 3. On the `Account Settings` page, next to `Account Settings`, choose `Edit`. 4. Next to the field that you need to update, choose `Edit`. 5. After you have entered your changes, choose `Save changes`. 6. After you have made your changes, choose `Done`. 7. To edit your contact information, under `Contact Information`, choose `Edit`. 8. For the fields that you want to change, type your updated information, and then choose `Update`.",
+          "AuditProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing )  1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose `My Account`. 3. On the `Account Settings` page, review and verify the current details. 4. Under `Contact Information`, review and verify the current details.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-account-payment.html#contact-info"
         }
@@ -39,9 +39,9 @@
           "Description": "Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional credentials. With MFA enabled, when a user signs in to the AWS Console, they will be prompted for their user name and password as well as for an authentication code from their physical or virtual MFA token. It is recommended that MFA be enabled for all accounts that have a console password.",
           "RationaleStatement": "Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that displays a time-sensitive key and have knowledge of a credential.",
           "ImpactStatement": "AWS will soon end support for SMS multi-factor authentication (MFA). New customers are not allowed to use this feature. We recommend that existing customers switch to one of the following alternative methods of MFA.",
-          "RemediationProcedure": "Perform the following to enable MFA:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/'\n2. In the left pane, select `Users`.\n3. In the `User Name` list, choose the name of the intended MFA user.\n4. Choose the `Security Credentials` tab, and then choose `Manage MFA Device`.\n5. In the `Manage MFA Device wizard`, choose `Virtual MFA` device, and then choose `Continue`.\n\n IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.\n\n6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).\n7. Determine whether the MFA app supports QR codes, and then do one of the following:\n\n - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.\n - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.\n\n When you are finished, the virtual MFA device starts generating one-time passwords.\n\n8. In the `Manage MFA Device wizard`, in the `MFA Code 1 box`, type the `one-time password` that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second `one-time password` into the `MFA Code 2 box`.\n\n9. Click `Assign MFA`.",
-          "AuditProcedure": "Perform the following to determine if a MFA device is enabled for all IAM users having a console password:\n\n**From Console:**\n\n1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the left pane, select `Users` \n3. If the `MFA` or `Password age` columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click `Close`.\n4. Ensure that for each user where the `Password age` column shows a password age, the `MFA` column shows `Virtual`, `U2F Security Key`, or `Hardware`.\n\n**From Command Line:**\n\n1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status:\n```\n aws iam generate-credential-report\n```\n```\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8 \n```\n2. The output of this command will produce a table similar to the following:\n```\n user,password_enabled,mfa_active\n elise,false,false\n brandon,true,true\n rakesh,false,false\n helene,false,false\n paras,true,true\n anitha,false,false \n```\n3. For any column having `password_enabled` set to `true` , ensure `mfa_active` is also set to `true.`",
-          "AdditionalInformation": "**Forced IAM User Self-Service Remediation**\n\nAmazon has published a pattern that forces users to self-service setup MFA before they have access to their complete permissions set. Until they complete this step, they cannot access their full permissions. This pattern can be used on new AWS accounts. It can also be used on existing accounts - it is recommended users are given instructions and a grace period to accomplish MFA enrollment before active enforcement on existing AWS accounts.",
+          "RemediationProcedure": "Perform the following to enable MFA:  **From Console:**  1. Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/' 2. In the left pane, select `Users`. 3. In the `User Name` list, choose the name of the intended MFA user. 4. Choose the `Security Credentials` tab, and then choose `Manage MFA Device`. 5. In the `Manage MFA Device wizard`, choose `Virtual MFA` device, and then choose `Continue`.   IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.  6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following:   - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.  - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.   When you are finished, the virtual MFA device starts generating one-time passwords.  8. In the `Manage MFA Device wizard`, in the `MFA Code 1 box`, type the `one-time password` that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second `one-time password` into the `MFA Code 2 box`.  9. Click `Assign MFA`.",
+          "AuditProcedure": "Perform the following to determine if a MFA device is enabled for all IAM users having a console password:  **From Console:**  1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the left pane, select `Users`  3. If the `MFA` or `Password age` columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click `Close`. 4. Ensure that for each user where the `Password age` column shows a password age, the `MFA` column shows `Virtual`, `U2F Security Key`, or `Hardware`.  **From Command Line:**  1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status: ```  aws iam generate-credential-report ``` ```  aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8  ``` 2. The output of this command will produce a table similar to the following: ```  user,password_enabled,mfa_active  elise,false,false  brandon,true,true  rakesh,false,false  helene,false,false  paras,true,true  anitha,false,false  ``` 3. For any column having `password_enabled` set to `true` , ensure `mfa_active` is also set to `true.`",
+          "AdditionalInformation": "**Forced IAM User Self-Service Remediation**  Amazon has published a pattern that forces users to self-service setup MFA before they have access to their complete permissions set. Until they complete this step, they cannot access their full permissions. This pattern can be used on new AWS accounts. It can also be used on existing accounts - it is recommended users are given instructions and a grace period to accomplish MFA enrollment before active enforcement on existing AWS accounts.",
           "References": "https://tools.ietf.org/html/rfc6238:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#enable-mfa-for-privileged-users:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html:https://blogs.aws.amazon.com/security/post/Tx2SJJYE082KBUK/How-to-Delegate-Management-of-Multi-Factor-Authentication-to-AWS-IAM-Users"
         }
       ]
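As a complement to the `cut`-based audit commands in the MFA check above, the following sketch uses the same credential-report columns (4 = password_enabled, 8 = mfa_active) and assumes only the AWS CLI, `base64`, and `awk`; it prints just the users that have a console password but no MFA device:

```
#!/usr/bin/env bash
# Sketch: flag IAM users with a console password (column 4) but no MFA (column 8).
set -euo pipefail

aws iam generate-credential-report >/dev/null   # report generation may take a few seconds
aws iam get-credential-report --query 'Content' --output text | base64 -d |
  awk -F, 'NR > 1 && $4 == "true" && $8 == "false" {print "No MFA:", $1}'
```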
@@ -57,11 +57,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "AWS console defaults to no check boxes selected when creating a new IAM user. When cerating the IAM User credentials you have to determine what type of access they require. \n\nProgrammatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user. \n\nAWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.",
-          "RationaleStatement": "Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization.\n\n**Note**: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation.",
+          "Description": "AWS console defaults to no check boxes selected when creating a new IAM user. When cerating the IAM User credentials you have to determine what type of access they require.   Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user.   AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.",
+          "RationaleStatement": "Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization.  **Note**: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to delete access keys that do not pass the audit:\n\n**From Console:**\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. As an Administrator \n - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used.\n7. As an IAM User\n - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used.\n\n**From Command Line:**\n```\naws iam delete-access-key --access-key-id  --user-name \n```",
-          "AuditProcedure": "Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on a User where column `Password age` and `Access key age` is not set to `None`\n5. Click on `Security credentials` Tab\n6. Compare the user 'Creation time` to the Access Key `Created` date.\n6. For any that match, the key was created during initial user setup.\n\n- Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below.\n\n**From Command Line:**\n\n1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access keys utilization:\n```\n aws iam generate-credential-report\n```\n```\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16\n```\n2. The output of this command will produce a table similar to the following:\n```\nuser,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date\n elise,false,true,2015-04-16T15:14:00+00:00,false,N/A\n brandon,true,true,N/A,false,N/A\n rakesh,false,false,N/A,false,N/A\n helene,false,true,2015-11-18T17:47:00+00:00,false,N/A\n paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00\n anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A \n```\n3. For any user having `password_enabled` set to `true` AND `access_key_last_used_date` set to `N/A` refer to the remediation below.",
+          "RemediationProcedure": "Perform the following to delete access keys that do not pass the audit:  **From Console:**  1. Login to the AWS Management Console: 2. Click `Services`  3. Click `IAM`  4. Click on `Users`  5. Click on `Security Credentials`  6. As an Administrator   - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used. 7. As an IAM User  - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used.  **From Command Line:** ``` aws iam delete-access-key --access-key-id  --user-name  ```",
+          "AuditProcedure": "Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed:  **From Console:**  1. Login to the AWS Management Console 2. Click `Services`  3. Click `IAM`  4. Click on a User where column `Password age` and `Access key age` is not set to `None` 5. Click on `Security credentials` Tab 6. Compare the user 'Creation time` to the Access Key `Created` date. 6. For any that match, the key was created during initial user setup.  - Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below.  **From Command Line:**  1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access keys utilization: ```  aws iam generate-credential-report ``` ```  aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16 ``` 2. The output of this command will produce a table similar to the following: ``` user,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date  elise,false,true,2015-04-16T15:14:00+00:00,false,N/A  brandon,true,true,N/A,false,N/A  rakesh,false,false,N/A,false,N/A  helene,false,true,2015-11-18T17:47:00+00:00,false,N/A  paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00  anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A  ``` 3. For any user having `password_enabled` set to `true` AND `access_key_last_used_date` set to `N/A` refer to the remediation below.",
           "AdditionalInformation": "Credential report does not appear to contain \"Key Creation Date\"",
           "References": "https://docs.aws.amazon.com/cli/latest/reference/iam/delete-access-key.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html"
         }
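Building on the credential-report columns used in the audit above (9 = access_key_1_active, 11 = access_key_1_last_used_date), a hedged variant of the same commands can surface users whose first access key is active but has never been used, the usual signature of a key created together with the user profile:

```
#!/usr/bin/env bash
# Sketch: access key 1 is active (column 9) but has never been used (column 11 == N/A).
set -euo pipefail

aws iam generate-credential-report >/dev/null
aws iam get-credential-report --query 'Content' --output text | base64 -d |
  awk -F, 'NR > 1 && $9 == "true" && $11 == "N/A" {print "Unused initial key:", $1}'
```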
@@ -82,8 +82,8 @@
           "Description": "AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. It is recommended that all credentials that have been unused in 45 or greater days be deactivated or removed.",
           "RationaleStatement": "Disabling or removing unnecessary credentials will reduce the window of opportunity for credentials associated with a compromised or abandoned account to be used.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following to manage Unused Password (IAM user console access)\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. Select user whose `Console last sign-in` is greater than 45 days\n7. Click `Security credentials`\n8. In section `Sign-in credentials`, `Console password` click `Manage` \n9. Under Console Access select `Disable`\n10.Click `Apply`\n\nPerform the following to deactivate Access Keys:\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. Select any access keys that are over 45 days old and that have been used and \n - Click on `Make Inactive`\n7. Select any access keys that are over 45 days old and that have not been used and \n - Click the X to `Delete`",
-          "AuditProcedure": "Perform the following to determine if unused credentials exist:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM`\n4. Click on `Users`\n5. Click the `Settings` (gear) icon.\n6. Select `Console last sign-in`, `Access key last used`, and `Access Key Id`\n7. Click on `Close` \n8. Check and ensure that `Console last sign-in` is less than 45 days ago.\n\n**Note** - `Never` means the user has never logged in.\n\n9. Check and ensure that `Access key age` is less than 45 days and that `Access key last used` does not say `None`\n\nIf the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old refer to the remediation.\n\n**From Command Line:**\n\n**Download Credential Report:**\n\n1. Run the following commands:\n```\n aws iam generate-credential-report\n\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^'\n```\n\n**Ensure unused credentials do not exist:**\n\n2. For each user having `password_enabled` set to `TRUE` , ensure `password_last_used_date` is less than `45` days ago.\n\n- When `password_enabled` is set to `TRUE` and `password_last_used` is set to `No_Information` , ensure `password_last_changed` is less than 45 days ago.\n\n3. For each user having an `access_key_1_active` or `access_key_2_active` to `TRUE` , ensure the corresponding `access_key_n_last_used_date` is less than `45` days ago.\n\n- When a user having an `access_key_x_active` (where x is 1 or 2) to `TRUE` and corresponding access_key_x_last_used_date is set to `N/A', ensure `access_key_x_last_rotated` is less than 45 days ago.",
+          "RemediationProcedure": "**From Console:**  Perform the following to manage Unused Password (IAM user console access)  1. Login to the AWS Management Console: 2. Click `Services`  3. Click `IAM`  4. Click on `Users`  5. Click on `Security Credentials`  6. Select user whose `Console last sign-in` is greater than 45 days 7. Click `Security credentials` 8. In section `Sign-in credentials`, `Console password` click `Manage`  9. Under Console Access select `Disable` 10.Click `Apply`  Perform the following to deactivate Access Keys:  1. Login to the AWS Management Console: 2. Click `Services`  3. Click `IAM`  4. Click on `Users`  5. Click on `Security Credentials`  6. Select any access keys that are over 45 days old and that have been used and   - Click on `Make Inactive` 7. Select any access keys that are over 45 days old and that have not been used and   - Click the X to `Delete`",
+          "AuditProcedure": "Perform the following to determine if unused credentials exist:  **From Console:**  1. Login to the AWS Management Console 2. Click `Services`  3. Click `IAM` 4. Click on `Users` 5. Click the `Settings` (gear) icon. 6. Select `Console last sign-in`, `Access key last used`, and `Access Key Id` 7. Click on `Close`  8. Check and ensure that `Console last sign-in` is less than 45 days ago.  **Note** - `Never` means the user has never logged in.  9. Check and ensure that `Access key age` is less than 45 days and that `Access key last used` does not say `None`  If the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old refer to the remediation.  **From Command Line:**  **Download Credential Report:**  1. Run the following commands: ```  aws iam generate-credential-report   aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^' ```  **Ensure unused credentials do not exist:**  2. For each user having `password_enabled` set to `TRUE` , ensure `password_last_used_date` is less than `45` days ago.  - When `password_enabled` is set to `TRUE` and `password_last_used` is set to `No_Information` , ensure `password_last_changed` is less than 45 days ago.  3. For each user having an `access_key_1_active` or `access_key_2_active` to `TRUE` , ensure the corresponding `access_key_n_last_used_date` is less than `45` days ago.  - When a user having an `access_key_x_active` (where x is 1 or 2) to `TRUE` and corresponding access_key_x_last_used_date is set to `N/A', ensure `access_key_x_last_rotated` is less than 45 days ago.",
           "AdditionalInformation": " is excluded in the audit since the root account should not be used for day to day business and would likely be unused for more than 45 days.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_admin-change-user.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html"
         }
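The 45-day comparison in the audit above is left to the reader; one possible scripted form, assuming GNU `date` for ISO 8601 parsing, is sketched below for the console-password case (the same pattern applies to the access-key columns of the report):

```
#!/usr/bin/env bash
# Sketch: flag console passwords that have not been used in the last 45 days.
set -euo pipefail

cutoff=$(date -d '45 days ago' +%s)
aws iam generate-credential-report >/dev/null
aws iam get-credential-report --query 'Content' --output text | base64 -d |
  awk -F, 'NR > 1 && $4 == "true" {print $1, $5}' |   # user, password_last_used
  while read -r user last_used; do
    # Treat users who never signed in as findings too.
    if [[ "$last_used" == "no_information" || "$last_used" == "N/A" ]] ||
       (( $(date -d "$last_used" +%s) < cutoff )); then
      echo "Unused console credential: $user"
    fi
  done
```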
@@ -103,8 +103,8 @@
           "Description": "Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK)",
           "RationaleStatement": "Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API. One of the best ways to protect your account is to not allow users to have multiple access keys.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`.\n2. In the left navigation panel, choose `Users`.\n3. Click on the IAM user name that you want to examine.\n4. On the IAM user configuration page, select `Security Credentials` tab.\n5. In `Access Keys` section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.\n6. In the same `Access Keys` section, identify your non-operational access keys (other than the chosen one) and deactivate it by clicking the `Make Inactive` link.\n7. If you receive the `Change Key Status` confirmation box, click `Deactivate` to switch off the selected key.\n8. Repeat steps no. 3 – 7 for each IAM user in your AWS account.\n\n**From Command Line:**\n\n1. Using the IAM user and access key information provided in the `Audit CLI`, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.\n\n2. Run the `update-access-key` command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user\n\n**Note** - the command does not return any output:\n```\naws iam update-access-key --access-key-id  --status Inactive --user-name \n```\n3. To confirm that the selected access key pair has been successfully `deactivated` run the `list-access-keys` audit command again for that IAM User:\n```\naws iam list-access-keys --user-name \n```\n- The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) `Status` is set to `Inactive`, the key has been successfully deactivated and the IAM user access configuration adheres now to this recommendation.\n\n4. Repeat steps no. 1 – 3 for each IAM user in your AWS account.",
-          "AuditProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`.\n2. In the left navigation panel, choose `Users`.\n3. Click on the IAM user name that you want to examine.\n4. On the IAM user configuration page, select `Security Credentials` tab.\n5. Under `Access Keys` section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated then the users access configuration does not adhere to security best practices and the risk of accidental exposures increases.\n- Repeat steps no. 3 – 5 for each IAM user in your AWS account.\n\n**From Command Line:**\n\n1. Run `list-users` command to list all IAM users within your account:\n```\naws iam list-users --query \"Users[*].UserName\"\n```\nThe command output should return an array that contains all your IAM user names.\n\n2. Run `list-access-keys` command using the IAM user name list to return the current status of each access key associated with the selected IAM user:\n```\naws iam list-access-keys --user-name \n```\nThe command output should expose the metadata `(\"Username\", \"AccessKeyId\", \"Status\", \"CreateDate\")` for each access key on that user account.\n\n3. Check the `Status` property value for each key returned to determine each keys current state. If the `Status` property value for more than one IAM access key is set to `Active`, the user access configuration does not adhere to this recommendation, refer to the remediation below.\n\n- Repeat steps no. 2 and 3 for each IAM user in your AWS account.",
+          "RemediationProcedure": "**From Console:**  1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`. 2. In the left navigation panel, choose `Users`. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select `Security Credentials` tab. 5. In `Access Keys` section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 6. In the same `Access Keys` section, identify your non-operational access keys (other than the chosen one) and deactivate it by clicking the `Make Inactive` link. 7. If you receive the `Change Key Status` confirmation box, click `Deactivate` to switch off the selected key. 8. Repeat steps no. 3 – 7 for each IAM user in your AWS account.  **From Command Line:**  1. Using the IAM user and access key information provided in the `Audit CLI`, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.  2. Run the `update-access-key` command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user  **Note** - the command does not return any output: ``` aws iam update-access-key --access-key-id  --status Inactive --user-name  ``` 3. To confirm that the selected access key pair has been successfully `deactivated` run the `list-access-keys` audit command again for that IAM User: ``` aws iam list-access-keys --user-name  ``` - The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) `Status` is set to `Inactive`, the key has been successfully deactivated and the IAM user access configuration adheres now to this recommendation.  4. Repeat steps no. 1 – 3 for each IAM user in your AWS account.",
+          "AuditProcedure": "**From Console:**  1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`. 2. In the left navigation panel, choose `Users`. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select `Security Credentials` tab. 5. Under `Access Keys` section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated then the users access configuration does not adhere to security best practices and the risk of accidental exposures increases. - Repeat steps no. 3 – 5 for each IAM user in your AWS account.  **From Command Line:**  1. Run `list-users` command to list all IAM users within your account: ``` aws iam list-users --query \"Users[*].UserName\" ``` The command output should return an array that contains all your IAM user names.  2. Run `list-access-keys` command using the IAM user name list to return the current status of each access key associated with the selected IAM user: ``` aws iam list-access-keys --user-name  ``` The command output should expose the metadata `(\"Username\", \"AccessKeyId\", \"Status\", \"CreateDate\")` for each access key on that user account.  3. Check the `Status` property value for each key returned to determine each keys current state. If the `Status` property value for more than one IAM access key is set to `Active`, the user access configuration does not adhere to this recommendation, refer to the remediation below.  - Repeat steps no. 2 and 3 for each IAM user in your AWS account.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html"
         }
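The per-user loop implied by the audit steps above can be scripted. This is a sketch, not part of the benchmark, and relies only on the `list-users` and `list-access-keys` calls already referenced:

```
#!/usr/bin/env bash
# Sketch: report IAM users that currently have more than one Active access key.
set -euo pipefail

for user in $(aws iam list-users --query 'Users[*].UserName' --output text); do
  active=$(aws iam list-access-keys --user-name "$user" \
    --query "length(AccessKeyMetadata[?Status=='Active'])" --output text)
  if [[ "$active" -gt 1 ]]; then
    echo "Multiple active access keys: $user ($active)"
  fi
done
```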
@@ -122,10 +122,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated.",
-          "RationaleStatement": "Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.\n\nAccess keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen.",
+          "RationaleStatement": "Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.  Access keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to rotate access keys:\n\n**From Console:**\n\n1. Go to Management Console (https://console.aws.amazon.com/iam)\n2. Click on `Users`\n3. Click on `Security Credentials` \n4. As an Administrator \n - Click on `Make Inactive` for keys that have not been rotated in `90` Days\n5. As an IAM User\n - Click on `Make Inactive` or `Delete` for keys which have not been rotated or used in `90` Days\n6. Click on `Create Access Key` \n7. Update programmatic call with new Access Key credentials\n\n**From Command Line:**\n\n1. While the first access key is still active, create a second access key, which is active by default. Run the following command:\n```\naws iam create-access-key\n```\n\nAt this point, the user has two active access keys.\n\n2. Update all applications and tools to use the new access key.\n3. Determine whether the first access key is still in use by using this command:\n```\naws iam get-access-key-last-used\n```\n4. One approach is to wait several days and then check the old access key for any use before proceeding.\n\nEven if step Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command:\n```\naws iam update-access-key\n```\n5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to step Step 2 and update this application to use the new key.\n\n6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command:\n```\naws iam delete-access-key\n```",
-          "AuditProcedure": "Perform the following to determine if access keys are rotated as prescribed:\n\n**From Console:**\n\n1. Go to Management Console (https://console.aws.amazon.com/iam)\n2. Click on `Users`\n3. Click `setting` icon\n4. Select `Console last sign-in`\n5. Click `Close`\n6. Ensure that `Access key age` is less than 90 days ago. note) `None` in the `Access key age` means the user has not used the access key.\n\n**From Command Line:**\n\n```\naws iam generate-credential-report\naws iam get-credential-report --query 'Content' --output text | base64 -d\n```\nThe `access_key_1_last_rotated` field in this file notes The date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable).",
+          "RemediationProcedure": "Perform the following to rotate access keys:  **From Console:**  1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on `Users` 3. Click on `Security Credentials`  4. As an Administrator   - Click on `Make Inactive` for keys that have not been rotated in `90` Days 5. As an IAM User  - Click on `Make Inactive` or `Delete` for keys which have not been rotated or used in `90` Days 6. Click on `Create Access Key`  7. Update programmatic call with new Access Key credentials  **From Command Line:**  1. While the first access key is still active, create a second access key, which is active by default. Run the following command: ``` aws iam create-access-key ```  At this point, the user has two active access keys.  2. Update all applications and tools to use the new access key. 3. Determine whether the first access key is still in use by using this command: ``` aws iam get-access-key-last-used ``` 4. One approach is to wait several days and then check the old access key for any use before proceeding.  Even if step Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command: ``` aws iam update-access-key ``` 5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to step Step 2 and update this application to use the new key.  6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command: ``` aws iam delete-access-key ```",
+          "AuditProcedure": "Perform the following to determine if access keys are rotated as prescribed:  **From Console:**  1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on `Users` 3. Click `setting` icon 4. Select `Console last sign-in` 5. Click `Close` 6. Ensure that `Access key age` is less than 90 days ago. note) `None` in the `Access key age` means the user has not used the access key.  **From Command Line:**  ``` aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d ``` The `access_key_1_last_rotated` field in this file notes The date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable).",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#rotate-credentials:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html:https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html"
         }
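The `access_key_1_last_rotated` field mentioned in the audit above can be compared against a 90-day cutoff. The sketch below is one possible form; it assumes GNU `date` and only checks key 1 (key 2 lives in columns 14/15 of the same report):

```
#!/usr/bin/env bash
# Sketch: active access key 1 (column 9) last rotated (column 10) more than 90 days ago.
set -euo pipefail

cutoff=$(date -d '90 days ago' +%s)
aws iam generate-credential-report >/dev/null
aws iam get-credential-report --query 'Content' --output text | base64 -d |
  awk -F, 'NR > 1 && $9 == "true" {print $1, $10}' |
  while read -r user rotated; do
    if [[ "$rotated" != "N/A" ]] && (( $(date -d "$rotated" +%s) < cutoff )); then
      echo "Access key 1 not rotated in 90 days: $user"
    fi
  done
```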
@@ -142,11 +142,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "IAM users are granted access to services, functions, and data through IAM policies. There are three ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy. \n\nOnly the third implementation is recommended.",
+          "Description": "IAM users are granted access to services, functions, and data through IAM policies. There are three ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy.   Only the third implementation is recommended.",
           "RationaleStatement": "Assigning IAM policy only through groups unifies permissions management to a single, flexible layer consistent with organizational functional roles. By unifying permissions management, the likelihood of excessive permissions is reduced.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to create an IAM group and assign a policy to it:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click `Groups` and then click `Create New Group` .\n3. In the `Group Name` box, type the name of the group and then click `Next Step` .\n4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click `Next Step` .\n5. Click `Create Group` \n\nPerform the following to add a user to a given group:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click `Groups` \n3. Select the group to add a user to\n4. Click `Add Users To Group` \n5. Select the users to be added to the group\n6. Click `Add Users` \n\nPerform the following to remove a direct association between a user and policy:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the left navigation pane, click on Users\n3. For each user:\n - Select the user\n - Click on the `Permissions` tab\n - Expand `Permissions policies` \n - Click `X` for each policy; then click Detach or Remove (depending on policy type)",
-          "AuditProcedure": "Perform the following to determine if an inline policy is set or a policy is directly attached to users:\n\n1. Run the following to get a list of IAM users:\n```\n aws iam list-users --query 'Users[*].UserName' --output text \n```\n2. For each user returned, run the following command to determine if any policies are attached to them:\n```\n aws iam list-attached-user-policies --user-name \n aws iam list-user-policies --user-name  \n```\n3. If any policies are returned, the user has an inline policy or direct policy attachment.",
+          "RemediationProcedure": "Perform the following to create an IAM group and assign a policy to it:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click `Groups` and then click `Create New Group` . 3. In the `Group Name` box, type the name of the group and then click `Next Step` . 4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click `Next Step` . 5. Click `Create Group`   Perform the following to add a user to a given group:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click `Groups`  3. Select the group to add a user to 4. Click `Add Users To Group`  5. Select the users to be added to the group 6. Click `Add Users`   Perform the following to remove a direct association between a user and policy:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the left navigation pane, click on Users 3. For each user:  - Select the user  - Click on the `Permissions` tab  - Expand `Permissions policies`   - Click `X` for each policy; then click Detach or Remove (depending on policy type)",
+          "AuditProcedure": "Perform the following to determine if an inline policy is set or a policy is directly attached to users:  1. Run the following to get a list of IAM users: ```  aws iam list-users --query 'Users[*].UserName' --output text  ``` 2. For each user returned, run the following command to determine if any policies are attached to them: ```  aws iam list-attached-user-policies --user-name   aws iam list-user-policies --user-name   ``` 3. If any policies are returned, the user has an inline policy or direct policy attachment.",
           "AdditionalInformation": "",
           "References": "http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html"
         }
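The audit above runs two commands per user by hand; a hedged loop over the same `list-attached-user-policies` and `list-user-policies` calls makes the finding explicit:

```
#!/usr/bin/env bash
# Sketch: list users that have inline policies or directly attached managed policies.
set -euo pipefail

for user in $(aws iam list-users --query 'Users[*].UserName' --output text); do
  attached=$(aws iam list-attached-user-policies --user-name "$user" \
    --query 'AttachedPolicies[*].PolicyName' --output text)
  inline=$(aws iam list-user-policies --user-name "$user" \
    --query 'PolicyNames' --output text)
  if [[ -n "$attached" || -n "$inline" ]]; then
    echo "Policy attached directly to user: $user -> $attached $inline"
  fi
done
```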
@@ -165,10 +165,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended and considered a standard security advice to grant _least privilege_ -that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform _only_ those tasks, instead of allowing full administrative privileges.",
-          "RationaleStatement": "It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later.\n\nProviding full administrative privileges instead of restricting to the minimum set of permissions that the user is required to do exposes the resources to potentially unwanted actions.\n\nIAM policies that have a statement with \"Effect\": \"Allow\" with \"Action\": \"\\*\" over \"Resource\": \"\\*\" should be removed.",
+          "RationaleStatement": "It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later.  Providing full administrative privileges instead of restricting to the minimum set of permissions that the user is required to do exposes the resources to potentially unwanted actions.  IAM policies that have a statement with \"Effect\": \"Allow\" with \"Action\": \"\\*\" over \"Resource\": \"\\*\" should be removed.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following to detach the policy that has full administrative privileges:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click Policies and then search for the policy name found in the audit step.\n3. Select the policy that needs to be deleted.\n4. In the policy action menu, select first `Detach` \n5. Select all Users, Groups, Roles that have this policy attached\n6. Click `Detach Policy` \n7. In the policy action menu, select `Detach` \n\n**From Command Line:**\n\nPerform the following to detach the policy that has full administrative privileges as found in the audit step:\n\n1. Lists all IAM users, groups, and roles that the specified managed policy is attached to.\n\n```\n aws iam list-entities-for-policy --policy-arn \n```\n2. Detach the policy from all IAM Users:\n```\n aws iam detach-user-policy --user-name  --policy-arn \n```\n3. Detach the policy from all IAM Groups:\n```\n aws iam detach-group-policy --group-name  --policy-arn \n```\n4. Detach the policy from all IAM Roles:\n```\n aws iam detach-role-policy --role-name  --policy-arn \n```",
-          "AuditProcedure": "Perform the following to determine what policies are created:\n\n**From Command Line:**\n\n1. Run the following to get a list of IAM policies:\n```\n aws iam list-policies --only-attached --output text\n```\n2. For each policy returned, run the following command to determine if any policies is allowing full administrative privileges on the account:\n```\n aws iam get-policy-version --policy-arn  --version-id \n```\n3. In output ensure policy should not have any Statement block with `\"Effect\": \"Allow\"` and `Action` set to `\"*\"` and `Resource` set to `\"*\"`",
+          "RemediationProcedure": "**From Console:**  Perform the following to detach the policy that has full administrative privileges:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click Policies and then search for the policy name found in the audit step. 3. Select the policy that needs to be deleted. 4. In the policy action menu, select first `Detach`  5. Select all Users, Groups, Roles that have this policy attached 6. Click `Detach Policy`  7. In the policy action menu, select `Detach`   **From Command Line:**  Perform the following to detach the policy that has full administrative privileges as found in the audit step:  1. Lists all IAM users, groups, and roles that the specified managed policy is attached to.  ```  aws iam list-entities-for-policy --policy-arn  ``` 2. Detach the policy from all IAM Users: ```  aws iam detach-user-policy --user-name  --policy-arn  ``` 3. Detach the policy from all IAM Groups: ```  aws iam detach-group-policy --group-name  --policy-arn  ``` 4. Detach the policy from all IAM Roles: ```  aws iam detach-role-policy --role-name  --policy-arn  ```",
+          "AuditProcedure": "Perform the following to determine what policies are created:  **From Command Line:**  1. Run the following to get a list of IAM policies: ```  aws iam list-policies --only-attached --output text ``` 2. For each policy returned, run the following command to determine if any policies is allowing full administrative privileges on the account: ```  aws iam get-policy-version --policy-arn  --version-id  ``` 3. In output ensure policy should not have any Statement block with `\"Effect\": \"Allow\"` and `Action` set to `\"*\"` and `Resource` set to `\"*\"`",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html:https://docs.aws.amazon.com/cli/latest/reference/iam/index.html#cli-aws-iam"
         }
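The `*`/`*` inspection described in step 3 of the audit above can be automated. The sketch below is one possible form, assumes `jq` is installed, and only matches string-valued `Action`/`Resource` exactly as the audit text describes:

```
#!/usr/bin/env bash
# Sketch: flag attached policies whose default version allows Action "*" on Resource "*".
set -euo pipefail

aws iam list-policies --only-attached \
  --query 'Policies[*].[Arn,DefaultVersionId]' --output text |
  while read -r arn version; do
    aws iam get-policy-version --policy-arn "$arn" --version-id "$version" \
      --query 'PolicyVersion.Document' --output json |
      jq -e '[.Statement] | flatten | .[]
             | select(.Effect == "Allow" and .Action == "*" and .Resource == "*")' \
        >/dev/null && echo "Full administrative privileges: $arn"
  done
```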
@@ -188,8 +188,8 @@
           "Description": "AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services. Create an IAM Role to allow authorized users to manage incidents with AWS Support.",
           "RationaleStatement": "By implementing least privilege for access control, an IAM Role will require an appropriate IAM Policy to allow Support Center Access in order to manage Incidents with AWS Support.",
           "ImpactStatement": "All AWS Support plans include an unlimited number of account and billing support cases, with no long-term contracts. Support billing calculations are performed on a per-account basis for all plans. Enterprise Support plan customers have the option to include multiple enabled accounts in an aggregated monthly billing calculation. Monthly charges for the Business and Enterprise support plans are based on each month's AWS usage charges, subject to a monthly minimum, billed in advance.",
-          "RemediationProcedure": "**From Command Line:**\n\n1. Create an IAM role for managing incidents with AWS:\n - Create a trust relationship policy document that allows  to manage AWS incidents, and save it locally as /tmp/TrustPolicy.json:\n```\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"\"\n },\n \"Action\": \"sts:AssumeRole\"\n }\n ]\n }\n```\n2. Create the IAM role using the above trust policy:\n```\naws iam create-role --role-name  --assume-role-policy-document file:///tmp/TrustPolicy.json\n```\n3. Attach 'AWSSupportAccess' managed policy to the created IAM role:\n```\naws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name \n```",
-          "AuditProcedure": "**From Command Line:**\n\n1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the \"Arn\" element value:\n```\naws iam list-policies --query \"Policies[?PolicyName == 'AWSSupportAccess']\"\n```\n2. Check if the 'AWSSupportAccess' policy is attached to any role:\n\n```\naws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess\n```\n\n3. In Output, Ensure `PolicyRoles` does not return empty. 'Example: Example: PolicyRoles: [ ]'\n\nIf it returns empty refer to the remediation below.",
+          "RemediationProcedure": "**From Command Line:**  1. Create an IAM role for managing incidents with AWS:  - Create a trust relationship policy document that allows  to manage AWS incidents, and save it locally as /tmp/TrustPolicy.json: ```  {  \"Version\": \"2012-10-17\",  \"Statement\": [  {  \"Effect\": \"Allow\",  \"Principal\": {  \"AWS\": \"\"  },  \"Action\": \"sts:AssumeRole\"  }  ]  } ``` 2. Create the IAM role using the above trust policy: ``` aws iam create-role --role-name  --assume-role-policy-document file:///tmp/TrustPolicy.json ``` 3. Attach 'AWSSupportAccess' managed policy to the created IAM role: ``` aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name  ```",
+          "AuditProcedure": "**From Command Line:**  1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the \"Arn\" element value: ``` aws iam list-policies --query \"Policies[?PolicyName == 'AWSSupportAccess']\" ``` 2. Check if the 'AWSSupportAccess' policy is attached to any role:  ``` aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess ```  3. In Output, Ensure `PolicyRoles` does not return empty. 'Example: Example: PolicyRoles: [ ]'  If it returns empty refer to the remediation below.",
           "AdditionalInformation": "AWSSupportAccess policy is a global AWS resource. It has same ARN as `arn:aws:iam::aws:policy/AWSSupportAccess` for every account.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html:https://aws.amazon.com/premiumsupport/pricing/:https://docs.aws.amazon.com/cli/latest/reference/iam/list-policies.html:https://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html:https://docs.aws.amazon.com/cli/latest/reference/iam/list-entities-for-policy.html"
         }
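Step 3 of the audit above (checking that `PolicyRoles` is not empty) can be reduced to a single pass/fail check; a minimal sketch using the same `list-entities-for-policy` call:

```
#!/usr/bin/env bash
# Sketch: verify the AWSSupportAccess managed policy is attached to at least one role.
set -euo pipefail

roles=$(aws iam list-entities-for-policy \
  --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess \
  --query 'PolicyRoles[*].RoleName' --output text)

if [[ -z "$roles" ]]; then
  echo "FAIL: no IAM role has AWSSupportAccess attached"
else
  echo "PASS: AWSSupportAccess attached to: $roles"
fi
```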
@@ -207,10 +207,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Manual",
           "Description": "AWS access from within AWS instances can be done by either encoding AWS keys into AWS API calls or by assigning the instance to a role which has an appropriate permissions policy for the required access. \"AWS Access\" means accessing the APIs of AWS in order to access AWS resources or manage AWS account resources.",
-          "RationaleStatement": "AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it.\n\nAdditionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials.",
+          "RationaleStatement": "AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it.  Additionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials.",
           "ImpactStatement": "",
-          "RemediationProcedure": "IAM roles can only be associated at the launch of an instance. To remediate an instance to add it to a role you must create a new instance.\n\nIf the instance has no external dependencies on its current private ip or public addresses are elastic IPs:\n\n1. In AWS IAM create a new role. Assign a permissions policy if needed permissions are already known.\n2. In the AWS console launch a new instance with identical settings to the existing instance, and ensure that the newly created role is selected.\n3. Shutdown both the existing instance and the new instance.\n4. Detach disks from both instances.\n5. Attach the existing instance disks to the new instance.\n6. Boot the new instance and you should have the same machine, but with the associated role.\n\n**Note:** if your environment has dependencies on a dynamically assigned PRIVATE IP address you can create an AMI from the existing instance, destroy the old one and then when launching from the AMI, manually assign the previous private IP address.\n\n**Note: **if your environment has dependencies on a dynamically assigned PUBLIC IP address there is not a way ensure the address is retained and assign an instance role. Dependencies on dynamically assigned public IP addresses are a bad practice and, if possible, you may wish to rebuild the instance with a new elastic IP address and make the investment to remediate affected systems while assigning the system to a role.",
-          "AuditProcedure": "Where an instance is associated with a Role:\n\nFor instances that are known to perform AWS actions, ensure that they belong to an instance role that has the necessary permissions:\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Open the EC2 Dashboard and choose \"Instances\"\n3. Click the EC2 instance that performs AWS actions, in the lower pane details find \"IAM Role\"\n4. If the Role is blank, the instance is not assigned to one.\n5. If the Role is filled in, it does not mean the instance might not \\*also\\* have credentials encoded on it for some activities.\n\nWhere an Instance Contains Embedded Credentials:\n\n- On the instance that is known to perform AWS actions, audit all scripts and environment variables to ensure that none of them contain AWS credentials.\n\nWhere an Instance Application Contains Embedded Credentials:\n\n- Applications that run on an instance may also have credentials embedded. This is a bad practice, but even worse if the source code is stored in a public code repository such as github. When an application contains credentials can be determined by eliminating all other sources of credentials and if the application can still access AWS resources - it likely contains embedded credentials. Another method is to examine all source code and configuration files of the application.",
+          "RemediationProcedure": "IAM roles can only be associated at the launch of an instance. To remediate an instance to add it to a role you must create a new instance.  If the instance has no external dependencies on its current private ip or public addresses are elastic IPs:  1. In AWS IAM create a new role. Assign a permissions policy if needed permissions are already known. 2. In the AWS console launch a new instance with identical settings to the existing instance, and ensure that the newly created role is selected. 3. Shutdown both the existing instance and the new instance. 4. Detach disks from both instances. 5. Attach the existing instance disks to the new instance. 6. Boot the new instance and you should have the same machine, but with the associated role.  **Note:** if your environment has dependencies on a dynamically assigned PRIVATE IP address you can create an AMI from the existing instance, destroy the old one and then when launching from the AMI, manually assign the previous private IP address.  **Note: **if your environment has dependencies on a dynamically assigned PUBLIC IP address there is not a way ensure the address is retained and assign an instance role. Dependencies on dynamically assigned public IP addresses are a bad practice and, if possible, you may wish to rebuild the instance with a new elastic IP address and make the investment to remediate affected systems while assigning the system to a role.",
+          "AuditProcedure": "Where an instance is associated with a Role:  For instances that are known to perform AWS actions, ensure that they belong to an instance role that has the necessary permissions:  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Open the EC2 Dashboard and choose \"Instances\" 3. Click the EC2 instance that performs AWS actions, in the lower pane details find \"IAM Role\" 4. If the Role is blank, the instance is not assigned to one. 5. If the Role is filled in, it does not mean the instance might not \\*also\\* have credentials encoded on it for some activities.  Where an Instance Contains Embedded Credentials:  - On the instance that is known to perform AWS actions, audit all scripts and environment variables to ensure that none of them contain AWS credentials.  Where an Instance Application Contains Embedded Credentials:  - Applications that run on an instance may also have credentials embedded. This is a bad practice, but even worse if the source code is stored in a public code repository such as github. When an application contains credentials can be determined by eliminating all other sources of credentials and if the application can still access AWS resources - it likely contains embedded credentials. Another method is to examine all source code and configuration files of the application.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html"
         }
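A scripted complement to the console audit above: the sketch below is illustrative only and not part of the benchmark text (it assumes the AWS CLI v2 and ec2:DescribeInstances permission). It lists running instances that have no IAM instance profile attached and therefore cannot be using role credentials.
```
# Sketch: list running EC2 instances that have no IAM instance profile attached.
# The JMESPath not-expression (!IamInstanceProfile) matches instances where the
# field is absent; run once per region of interest.
aws ec2 describe-instances \
  --filters Name=instance-state-name,Values=running \
  --query 'Reservations[].Instances[?!IamInstanceProfile].InstanceId' \
  --output text
```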
@@ -227,11 +227,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. \nUse IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console.",
+          "Description": "To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates.  Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console.",
           "RationaleStatement": "Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. As a best practice, it is recommended to delete expired certificates.",
-          "ImpactStatement": "Deleting the certificate could have implications for your application if you are using an expired server certificate with Elastic Load Balancing, CloudFront, etc.\nOne has to make configurations at respective services to ensure there is no interruption in application functionality.",
-          "RemediationProcedure": "**From Console:**\n\nRemoving expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\n**From Command Line:**\n\nTo delete Expired Certificate run following command by replacing  with the name of the certificate to delete:\n\n```\naws iam delete-server-certificate --server-certificate-name \n```\n\nWhen the preceding command is successful, it does not return any output.",
-          "AuditProcedure": "**From Console:**\n\nGetting the certificates expiration information via AWS Management Console is not currently supported. \nTo request information about the SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\n**From Command Line:**\n\nRun list-server-certificates command to list all the IAM-stored server certificates:\n\n```\naws iam list-server-certificates\n```\n\nThe command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc):\n\n```\n{\n \"ServerCertificateMetadataList\": [\n {\n \"ServerCertificateId\": \"EHDGFRW7EJFYTE88D\",\n \"ServerCertificateName\": \"MyServerCertificate\",\n \"Expiration\": \"2018-07-10T23:59:59Z\",\n \"Path\": \"/\",\n \"Arn\": \"arn:aws:iam::012345678910:server-certificate/MySSLCertificate\",\n \"UploadDate\": \"2018-06-10T11:56:08Z\"\n }\n ]\n}\n```\n\nVerify the `ServerCertificateName` and `Expiration` parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them.\n\nIf this command returns:\n```\n{ { \"ServerCertificateMetadataList\": [] }\n```\nThis means that there are no expired certificates, It DOES NOT mean that no certificates exist.",
+          "ImpactStatement": "Deleting the certificate could have implications for your application if you are using an expired server certificate with Elastic Load Balancing, CloudFront, etc. One has to make configurations at respective services to ensure there is no interruption in application functionality.",
+          "RemediationProcedure": "**From Console:**  Removing expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).  **From Command Line:**  To delete Expired Certificate run following command by replacing  with the name of the certificate to delete:  ``` aws iam delete-server-certificate --server-certificate-name  ```  When the preceding command is successful, it does not return any output.",
+          "AuditProcedure": "**From Console:**  Getting the certificates expiration information via AWS Management Console is not currently supported.  To request information about the SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).  **From Command Line:**  Run list-server-certificates command to list all the IAM-stored server certificates:  ``` aws iam list-server-certificates ```  The command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc):  ``` {  \"ServerCertificateMetadataList\": [  {  \"ServerCertificateId\": \"EHDGFRW7EJFYTE88D\",  \"ServerCertificateName\": \"MyServerCertificate\",  \"Expiration\": \"2018-07-10T23:59:59Z\",  \"Path\": \"/\",  \"Arn\": \"arn:aws:iam::012345678910:server-certificate/MySSLCertificate\",  \"UploadDate\": \"2018-06-10T11:56:08Z\"  }  ] } ```  Verify the `ServerCertificateName` and `Expiration` parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them.  If this command returns: ``` { { \"ServerCertificateMetadataList\": [] } ``` This means that there are no expired certificates, It DOES NOT mean that no certificates exist.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html:https://docs.aws.amazon.com/cli/latest/reference/iam/delete-server-certificate.html"
         }
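The expiry check above can also be scripted. A hedged sketch, not part of the benchmark text (assumes the AWS CLI and GNU date; it relies on lexical comparison of ISO-8601 timestamps, which the CLI's JMESPath implementation permits for strings):
```
# Sketch: print IAM server certificates whose Expiration timestamp is already in the past.
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
aws iam list-server-certificates \
  --query "ServerCertificateMetadataList[?Expiration<'${now}'].[ServerCertificateName,Expiration]" \
  --output table
```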
@@ -251,8 +251,8 @@
           "Description": "AWS provides customers with the option of specifying the contact information for account's security team. It is recommended that this information be provided.",
           "RationaleStatement": "Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to establish security contact information:\n\n**From Console:**\n\n1. Click on your account name at the top right corner of the console.\n2. From the drop-down menu Click `My Account` \n3. Scroll down to the `Alternate Contacts` section\n4. Enter contact information in the `Security` section\n\n**Note:** Consider specifying an internal email distribution list to ensure emails are regularly monitored by more than one individual.",
-          "AuditProcedure": "Perform the following to determine if security contact information is present:\n\n**From Console:**\n\n1. Click on your account name at the top right corner of the console\n2. From the drop-down menu Click `My Account` \n3. Scroll down to the `Alternate Contacts` section\n4. Ensure contact information is specified in the `Security` section",
+          "RemediationProcedure": "Perform the following to establish security contact information:  **From Console:**  1. Click on your account name at the top right corner of the console. 2. From the drop-down menu Click `My Account`  3. Scroll down to the `Alternate Contacts` section 4. Enter contact information in the `Security` section  **Note:** Consider specifying an internal email distribution list to ensure emails are regularly monitored by more than one individual.",
+          "AuditProcedure": "Perform the following to determine if security contact information is present:  **From Console:**  1. Click on your account name at the top right corner of the console 2. From the drop-down menu Click `My Account`  3. Scroll down to the `Alternate Contacts` section 4. Ensure contact information is specified in the `Security` section",
           "AdditionalInformation": "",
           "References": ""
         }
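The benchmark treats this as a console-only check, but recent AWS CLI versions expose the Account Management API, which may allow the same verification from the command line. A hedged sketch (assumes AWS CLI v2 and account:GetAlternateContact permission; the call returns an error if no security contact is registered):
```
# Sketch: show the alternate SECURITY contact registered for the account, if any.
aws account get-alternate-contact --alternate-contact-type SECURITY
```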
@@ -269,11 +269,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "Enable IAM Access analyzer for IAM policies about all resources in each region.\n\nIAM Access Analyzer is a technology introduced at AWS reinvent 2019. After the Analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles. So the results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least privileges access.\nAccess Analyzer analyzes only policies that are applied to resources in the same AWS Region.",
+          "Description": "Enable IAM Access analyzer for IAM policies about all resources in each region.  IAM Access Analyzer is a technology introduced at AWS reinvent 2019. After the Analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles. So the results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least privileges access. Access Analyzer analyzes only policies that are applied to resources in the same AWS Region.",
           "RationaleStatement": "AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data. Access Analyzer identifies resources that are shared with external principals by using logic-based reasoning to analyze the resource-based policies in your AWS environment. IAM Access Analyzer continuously monitors all policies for S3 bucket, IAM roles, KMS(Key Management Service) keys, AWS Lambda functions, and Amazon SQS(Simple Queue Service) queues.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following to enable IAM Access analyzer for IAM policies:\n\n1. Open the IAM console at `https://console.aws.amazon.com/iam/.`\n2. Choose `Access analyzer`.\n3. Choose `Create analyzer`.\n4. On the `Create analyzer` page, confirm that the `Region` displayed is the Region where you want to enable Access Analyzer.\n5. Enter a name for the analyzer. `Optional as it will generate a name for you automatically`.\n6. Add any tags that you want to apply to the analyzer. `Optional`. \n7. Choose `Create Analyzer`.\n8. Repeat these step for each active region\n\n**From Command Line:**\n\nRun the following command:\n```\naws accessanalyzer create-analyzer --analyzer-name  --type \n```\nRepeat this command above for each active region.\n\n**Note:** The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions.",
-          "AuditProcedure": "**From Console:**\n\n1. Open the IAM console at `https://console.aws.amazon.com/iam/`\n2. Choose `Access analyzer`\n3. Click 'Analyzers'\n4. Ensure that at least one analyzer is present\n5. Ensure that the `STATUS` is set to `Active`\n6. Repeat these step for each active region\n\n**From Command Line:**\n\n1. Run the following command:\n```\naws accessanalyzer list-analyzers | grep status\n```\n2. Ensure that at least one Analyzer the `status` is set to `ACTIVE`\n\n3. Repeat the steps above for each active region.\n\nIf an Access analyzer is not listed for each region or the status is not set to active refer to the remediation procedure below.",
+          "RemediationProcedure": "**From Console:**  Perform the following to enable IAM Access analyzer for IAM policies:  1. Open the IAM console at `https://console.aws.amazon.com/iam/.` 2. Choose `Access analyzer`. 3. Choose `Create analyzer`. 4. On the `Create analyzer` page, confirm that the `Region` displayed is the Region where you want to enable Access Analyzer. 5. Enter a name for the analyzer. `Optional as it will generate a name for you automatically`. 6. Add any tags that you want to apply to the analyzer. `Optional`.  7. Choose `Create Analyzer`. 8. Repeat these step for each active region  **From Command Line:**  Run the following command: ``` aws accessanalyzer create-analyzer --analyzer-name  --type  ``` Repeat this command above for each active region.  **Note:** The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions.",
+          "AuditProcedure": "**From Console:**  1. Open the IAM console at `https://console.aws.amazon.com/iam/` 2. Choose `Access analyzer` 3. Click 'Analyzers' 4. Ensure that at least one analyzer is present 5. Ensure that the `STATUS` is set to `Active` 6. Repeat these step for each active region  **From Command Line:**  1. Run the following command: ``` aws accessanalyzer list-analyzers | grep status ``` 2. Ensure that at least one Analyzer the `status` is set to `ACTIVE`  3. Repeat the steps above for each active region.  If an Access analyzer is not listed for each region or the status is not set to active refer to the remediation procedure below.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html:https://docs.aws.amazon.com/cli/latest/reference/accessanalyzer/get-analyzer.html:https://docs.aws.amazon.com/cli/latest/reference/accessanalyzer/create-analyzer.html"
         }
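For the per-region audit, a small loop can report analyzer status everywhere at once. An illustrative sketch, not part of the benchmark text (assumes the AWS CLI and permission to call EC2 and Access Analyzer in each region):
```
# Sketch: count ACTIVE IAM Access Analyzer analyzers in every enabled region.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  count=$(aws accessanalyzer list-analyzers --region "$region" \
    --query "length(analyzers[?status=='ACTIVE'])" --output text)
  echo "$region: $count active analyzer(s)"
done
```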
@@ -294,7 +294,7 @@
           "RationaleStatement": "Centralizing IAM user management to a single identity store reduces complexity and thus the likelihood of access management errors.",
           "ImpactStatement": "",
           "RemediationProcedure": "The remediation procedure will vary based on the individual organization's implementation of identity federation and/or AWS Organizations with the acceptance criteria that no non-service IAM users, and non-root accounts, are present outside the account providing centralized IAM user management.",
-          "AuditProcedure": "For multi-account AWS environments with an external identity provider... \n\n1. Determine the master account for identity federation or IAM user management\n2. Login to that account through the AWS Management Console\n3. Click `Services` \n4. Click `IAM` \n5. Click `Identity providers`\n6. Verify the configuration\n\nThen..., determine all accounts that should not have local users present. For each account...\n\n1. Determine all accounts that should not have local users present\n2. Log into the AWS Management Console\n3. Switch role into each identified account\n4. Click `Services` \n5. Click `IAM` \n6. Click `Users`\n7. Confirm that no IAM users representing individuals are present\n\nFor multi-account AWS environments implementing AWS Organizations without an external identity provider... \n\n1. Determine all accounts that should not have local users present\n2. Log into the AWS Management Console\n3. Switch role into each identified account\n4. Click `Services` \n5. Click `IAM` \n6. Click `Users`\n7. Confirm that no IAM users representing individuals are present",
+          "AuditProcedure": "For multi-account AWS environments with an external identity provider...   1. Determine the master account for identity federation or IAM user management 2. Login to that account through the AWS Management Console 3. Click `Services`  4. Click `IAM`  5. Click `Identity providers` 6. Verify the configuration  Then..., determine all accounts that should not have local users present. For each account...  1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click `Services`  5. Click `IAM`  6. Click `Users` 7. Confirm that no IAM users representing individuals are present  For multi-account AWS environments implementing AWS Organizations without an external identity provider...   1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click `Services`  5. Click `IAM`  6. Click `Users` 7. Confirm that no IAM users representing individuals are present",
           "AdditionalInformation": "",
           "References": ""
         }
@@ -314,8 +314,8 @@
           "Description": "The AWS support portal allows account owners to establish security questions that can be used to authenticate individuals calling AWS customer service for support. It is recommended that security questions be established.",
           "RationaleStatement": "When creating a new AWS account, a default super user is automatically created. This account is referred to as the 'root user' or 'root' account. It is recommended that the use of this account be limited and highly controlled. During events in which the 'root' password is no longer accessible or the MFA token associated with 'root' is lost/destroyed it is possible, through authentication using secret questions and associated answers, to recover 'root' user login access.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Account as the 'root' user\n2. Click on the __ from the top right of the console\n3. From the drop-down menu Click _My Account_\n4. Scroll down to the `Configure Security Questions` section\n5. Click on `Edit` \n6. Click on each `Question` \n - From the drop-down select an appropriate question\n - Click on the `Answer` section\n - Enter an appropriate answer \n - Follow process for all 3 questions\n7. Click `Update` when complete\n8. Save Questions and Answers and place in a secure physical location",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS account as the 'root' user\n2. On the top right you will see the __\n3. Click on the __\n4. From the drop-down menu Click `My Account` \n5. In the `Configure Security Challenge Questions` section on the `Personal Information` page, configure three security challenge questions.\n6. Click `Save questions` .",
+          "RemediationProcedure": "**From Console:**  1. Login to the AWS Account as the 'root' user 2. Click on the __ from the top right of the console 3. From the drop-down menu Click _My Account_ 4. Scroll down to the `Configure Security Questions` section 5. Click on `Edit`  6. Click on each `Question`   - From the drop-down select an appropriate question  - Click on the `Answer` section  - Enter an appropriate answer   - Follow process for all 3 questions 7. Click `Update` when complete 8. Save Questions and Answers and place in a secure physical location",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS account as the 'root' user 2. On the top right you will see the __ 3. Click on the __ 4. From the drop-down menu Click `My Account`  5. In the `Configure Security Challenge Questions` section on the `Personal Information` page, configure three security challenge questions. 6. Click `Save questions` .",
           "AdditionalInformation": "",
           "References": ""
         }
@@ -335,8 +335,8 @@
           "Description": "The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be removed.",
           "RationaleStatement": "Removing access keys associated with the 'root' user account limits vectors by which the account can be compromised. Additionally, removing the 'root' access keys encourages the creation and use of role based accounts that are least privileged.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to delete or disable active 'root' user access keys\n\n**From Console:**\n\n1. Sign in to the AWS Management Console as 'root' and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. Click on __ at the top right and select `My Security Credentials` from the drop down list\n3. On the pop out screen Click on `Continue to Security Credentials` \n4. Click on `Access Keys` _(Access Key ID and Secret Access Key)_\n5. Under the `Status` column if there are any Keys which are Active\n - Click on `Make Inactive` - (Temporarily disable Key - may be needed again)\n - Click `Delete` - (Deleted keys cannot be recovered)",
-          "AuditProcedure": "Perform the following to determine if the 'root' user account has access keys:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Credential Report` \n5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file\n6. For the `` user, ensure the `access_key_1_active` and `access_key_2_active` fields are set to `FALSE` .\n\n**From Command Line:**\n\nRun the following command:\n```\n aws iam get-account-summary | grep \"AccountAccessKeysPresent\" \n```\nIf no 'root' access keys exist the output will show \"AccountAccessKeysPresent\": 0,. \n\nIf the output shows a \"1\" than 'root' keys exist, refer to the remediation procedure below.",
+          "RemediationProcedure": "Perform the following to delete or disable active 'root' user access keys  **From Console:**  1. Sign in to the AWS Management Console as 'root' and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. Click on __ at the top right and select `My Security Credentials` from the drop down list 3. On the pop out screen Click on `Continue to Security Credentials`  4. Click on `Access Keys` _(Access Key ID and Secret Access Key)_ 5. Under the `Status` column if there are any Keys which are Active  - Click on `Make Inactive` - (Temporarily disable Key - may be needed again)  - Click `Delete` - (Deleted keys cannot be recovered)",
+          "AuditProcedure": "Perform the following to determine if the 'root' user account has access keys:  **From Console:**  1. Login to the AWS Management Console 2. Click `Services`  3. Click `IAM`  4. Click on `Credential Report`  5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the `` user, ensure the `access_key_1_active` and `access_key_2_active` fields are set to `FALSE` .  **From Command Line:**  Run the following command: ```  aws iam get-account-summary | grep \"AccountAccessKeysPresent\"  ``` If no 'root' access keys exist the output will show \"AccountAccessKeysPresent\": 0,.   If the output shows a \"1\" than 'root' keys exist, refer to the remediation procedure below.",
           "AdditionalInformation": "IAM User account \"root\" for us-gov cloud regions is not enabled by default. However, on request to AWS support enables 'root' access only through access-keys (CLI, API methods) for us-gov cloud region.",
           "References": "http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html:http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html:http://docs.aws.amazon.com/IAM/latest/APIReference/API_GetAccountSummary.html:https://aws.amazon.com/blogs/security/an-easier-way-to-determine-the-presence-of-aws-account-access-keys/"
         }
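The get-account-summary check above reduces to a simple pass/fail test; a sketch (assumes the AWS CLI and iam:GetAccountSummary permission):
```
# Sketch: fail if any access keys exist on the 'root' user account.
keys=$(aws iam get-account-summary \
  --query 'SummaryMap.AccountAccessKeysPresent' --output text)
if [ "$keys" != "0" ]; then
  echo "FAIL: root access keys are present"
else
  echo "PASS: no root access keys"
fi
```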
@@ -353,11 +353,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.\n\n**Note:** When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. (\"non-personal virtual MFA\") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company.",
+          "Description": "The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.  **Note:** When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. (\"non-personal virtual MFA\") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company.",
           "RationaleStatement": "Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to establish MFA for the 'root' user account:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n\n Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.\n\n2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account.\n3. Choose `Activate MFA` \n4. In the wizard, choose `A virtual MFA` device and then choose `Next Step` .\n5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.\n6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see [Virtual MFA Applications](http://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications).) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).\n7. Determine whether the MFA app supports QR codes, and then do one of the following:\n\n - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.\n - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.\n\nWhen you are finished, the virtual MFA device starts generating one-time passwords.\n\nIn the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA.",
-          "AuditProcedure": "Perform the following to determine if the 'root' user account has MFA setup:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Credential Report` \n5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file\n6. For the `` user, ensure the `mfa_active` field is set to `TRUE` .\n\n**From Command Line:**\n\n1. Run the following command:\n```\n aws iam get-account-summary | grep \"AccountMFAEnabled\"\n```\n2. Ensure the AccountMFAEnabled property is set to 1",
+          "RemediationProcedure": "Perform the following to establish MFA for the 'root' user account:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).   Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.  2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account. 3. Choose `Activate MFA`  4. In the wizard, choose `A virtual MFA` device and then choose `Next Step` . 5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see [Virtual MFA Applications](http://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications).) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following:   - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.  - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.  When you are finished, the virtual MFA device starts generating one-time passwords.  In the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA.",
+          "AuditProcedure": "Perform the following to determine if the 'root' user account has MFA setup:  **From Console:**  1. Login to the AWS Management Console 2. Click `Services`  3. Click `IAM`  4. Click on `Credential Report`  5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the `` user, ensure the `mfa_active` field is set to `TRUE` .  **From Command Line:**  1. Run the following command: ```  aws iam get-account-summary | grep \"AccountMFAEnabled\" ``` 2. Ensure the AccountMFAEnabled property is set to 1",
           "AdditionalInformation": "IAM User account \"root\" for us-gov cloud regions does not have console access. This recommendation is not applicable for us-gov cloud regions.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#id_root-user_manage_mfa:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root"
         }
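Likewise, the root-MFA audit can be expressed as a one-line check using the same get-account-summary call; an illustrative sketch:
```
# Sketch: verify MFA is enabled on the 'root' account (AccountMFAEnabled = 1).
mfa=$(aws iam get-account-summary --query 'SummaryMap.AccountMFAEnabled' --output text)
if [ "$mfa" = "1" ]; then echo "PASS: root MFA enabled"; else echo "FAIL: root MFA not enabled"; fi
```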
@@ -375,10 +375,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "The 'root' user account is the most privileged user in an AWS account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. For Level 2, it is recommended that the 'root' user account be protected with a hardware MFA.",
-          "RationaleStatement": "A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides.\n\n**Note**: Using hardware MFA for many, many AWS accounts may create a logistical device management issue. If this is the case, consider implementing this Level 2 recommendation selectively to the highest security AWS accounts and the Level 1 recommendation applied to the remaining accounts.",
+          "RationaleStatement": "A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides.  **Note**: Using hardware MFA for many, many AWS accounts may create a logistical device management issue. If this is the case, consider implementing this Level 2 recommendation selectively to the highest security AWS accounts and the Level 1 recommendation applied to the remaining accounts.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to establish a hardware MFA for the 'root' user account:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\nNote: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.\n2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account.\n3. Choose `Activate MFA` \n4. In the wizard, choose `A hardware MFA` device and then choose `Next Step` .\n5. In the `Serial Number` box, enter the serial number that is found on the back of the MFA device.\n6. In the `Authentication Code 1` box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number.\n7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the `Authentication Code 2` box. You might need to press the button on the front of the device again to display the second number.\n8. Choose `Next Step` . The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device.\n\nRemediation for this recommendation is not available through AWS CLI.",
-          "AuditProcedure": "Perform the following to determine if the 'root' user account has a hardware MFA setup:\n\n1. Run the following command to determine if the 'root' account has MFA setup:\n```\n aws iam get-account-summary | grep \"AccountMFAEnabled\"\n```\n\nThe `AccountMFAEnabled` property is set to `1` will ensure that the 'root' user account has MFA (Virtual or Hardware) Enabled.\nIf `AccountMFAEnabled` property is set to `0` the account is not compliant with this recommendation.\n\n2. If `AccountMFAEnabled` property is set to `1`, determine 'root' account has Hardware MFA enabled.\nRun the following command to list all virtual MFA devices:\n```\n aws iam list-virtual-mfa-devices \n```\nIf the output contains one MFA with the following Serial Number, it means the MFA is virtual, not hardware and the account is not compliant with this recommendation:\n\n `\"SerialNumber\": \"arn:aws:iam::__:mfa/root-account-mfa-device\"`",
+          "RemediationProcedure": "Perform the following to establish a hardware MFA for the 'root' user account:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). Note: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account. 3. Choose `Activate MFA`  4. In the wizard, choose `A hardware MFA` device and then choose `Next Step` . 5. In the `Serial Number` box, enter the serial number that is found on the back of the MFA device. 6. In the `Authentication Code 1` box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number. 7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the `Authentication Code 2` box. You might need to press the button on the front of the device again to display the second number. 8. Choose `Next Step` . The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device.  Remediation for this recommendation is not available through AWS CLI.",
+          "AuditProcedure": "Perform the following to determine if the 'root' user account has a hardware MFA setup:  1. Run the following command to determine if the 'root' account has MFA setup: ```  aws iam get-account-summary | grep \"AccountMFAEnabled\" ```  The `AccountMFAEnabled` property is set to `1` will ensure that the 'root' user account has MFA (Virtual or Hardware) Enabled. If `AccountMFAEnabled` property is set to `0` the account is not compliant with this recommendation.  2. If `AccountMFAEnabled` property is set to `1`, determine 'root' account has Hardware MFA enabled. Run the following command to list all virtual MFA devices: ```  aws iam list-virtual-mfa-devices  ``` If the output contains one MFA with the following Serial Number, it means the MFA is virtual, not hardware and the account is not compliant with this recommendation:   `\"SerialNumber\": \"arn:aws:iam::__:mfa/root-account-mfa-device\"`",
           "AdditionalInformation": "IAM User account 'root' for us-gov cloud regions does not have console access. This control is not applicable for us-gov cloud regions.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_physical.html#enable-hw-mfa-for-root"
         }
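The serial-number test that distinguishes a virtual root MFA from a hardware one can be scripted as well; a sketch (assumes the AWS CLI and that AccountMFAEnabled has already been confirmed as 1):
```
# Sketch: flag accounts whose root MFA device is virtual rather than hardware.
if aws iam list-virtual-mfa-devices --assignment-status Assigned \
     --query 'VirtualMFADevices[].SerialNumber' --output text \
   | grep -q ':mfa/root-account-mfa-device'; then
  echo "Root MFA is a virtual device (hardware MFA recommendation not met)"
else
  echo "No virtual root MFA device found"
fi
```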
@@ -398,9 +398,9 @@
           "Description": "With the creation of an AWS account, a 'root user' is created that cannot be disabled or deleted. That user has unrestricted access to and control over all resources in the AWS account. It is highly recommended that the use of this account be avoided for everyday tasks.",
           "RationaleStatement": "The 'root user' has unrestricted access to and control over all account resources. Use of it is inconsistent with the principles of least privilege and separation of duties, and can lead to unnecessary harm due to error or account compromise.",
           "ImpactStatement": "",
-          "RemediationProcedure": "If you find that the 'root' user account is being used for daily activity to include administrative tasks that do not require the 'root' user:\n\n1. Change the 'root' user password.\n2. Deactivate or delete any access keys associate with the 'root' user.\n\n**Remember, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console at `https://console.aws.amazon.com/iam/`\n2. In the left pane, click `Credential Report`\n3. Click on `Download Report`\n4. Open of Save the file locally\n5. Locate the `` under the user column\n6. Review `password_last_used, access_key_1_last_used_date, access_key_2_last_used_date` to determine when the 'root user' was last used.\n\n**From Command Line:**\n\nRun the following CLI commands to provide a credential report for determining the last time the 'root user' was used:\n```\naws iam generate-credential-report\n```\n```\naws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 ''\n```\n\nReview `password_last_used`, `access_key_1_last_used_date`, `access_key_2_last_used_date` to determine when the _root user_ was last used.\n\n**Note:** There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user.",
-          "AdditionalInformation": "The 'root' user for us-gov cloud regions is not enabled by default. However, on request to AWS support, they can enable the 'root' user and grant access only through access-keys (CLI, API methods) for us-gov cloud region. If the 'root' user for us-gov cloud regions is enabled, this recommendation is applicable.\n\nMonitoring usage of the 'root' user can be accomplished by implementing recommendation 3.3 Ensure a log metric filter and alarm exist for usage of the 'root' user.",
+          "RemediationProcedure": "If you find that the 'root' user account is being used for daily activity to include administrative tasks that do not require the 'root' user:  1. Change the 'root' user password. 2. Deactivate or delete any access keys associate with the 'root' user.  **Remember, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information.",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS Management Console at `https://console.aws.amazon.com/iam/` 2. In the left pane, click `Credential Report` 3. Click on `Download Report` 4. Open of Save the file locally 5. Locate the `` under the user column 6. Review `password_last_used, access_key_1_last_used_date, access_key_2_last_used_date` to determine when the 'root user' was last used.  **From Command Line:**  Run the following CLI commands to provide a credential report for determining the last time the 'root user' was used: ``` aws iam generate-credential-report ``` ``` aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 '' ```  Review `password_last_used`, `access_key_1_last_used_date`, `access_key_2_last_used_date` to determine when the _root user_ was last used.  **Note:** There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user.",
+          "AdditionalInformation": "The 'root' user for us-gov cloud regions is not enabled by default. However, on request to AWS support, they can enable the 'root' user and grant access only through access-keys (CLI, API methods) for us-gov cloud region. If the 'root' user for us-gov cloud regions is enabled, this recommendation is applicable.  Monitoring usage of the 'root' user can be accomplished by implementing recommendation 3.3 Ensure a log metric filter and alarm exist for usage of the 'root' user.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html:https://docs.aws.amazon.com/general/latest/gr/aws_tasks-that-require-root.html"
         }
       ]
@@ -419,8 +419,8 @@
           "Description": "Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure password are at least a given length. It is recommended that the password policy require a minimum password length 14.",
           "RationaleStatement": "Setting a password complexity policy increases account resiliency against brute force login attempts.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to set the password policy as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Set \"Minimum password length\" to `14` or greater.\n5. Click \"Apply password policy\"\n\n**From Command Line:**\n```\n aws iam update-account-password-policy --minimum-password-length 14\n```\nNote: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.",
-          "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Ensure \"Minimum password length\" is set to 14 or greater.\n\n**From Command Line:**\n```\naws iam get-account-password-policy\n```\nEnsure the output of the above command includes \"MinimumPasswordLength\": 14 (or higher)",
+          "RemediationProcedure": "Perform the following to set the password policy as prescribed:  **From Console:**  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Set \"Minimum password length\" to `14` or greater. 5. Click \"Apply password policy\"  **From Command Line:** ```  aws iam update-account-password-policy --minimum-password-length 14 ``` Note: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.",
+          "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:  **From Console:**  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure \"Minimum password length\" is set to 14 or greater.  **From Command Line:** ``` aws iam get-account-password-policy ``` Ensure the output of the above command includes \"MinimumPasswordLength\": 14 (or higher)",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#configure-strong-password-policy"
         }
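A scripted version of the password-length audit (assumes the AWS CLI; get-account-password-policy returns a NoSuchEntity error when no policy is set, which should also be treated as a failure):
```
# Sketch: verify the account password policy requires at least 14 characters.
minlen=$(aws iam get-account-password-policy \
  --query 'PasswordPolicy.MinimumPasswordLength' --output text 2>/dev/null || echo 0)
if [ "$minlen" -ge 14 ]; then echo "PASS: minimum length $minlen"; else echo "FAIL: minimum length $minlen"; fi
```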
@@ -440,8 +440,8 @@
           "Description": "IAM password policies can prevent the reuse of a given password by the same user. It is recommended that the password policy prevent the reuse of passwords.",
           "RationaleStatement": "Preventing password reuse increases account resiliency against brute force login attempts.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to set the password policy as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Check \"Prevent password reuse\"\n5. Set \"Number of passwords to remember\" is set to `24` \n\n**From Command Line:**\n```\n aws iam update-account-password-policy --password-reuse-prevention 24\n```\nNote: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.",
-          "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Ensure \"Prevent password reuse\" is checked\n5. Ensure \"Number of passwords to remember\" is set to 24\n\n**From Command Line:**\n```\naws iam get-account-password-policy \n```\nEnsure the output of the above command includes \"PasswordReusePrevention\": 24",
+          "RemediationProcedure": "Perform the following to set the password policy as prescribed:  **From Console:**  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Check \"Prevent password reuse\" 5. Set \"Number of passwords to remember\" is set to `24`   **From Command Line:** ```  aws iam update-account-password-policy --password-reuse-prevention 24 ``` Note: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.",
+          "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:  **From Console:**  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure \"Prevent password reuse\" is checked 5. Ensure \"Number of passwords to remember\" is set to 24  **From Command Line:** ``` aws iam get-account-password-policy  ``` Ensure the output of the above command includes \"PasswordReusePrevention\": 24",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#configure-strong-password-policy"
         }
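A similar sketch for the reuse-prevention setting (again assuming the AWS CLI; the PasswordReusePrevention field is absent when reuse prevention is not configured):
```
# Sketch: verify the password policy remembers the previous 24 passwords.
reuse=$(aws iam get-account-password-policy \
  --query 'PasswordPolicy.PasswordReusePrevention' --output text 2>/dev/null || echo 0)
if [ "$reuse" = "24" ]; then echo "PASS"; else echo "FAIL: PasswordReusePrevention is $reuse"; fi
```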
@@ -461,8 +461,8 @@
           "Description": "Amazon S3 provides a variety of no, or low, cost encryption options to protect data at rest.",
           "RationaleStatement": "Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.",
           "ImpactStatement": "Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as destination buckets for Amazon S3 server access logging. Only SSE-S3 default encryption is supported for server access log destination buckets.",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select a Bucket.\n3. Click on 'Properties'.\n4. Click edit on `Default Encryption`.\n5. Select either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`.\n6. Click `Save`\n7. Repeat for all the buckets in your AWS account lacking encryption.\n\n**From Command Line:**\n\nRun either \n```\naws s3api put-bucket-encryption --bucket  --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}'\n```\n or \n```\naws s3api put-bucket-encryption --bucket  --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"aws:kms\",\"KMSMasterKeyID\": \"aws/s3\"}}]}'\n```\n\n**Note:** the KMSMasterKeyID can be set to the master key of your choosing; aws/s3 is an AWS preconfigured default.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select a Bucket.\n3. Click on 'Properties'.\n4. Verify that `Default Encryption` is enabled, and displays either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`.\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. Run command to list buckets\n```\naws s3 ls\n```\n2. For each bucket, run \n```\naws s3api get-bucket-encryption --bucket \n```\n3. Verify that either \n```\n\"SSEAlgorithm\": \"AES256\"\n```\n or \n```\n\"SSEAlgorithm\": \"aws:kms\"```\n is displayed.",
+          "RemediationProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Select a Bucket. 3. Click on 'Properties'. 4. Click edit on `Default Encryption`. 5. Select either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 6. Click `Save` 7. Repeat for all the buckets in your AWS account lacking encryption.  **From Command Line:**  Run either  ``` aws s3api put-bucket-encryption --bucket  --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}' ```  or  ``` aws s3api put-bucket-encryption --bucket  --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"aws:kms\",\"KMSMasterKeyID\": \"aws/s3\"}}]}' ```  **Note:** the KMSMasterKeyID can be set to the master key of your choosing; aws/s3 is an AWS preconfigured default.",
+          "AuditProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Select a Bucket. 3. Click on 'Properties'. 4. Verify that `Default Encryption` is enabled, and displays either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 5. Repeat for all the buckets in your AWS account.  **From Command Line:**  1. Run command to list buckets ``` aws s3 ls ``` 2. For each bucket, run  ``` aws s3api get-bucket-encryption --bucket  ``` 3. Verify that either  ``` \"SSEAlgorithm\": \"AES256\" ```  or  ``` \"SSEAlgorithm\": \"aws:kms\"```  is displayed.",
           "AdditionalInformation": "S3 bucket encryption only applies to objects as they are placed in the bucket. Enabling S3 bucket encryption does **not** encrypt objects previously stored within the bucket.",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/default-bucket-encryption.html:https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-related-resources"
         }
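As an illustrative sketch (the bucket name `my-example-bucket` is a hypothetical placeholder), the remediation and audit commands above can be combined to set and then verify SSE-S3 (AES256) default encryption:

```
# Hypothetical bucket name; adjust before running.
BUCKET=my-example-bucket

# Enable SSE-S3 (AES256) default encryption on the bucket...
aws s3api put-bucket-encryption \
  --bucket "$BUCKET" \
  --server-side-encryption-configuration \
  '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'

# ...then read the configuration back to verify.
aws s3api get-bucket-encryption --bucket "$BUCKET"
```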
@@ -482,8 +482,8 @@
           "Description": "At the Amazon S3 bucket level, you can configure permissions through a bucket policy making the objects accessible only through HTTPS.",
           "RationaleStatement": "By default, Amazon S3 allows both HTTP and HTTPS requests. To achieve only allowing access to Amazon S3 objects through HTTPS you also have to explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests will not comply with this recommendation.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/\n2. Select the Check box next to the Bucket.\n3. Click on 'Permissions'.\n4. Click 'Bucket Policy'\n5. Add this to the existing policy filling in the required information\n```\n{\n \"Sid\": \",\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }\n }\n }\n```\n6. Save\n7. Repeat for all the buckets in your AWS account that contain sensitive data.\n\n**From Console** \n\nusing AWS Policy Generator:\n\n1. Repeat steps 1-4 above.\n2. Click on `Policy Generator` at the bottom of the Bucket Policy Editor\n3. Select Policy Type\n`S3 Bucket Policy`\n4. Add Statements\n- `Effect` = Deny\n- `Principal` = *\n- `AWS Service` = Amazon S3\n- `Actions` = *\n- `Amazon Resource Name` = \n5. Generate Policy\n6. Copy the text and add it to the Bucket Policy.\n\n**From Command Line:**\n\n1. Export the bucket policy to a json file.\n```\naws s3api get-bucket-policy --bucket  --query Policy --output text > policy.json\n```\n\n2. Modify the policy.json file by adding in this statement:\n```\n{\n \"Sid\": \",\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }\n }\n }\n```\n3. Apply this modified policy back to the S3 bucket:\n```\naws s3api put-bucket-policy --bucket  --policy file://policy.json\n```",
-          "AuditProcedure": "To allow access to HTTPS you can use a condition that checks for the key `\"aws:SecureTransport: true\"`. This means that the request is sent through HTTPS but that HTTP can still be used. So to make sure you do not allow HTTP access confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key \"aws:SecureTransport\": \"false\".\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/\n2. Select the Check box next to the Bucket.\n3. Click on 'Permissions', then Click on `Bucket Policy`.\n4. Ensure that a policy is listed that matches:\n```\n'{\n \"Sid\": ,\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }'\n```\n`` and `` will be specific to your account\n\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets \n```\naws s3 ls\n```\n2. Using the list of buckets run this command on each of them:\n```\naws s3api get-bucket-policy --bucket  | grep aws:SecureTransport\n```\n3. Confirm that `aws:SecureTransport` is set to false `aws:SecureTransport:false`\n4. Confirm that the policy line has Effect set to Deny 'Effect:Deny'",
+          "RemediationProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions'. 4. Click 'Bucket Policy' 5. Add this to the existing policy filling in the required information ``` {  \"Sid\": \",  \"Effect\": \"Deny\",  \"Principal\": \"*\",  \"Action\": \"s3:*\",  \"Resource\": \"arn:aws:s3:::/*\",  \"Condition\": {  \"Bool\": {  \"aws:SecureTransport\": \"false\"  }  }  } ``` 6. Save 7. Repeat for all the buckets in your AWS account that contain sensitive data.  **From Console**   using AWS Policy Generator:  1. Repeat steps 1-4 above. 2. Click on `Policy Generator` at the bottom of the Bucket Policy Editor 3. Select Policy Type `S3 Bucket Policy` 4. Add Statements - `Effect` = Deny - `Principal` = * - `AWS Service` = Amazon S3 - `Actions` = * - `Amazon Resource Name` =  5. Generate Policy 6. Copy the text and add it to the Bucket Policy.  **From Command Line:**  1. Export the bucket policy to a json file. ``` aws s3api get-bucket-policy --bucket  --query Policy --output text > policy.json ```  2. Modify the policy.json file by adding in this statement: ``` {  \"Sid\": \",  \"Effect\": \"Deny\",  \"Principal\": \"*\",  \"Action\": \"s3:*\",  \"Resource\": \"arn:aws:s3:::/*\",  \"Condition\": {  \"Bool\": {  \"aws:SecureTransport\": \"false\"  }  }  } ``` 3. Apply this modified policy back to the S3 bucket: ``` aws s3api put-bucket-policy --bucket  --policy file://policy.json ```",
+          "AuditProcedure": "To allow access to HTTPS you can use a condition that checks for the key `\"aws:SecureTransport: true\"`. This means that the request is sent through HTTPS but that HTTP can still be used. So to make sure you do not allow HTTP access confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key \"aws:SecureTransport\": \"false\".  **From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions', then Click on `Bucket Policy`. 4. Ensure that a policy is listed that matches: ``` '{  \"Sid\": ,  \"Effect\": \"Deny\",  \"Principal\": \"*\",  \"Action\": \"s3:*\",  \"Resource\": \"arn:aws:s3:::/*\",  \"Condition\": {  \"Bool\": {  \"aws:SecureTransport\": \"false\"  }' ``` `` and `` will be specific to your account  5. Repeat for all the buckets in your AWS account.  **From Command Line:**  1. List all of the S3 Buckets  ``` aws s3 ls ``` 2. Using the list of buckets run this command on each of them: ``` aws s3api get-bucket-policy --bucket  | grep aws:SecureTransport ``` 3. Confirm that `aws:SecureTransport` is set to false `aws:SecureTransport:false` 4. Confirm that the policy line has Effect set to Deny 'Effect:Deny'",
           "AdditionalInformation": "",
           "References": "https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/:https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/:https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-policy.html"
         }
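A complete form of the deny-HTTP statement above, wrapped in the `Version`/`Statement` envelope that a bucket policy requires, might look like the sketch below; the `Sid` value and the bucket name `my-example-bucket` are hypothetical placeholders:

```
# Write a complete deny-insecure-transport policy for a hypothetical bucket
# "my-example-bucket" (merge the statement into any existing bucket policy).
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyHTTPRequests",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-example-bucket",
        "arn:aws:s3:::my-example-bucket/*"
      ],
      "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }
  ]
}
EOF

# Apply the policy to the bucket.
aws s3api put-bucket-policy --bucket my-example-bucket --policy file://policy.json
```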
@@ -503,8 +503,8 @@
           "Description": "Once MFA Delete is enabled on your sensitive and classified S3 bucket it requires the user to have two forms of authentication.",
           "RationaleStatement": "Adding MFA delete to an S3 bucket, requires additional authentication when you change the version state of your bucket or you delete and object version adding another layer of security in the event your security credentials are compromised or unauthorized access is granted.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the steps below to enable MFA delete on an S3 bucket.\n\nNote:\n-You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API.\n-You must use your 'root' account to enable MFA Delete on S3 buckets.\n\n**From Command line:**\n\n1. Run the s3api put-bucket-versioning command\n\n```\naws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa “arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode”\n```",
-          "AuditProcedure": "Perform the steps below to confirm MFA delete is configured on an S3 Bucket\n\n**From Console:**\n\n1. Login to the S3 console at `https://console.aws.amazon.com/s3/`\n\n2. Click the `Check` box next to the Bucket name you want to confirm\n\n3. In the window under `Properties`\n\n4. Confirm that Versioning is `Enabled`\n\n5. Confirm that MFA Delete is `Enabled`\n\n**From Command Line:**\n\n1. Run the `get-bucket-versioning`\n```\naws s3api get-bucket-versioning --bucket my-bucket\n```\n\nOutput example:\n```\n \n Enabled\n Enabled \n\n```\n\nIf the Console or the CLI output does not show Versioning and MFA Delete `enabled` refer to the remediation below.",
+          "RemediationProcedure": "Perform the steps below to enable MFA delete on an S3 bucket.  Note: -You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API. -You must use your 'root' account to enable MFA Delete on S3 buckets.  **From Command line:**  1. Run the s3api put-bucket-versioning command  ``` aws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa “arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode” ```",
+          "AuditProcedure": "Perform the steps below to confirm MFA delete is configured on an S3 Bucket  **From Console:**  1. Login to the S3 console at `https://console.aws.amazon.com/s3/`  2. Click the `Check` box next to the Bucket name you want to confirm  3. In the window under `Properties`  4. Confirm that Versioning is `Enabled`  5. Confirm that MFA Delete is `Enabled`  **From Command Line:**  1. Run the `get-bucket-versioning` ``` aws s3api get-bucket-versioning --bucket my-bucket ```  Output example: ```    Enabled  Enabled   ```  If the Console or the CLI output does not show Versioning and MFA Delete `enabled` refer to the remediation below.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete:https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html:https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-broken.html"
         }
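The audit steps above can be scripted along these lines; this is a sketch that only assumes `s3api list-buckets` and `get-bucket-versioning` permissions, and buckets without MFA Delete simply return no `MFADelete` value:

```
# List every bucket and report its versioning / MFA Delete status.
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  echo "== ${bucket}"
  aws s3api get-bucket-versioning --bucket "${bucket}" \
    --query '{Versioning: Status, MFADelete: MFADelete}'
done
```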
@@ -522,10 +522,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Manual",
           "Description": "Amazon S3 buckets can contain sensitive data, that for security purposes should be discovered, monitored, classified and protected. Macie along with other 3rd party tools can automatically provide an inventory of Amazon S3 buckets.",
-          "RationaleStatement": "Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information.\n\nAmazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.",
+          "RationaleStatement": "Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information.  Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.",
           "ImpactStatement": "There is a cost associated with using Amazon Macie. There is also typically a cost associated with 3rd Party tools that perform similar processes and protection.",
-          "RemediationProcedure": "Perform the steps below to enable and configure Amazon Macie\n\n**From Console:**\n\n1. Log on to the Macie console at `https://console.aws.amazon.com/macie/`\n\n2. Click `Get started`.\n\n3. Click `Enable Macie`.\n\nSetup a repository for sensitive data discovery results\n\n1. In the Left pane, under Settings, click `Discovery results`.\n\n2. Make sure `Create bucket` is selected.\n\n3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. In addition, the name must start with a lowercase letter or a number.\n\n4. Click on `Advanced`.\n\n5. Block all public access, make sure `Yes` is selected.\n\n6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket.\n\n7. Click on `Save`\n\nCreate a job to discover sensitive data\n\n1. In the left pane, click `S3 buckets`. Macie displays a list of all the S3 buckets for your account.\n\n2. Select the `check box` for each bucket that you want Macie to analyze as part of the job\n\n3. Click `Create job`.\n\n3. Click `Quick create`.\n\n4. For the Name and description step, enter a name and, optionally, a description of the job.\n\n5. Then click `Next`.\n\n6. For the Review and create step, click `Submit`.\n\nReview your findings\n\n1. In the left pane, click `Findings`.\n\n2. To view the details of a specific finding, choose any field other than the check box for the finding.\n\nIf you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool.",
-          "AuditProcedure": "Perform the following steps to determine if Macie is running:\n\n**From Console:**\n\n 1. Login to the Macie console at https://console.aws.amazon.com/macie/\n\n 2. In the left hand pane click on By job under findings.\n\n 3. Confirm that you have a Job setup for your S3 Buckets\n\nWhen you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below.\n\nIf you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation.",
+          "RemediationProcedure": "Perform the steps below to enable and configure Amazon Macie  **From Console:**  1. Log on to the Macie console at `https://console.aws.amazon.com/macie/`  2. Click `Get started`.  3. Click `Enable Macie`.  Setup a repository for sensitive data discovery results  1. In the Left pane, under Settings, click `Discovery results`.  2. Make sure `Create bucket` is selected.  3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. In addition, the name must start with a lowercase letter or a number.  4. Click on `Advanced`.  5. Block all public access, make sure `Yes` is selected.  6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket.  7. Click on `Save`  Create a job to discover sensitive data  1. In the left pane, click `S3 buckets`. Macie displays a list of all the S3 buckets for your account.  2. Select the `check box` for each bucket that you want Macie to analyze as part of the job  3. Click `Create job`.  3. Click `Quick create`.  4. For the Name and description step, enter a name and, optionally, a description of the job.  5. Then click `Next`.  6. For the Review and create step, click `Submit`.  Review your findings  1. In the left pane, click `Findings`.  2. To view the details of a specific finding, choose any field other than the check box for the finding.  If you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool.",
+          "AuditProcedure": "Perform the following steps to determine if Macie is running:  **From Console:**   1. Login to the Macie console at https://console.aws.amazon.com/macie/   2. In the left hand pane click on By job under findings.   3. Confirm that you have a Job setup for your S3 Buckets  When you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below.  If you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation.",
           "AdditionalInformation": "",
           "References": "https://aws.amazon.com/macie/getting-started/:https://docs.aws.amazon.com/workspaces/latest/adminguide/data-protection.html:https://docs.aws.amazon.com/macie/latest/user/data-classification.html"
         }
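Assuming the `macie2` namespace is available in your AWS CLI version, a minimal command-line spot check might look like the sketch below; the query field names follow the Macie2 API and should be verified against your CLI version:

```
# Confirm Macie is enabled for the account...
aws macie2 get-macie-session --query 'status'

# ...and that at least one sensitive-data discovery (classification) job exists.
aws macie2 list-classification-jobs \
  --query 'items[].{JobId: jobId, Name: name, Status: jobStatus}'
```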
@@ -544,10 +544,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "Amazon S3 provides `Block public access (bucket settings)` and `Block public access (account settings)` to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principal with sufficient S3 permissions can enable public access at the bucket and/or object level. While enabled, `Block public access (bucket settings)` prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, `Block public access (account settings)` prevents all buckets, and contained objects, from becoming publicly accessible across the entire account.",
-          "RationaleStatement": "Amazon S3 `Block public access (bucket settings)` prevents the accidental or malicious public exposure of data contained within the respective bucket(s). \n\nAmazon S3 `Block public access (account settings)` prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account.\n\nWhether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case.",
+          "RationaleStatement": "Amazon S3 `Block public access (bucket settings)` prevents the accidental or malicious public exposure of data contained within the respective bucket(s).   Amazon S3 `Block public access (account settings)` prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account.  Whether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case.",
           "ImpactStatement": "When you apply Block Public Access settings to an account, the settings apply to all AWS Regions globally. The settings might not take effect in all Regions immediately or simultaneously, but they eventually propagate to all Regions.",
-          "RemediationProcedure": "**If utilizing Block Public Access (bucket settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select the Check box next to the Bucket.\n3. Click on 'Edit public access settings'.\n4. Click 'Block all public access'\n5. Repeat for all the buckets in your AWS account that contain sensitive data.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets\n```\naws s3 ls\n```\n2. Set the Block Public Access to true on that bucket\n```\naws s3api put-public-access-block --bucket  --public-access-block-configuration \"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true\"\n```\n\n**If utilizing Block Public Access (account settings)**\n\n**From Console:**\n\nIf the output reads `true` for the separate configuration settings then it is set on the account.\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Choose `Block Public Access (account settings)`\n3. Choose `Edit` to change the block public access settings for all the buckets in your AWS account\n4. Choose the settings you want to change, and then choose `Save`. For details about each setting, pause on the `i` icons.\n5. When you're asked for confirmation, enter `confirm`. Then Click `Confirm` to save your changes.\n\n**From Command Line:**\n\nTo set Block Public access settings for this account, run the following command:\n```\naws s3control put-public-access-block\n--public-access-block-configuration BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true\n--account-id \n```",
-          "AuditProcedure": "**If utilizing Block Public Access (bucket settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select the Check box next to the Bucket.\n3. Click on 'Edit public access settings'.\n4. Ensure that block public access settings are set appropriately for this bucket\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets\n```\naws s3 ls\n```\n2. Find the public access setting on that bucket\n```\naws s3api get-public-access-block --bucket \n```\nOutput if Block Public access is enabled:\n\n```\n{\n \"PublicAccessBlockConfiguration\": {\n \"BlockPublicAcls\": true,\n \"IgnorePublicAcls\": true,\n \"BlockPublicPolicy\": true,\n \"RestrictPublicBuckets\": true\n }\n}\n```\n\nIf the output reads `false` for the separate configuration settings then proceed to the remediation.\n\n**If utilizing Block Public Access (account settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Choose `Block public access (account settings)`\n3. Ensure that block public access settings are set appropriately for your AWS account.\n\n**From Command Line:**\n\nTo check Public access settings for this account status, run the following command,\n`aws s3control get-public-access-block --account-id  --region `\n\nOutput if Block Public access is enabled:\n\n```\n{\n \"PublicAccessBlockConfiguration\": {\n \"IgnorePublicAcls\": true, \n \"BlockPublicPolicy\": true, \n \"BlockPublicAcls\": true, \n \"RestrictPublicBuckets\": true\n }\n}\n```\n\nIf the output reads `false` for the separate configuration settings then proceed to the remediation.",
+          "RemediationProcedure": "**If utilizing Block Public Access (bucket settings)**  **From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Click 'Block all public access' 5. Repeat for all the buckets in your AWS account that contain sensitive data.  **From Command Line:**  1. List all of the S3 Buckets ``` aws s3 ls ``` 2. Set the Block Public Access to true on that bucket ``` aws s3api put-public-access-block --bucket  --public-access-block-configuration \"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true\" ```  **If utilizing Block Public Access (account settings)**  **From Console:**  If the output reads `true` for the separate configuration settings then it is set on the account.  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Choose `Block Public Access (account settings)` 3. Choose `Edit` to change the block public access settings for all the buckets in your AWS account 4. Choose the settings you want to change, and then choose `Save`. For details about each setting, pause on the `i` icons. 5. When you're asked for confirmation, enter `confirm`. Then Click `Confirm` to save your changes.  **From Command Line:**  To set Block Public access settings for this account, run the following command: ``` aws s3control put-public-access-block --public-access-block-configuration BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true --account-id  ```",
+          "AuditProcedure": "**If utilizing Block Public Access (bucket settings)**  **From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Ensure that block public access settings are set appropriately for this bucket 5. Repeat for all the buckets in your AWS account.  **From Command Line:**  1. List all of the S3 Buckets ``` aws s3 ls ``` 2. Find the public access setting on that bucket ``` aws s3api get-public-access-block --bucket  ``` Output if Block Public access is enabled:  ``` {  \"PublicAccessBlockConfiguration\": {  \"BlockPublicAcls\": true,  \"IgnorePublicAcls\": true,  \"BlockPublicPolicy\": true,  \"RestrictPublicBuckets\": true  } } ```  If the output reads `false` for the separate configuration settings then proceed to the remediation.  **If utilizing Block Public Access (account settings)**  **From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Choose `Block public access (account settings)` 3. Ensure that block public access settings are set appropriately for your AWS account.  **From Command Line:**  To check Public access settings for this account status, run the following command, `aws s3control get-public-access-block --account-id  --region `  Output if Block Public access is enabled:  ``` {  \"PublicAccessBlockConfiguration\": {  \"IgnorePublicAcls\": true,   \"BlockPublicPolicy\": true,   \"BlockPublicAcls\": true,   \"RestrictPublicBuckets\": true  } } ```  If the output reads `false` for the separate configuration settings then proceed to the remediation.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-account.html"
         }
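A scripted version of the audit above might look like this sketch; `111122223333` is a hypothetical account ID:

```
# Account-level setting (replace the placeholder account ID)...
aws s3control get-public-access-block --account-id 111122223333

# ...then each bucket-level setting; buckets with no configuration report an error.
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  echo "== ${bucket}"
  aws s3api get-public-access-block --bucket "${bucket}" \
    || echo "no bucket-level Block Public Access configuration"
done
```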
@@ -567,8 +567,8 @@
           "Description": "Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported.",
           "RationaleStatement": "Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.",
           "ImpactStatement": "Losing access or removing the KMS key in use by the EBS volumes will result in no longer being able to access the volumes.",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ \n2. Under `Account attributes`, click `EBS encryption`.\n3. Click `Manage`.\n4. Click the `Enable` checkbox.\n5. Click `Update EBS encryption`\n6. Repeat for every region requiring the change.\n\n**Note:** EBS volume encryption is configured per region.\n\n**From Command Line:**\n\n1. Run \n```\naws --region  ec2 enable-ebs-encryption-by-default\n```\n2. Verify that `\"EbsEncryptionByDefault\": true` is displayed.\n3. Repeat every region requiring the change.\n\n**Note:** EBS volume encryption is configured per region.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ \n2. Under `Account attributes`, click `EBS encryption`.\n3. Verify `Always encrypt new EBS volumes` displays `Enabled`.\n4. Review every region in-use.\n\n**Note:** EBS volume encryption is configured per region.\n\n**From Command Line:**\n\n1. Run \n```\naws --region  ec2 get-ebs-encryption-by-default\n```\n2. Verify that `\"EbsEncryptionByDefault\": true` is displayed.\n3. Review every region in-use.\n\n**Note:** EBS volume encryption is configured per region.",
+          "RemediationProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/  2. Under `Account attributes`, click `EBS encryption`. 3. Click `Manage`. 4. Click the `Enable` checkbox. 5. Click `Update EBS encryption` 6. Repeat for every region requiring the change.  **Note:** EBS volume encryption is configured per region.  **From Command Line:**  1. Run  ``` aws --region  ec2 enable-ebs-encryption-by-default ``` 2. Verify that `\"EbsEncryptionByDefault\": true` is displayed. 3. Repeat every region requiring the change.  **Note:** EBS volume encryption is configured per region.",
+          "AuditProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/  2. Under `Account attributes`, click `EBS encryption`. 3. Verify `Always encrypt new EBS volumes` displays `Enabled`. 4. Review every region in-use.  **Note:** EBS volume encryption is configured per region.  **From Command Line:**  1. Run  ``` aws --region  ec2 get-ebs-encryption-by-default ``` 2. Verify that `\"EbsEncryptionByDefault\": true` is displayed. 3. Review every region in-use.  **Note:** EBS volume encryption is configured per region.",
           "AdditionalInformation": "Default EBS volume encryption only applies to newly created EBS volumes. Existing EBS volumes are **not** converted automatically.",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html:https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/"
         }
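Because the setting is per region, the remediation above can be looped over all regions; a sketch, assuming permission to call `enable-ebs-encryption-by-default` in each region:

```
# Enable EBS encryption by default in every region visible to the account,
# then print the resulting flag per region.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  aws ec2 enable-ebs-encryption-by-default --region "${region}" > /dev/null
  status=$(aws ec2 get-ebs-encryption-by-default --region "${region}" \
    --query 'EbsEncryptionByDefault' --output text)
  echo "${region}: EbsEncryptionByDefault=${status}"
done
```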
@@ -588,8 +588,8 @@
           "Description": "Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance.",
           "RationaleStatement": "Databases are likely to hold sensitive and critical data, it is highly recommended to implement encryption in order to protect your data from unauthorized access or disclosure. With RDS encryption enabled, the data stored on the instance's underlying storage, the automated backups, read replicas, and snapshots, are all encrypted.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. In the left navigation panel, click on `Databases`\n3. Select the Database instance that needs to be encrypted.\n4. Click on `Actions` button placed at the top right and select `Take Snapshot`.\n5. On the Take Snapshot page, enter a database name of which you want to take a snapshot in the `Snapshot Name` field and click on `Take Snapshot`.\n6. Select the newly created snapshot and click on the `Action` button placed at the top right and select `Copy snapshot` from the Action menu.\n7. On the Make Copy of DB Snapshot page, perform the following:\n\n- In the New DB Snapshot Identifier field, Enter a name for the `new snapshot`.\n- Check `Copy Tags`, New snapshot must have the same tags as the source snapshot.\n- Select `Yes` from the `Enable Encryption` dropdown list to enable encryption, You can choose to use the AWS default encryption key or custom key from Master Key dropdown list.\n\n8. Click `Copy Snapshot` to create an encrypted copy of the selected instance snapshot.\n9. Select the new Snapshot Encrypted Copy and click on the `Action` button placed at the top right and select `Restore Snapshot` button from the Action menu, This will restore the encrypted snapshot to a new database instance.\n10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field.\n11. Review the instance configuration details and click `Restore DB Instance`.\n12. As the new instance provisioning process is completed can update application configuration to refer to the endpoint of the new Encrypted database instance Once the database endpoint is changed at the application level, can remove the unencrypted instance.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names available in the selected AWS region, The command output should return the database instance identifier.\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. Run `create-db-snapshot` command to create a snapshot for the selected database instance, The command output will return the `new snapshot` with name DB Snapshot Name.\n```\naws rds create-db-snapshot --region  --db-snapshot-identifier  --db-instance-identifier \n```\n3. Now run `list-aliases` command to list the KMS keys aliases available in a specified region, The command output should return each `key alias currently available`. For our RDS encryption activation process, locate the ID of the AWS default KMS key.\n```\naws kms list-aliases --region \n```\n4. Run `copy-db-snapshot` command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot, The command output will return the `encrypted instance snapshot configuration`.\n```\naws rds copy-db-snapshot --region  --source-db-snapshot-identifier  --target-db-snapshot-identifier  --copy-tags --kms-key-id \n```\n5. Run `restore-db-instance-from-db-snapshot` command to restore the encrypted snapshot created at the previous step to a new database instance, If successful, the command output should return the new encrypted database instance configuration.\n```\naws rds restore-db-instance-from-db-snapshot --region  --db-instance-identifier  --db-snapshot-identifier \n```\n6. 
Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region, Output will return database instance identifier name Select encrypted database name that we just created DB-Name-Encrypted.\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n7. Run again `describe-db-instances` command using the RDS instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True`.\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].StorageEncrypted'\n```",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/\n2. In the navigation pane, under RDS dashboard, click `Databases`.\n3. Select the RDS Instance that you want to examine\n4. Click `Instance Name` to see details, then click on `Configuration` tab.\n5. Under Configuration Details section, In Storage pane search for the `Encryption Enabled` Status.\n6. If the current status is set to `Disabled`, Encryption is not enabled for the selected RDS Instance database instance.\n7. Repeat steps 3 to 7 to verify encryption status of other RDS Instance in same region.\n8. Change region from the top of the navigation bar and repeat audit for other regions.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS Instance database names, available in the selected AWS region, Output will return each Instance database identifier-name.\n ```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. Run again `describe-db-instances` command using the RDS Instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True` Or `False`.\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].StorageEncrypted'\n```\n3. If the StorageEncrypted parameter value is `False`, Encryption is not enabled for the selected RDS database instance.\n4. Repeat steps 1 to 3 for auditing each RDS Instance and change Region to verify for other regions",
+          "RemediationProcedure": "**From Console:**  1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on `Databases` 3. Select the Database instance that needs to be encrypted. 4. Click on `Actions` button placed at the top right and select `Take Snapshot`. 5. On the Take Snapshot page, enter a database name of which you want to take a snapshot in the `Snapshot Name` field and click on `Take Snapshot`. 6. Select the newly created snapshot and click on the `Action` button placed at the top right and select `Copy snapshot` from the Action menu. 7. On the Make Copy of DB Snapshot page, perform the following:  - In the New DB Snapshot Identifier field, Enter a name for the `new snapshot`. - Check `Copy Tags`, New snapshot must have the same tags as the source snapshot. - Select `Yes` from the `Enable Encryption` dropdown list to enable encryption, You can choose to use the AWS default encryption key or custom key from Master Key dropdown list.  8. Click `Copy Snapshot` to create an encrypted copy of the selected instance snapshot. 9. Select the new Snapshot Encrypted Copy and click on the `Action` button placed at the top right and select `Restore Snapshot` button from the Action menu, This will restore the encrypted snapshot to a new database instance. 10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field. 11. Review the instance configuration details and click `Restore DB Instance`. 12. As the new instance provisioning process is completed can update application configuration to refer to the endpoint of the new Encrypted database instance Once the database endpoint is changed at the application level, can remove the unencrypted instance.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database names available in the selected AWS region, The command output should return the database instance identifier. ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. Run `create-db-snapshot` command to create a snapshot for the selected database instance, The command output will return the `new snapshot` with name DB Snapshot Name. ``` aws rds create-db-snapshot --region  --db-snapshot-identifier  --db-instance-identifier  ``` 3. Now run `list-aliases` command to list the KMS keys aliases available in a specified region, The command output should return each `key alias currently available`. For our RDS encryption activation process, locate the ID of the AWS default KMS key. ``` aws kms list-aliases --region  ``` 4. Run `copy-db-snapshot` command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot, The command output will return the `encrypted instance snapshot configuration`. ``` aws rds copy-db-snapshot --region  --source-db-snapshot-identifier  --target-db-snapshot-identifier  --copy-tags --kms-key-id  ``` 5. Run `restore-db-instance-from-db-snapshot` command to restore the encrypted snapshot created at the previous step to a new database instance, If successful, the command output should return the new encrypted database instance configuration. ``` aws rds restore-db-instance-from-db-snapshot --region  --db-instance-identifier  --db-snapshot-identifier  ``` 6. 
Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region, Output will return database instance identifier name Select encrypted database name that we just created DB-Name-Encrypted. ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 7. Run again `describe-db-instances` command using the RDS instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True`. ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].StorageEncrypted' ```",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/ 2. In the navigation pane, under RDS dashboard, click `Databases`. 3. Select the RDS Instance that you want to examine 4. Click `Instance Name` to see details, then click on `Configuration` tab. 5. Under Configuration Details section, In Storage pane search for the `Encryption Enabled` Status. 6. If the current status is set to `Disabled`, Encryption is not enabled for the selected RDS Instance database instance. 7. Repeat steps 3 to 7 to verify encryption status of other RDS Instance in same region. 8. Change region from the top of the navigation bar and repeat audit for other regions.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS Instance database names, available in the selected AWS region, Output will return each Instance database identifier-name.  ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. Run again `describe-db-instances` command using the RDS Instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True` Or `False`. ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].StorageEncrypted' ``` 3. If the StorageEncrypted parameter value is `False`, Encryption is not enabled for the selected RDS database instance. 4. Repeat steps 1 to 3 for auditing each RDS Instance and change Region to verify for other regions",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html:https://aws.amazon.com/blogs/database/selecting-the-right-encryption-options-for-amazon-rds-and-amazon-aurora-database-engines/#:~:text=With%20RDS%2Dencrypted%20resources%2C%20data,transparent%20to%20your%20database%20engine.:https://aws.amazon.com/rds/features/security/"
         }
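The per-instance audit above can be collapsed into a single query per region; a sketch, with `us-east-1` as a placeholder region:

```
# List any RDS instances in one region that are not encrypted at rest.
aws rds describe-db-instances --region us-east-1 \
  --query 'DBInstances[?StorageEncrypted==`false`].DBInstanceIdentifier'
```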
@@ -609,8 +609,8 @@
           "Description": "Ensure that RDS database instances have the Auto Minor Version Upgrade flag enabled in order to receive automatically minor engine upgrades during the specified maintenance window. So, RDS instances can get the new features, bug fixes, and security patches for their database engines.",
           "RationaleStatement": "AWS RDS will occasionally deprecate minor engine versions and provide new ones for an upgrade. When the last version number within the release is replaced, the version changed is considered minor. With Auto Minor Version Upgrade feature enabled, the version upgrades will occur automatically during the specified maintenance window so your RDS instances can get the new features, bug fixes, and security patches for their database engines.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. In the left navigation panel, click on `Databases`.\n3. Select the RDS instance that wants to update.\n4. Click on the `Modify` button placed on the top right side.\n5. On the `Modify DB Instance: ` page, In the `Maintenance` section, select `Auto minor version upgrade` click on the `Yes` radio button.\n6. At the bottom of the page click on `Continue`, check to Apply Immediately to apply the changes immediately, or select `Apply during the next scheduled maintenance window` to avoid any downtime.\n7. Review the changes and click on `Modify DB Instance`. The instance status should change from available to modifying and back to available. Once the feature is enabled, the `Auto Minor Version Upgrade` status should change to `Yes`.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database instance names, available in the selected AWS region:\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. The command output should return each database instance identifier.\n3. Run the `modify-db-instance` command to modify the selected RDS instance configuration this command will apply the changes immediately, Remove `--apply-immediately` to apply changes during the next scheduled maintenance window and avoid any downtime:\n```\naws rds modify-db-instance --region  --db-instance-identifier  --auto-minor-version-upgrade --apply-immediately\n```\n4. The command output should reveal the new configuration metadata for the RDS instance and check `AutoMinorVersionUpgrade` parameter value.\n5. Run `describe-db-instances` command to check if the Auto Minor Version Upgrade feature has been successfully enable:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].AutoMinorVersionUpgrade'\n```\n6. The command output should return the feature current status set to `true`, the feature is `enabled` and the minor engine upgrades will be applied to the selected RDS instance.",
-          "AuditProcedure": "**From Console:**\n\n1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. In the left navigation panel, click on `Databases`.\n3. Select the RDS instance that wants to examine.\n4. Click on the `Maintenance and backups` panel.\n5. Under the `Maintenance` section, search for the Auto Minor Version Upgrade status.\n- If the current status is set to `Disabled`, means the feature is not set and the minor engine upgrades released will not be applied to the selected RDS instance\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region:\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. The command output should return each database instance identifier.\n3. Run again `describe-db-instances` command using the RDS instance identifier returned earlier to determine the Auto Minor Version Upgrade status for the selected instance:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].AutoMinorVersionUpgrade'\n```\n4. The command output should return the feature current status. If the current status is set to `true`, the feature is enabled and the minor engine upgrades will be applied to the selected RDS instance.",
+          "RemediationProcedure": "**From Console:**  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on `Databases`. 3. Select the RDS instance that wants to update. 4. Click on the `Modify` button placed on the top right side. 5. On the `Modify DB Instance: ` page, In the `Maintenance` section, select `Auto minor version upgrade` click on the `Yes` radio button. 6. At the bottom of the page click on `Continue`, check to Apply Immediately to apply the changes immediately, or select `Apply during the next scheduled maintenance window` to avoid any downtime. 7. Review the changes and click on `Modify DB Instance`. The instance status should change from available to modifying and back to available. Once the feature is enabled, the `Auto Minor Version Upgrade` status should change to `Yes`.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database instance names, available in the selected AWS region: ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. The command output should return each database instance identifier. 3. Run the `modify-db-instance` command to modify the selected RDS instance configuration this command will apply the changes immediately, Remove `--apply-immediately` to apply changes during the next scheduled maintenance window and avoid any downtime: ``` aws rds modify-db-instance --region  --db-instance-identifier  --auto-minor-version-upgrade --apply-immediately ``` 4. The command output should reveal the new configuration metadata for the RDS instance and check `AutoMinorVersionUpgrade` parameter value. 5. Run `describe-db-instances` command to check if the Auto Minor Version Upgrade feature has been successfully enable: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].AutoMinorVersionUpgrade' ``` 6. The command output should return the feature current status set to `true`, the feature is `enabled` and the minor engine upgrades will be applied to the selected RDS instance.",
+          "AuditProcedure": "**From Console:**  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on `Databases`. 3. Select the RDS instance that wants to examine. 4. Click on the `Maintenance and backups` panel. 5. Under the `Maintenance` section, search for the Auto Minor Version Upgrade status. - If the current status is set to `Disabled`, means the feature is not set and the minor engine upgrades released will not be applied to the selected RDS instance  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region: ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. The command output should return each database instance identifier. 3. Run again `describe-db-instances` command using the RDS instance identifier returned earlier to determine the Auto Minor Version Upgrade status for the selected instance: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].AutoMinorVersionUpgrade' ``` 4. The command output should return the feature current status. If the current status is set to `true`, the feature is enabled and the minor engine upgrades will be applied to the selected RDS instance.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_RDS_Managing.html:https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html:https://aws.amazon.com/rds/faqs/"
         }
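A sketch of the audit and remediation above in query form; `us-east-1` and `my-example-db` are hypothetical placeholders:

```
# List any RDS instances in one region with Auto Minor Version Upgrade disabled...
aws rds describe-db-instances --region us-east-1 \
  --query 'DBInstances[?AutoMinorVersionUpgrade==`false`].DBInstanceIdentifier'

# ...then enable the flag on a selected instance.
aws rds modify-db-instance --region us-east-1 \
  --db-instance-identifier my-example-db \
  --auto-minor-version-upgrade \
  --apply-immediately
```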
@@ -630,8 +630,8 @@
           "Description": "Ensure and verify that RDS database instances provisioned in your AWS account do restrict unauthorized access in order to minimize security risks. To restrict access to any publicly accessible RDS database instance, you must disable the database Publicly Accessible flag and update the VPC security group associated with the instance.",
           "RationaleStatement": "Ensure that no public-facing RDS database instances are provisioned in your AWS account and restrict unauthorized access in order to minimize security risks. When the RDS instance allows unrestricted access (0.0.0.0/0), everyone and everything on the Internet can establish a connection to your database and this can increase the opportunity for malicious activities such as brute force attacks, PostgreSQL injections, or DoS/DDoS attacks.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. Under the navigation panel, On RDS Dashboard, click `Databases`.\n3. Select the RDS instance that you want to update.\n4. Click `Modify` from the dashboard top menu.\n5. On the Modify DB Instance panel, under the `Connectivity` section, click on `Additional connectivity configuration` and update the value for `Publicly Accessible` to Not publicly accessible to restrict public access. Follow the below steps to update subnet configurations:\n- Select the `Connectivity and security` tab, and click on the VPC attribute value inside the `Networking` section.\n- Select the `Details` tab from the VPC dashboard bottom panel and click on Route table configuration attribute value.\n- On the Route table details page, select the Routes tab from the dashboard bottom panel and click on `Edit routes`.\n- On the Edit routes page, update the Destination of Target which is set to `igw-xxxxx` and click on `Save` routes.\n6. On the Modify DB Instance panel Click on `Continue` and In the Scheduling of modifications section, perform one of the following actions based on your requirements:\n- Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window.\n- Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application.\n7. Repeat steps 3 to 6 for each RDS instance available in the current region.\n8. Change the AWS region from the navigation bar to repeat the process for other regions.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names identifiers, available in the selected AWS region:\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. The command output should return each database instance identifier.\n3. Run `modify-db-instance` command to modify the selected RDS instance configuration. Then use the following command to disable the `Publicly Accessible` flag for the selected RDS instances. This command use the apply-immediately flag. If you want `to avoid any downtime --no-apply-immediately flag can be used`:\n```\naws rds modify-db-instance --region  --db-instance-identifier  --no-publicly-accessible --apply-immediately\n```\n4. The command output should reveal the `PubliclyAccessible` configuration under pending values and should get applied at the specified time.\n5. Updating the Internet Gateway Destination via AWS CLI is not currently supported To update information about Internet Gateway use the AWS Console Procedure.\n6. Repeat steps 1 to 5 for each RDS instance provisioned in the current region.\n7. Change the AWS region by using the --region filter to repeat the process for other regions.",
-          "AuditProcedure": "**From Console:**\n\n1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. Under the navigation panel, On RDS Dashboard, click `Databases`.\n3. Select the RDS instance that you want to examine.\n4. Click `Instance Name` from the dashboard, Under `Connectivity and Security.\n5. On the `Security`, check if the Publicly Accessible flag status is set to `Yes`, follow the below-mentioned steps to check database subnet access.\n- In the `networking` section, click the subnet link available under `Subnets`\n- The link will redirect you to the VPC Subnets page.\n- Select the subnet listed on the page and click the `Route Table` tab from the dashboard bottom panel. If the route table contains any entries with the destination `CIDR block set to 0.0.0.0/0` and with an `Internet Gateway` attached.\n- The selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and can be accessible from the Internet.\n6. Repeat steps no. 4 and 5 to determine the type (public or private) and subnet for other RDS database instances provisioned in the current region.\n8. Change the AWS region from the navigation bar and repeat the audit process for other regions.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region:\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. The command output should return each database instance `identifier`.\n3. Run again `describe-db-instances` command using the `PubliclyAccessible` parameter as query filter to reveal the database instance Publicly Accessible flag status:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].PubliclyAccessible'\n```\n4. Check for the Publicly Accessible parameter status, If the Publicly Accessible flag is set to `Yes`. Then selected RDS database instance is publicly accessible and insecure, follow the below-mentioned steps to check database subnet access\n5. Run again `describe-db-instances` command using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC subnet(s) associated with the selected instance:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].DBSubnetGroup.Subnets[]'\n```\n- The command output should list the subnets available in the selected database subnet group.\n6. Run `describe-route-tables` command using the ID of the subnet returned at the previous step to describe the routes of the VPC route table associated with the selected subnet:\n```\naws ec2 describe-route-tables --region  --filters \"Name=association.subnet-id,Values=\" --query 'RouteTables[*].Routes[]'\n```\n- If the command returns the route table associated with database instance subnet ID. Check the `GatewayId` and `DestinationCidrBlock` attributes values returned in the output. If the route table contains any entries with the `GatewayId` value set to `igw-xxxxxxxx` and the `DestinationCidrBlock` value set to `0.0.0.0/0`, the selected RDS database instance was provisioned inside a public subnet.\n- Or\n- If the command returns empty results, the route table is implicitly associated with subnet, therefore the audit process continues with the next step\n7. 
Run again `describe-db-instances` command using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC ID associated with the selected instance:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].DBSubnetGroup.VpcId'\n```\n- The command output should show the VPC ID in the selected database subnet group\n8. Now run `describe-route-tables` command using the ID of the VPC returned at the previous step to describe the routes of the VPC main route table implicitly associated with the selected subnet:\n```\naws ec2 describe-route-tables --region  --filters \"Name=vpc-id,Values=\" \"Name=association.main,Values=true\" --query 'RouteTables[*].Routes[]'\n```\n- The command output returns the VPC main route table implicitly associated with database instance subnet ID. Check the `GatewayId` and `DestinationCidrBlock` attributes values returned in the output. If the route table contains any entries with the `GatewayId` value set to `igw-xxxxxxxx` and the `DestinationCidrBlock` value set to `0.0.0.0/0`, the selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and does not adhere to AWS security best practices.",
+          "RemediationProcedure": "**From Console:**  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. Under the navigation panel, On RDS Dashboard, click `Databases`. 3. Select the RDS instance that you want to update. 4. Click `Modify` from the dashboard top menu. 5. On the Modify DB Instance panel, under the `Connectivity` section, click on `Additional connectivity configuration` and update the value for `Publicly Accessible` to Not publicly accessible to restrict public access. Follow the below steps to update subnet configurations: - Select the `Connectivity and security` tab, and click on the VPC attribute value inside the `Networking` section. - Select the `Details` tab from the VPC dashboard bottom panel and click on Route table configuration attribute value. - On the Route table details page, select the Routes tab from the dashboard bottom panel and click on `Edit routes`. - On the Edit routes page, update the Destination of Target which is set to `igw-xxxxx` and click on `Save` routes. 6. On the Modify DB Instance panel Click on `Continue` and In the Scheduling of modifications section, perform one of the following actions based on your requirements: - Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window. - Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application. 7. Repeat steps 3 to 6 for each RDS instance available in the current region. 8. Change the AWS region from the navigation bar to repeat the process for other regions.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database names identifiers, available in the selected AWS region: ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. The command output should return each database instance identifier. 3. Run `modify-db-instance` command to modify the selected RDS instance configuration. Then use the following command to disable the `Publicly Accessible` flag for the selected RDS instances. This command use the apply-immediately flag. If you want `to avoid any downtime --no-apply-immediately flag can be used`: ``` aws rds modify-db-instance --region  --db-instance-identifier  --no-publicly-accessible --apply-immediately ``` 4. The command output should reveal the `PubliclyAccessible` configuration under pending values and should get applied at the specified time. 5. Updating the Internet Gateway Destination via AWS CLI is not currently supported To update information about Internet Gateway use the AWS Console Procedure. 6. Repeat steps 1 to 5 for each RDS instance provisioned in the current region. 7. Change the AWS region by using the --region filter to repeat the process for other regions.",
+          "AuditProcedure": "**From Console:**  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. Under the navigation panel, On RDS Dashboard, click `Databases`. 3. Select the RDS instance that you want to examine. 4. Click `Instance Name` from the dashboard, Under `Connectivity and Security. 5. On the `Security`, check if the Publicly Accessible flag status is set to `Yes`, follow the below-mentioned steps to check database subnet access. - In the `networking` section, click the subnet link available under `Subnets` - The link will redirect you to the VPC Subnets page. - Select the subnet listed on the page and click the `Route Table` tab from the dashboard bottom panel. If the route table contains any entries with the destination `CIDR block set to 0.0.0.0/0` and with an `Internet Gateway` attached. - The selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and can be accessible from the Internet. 6. Repeat steps no. 4 and 5 to determine the type (public or private) and subnet for other RDS database instances provisioned in the current region. 8. Change the AWS region from the navigation bar and repeat the audit process for other regions.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region: ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. The command output should return each database instance `identifier`. 3. Run again `describe-db-instances` command using the `PubliclyAccessible` parameter as query filter to reveal the database instance Publicly Accessible flag status: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].PubliclyAccessible' ``` 4. Check for the Publicly Accessible parameter status, If the Publicly Accessible flag is set to `Yes`. Then selected RDS database instance is publicly accessible and insecure, follow the below-mentioned steps to check database subnet access 5. Run again `describe-db-instances` command using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC subnet(s) associated with the selected instance: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].DBSubnetGroup.Subnets[]' ``` - The command output should list the subnets available in the selected database subnet group. 6. Run `describe-route-tables` command using the ID of the subnet returned at the previous step to describe the routes of the VPC route table associated with the selected subnet: ``` aws ec2 describe-route-tables --region  --filters \"Name=association.subnet-id,Values=\" --query 'RouteTables[*].Routes[]' ``` - If the command returns the route table associated with database instance subnet ID. Check the `GatewayId` and `DestinationCidrBlock` attributes values returned in the output. If the route table contains any entries with the `GatewayId` value set to `igw-xxxxxxxx` and the `DestinationCidrBlock` value set to `0.0.0.0/0`, the selected RDS database instance was provisioned inside a public subnet. - Or - If the command returns empty results, the route table is implicitly associated with subnet, therefore the audit process continues with the next step 7. 
Run again `describe-db-instances` command using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC ID associated with the selected instance: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].DBSubnetGroup.VpcId' ``` - The command output should show the VPC ID in the selected database subnet group 8. Now run `describe-route-tables` command using the ID of the VPC returned at the previous step to describe the routes of the VPC main route table implicitly associated with the selected subnet: ``` aws ec2 describe-route-tables --region  --filters \"Name=vpc-id,Values=\" \"Name=association.main,Values=true\" --query 'RouteTables[*].Routes[]' ``` - The command output returns the VPC main route table implicitly associated with database instance subnet ID. Check the `GatewayId` and `DestinationCidrBlock` attributes values returned in the output. If the route table contains any entries with the `GatewayId` value set to `igw-xxxxxxxx` and the `DestinationCidrBlock` value set to `0.0.0.0/0`, the selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and does not adhere to AWS security best practices.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.html:https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html:https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html:https://aws.amazon.com/rds/faqs/"
         }
@@ -651,8 +651,8 @@
           "Description": "EFS data should be encrypted at rest using AWS KMS (Key Management Service).",
           "RationaleStatement": "Data should be encrypted at rest to reduce the risk of a data breach via direct access to the storage device.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**It is important to note that EFS file system data at rest encryption must be turned on when creating the file system.**\n\nIf an EFS file system has been created without data at rest encryption enabled then you must create another EFS file system with the correct configuration and transfer the data.\n\n**Steps to create an EFS file system with data encrypted at rest:**\n\n**From Console:**\n1. Login to the AWS Management Console and Navigate to `Elastic File System (EFS)` dashboard.\n2. Select `File Systems` from the left navigation panel.\n3. Click `Create File System` button from the dashboard top menu to start the file system setup process.\n4. On the `Configure file system access` configuration page, perform the following actions.\n- Choose the right VPC from the VPC dropdown list.\n- Within Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets.\n- Click `Next step` to continue.\n\n5. Perform the following on the `Configure optional settings` page.\n- Create `tags` to describe your new file system.\n- Choose `performance mode` based on your requirements.\n- Check `Enable encryption` checkbox and choose `aws/elasticfilesystem` from Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS.\n- Click `Next step` to continue.\n\n6. Review the file system configuration details on the `review and create` page and then click `Create File System` to create your new AWS EFS file system.\n7. Copy the data from the old unencrypted EFS file system onto the newly create encrypted file system.\n8. Remove the unencrypted file system as soon as your data migration to the newly create encrypted file system is completed.\n9. Change the AWS region from the navigation bar and repeat the entire process for other aws regions.\n\n**From CLI:**\n1. Run describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see Audit section to identify the right resource):\n```\naws efs describe-file-systems --region  --file-system-id \n```\n2. The command output should return the requested configuration information.\n3. To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from \"https://www.uuidgenerator.net\".\n4. Run create-file-system command using the unique token created at the previous step.\n```\naws efs create-file-system --region  --creation-token  --performance-mode generalPurpose --encrypted\n```\n5. The command output should return the new file system configuration metadata.\n6. Run create-mount-target command using the newly created EFS file system ID returned at the previous step as identifier and the ID of the Availability Zone (AZ) that will represent the mount target:\n```\naws efs create-mount-target --region  --file-system-id  --subnet-id \n```\n7. The command output should return the new mount target metadata.\n8. Now you can mount your file system from an EC2 instance.\n9. Copy the data from the old unencrypted EFS file system onto the newly create encrypted file system.\n10. 
Remove the unencrypted file system as soon as your data migration to the newly create encrypted file system is completed.\n```\naws efs delete-file-system --region  --file-system-id \n```\n11. Change the AWS region by updating the --region and repeat the entire process for other aws regions.",
-          "AuditProcedure": "**From Console:**\n1. Login to the AWS Management Console and Navigate to `Elastic File System (EFS) dashboard.\n2. Select `File Systems` from the left navigation panel.\n3. Each item on the list has a visible Encrypted field that displays data at rest encryption status.\n4. Validate that this field reads `Encrypted` for all EFS file systems in all AWS regions.\n\n**From CLI:**\n1. Run describe-file-systems command using custom query filters to list the identifiers of all AWS EFS file systems currently available within the selected region:\n```\naws efs describe-file-systems --region  --output table --query 'FileSystems[*].FileSystemId'\n```\n2. The command output should return a table with the requested file system IDs.\n3. Run describe-file-systems command using the ID of the file system that you want to examine as identifier and the necessary query filters:\n```\naws efs describe-file-systems --region  --file-system-id  --query 'FileSystems[*].Encrypted'\n```\n4. The command output should return the file system encryption status true or false. If the returned value is `false`, the selected AWS EFS file system is not encrypted and if the returned value is `true`, the selected AWS EFS file system is encrypted.",
+          "RemediationProcedure": "**It is important to note that EFS file system data at rest encryption must be turned on when creating the file system.**  If an EFS file system has been created without data at rest encryption enabled then you must create another EFS file system with the correct configuration and transfer the data.  **Steps to create an EFS file system with data encrypted at rest:**  **From Console:** 1. Login to the AWS Management Console and Navigate to `Elastic File System (EFS)` dashboard. 2. Select `File Systems` from the left navigation panel. 3. Click `Create File System` button from the dashboard top menu to start the file system setup process. 4. On the `Configure file system access` configuration page, perform the following actions. - Choose the right VPC from the VPC dropdown list. - Within Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets. - Click `Next step` to continue.  5. Perform the following on the `Configure optional settings` page. - Create `tags` to describe your new file system. - Choose `performance mode` based on your requirements. - Check `Enable encryption` checkbox and choose `aws/elasticfilesystem` from Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS. - Click `Next step` to continue.  6. Review the file system configuration details on the `review and create` page and then click `Create File System` to create your new AWS EFS file system. 7. Copy the data from the old unencrypted EFS file system onto the newly create encrypted file system. 8. Remove the unencrypted file system as soon as your data migration to the newly create encrypted file system is completed. 9. Change the AWS region from the navigation bar and repeat the entire process for other aws regions.  **From CLI:** 1. Run describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see Audit section to identify the right resource): ``` aws efs describe-file-systems --region  --file-system-id  ``` 2. The command output should return the requested configuration information. 3. To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from \"https://www.uuidgenerator.net\". 4. Run create-file-system command using the unique token created at the previous step. ``` aws efs create-file-system --region  --creation-token  --performance-mode generalPurpose --encrypted ``` 5. The command output should return the new file system configuration metadata. 6. Run create-mount-target command using the newly created EFS file system ID returned at the previous step as identifier and the ID of the Availability Zone (AZ) that will represent the mount target: ``` aws efs create-mount-target --region  --file-system-id  --subnet-id  ``` 7. The command output should return the new mount target metadata. 8. Now you can mount your file system from an EC2 instance. 9. Copy the data from the old unencrypted EFS file system onto the newly create encrypted file system. 10. Remove the unencrypted file system as soon as your data migration to the newly create encrypted file system is completed. ``` aws efs delete-file-system --region  --file-system-id  ``` 11. 
Change the AWS region by updating the --region and repeat the entire process for other aws regions.",
+          "AuditProcedure": "**From Console:** 1. Login to the AWS Management Console and Navigate to `Elastic File System (EFS) dashboard. 2. Select `File Systems` from the left navigation panel. 3. Each item on the list has a visible Encrypted field that displays data at rest encryption status. 4. Validate that this field reads `Encrypted` for all EFS file systems in all AWS regions.  **From CLI:** 1. Run describe-file-systems command using custom query filters to list the identifiers of all AWS EFS file systems currently available within the selected region: ``` aws efs describe-file-systems --region  --output table --query 'FileSystems[*].FileSystemId' ``` 2. The command output should return a table with the requested file system IDs. 3. Run describe-file-systems command using the ID of the file system that you want to examine as identifier and the necessary query filters: ``` aws efs describe-file-systems --region  --file-system-id  --query 'FileSystems[*].Encrypted' ``` 4. The command output should return the file system encryption status true or false. If the returned value is `false`, the selected AWS EFS file system is not encrypted and if the returned value is `true`, the selected AWS EFS file system is encrypted.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/efs/latest/ug/encryption-at-rest.html:https://awscli.amazonaws.com/v2/documentation/api/latest/reference/efs/index.html#efs"
         }
@@ -670,10 +670,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services (such as CloudFormation).",
-          "RationaleStatement": "The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally, \n\n- ensuring that a multi-regions trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected\n\n- ensuring that a multi-regions trail exists will ensure that `Global Service Logging` is enabled for a trail by default to capture recording of events generated on \nAWS global services\n\n- for a multi-regions trail, ensuring that management events configured for all type of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account",
-          "ImpactStatement": "S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features:\n\n1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html",
-          "RemediationProcedure": "Perform the following to enable global (Multi-region) CloudTrail logging:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on _Trails_ on the left navigation pane\n3. Click `Get Started Now` , if presented\n - Click `Add new trail` \n - Enter a trail name in the `Trail name` box\n - Set the `Apply trail to all regions` option to `Yes` \n - Specify an S3 bucket name in the `S3 bucket` box\n - Click `Create` \n4. If 1 or more trails already exist, select the target trail to enable for global logging\n5. Click the edit icon (pencil) next to `Apply trail to all regions` , Click `Yes` and Click `Save`.\n6. Click the edit icon (pencil) next to `Management Events` click `All` for setting `Read/Write Events` and Click `Save`.\n\n**From Command Line:**\n```\naws cloudtrail create-trail --name  --bucket-name  --is-multi-region-trail \naws cloudtrail update-trail --name  --is-multi-region-trail\n```\n\nNote: Creating CloudTrail via CLI without providing any overriding options configures `Management Events` to set `All` type of `Read/Writes` by default.",
-          "AuditProcedure": "Perform the following to determine if CloudTrail is enabled for all regions:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n - You will be presented with a list of trails across all regions\n3. Ensure at least one Trail has `All` specified in the `Region` column\n4. Click on a trail via the link in the _Name_ column\n5. Ensure `Logging` is set to `ON` \n6. Ensure `Apply trail to all regions` is set to `Yes`\n7. In section `Management Events` ensure `Read/Write Events` set to `ALL`\n\n**From Command Line:**\n```\n aws cloudtrail describe-trails\n```\nEnsure `IsMultiRegionTrail` is set to `true` \n```\naws cloudtrail get-trail-status --name \n```\nEnsure `IsLogging` is set to `true`\n```\naws cloudtrail get-event-selectors --trail-name \n```\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`",
+          "RationaleStatement": "The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally,   - ensuring that a multi-regions trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected  - ensuring that a multi-regions trail exists will ensure that `Global Service Logging` is enabled for a trail by default to capture recording of events generated on  AWS global services  - for a multi-regions trail, ensuring that management events configured for all type of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account",
+          "ImpactStatement": "S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features:  1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html",
+          "RemediationProcedure": "Perform the following to enable global (Multi-region) CloudTrail logging:  **From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on _Trails_ on the left navigation pane 3. Click `Get Started Now` , if presented  - Click `Add new trail`   - Enter a trail name in the `Trail name` box  - Set the `Apply trail to all regions` option to `Yes`   - Specify an S3 bucket name in the `S3 bucket` box  - Click `Create`  4. If 1 or more trails already exist, select the target trail to enable for global logging 5. Click the edit icon (pencil) next to `Apply trail to all regions` , Click `Yes` and Click `Save`. 6. Click the edit icon (pencil) next to `Management Events` click `All` for setting `Read/Write Events` and Click `Save`.  **From Command Line:** ``` aws cloudtrail create-trail --name  --bucket-name  --is-multi-region-trail  aws cloudtrail update-trail --name  --is-multi-region-trail ```  Note: Creating CloudTrail via CLI without providing any overriding options configures `Management Events` to set `All` type of `Read/Writes` by default.",
+          "AuditProcedure": "Perform the following to determine if CloudTrail is enabled for all regions:  **From Console:**  1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane  - You will be presented with a list of trails across all regions 3. Ensure at least one Trail has `All` specified in the `Region` column 4. Click on a trail via the link in the _Name_ column 5. Ensure `Logging` is set to `ON`  6. Ensure `Apply trail to all regions` is set to `Yes` 7. In section `Management Events` ensure `Read/Write Events` set to `ALL`  **From Command Line:** ```  aws cloudtrail describe-trails ``` Ensure `IsMultiRegionTrail` is set to `true`  ``` aws cloudtrail get-trail-status --name  ``` Ensure `IsLogging` is set to `true` ``` aws cloudtrail get-event-selectors --trail-name  ``` Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-management-events:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html?icmpid=docs_cloudtrail_console#logging-management-events:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-supported-services.html#cloud-trail-supported-services-data-events"
         }
@@ -693,8 +693,8 @@
           "Description": "S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets.",
           "RationaleStatement": "Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity within your S3 Buckets using Amazon CloudWatch Events.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/`\n5. Once the Cloudtrail is selected, check the `Write` event checkbox, so that `object-level` logging for Write events is enabled.\n6. Repeat steps 2 to 5 to enable object-level logging of write events for other S3 buckets.\n\n**From Command Line:**\n\n1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier:\n```\naws cloudtrail put-event-selectors --region  --trail-name  --event-selectors '[{ \"ReadWriteType\": \"WriteOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]'\n```\n2. The command output will be `object-level` event trail configuration.\n3. If you want to enable it for all buckets at once then change Values parameter to `[\"arn:aws:s3\"]` in command given above.\n4. Repeat step 1 for each s3 bucket to update `object-level` logging of write events.\n5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to CloudTrail dashboard at `https://console.aws.amazon.com/cloudtrail/`\n2. In the left panel, click `Trails` and then click on the CloudTrail Name that you want to examine.\n3. Review `General details`\n4. Confirm that `Multi-region trail` is set to `Yes`\n5. Scroll down to `Data events`\n6. Confirm that it reads:\nData events: S3\nBucket Name: All current and future S3 buckets\nRead: Enabled\nWrite: Enabled\n7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets in CloudTrail.\nIf the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below.\n\n**From Command Line:**\n\n1. Run `list-trails` command to list the names of all Amazon CloudTrail trails currently available in all AWS regions:\n```\naws cloudtrail list-trails\n```\n2. The command output will be a list of all the trail names to include.\n\"TrailARN\": \"arn:aws:cloudtrail:::trail/\",\n\"Name\": \"\",\n\"HomeRegion\": \"\"\n3. Next run 'get-trail- command to determine Multi-region.\n```\naws cloudtrail get-trail --name  --region \n```\n4. The command output should include:\n\"IsMultiRegionTrail\": true,\n5. Next run `get-event-selectors` command using the `Name` of the trail and the `region` returned in step 2 to determine if Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets:\n```\naws cloudtrail get-event-selectors --region  --trail-name  --query EventSelectors[*].DataResources[]\n```\n6. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector.\n\"Type\": \"AWS::S3::Object\",\n \"Values\": [\n \"arn:aws:s3\"\n7. If the `get-event-selectors` command returns an empty array '[]', the Data events are not included in the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded.\n8. Repeat steps 1 to 5 for auditing each CloudTrail to determine if Data events for S3 are covered.\nIf Multi-region is not set to true and the Data events does not show S3 defined as shown refer to the remediation procedure below.",
+          "RemediationProcedure": "**From Console:**  1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/` 5. Once the Cloudtrail is selected, check the `Write` event checkbox, so that `object-level` logging for Write events is enabled. 6. Repeat steps 2 to 5 to enable object-level logging of write events for other S3 buckets.  **From Command Line:**  1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier: ``` aws cloudtrail put-event-selectors --region  --trail-name  --event-selectors '[{ \"ReadWriteType\": \"WriteOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]' ``` 2. The command output will be `object-level` event trail configuration. 3. If you want to enable it for all buckets at once then change Values parameter to `[\"arn:aws:s3\"]` in command given above. 4. Repeat step 1 for each s3 bucket to update `object-level` logging of write events. 5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS Management Console and navigate to CloudTrail dashboard at `https://console.aws.amazon.com/cloudtrail/` 2. In the left panel, click `Trails` and then click on the CloudTrail Name that you want to examine. 3. Review `General details` 4. Confirm that `Multi-region trail` is set to `Yes` 5. Scroll down to `Data events` 6. Confirm that it reads: Data events: S3 Bucket Name: All current and future S3 buckets Read: Enabled Write: Enabled 7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets in CloudTrail. If the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below.  **From Command Line:**  1. Run `list-trails` command to list the names of all Amazon CloudTrail trails currently available in all AWS regions: ``` aws cloudtrail list-trails ``` 2. The command output will be a list of all the trail names to include. \"TrailARN\": \"arn:aws:cloudtrail:::trail/\", \"Name\": \"\", \"HomeRegion\": \"\" 3. Next run 'get-trail- command to determine Multi-region. ``` aws cloudtrail get-trail --name  --region  ``` 4. The command output should include: \"IsMultiRegionTrail\": true, 5. Next run `get-event-selectors` command using the `Name` of the trail and the `region` returned in step 2 to determine if Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets: ``` aws cloudtrail get-event-selectors --region  --trail-name  --query EventSelectors[*].DataResources[] ``` 6. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector. \"Type\": \"AWS::S3::Object\",  \"Values\": [  \"arn:aws:s3\" 7. If the `get-event-selectors` command returns an empty array '[]', the Data events are not included in the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 8. Repeat steps 1 to 5 for auditing each CloudTrail to determine if Data events for S3 are covered. If Multi-region is not set to true and the Data events does not show S3 defined as shown refer to the remediation procedure below.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html"
         }
@@ -714,8 +714,8 @@
           "Description": "S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets.",
           "RationaleStatement": "Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity using Amazon CloudWatch Events.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/`\n5. Once the Cloudtrail is selected, check the Read event checkbox, so that `object-level` logging for `Read` events is enabled.\n6. Repeat steps 2 to 5 to enable `object-level` logging of read events for other S3 buckets.\n\n**From Command Line:**\n1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier:\n```\naws cloudtrail put-event-selectors --region  --trail-name  --event-selectors '[{ \"ReadWriteType\": \"ReadOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]'\n```\n2. The command output will be `object-level` event trail configuration.\n3. If you want to enable it for all buckets at ones then change Values parameter to `[\"arn:aws:s3\"]` in command given above.\n4. Repeat step 1 for each s3 bucket to update `object-level` logging of read events.\n5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. If the current status for `Object-level` logging is set to `Disabled`, then object-level logging of read events for the selected s3 bucket is not set.\n5. If the current status for `Object-level` logging is set to `Enabled`, but the Read event check-box is unchecked, then object-level logging of read events for the selected s3 bucket is not set.\n6. Repeat steps 2 to 5 to verify `object-level` logging for `read` events of your other S3 buckets.\n\n**From Command Line:**\n1. Run `describe-trails` command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region:\n```\naws cloudtrail describe-trails --region  --output table --query trailList[*].Name\n```\n2. The command output will be table of the requested trail names.\n3. Run `get-event-selectors` command using the name of the trail returned at the previous step and custom query filters to determine if Data events logging feature is enabled within the selected CloudTrail trail configuration for s3 bucket resources:\n```\naws cloudtrail get-event-selectors --region  --trail-name  --query EventSelectors[*].DataResources[]\n```\n4. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector.\n5. If the `get-event-selectors` command returns an empty array, the Data events are not included into the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded.\n6. Repeat steps 1 to 5 for auditing each s3 bucket to identify other trails that are missing the capability to log Data events.\n7. Change the AWS region by updating the `--region` command parameter and perform the audit process for other regions.",
+          "RemediationProcedure": "**From Console:**  1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/` 5. Once the Cloudtrail is selected, check the Read event checkbox, so that `object-level` logging for `Read` events is enabled. 6. Repeat steps 2 to 5 to enable `object-level` logging of read events for other S3 buckets.  **From Command Line:** 1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier: ``` aws cloudtrail put-event-selectors --region  --trail-name  --event-selectors '[{ \"ReadWriteType\": \"ReadOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]' ``` 2. The command output will be `object-level` event trail configuration. 3. If you want to enable it for all buckets at ones then change Values parameter to `[\"arn:aws:s3\"]` in command given above. 4. Repeat step 1 for each s3 bucket to update `object-level` logging of read events. 5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. If the current status for `Object-level` logging is set to `Disabled`, then object-level logging of read events for the selected s3 bucket is not set. 5. If the current status for `Object-level` logging is set to `Enabled`, but the Read event check-box is unchecked, then object-level logging of read events for the selected s3 bucket is not set. 6. Repeat steps 2 to 5 to verify `object-level` logging for `read` events of your other S3 buckets.  **From Command Line:** 1. Run `describe-trails` command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region: ``` aws cloudtrail describe-trails --region  --output table --query trailList[*].Name ``` 2. The command output will be table of the requested trail names. 3. Run `get-event-selectors` command using the name of the trail returned at the previous step and custom query filters to determine if Data events logging feature is enabled within the selected CloudTrail trail configuration for s3 bucket resources: ``` aws cloudtrail get-event-selectors --region  --trail-name  --query EventSelectors[*].DataResources[] ``` 4. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector. 5. If the `get-event-selectors` command returns an empty array, the Data events are not included into the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 6. Repeat steps 1 to 5 for auditing each s3 bucket to identify other trails that are missing the capability to log Data events. 7. Change the AWS region by updating the `--region` command parameter and perform the audit process for other regions.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html"
         }
@@ -735,8 +735,8 @@
           "Description": "CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails.",
           "RationaleStatement": "Enabling log file validation will provide additional integrity checking of CloudTrail logs.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to enable log file validation on a given trail:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n3. Click on target trail\n4. Within the `General details` section click `edit`\n5. Under the `Advanced settings` section\n6. Check the enable box under `Log file validation` \n7. Click `Save changes` \n\n**From Command Line:**\n```\naws cloudtrail update-trail --name  --enable-log-file-validation\n```\nNote that periodic validation of logs using these digests can be performed by running the following command:\n```\naws cloudtrail validate-logs --trail-arn  --start-time  --end-time \n```",
-          "AuditProcedure": "Perform the following on each trail to determine if log file validation is enabled:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n3. For Every Trail:\n- Click on a trail via the link in the _Name_ column\n- Under the `General details` section, ensure `Log file validation` is set to `Enabled` \n\n**From Command Line:**\n```\naws cloudtrail describe-trails\n```\nEnsure `LogFileValidationEnabled` is set to `true` for each trail",
+          "RemediationProcedure": "Perform the following to enable log file validation on a given trail:  **From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane 3. Click on target trail 4. Within the `General details` section click `edit` 5. Under the `Advanced settings` section 6. Check the enable box under `Log file validation`  7. Click `Save changes`   **From Command Line:** ``` aws cloudtrail update-trail --name  --enable-log-file-validation ``` Note that periodic validation of logs using these digests can be performed by running the following command: ``` aws cloudtrail validate-logs --trail-arn  --start-time  --end-time  ```",
+          "AuditProcedure": "Perform the following on each trail to determine if log file validation is enabled:  **From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane 3. For Every Trail: - Click on a trail via the link in the _Name_ column - Under the `General details` section, ensure `Log file validation` is set to `Enabled`   **From Command Line:** ``` aws cloudtrail describe-trails ``` Ensure `LogFileValidationEnabled` is set to `true` for each trail",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-enabling.html"
         }
@@ -756,8 +756,8 @@
           "Description": "CloudTrail logs a record of every API call made in your AWS account. These logs file are stored in an S3 bucket. It is recommended that the bucket policy or access control list (ACL) applied to the S3 bucket that CloudTrail logs to prevent public access to the CloudTrail logs.",
           "RationaleStatement": "Allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected account's use or configuration.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to remove any public access that has been granted to the bucket via an ACL or S3 bucket policy:\n\n1. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home)\n2. Right-click on the bucket and click Properties\n3. In the `Properties` pane, click the `Permissions` tab.\n4. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted.\n5. Select the row that grants permission to `Everyone` or `Any Authenticated User` \n6. Uncheck all the permissions granted to `Everyone` or `Any Authenticated User` (click `x` to delete the row).\n7. Click `Save` to save the ACL.\n8. If the `Edit bucket policy` button is present, click it.\n9. Remove any `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}.",
-          "AuditProcedure": "Perform the following to determine if any public access is granted to an S3 bucket via an ACL or S3 bucket policy:\n\n**From Console:**\n\n1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home)\n2. In the `API activity history` pane on the left, click `Trails` \n3. In the `Trails` pane, note the bucket names in the `S3 bucket` column\n4. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home)\n5. For each bucket noted in step 3, right-click on the bucket and click `Properties` \n6. In the `Properties` pane, click the `Permissions` tab.\n7. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted.\n8. Ensure no rows exists that have the `Grantee` set to `Everyone` or the `Grantee` set to `Any Authenticated User.` \n9. If the `Edit bucket policy` button is present, click it to review the bucket policy.\n10. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n```\n aws cloudtrail describe-trails --query 'trailList[*].S3BucketName'\n```\n2. Ensure the `AllUsers` principal is not granted privileges to that `` :\n```\n aws s3api get-bucket-acl --bucket  --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/AllUsers` ]'\n```\n3. Ensure the `AuthenticatedUsers` principal is not granted privileges to that ``:\n```\n aws s3api get-bucket-acl --bucket  --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/Authenticated Users` ]'\n```\n4. Get the S3 Bucket Policy\n```\n aws s3api get-bucket-policy --bucket  \n```\n5. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}\n\n**Note:** Principal set to \"\\*\" or {\"AWS\" : \"\\*\"} allows anonymous access.",
+          "RemediationProcedure": "Perform the following to remove any public access that has been granted to the bucket via an ACL or S3 bucket policy:  1. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home) 2. Right-click on the bucket and click Properties 3. In the `Properties` pane, click the `Permissions` tab. 4. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted. 5. Select the row that grants permission to `Everyone` or `Any Authenticated User`  6. Uncheck all the permissions granted to `Everyone` or `Any Authenticated User` (click `x` to delete the row). 7. Click `Save` to save the ACL. 8. If the `Edit bucket policy` button is present, click it. 9. Remove any `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}.",
+          "AuditProcedure": "Perform the following to determine if any public access is granted to an S3 bucket via an ACL or S3 bucket policy:  **From Console:**  1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home) 2. In the `API activity history` pane on the left, click `Trails`  3. In the `Trails` pane, note the bucket names in the `S3 bucket` column 4. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home) 5. For each bucket noted in step 3, right-click on the bucket and click `Properties`  6. In the `Properties` pane, click the `Permissions` tab. 7. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted. 8. Ensure no rows exists that have the `Grantee` set to `Everyone` or the `Grantee` set to `Any Authenticated User.`  9. If the `Edit bucket policy` button is present, click it to review the bucket policy. 10. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}  **From Command Line:**  1. Get the name of the S3 bucket that CloudTrail is logging to: ```  aws cloudtrail describe-trails --query 'trailList[*].S3BucketName' ``` 2. Ensure the `AllUsers` principal is not granted privileges to that `` : ```  aws s3api get-bucket-acl --bucket  --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/AllUsers` ]' ``` 3. Ensure the `AuthenticatedUsers` principal is not granted privileges to that ``: ```  aws s3api get-bucket-acl --bucket  --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/Authenticated Users` ]' ``` 4. Get the S3 Bucket Policy ```  aws s3api get-bucket-policy --bucket   ``` 5. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}  **Note:** Principal set to \"\\*\" or {\"AWS\" : \"\\*\"} allows anonymous access.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html"
         }
@@ -774,11 +774,11 @@
           "Section": "3. Logging",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long term analysis, realtime analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs.\n\nNote: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. CloudWatch Logs is a native way to accomplish this using AWS services but does not preclude the use of an alternate solution.",
+          "Description": "AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long term analysis, realtime analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs.  Note: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. CloudWatch Logs is a native way to accomplish this using AWS services but does not preclude the use of an alternate solution.",
           "RationaleStatement": "Sending CloudTrail logs to CloudWatch Logs will facilitate real-time and historic activity logging based on user, API, resource, and IP address, and provides opportunity to establish alarms and notifications for anomalous or sensitivity account activity.",
-          "ImpactStatement": "Note: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:\n\n1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
-          "RemediationProcedure": "Perform the following to establish the prescribed state:\n\n**From Console:**\n\n1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/`\n2. Select the `Trail` the needs to be updated.\n3. Scroll down to `CloudWatch Logs`\n4. Click `Edit`\n5. Under `CloudWatch Logs` click the box `Enabled`\n6. Under `Log Group` pick new or select an existing log group\n7. Edit the `Log group name` to match the CloudTrail or pick the existing CloudWatch Group.\n8. Under `IAM Role` pick new or select an existing.\n9. Edit the `Role name` to match the CloudTrail or pick the existing IAM Role.\n10. Click `Save changes.\n\n**From Command Line:**\n```\naws cloudtrail update-trail --name  --cloudwatch-logs-log-group-arn  --cloudwatch-logs-role-arn \n```",
-          "AuditProcedure": "Perform the following to ensure CloudTrail is configured as prescribed:\n\n**From Console:**\n\n1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/`\n2. Under `Trails` , click on the CloudTrail you wish to evaluate\n3. Under the `CloudWatch Logs` section.\n4. Ensure a `CloudWatch Logs` log group is configured and listed.\n5. Under `General details` confirm `Last log file delivered` has a recent (~one day old) timestamp.\n\n**From Command Line:**\n\n1. Run the following command to get a listing of existing trails:\n```\n aws cloudtrail describe-trails\n```\n2. Ensure `CloudWatchLogsLogGroupArn` is not empty and note the value of the `Name` property.\n3. Using the noted value of the `Name` property, run the following command:\n```\n aws cloudtrail get-trail-status --name \n```\n4. Ensure the `LatestcloudwatchLogdDeliveryTime` property is set to a recent (~one day old) timestamp.\n\nIf the `CloudWatch Logs` log group is not setup and the delivery time is not recent refer to the remediation below.",
+          "ImpactStatement": "Note: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:  1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
+          "RemediationProcedure": "Perform the following to establish the prescribed state:  **From Console:**  1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/` 2. Select the `Trail` the needs to be updated. 3. Scroll down to `CloudWatch Logs` 4. Click `Edit` 5. Under `CloudWatch Logs` click the box `Enabled` 6. Under `Log Group` pick new or select an existing log group 7. Edit the `Log group name` to match the CloudTrail or pick the existing CloudWatch Group. 8. Under `IAM Role` pick new or select an existing. 9. Edit the `Role name` to match the CloudTrail or pick the existing IAM Role. 10. Click `Save changes.  **From Command Line:** ``` aws cloudtrail update-trail --name  --cloudwatch-logs-log-group-arn  --cloudwatch-logs-role-arn  ```",
+          "AuditProcedure": "Perform the following to ensure CloudTrail is configured as prescribed:  **From Console:**  1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/` 2. Under `Trails` , click on the CloudTrail you wish to evaluate 3. Under the `CloudWatch Logs` section. 4. Ensure a `CloudWatch Logs` log group is configured and listed. 5. Under `General details` confirm `Last log file delivered` has a recent (~one day old) timestamp.  **From Command Line:**  1. Run the following command to get a listing of existing trails: ```  aws cloudtrail describe-trails ``` 2. Ensure `CloudWatchLogsLogGroupArn` is not empty and note the value of the `Name` property. 3. Using the noted value of the `Name` property, run the following command: ```  aws cloudtrail get-trail-status --name  ``` 4. Ensure the `LatestcloudwatchLogdDeliveryTime` property is set to a recent (~one day old) timestamp.  If the `CloudWatch Logs` log group is not setup and the delivery time is not recent refer to the remediation below.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html"
         }
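As a hedged illustration of the command-line audit above (not part of the benchmark text), the same check can be scripted with boto3; the one-day freshness window mirrors the "~one day old" guidance and is otherwise an assumption:

```
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")
stale_after = datetime.now(timezone.utc) - timedelta(days=1)  # "~one day old"

for trail in cloudtrail.describe_trails()["trailList"]:
    name = trail["Name"]
    if not trail.get("CloudWatchLogsLogGroupArn"):
        print(f"FAIL {name}: no CloudWatch Logs log group configured")
        continue
    status = cloudtrail.get_trail_status(Name=name)
    delivered = status.get("LatestCloudWatchLogsDeliveryTime")
    if delivered and delivered >= stale_after:
        print(f"PASS {name}: last CloudWatch Logs delivery {delivered.isoformat()}")
    else:
        print(f"FAIL {name}: last CloudWatch Logs delivery {delivered}")
```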
@@ -798,8 +798,8 @@
           "Description": "AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), any configuration changes between resources. It is recommended AWS Config be enabled in all regions.",
           "RationaleStatement": "The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing.",
           "ImpactStatement": "It is recommended AWS Config be enabled in all regions.",
-          "RemediationProcedure": "To implement AWS Config configuration:\n\n**From Console:**\n\n1. Select the region you want to focus on in the top right of the console\n2. Click `Services` \n3. Click `Config` \n4. Define which resources you want to record in the selected region\n5. Choose to include global resources (IAM resources)\n6. Specify an S3 bucket in the same account or in another managed AWS account\n7. Create an SNS Topic from the same AWS account or another managed AWS account\n\n**From Command Line:**\n\n1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the [AWS Config Service prerequisites](http://docs.aws.amazon.com/config/latest/developerguide/gs-cli-prereq.html).\n2. Run this command to set up the configuration recorder\n```\naws configservice subscribe --s3-bucket my-config-bucket --sns-topic arn:aws:sns:us-east-1:012345678912:my-config-notice --iam-role arn:aws:iam::012345678912:role/myConfigRole\n```\n3. Run this command to start the configuration recorder:\n```\nstart-configuration-recorder --configuration-recorder-name \n```",
-          "AuditProcedure": "Process to evaluate AWS Config configuration per region\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the AWS Config console at [https://console.aws.amazon.com/config/](https://console.aws.amazon.com/config/).\n2. On the top right of the console select target Region.\n3. If presented with Setup AWS Config - follow remediation procedure:\n4. On the Resource inventory page, Click on edit (the gear icon). The Set Up AWS Config page appears.\n5. Ensure 1 or both check-boxes under \"All Resources\" is checked.\n - Include global resources related to IAM resources - which needs to be enabled in 1 region only\n6. Ensure the correct S3 bucket has been defined.\n7. Ensure the correct SNS topic has been defined.\n8. Repeat steps 2 to 7 for each region.\n\n**From Command Line:**\n\n1. Run this command to show all AWS Config recorders and their properties:\n```\naws configservice describe-configuration-recorders\n```\n2. Evaluate the output to ensure that there's at least one recorder for which `recordingGroup` object includes `\"allSupported\": true` AND `\"includeGlobalResourceTypes\": true`\n\nNote: There is one more parameter \"ResourceTypes\" in recordingGroup object. We don't need to check the same as whenever we set \"allSupported\": true, AWS enforces resource types to be empty (\"ResourceTypes\":[])\n\nSample Output:\n\n```\n{\n \"ConfigurationRecorders\": [\n {\n \"recordingGroup\": {\n \"allSupported\": true,\n \"resourceTypes\": [],\n \"includeGlobalResourceTypes\": true\n },\n \"roleARN\": \"arn:aws:iam:::role/service-role/\",\n \"name\": \"default\"\n }\n ]\n}\n```\n\n3. Run this command to show the status for all AWS Config recorders:\n```\naws configservice describe-configuration-recorder-status\n```\n4. In the output, find recorders with `name` key matching the recorders that met criteria in step 2. Ensure that at least one of them includes `\"recording\": true` and `\"lastStatus\": \"SUCCESS\"`",
+          "RemediationProcedure": "To implement AWS Config configuration:  **From Console:**  1. Select the region you want to focus on in the top right of the console 2. Click `Services`  3. Click `Config`  4. Define which resources you want to record in the selected region 5. Choose to include global resources (IAM resources) 6. Specify an S3 bucket in the same account or in another managed AWS account 7. Create an SNS Topic from the same AWS account or another managed AWS account  **From Command Line:**  1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the [AWS Config Service prerequisites](http://docs.aws.amazon.com/config/latest/developerguide/gs-cli-prereq.html). 2. Run this command to set up the configuration recorder ``` aws configservice subscribe --s3-bucket my-config-bucket --sns-topic arn:aws:sns:us-east-1:012345678912:my-config-notice --iam-role arn:aws:iam::012345678912:role/myConfigRole ``` 3. Run this command to start the configuration recorder: ``` start-configuration-recorder --configuration-recorder-name  ```",
+          "AuditProcedure": "Process to evaluate AWS Config configuration per region  **From Console:**  1. Sign in to the AWS Management Console and open the AWS Config console at [https://console.aws.amazon.com/config/](https://console.aws.amazon.com/config/). 2. On the top right of the console select target Region. 3. If presented with Setup AWS Config - follow remediation procedure: 4. On the Resource inventory page, Click on edit (the gear icon). The Set Up AWS Config page appears. 5. Ensure 1 or both check-boxes under \"All Resources\" is checked.  - Include global resources related to IAM resources - which needs to be enabled in 1 region only 6. Ensure the correct S3 bucket has been defined. 7. Ensure the correct SNS topic has been defined. 8. Repeat steps 2 to 7 for each region.  **From Command Line:**  1. Run this command to show all AWS Config recorders and their properties: ``` aws configservice describe-configuration-recorders ``` 2. Evaluate the output to ensure that there's at least one recorder for which `recordingGroup` object includes `\"allSupported\": true` AND `\"includeGlobalResourceTypes\": true`  Note: There is one more parameter \"ResourceTypes\" in recordingGroup object. We don't need to check the same as whenever we set \"allSupported\": true, AWS enforces resource types to be empty (\"ResourceTypes\":[])  Sample Output:  ``` {  \"ConfigurationRecorders\": [  {  \"recordingGroup\": {  \"allSupported\": true,  \"resourceTypes\": [],  \"includeGlobalResourceTypes\": true  },  \"roleARN\": \"arn:aws:iam:::role/service-role/\",  \"name\": \"default\"  }  ] } ```  3. Run this command to show the status for all AWS Config recorders: ``` aws configservice describe-configuration-recorder-status ``` 4. In the output, find recorders with `name` key matching the recorders that met criteria in step 2. Ensure that at least one of them includes `\"recording\": true` and `\"lastStatus\": \"SUCCESS\"`",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/cli/latest/reference/configservice/describe-configuration-recorder-status.html"
         }
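A minimal boto3 sketch of the per-region audit above, offered only as an illustration: iterating over every region known to the SDK (including regions the account may not have enabled, hence the broad error handling) is an assumption, and the strict per-region check of `includeGlobalResourceTypes` mirrors the CLI audit text even though global resources only need to be recorded in one region:

```
import boto3
from botocore.exceptions import BotoCoreError, ClientError

session = boto3.Session()
for region in session.get_available_regions("config"):
    config = session.client("config", region_name=region)
    try:
        recorders = config.describe_configuration_recorders()["ConfigurationRecorders"]
        statuses = {s["name"]: s for s in
                    config.describe_configuration_recorder_status()["ConfigurationRecordersStatus"]}
    except (BotoCoreError, ClientError) as err:  # e.g. region not enabled for this account
        print(f"SKIP {region}: {err}")
        continue
    compliant = any(
        rec.get("recordingGroup", {}).get("allSupported")
        and rec.get("recordingGroup", {}).get("includeGlobalResourceTypes")
        and statuses.get(rec["name"], {}).get("recording")
        and statuses.get(rec["name"], {}).get("lastStatus") == "SUCCESS"
        for rec in recorders
    )
    print(f"{'PASS' if compliant else 'FAIL'} {region}")
```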
@@ -819,8 +819,8 @@
           "Description": "S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request worked, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket.",
           "RationaleStatement": "By enabling S3 bucket logging on target S3 buckets, it is possible to capture all events which may affect objects within any target buckets. Configuring logs to be placed in a separate bucket allows access to log information which can be useful in security and incident response workflows.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to enable S3 bucket logging:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3).\n2. Under `All Buckets` click on the target S3 bucket\n3. Click on `Properties` in the top right of the console\n4. Under `Bucket:`  click on `Logging` \n5. Configure bucket logging\n - Click on the `Enabled` checkbox\n - Select Target Bucket from list\n - Enter a Target Prefix\n6. Click `Save`.\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n```\naws cloudtrail describe-trails --region  --query trailList[*].S3BucketName\n```\n2. Copy and add target bucket name at ``, Prefix for logfile at `` and optionally add an email address in the following template and save it as ``:\n```\n{\n \"LoggingEnabled\": {\n \"TargetBucket\": \"\",\n \"TargetPrefix\": \"\",\n \"TargetGrants\": [\n {\n \"Grantee\": {\n \"Type\": \"AmazonCustomerByEmail\",\n \"EmailAddress\": \"\"\n },\n \"Permission\": \"FULL_CONTROL\"\n }\n ]\n } \n}\n```\n3. Run the `put-bucket-logging` command with bucket name and `` as input, for more information refer at [put-bucket-logging](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-logging.html):\n```\naws s3api put-bucket-logging --bucket  --bucket-logging-status file://\n```",
-          "AuditProcedure": "Perform the following ensure the CloudTrail S3 bucket has access logging is enabled:\n\n**From Console:**\n\n1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home)\n2. In the API activity history pane on the left, click Trails\n3. In the Trails pane, note the bucket names in the S3 bucket column\n4. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3).\n5. Under `All Buckets` click on a target S3 bucket\n6. Click on `Properties` in the top right of the console\n7. Under `Bucket:` _ `` _ click on `Logging` \n8. Ensure `Enabled` is checked.\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n``` \naws cloudtrail describe-trails --query 'trailList[*].S3BucketName' \n```\n2. Ensure Bucket Logging is enabled:\n```\naws s3api get-bucket-logging --bucket \n```\nEnsure command does not returns empty output.\n\nSample Output for a bucket with logging enabled:\n\n```\n{\n \"LoggingEnabled\": {\n \"TargetPrefix\": \"\",\n \"TargetBucket\": \"\"\n }\n}\n```",
+          "RemediationProcedure": "Perform the following to enable S3 bucket logging:  **From Console:**  1. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3). 2. Under `All Buckets` click on the target S3 bucket 3. Click on `Properties` in the top right of the console 4. Under `Bucket:`  click on `Logging`  5. Configure bucket logging  - Click on the `Enabled` checkbox  - Select Target Bucket from list  - Enter a Target Prefix 6. Click `Save`.  **From Command Line:**  1. Get the name of the S3 bucket that CloudTrail is logging to: ``` aws cloudtrail describe-trails --region  --query trailList[*].S3BucketName ``` 2. Copy and add target bucket name at ``, Prefix for logfile at `` and optionally add an email address in the following template and save it as ``: ``` {  \"LoggingEnabled\": {  \"TargetBucket\": \"\",  \"TargetPrefix\": \"\",  \"TargetGrants\": [  {  \"Grantee\": {  \"Type\": \"AmazonCustomerByEmail\",  \"EmailAddress\": \"\"  },  \"Permission\": \"FULL_CONTROL\"  }  ]  }  } ``` 3. Run the `put-bucket-logging` command with bucket name and `` as input, for more information refer at [put-bucket-logging](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-logging.html): ``` aws s3api put-bucket-logging --bucket  --bucket-logging-status file:// ```",
+          "AuditProcedure": "Perform the following ensure the CloudTrail S3 bucket has access logging is enabled:  **From Console:**  1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home) 2. In the API activity history pane on the left, click Trails 3. In the Trails pane, note the bucket names in the S3 bucket column 4. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3). 5. Under `All Buckets` click on a target S3 bucket 6. Click on `Properties` in the top right of the console 7. Under `Bucket:` _ `` _ click on `Logging`  8. Ensure `Enabled` is checked.  **From Command Line:**  1. Get the name of the S3 bucket that CloudTrail is logging to: ```  aws cloudtrail describe-trails --query 'trailList[*].S3BucketName'  ``` 2. Ensure Bucket Logging is enabled: ``` aws s3api get-bucket-logging --bucket  ``` Ensure command does not returns empty output.  Sample Output for a bucket with logging enabled:  ``` {  \"LoggingEnabled\": {  \"TargetPrefix\": \"\",  \"TargetBucket\": \"\"  } } ```",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html"
         }
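For readers scripting the command-line audit above, a minimal boto3 sketch (not part of the benchmark text, assuming default credentials) could be:

```
import boto3

cloudtrail = boto3.client("cloudtrail")
s3 = boto3.client("s3")

for trail in cloudtrail.describe_trails()["trailList"]:
    bucket = trail.get("S3BucketName")
    if not bucket:
        continue
    # get_bucket_logging returns no LoggingEnabled block when access logging is off.
    enabled = s3.get_bucket_logging(Bucket=bucket).get("LoggingEnabled")
    if enabled:
        print(f"PASS {bucket}: logging to {enabled['TargetBucket']}/{enabled.get('TargetPrefix', '')}")
    else:
        print(f"FAIL {bucket}: server access logging is not enabled")
```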
@@ -840,9 +840,9 @@
           "Description": "AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS.",
           "RationaleStatement": "Configuring CloudTrail to use SSE-KMS provides additional confidentiality controls on log data as a given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy.",
           "ImpactStatement": "Customer created keys incur an additional cost. See https://aws.amazon.com/kms/pricing/ for more information.",
-          "RemediationProcedure": "Perform the following to configure CloudTrail to use SSE-KMS:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. In the left navigation pane, choose `Trails` .\n3. Click on a Trail\n4. Under the `S3` section click on the edit button (pencil icon)\n5. Click `Advanced` \n6. Select an existing CMK from the `KMS key Id` drop-down menu\n - Note: Ensure the CMK is located in the same region as the S3 bucket\n - Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided [here](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-kms-key-policy-for-cloudtrail.html) for editing the selected CMK Key policy\n7. Click `Save` \n8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files.\n9. Click `Yes` \n\n**From Command Line:**\n```\naws cloudtrail update-trail --name  --kms-id \naws kms put-key-policy --key-id  --policy \n```",
-          "AuditProcedure": "Perform the following to determine if CloudTrail is configured to use SSE-KMS:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. In the left navigation pane, choose `Trails` .\n3. Select a Trail\n4. Under the `S3` section, ensure `Encrypt log files` is set to `Yes` and a KMS key ID is specified in the `KSM Key Id` field.\n\n**From Command Line:**\n\n1. Run the following command:\n```\n aws cloudtrail describe-trails \n```\n2. For each trail listed, SSE-KMS is enabled if the trail has a `KmsKeyId` property defined.",
-          "AdditionalInformation": "3 statements which need to be added to the CMK policy:\n\n1\\. Enable Cloudtrail to describe CMK properties\n```\n
{\n \"Sid\": \"Allow CloudTrail access\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"cloudtrail.amazonaws.com\"\n },\n \"Action\": \"kms:DescribeKey\",\n \"Resource\": \"*\"\n}\n```\n2\\. Granting encrypt permissions\n```\n
{\n \"Sid\": \"Allow CloudTrail to encrypt logs\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"cloudtrail.amazonaws.com\"\n },\n \"Action\": \"kms:GenerateDataKey*\",\n \"Resource\": \"*\",\n \"Condition\": {\n \"StringLike\": {\n \"kms:EncryptionContext:aws:cloudtrail:arn\": [\n \"arn:aws:cloudtrail:*:aws-account-id:trail/*\"\n ]\n }\n }\n}\n```\n3\\. Granting decrypt permissions\n```\n
{\n \"Sid\": \"Enable CloudTrail log decrypt permissions\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"arn:aws:iam::aws-account-id:user/username\"\n },\n \"Action\": \"kms:Decrypt\",\n \"Resource\": \"*\",\n \"Condition\": {\n \"Null\": {\n \"kms:EncryptionContext:aws:cloudtrail:arn\": \"false\"\n }\n }\n}\n```",
+          "RemediationProcedure": "Perform the following to configure CloudTrail to use SSE-KMS:  **From Console:**  1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. In the left navigation pane, choose `Trails` . 3. Click on a Trail 4. Under the `S3` section click on the edit button (pencil icon) 5. Click `Advanced`  6. Select an existing CMK from the `KMS key Id` drop-down menu  - Note: Ensure the CMK is located in the same region as the S3 bucket  - Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided [here](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-kms-key-policy-for-cloudtrail.html) for editing the selected CMK Key policy 7. Click `Save`  8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files. 9. Click `Yes`   **From Command Line:** ``` aws cloudtrail update-trail --name  --kms-id  aws kms put-key-policy --key-id  --policy  ```",
+          "AuditProcedure": "Perform the following to determine if CloudTrail is configured to use SSE-KMS:  **From Console:**  1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. In the left navigation pane, choose `Trails` . 3. Select a Trail 4. Under the `S3` section, ensure `Encrypt log files` is set to `Yes` and a KMS key ID is specified in the `KSM Key Id` field.  **From Command Line:**  1. Run the following command: ```  aws cloudtrail describe-trails  ``` 2. For each trail listed, SSE-KMS is enabled if the trail has a `KmsKeyId` property defined.",
+          "AdditionalInformation": "3 statements which need to be added to the CMK policy:  1\\. Enable Cloudtrail to describe CMK properties ``` 
{  \"Sid\": \"Allow CloudTrail access\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"cloudtrail.amazonaws.com\"  },  \"Action\": \"kms:DescribeKey\",  \"Resource\": \"*\" } ``` 2\\. Granting encrypt permissions ``` 
{  \"Sid\": \"Allow CloudTrail to encrypt logs\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"cloudtrail.amazonaws.com\"  },  \"Action\": \"kms:GenerateDataKey*\",  \"Resource\": \"*\",  \"Condition\": {  \"StringLike\": {  \"kms:EncryptionContext:aws:cloudtrail:arn\": [  \"arn:aws:cloudtrail:*:aws-account-id:trail/*\"  ]  }  } } ``` 3\\. Granting decrypt permissions ``` 
{  \"Sid\": \"Enable CloudTrail log decrypt permissions\",  \"Effect\": \"Allow\",  \"Principal\": {  \"AWS\": \"arn:aws:iam::aws-account-id:user/username\"  },  \"Action\": \"kms:Decrypt\",  \"Resource\": \"*\",  \"Condition\": {  \"Null\": {  \"kms:EncryptionContext:aws:cloudtrail:arn\": \"false\"  }  } } ```",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html:https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html"
         }
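The command-line audit above reduces to one property check per trail. A hedged boto3 sketch follows (not part of the benchmark); the remediation call is shown only as a comment, and the key ARN there is an assumption you must supply:

```
import boto3

cloudtrail = boto3.client("cloudtrail")
for trail in cloudtrail.describe_trails()["trailList"]:
    kms_key = trail.get("KmsKeyId")
    print(f"{'PASS' if kms_key else 'FAIL'} {trail['Name']}: KmsKeyId={kms_key}")
    # Remediation sketch for failing trails (the key ARN is an assumption):
    # cloudtrail.update_trail(Name=trail["Name"], KmsKeyId="<kms_key_arn>")
```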
       ]
@@ -859,10 +859,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "AWS Key Management Service (KMS) allows customers to rotate the backing key which is key material stored within the KMS which is tied to the key ID of the Customer Created customer master key (CMK). It is the backing key that is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It is recommended that CMK key rotation be enabled for symmetric keys. Key rotation can not be enabled for any asymmetric CMK.",
-          "RationaleStatement": "Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed.\nKeys should be rotated every year, or upon event that would result in the compromise of that key.",
+          "RationaleStatement": "Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed. Keys should be rotated every year, or upon event that would result in the compromise of that key.",
           "ImpactStatement": "Creation, management, and storage of CMKs may require additional time from and administrator.",
-          "RemediationProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam).\n2. In the left navigation pane, choose `Customer managed keys` .\n3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT`\n4. Underneath the \"General configuration\" panel open the tab \"Key rotation\"\n5. Check the \"Automatically rotate this KMS key every year.\" checkbox\n\n**From Command Line:**\n\n1. Run the following command to enable key rotation:\n```\n aws kms enable-key-rotation --key-id \n```",
-          "AuditProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam).\n2. In the left navigation pane, choose `Customer managed keys`\n3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT`\n4. Underneath the `General configuration` panel open the tab `Key rotation`\n5. Ensure that the checkbox `Automatically rotate this KMS key every year.` is activated\n6. Repeat steps 3 - 5 for all customer managed CMKs where \"Key spec = SYMMETRIC_DEFAULT\"\n\n**From Command Line:**\n\n1. Run the following command to get a list of all keys and their associated `KeyIds` \n```\n aws kms list-keys\n```\n2. For each key, note the KeyId and run the following command\n```\ndescribe-key --key-id \n```\n3. If the response contains \"KeySpec = SYMMETRIC_DEFAULT\" run the following command\n```\n aws kms get-key-rotation-status --key-id \n```\n4. Ensure `KeyRotationEnabled` is set to `true`\n5. Repeat steps 2 - 4 for all remaining CMKs",
+          "RemediationProcedure": "**From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam). 2. In the left navigation pane, choose `Customer managed keys` . 3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT` 4. Underneath the \"General configuration\" panel open the tab \"Key rotation\" 5. Check the \"Automatically rotate this KMS key every year.\" checkbox  **From Command Line:**  1. Run the following command to enable key rotation: ```  aws kms enable-key-rotation --key-id  ```",
+          "AuditProcedure": "**From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam). 2. In the left navigation pane, choose `Customer managed keys` 3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT` 4. Underneath the `General configuration` panel open the tab `Key rotation` 5. Ensure that the checkbox `Automatically rotate this KMS key every year.` is activated 6. Repeat steps 3 - 5 for all customer managed CMKs where \"Key spec = SYMMETRIC_DEFAULT\"  **From Command Line:**  1. Run the following command to get a list of all keys and their associated `KeyIds`  ```  aws kms list-keys ``` 2. For each key, note the KeyId and run the following command ``` describe-key --key-id  ``` 3. If the response contains \"KeySpec = SYMMETRIC_DEFAULT\" run the following command ```  aws kms get-key-rotation-status --key-id  ``` 4. Ensure `KeyRotationEnabled` is set to `true` 5. Repeat steps 2 - 4 for all remaining CMKs",
           "AdditionalInformation": "",
           "References": "https://aws.amazon.com/kms/pricing/:https://csrc.nist.gov/publications/detail/sp/800-57-part-1/rev-5/final"
         }
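A boto3 sketch of the command-line audit above, provided as an illustration only; limiting it to enabled, customer managed, symmetric keys is an assumption consistent with the recommendation's scope:

```
import boto3

kms = boto3.client("kms")
for page in kms.get_paginator("list_keys").paginate():
    for key in page["Keys"]:
        meta = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
        # Scope assumption: enabled, customer managed, symmetric keys only.
        if (meta.get("KeyManager") != "CUSTOMER"
                or meta.get("KeySpec") != "SYMMETRIC_DEFAULT"
                or meta.get("KeyState") != "Enabled"):
            continue
        rotating = kms.get_key_rotation_status(KeyId=meta["KeyId"])["KeyRotationEnabled"]
        print(f"{'PASS' if rotating else 'FAIL'} {meta['KeyId']}")
        # Remediation for failing keys: kms.enable_key_rotation(KeyId=meta["KeyId"])
```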
@@ -881,9 +881,9 @@
           "AssessmentStatus": "Automated",
           "Description": "VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet \"Rejects\" for VPCs.",
           "RationaleStatement": "VPC Flow Logs provide visibility into network traffic that traverses the VPC and can be used to detect anomalous traffic or insight during security workflows.",
-          "ImpactStatement": "By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:\n\n1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
-          "RemediationProcedure": "Perform the following to determine if VPC Flow logs is enabled:\n\n**From Console:**\n\n1. Sign into the management console\n2. Select `Services` then `VPC` \n3. In the left navigation pane, select `Your VPCs` \n4. Select a VPC\n5. In the right pane, select the `Flow Logs` tab.\n6. If no Flow Log exists, click `Create Flow Log` \n7. For Filter, select `Reject`\n8. Enter in a `Role` and `Destination Log Group` \n9. Click `Create Log Flow` \n10. Click on `CloudWatch Logs Group` \n\n**Note:** Setting the filter to \"Reject\" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting this the filter to \"All\" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment.\n\n**From Command Line:**\n\n1. Create a policy document and name it as `role_policy_document.json` and paste the following content:\n```\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"test\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"ec2.amazonaws.com\"\n },\n \"Action\": \"sts:AssumeRole\"\n }\n ]\n}\n```\n2. Create another policy document and name it as `iam_policy.json` and paste the following content:\n```\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\":[\n \"logs:CreateLogGroup\",\n \"logs:CreateLogStream\",\n \"logs:DescribeLogGroups\",\n \"logs:DescribeLogStreams\",\n \"logs:PutLogEvents\",\n \"logs:GetLogEvents\",\n \"logs:FilterLogEvents\"\n ],\n \"Resource\": \"*\"\n }\n ]\n}\n```\n3. Run the below command to create an IAM role:\n```\naws iam create-role --role-name  --assume-role-policy-document file://role_policy_document.json \n```\n4. Run the below command to create an IAM policy:\n```\naws iam create-policy --policy-name  --policy-document file://iam-policy.json\n```\n5. Run `attach-group-policy` command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned):\n```\naws iam attach-group-policy --policy-arn arn:aws:iam:::policy/ --group-name \n```\n6. Run `describe-vpcs` to get the VpcId available in the selected region:\n```\naws ec2 describe-vpcs --region \n```\n7. The command output should return the VPC Id available in the selected region.\n8. Run `create-flow-logs` to create a flow log for the vpc:\n```\naws ec2 create-flow-logs --resource-type VPC --resource-ids  --traffic-type REJECT --log-group-name  --deliver-logs-permission-arn \n```\n9. Repeat step 8 for other vpcs available in the selected region.\n10. Change the region by updating --region and repeat remediation procedure for other vpcs.",
-          "AuditProcedure": "Perform the following to determine if VPC Flow logs are enabled:\n\n**From Console:**\n\n1. Sign into the management console\n2. Select `Services` then `VPC` \n3. In the left navigation pane, select `Your VPCs` \n4. Select a VPC\n5. In the right pane, select the `Flow Logs` tab.\n6. Ensure a Log Flow exists that has `Active` in the `Status` column.\n\n**From Command Line:**\n\n1. Run `describe-vpcs` command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region:\n```\naws ec2 describe-vpcs --region  --query Vpcs[].VpcId\n```\n2. The command output returns the `VpcId` available in the selected region.\n3. Run `describe-flow-logs` command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled:\n```\naws ec2 describe-flow-logs --filter \"Name=resource-id,Values=\"\n```\n4. If there are no Flow Logs created for the selected VPC, the command output will return an `empty list []`.\n5. Repeat step 3 for other VPCs available in the same region.\n6. Change the region by updating `--region` and repeat steps 1 - 5 for all the VPCs.",
+          "ImpactStatement": "By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:  1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
+          "RemediationProcedure": "Perform the following to determine if VPC Flow logs is enabled:  **From Console:**  1. Sign into the management console 2. Select `Services` then `VPC`  3. In the left navigation pane, select `Your VPCs`  4. Select a VPC 5. In the right pane, select the `Flow Logs` tab. 6. If no Flow Log exists, click `Create Flow Log`  7. For Filter, select `Reject` 8. Enter in a `Role` and `Destination Log Group`  9. Click `Create Log Flow`  10. Click on `CloudWatch Logs Group`   **Note:** Setting the filter to \"Reject\" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting this the filter to \"All\" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment.  **From Command Line:**  1. Create a policy document and name it as `role_policy_document.json` and paste the following content: ``` {  \"Version\": \"2012-10-17\",  \"Statement\": [  {  \"Sid\": \"test\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"ec2.amazonaws.com\"  },  \"Action\": \"sts:AssumeRole\"  }  ] } ``` 2. Create another policy document and name it as `iam_policy.json` and paste the following content: ``` {  \"Version\": \"2012-10-17\",  \"Statement\": [  {  \"Effect\": \"Allow\",  \"Action\":[  \"logs:CreateLogGroup\",  \"logs:CreateLogStream\",  \"logs:DescribeLogGroups\",  \"logs:DescribeLogStreams\",  \"logs:PutLogEvents\",  \"logs:GetLogEvents\",  \"logs:FilterLogEvents\"  ],  \"Resource\": \"*\"  }  ] } ``` 3. Run the below command to create an IAM role: ``` aws iam create-role --role-name  --assume-role-policy-document file://role_policy_document.json  ``` 4. Run the below command to create an IAM policy: ``` aws iam create-policy --policy-name  --policy-document file://iam-policy.json ``` 5. Run `attach-group-policy` command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned): ``` aws iam attach-group-policy --policy-arn arn:aws:iam:::policy/ --group-name  ``` 6. Run `describe-vpcs` to get the VpcId available in the selected region: ``` aws ec2 describe-vpcs --region  ``` 7. The command output should return the VPC Id available in the selected region. 8. Run `create-flow-logs` to create a flow log for the vpc: ``` aws ec2 create-flow-logs --resource-type VPC --resource-ids  --traffic-type REJECT --log-group-name  --deliver-logs-permission-arn  ``` 9. Repeat step 8 for other vpcs available in the selected region. 10. Change the region by updating --region and repeat remediation procedure for other vpcs.",
+          "AuditProcedure": "Perform the following to determine if VPC Flow logs are enabled:  **From Console:**  1. Sign into the management console 2. Select `Services` then `VPC`  3. In the left navigation pane, select `Your VPCs`  4. Select a VPC 5. In the right pane, select the `Flow Logs` tab. 6. Ensure a Log Flow exists that has `Active` in the `Status` column.  **From Command Line:**  1. Run `describe-vpcs` command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region: ``` aws ec2 describe-vpcs --region  --query Vpcs[].VpcId ``` 2. The command output returns the `VpcId` available in the selected region. 3. Run `describe-flow-logs` command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled: ``` aws ec2 describe-flow-logs --filter \"Name=resource-id,Values=\" ``` 4. If there are no Flow Logs created for the selected VPC, the command output will return an `empty list []`. 5. Repeat step 3 for other VPCs available in the same region. 6. Change the region by updating `--region` and repeat steps 1 - 5 for all the VPCs.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html"
         }
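For a single region, the command-line audit above can be sketched with boto3 as follows (illustrative only; repeat per region as the procedure instructs):

```
import boto3

ec2 = boto3.client("ec2")  # pass region_name=... and repeat for each region in use
for vpc in ec2.describe_vpcs()["Vpcs"]:
    flow_logs = ec2.describe_flow_logs(
        Filters=[{"Name": "resource-id", "Values": [vpc["VpcId"]]}]
    )["FlowLogs"]
    active = [fl for fl in flow_logs if fl.get("FlowLogStatus") == "ACTIVE"]
    print(f"{'PASS' if active else 'FAIL'} {vpc['VpcId']}: {len(active)} active flow log(s)")
```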
@@ -902,10 +902,10 @@
           "AssessmentStatus": "Automated",
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for unauthorized API calls.",
           "RationaleStatement": "Monitoring unauthorized API calls will help reveal application errors and may reduce time to detect malicious activity.",
-          "ImpactStatement": "This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information, but gracefully fail if they don't have permissions.\n\nIf an excessive number of alerts are being generated then an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts.\n\nIn some cases doing this may allow the users to actually view some areas of the system - any additional access given should be reviewed for alignment with the original limited IAM user intent.",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for unauthorized API calls and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name \"cloudtrail_log_group_name\" --filter-name \"\" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern \"{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") || ($.sourceIPAddress!=\"delivery.logs.amazonaws.com\") || ($.eventName!=\"HeadBucket\") }\"\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n**Note**: Capture the TopicArn displayed when creating the SNS Topic in Step 2.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name \"unauthorized_api_calls_alarm\" --metric-name \"unauthorized_api_calls_metric\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with \"Name\":` note ``\n\n- From value associated with \"CloudWatchLogsLogGroupArn\" note \n\nExample: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name <\"Name\" as shown in describe-trails>`\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this `` that you captured in step 1:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.errorCode = *UnauthorizedOperation) || ($.errorCode = AccessDenied*) || ($.sourceIPAddress!=delivery.logs.amazonaws.com) || ($.eventName!=HeadBucket) }\",\n```\n\n4. Note the \"filterName\" `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]\"\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "ImpactStatement": "This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information, but gracefully fail if they don't have permissions.  If an excessive number of alerts are being generated then an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts.  In some cases doing this may allow the users to actually view some areas of the system - any additional access given should be reviewed for alignment with the original limited IAM user intent.",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for unauthorized API calls and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name \"cloudtrail_log_group_name\" --filter-name \"\" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern \"{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") || ($.sourceIPAddress!=\"delivery.logs.amazonaws.com\") || ($.eventName!=\"HeadBucket\") }\" ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms. **Note**: Capture the TopicArn displayed when creating the SNS Topic in Step 2.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name \"unauthorized_api_calls_alarm\" --metric-name \"unauthorized_api_calls_metric\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with \"Name\":` note ``  - From value associated with \"CloudWatchLogsLogGroupArn\" note   Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name <\"Name\" as shown in describe-trails>`  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this `` that you captured in step 1:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.errorCode = *UnauthorizedOperation) || ($.errorCode = AccessDenied*) || ($.sourceIPAddress!=delivery.logs.amazonaws.com) || ($.eventName!=HeadBucket) }\", ```  4. Note the \"filterName\" `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]\" ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://aws.amazon.com/sns/:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
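Steps 2-7 of the audit above can also be scripted. The following boto3 sketch is illustrative only: the `<cloudtrail_log_group_name>` placeholder is an assumption you must fill in from step 1, and the prescribed filter pattern is matched only coarsely by substring:

```
import boto3

LOG_GROUP = "<cloudtrail_log_group_name>"  # assumption: taken from audit step 1
PATTERN_TOKEN = "UnauthorizedOperation"    # coarse substring match on the prescribed pattern

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")
sns = boto3.client("sns")

filters = logs.describe_metric_filters(logGroupName=LOG_GROUP)["metricFilters"]
matching = [f for f in filters if PATTERN_TOKEN in (f.get("filterPattern") or "")]
if not matching:
    print("FAIL: no metric filter for unauthorized API calls")
else:
    metric_name = matching[0]["metricTransformations"][0]["metricName"]
    alarms = [a for a in cloudwatch.describe_alarms()["MetricAlarms"]
              if a.get("MetricName") == metric_name]
    # At least one alarm action must point at an SNS topic with a confirmed subscription.
    subscribed = any(
        sub["SubscriptionArn"].startswith("arn:aws:sns:")
        for alarm in alarms
        for topic_arn in alarm.get("AlarmActions", [])
        for sub in sns.list_subscriptions_by_topic(TopicArn=topic_arn)["Subscriptions"]
    )
    print("PASS" if alarms and subscribed else "FAIL: alarm or confirmed SNS subscription missing")
```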
       ]
@@ -924,9 +924,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Security Groups are a stateful packet filter that controls ingress and egress traffic within a VPC. It is recommended that a metric filter and alarm be established for detecting changes to Security Groups.",
           "RationaleStatement": "Monitoring changes to security group will help ensure that resources and services are not unintentionally exposed.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for security groups changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name \"\" --filter-name \"\" --metric-transformations metricName= \"\" ,metricNamespace=\"CISBenchmark\",metricValue=1 --filter-pattern \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\"\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \"\"\n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn \"\" --protocol  --notification-endpoint \"\"\n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name \"\" --metric-name \"\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \"\"\n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName== '']\"\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for security groups changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name \"\" --filter-name \"\" --metric-transformations metricName= \"\" ,metricNamespace=\"CISBenchmark\",metricValue=1 --filter-pattern \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\" ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name \"\" ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn \"\" --protocol  --notification-endpoint \"\" ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name \"\" --metric-name \"\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \"\" ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName== '']\" ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
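The remediation text above (and the matching entries for NACL, network gateway, route table, and VPC changes that follow) chains the same four CLI calls, but the benchmark placeholders have been reduced to empty backticks. A minimal sketch of the sequence, using hypothetical names (log group `CloudTrail/DefaultLogGroup`, filter/metric `SecurityGroupChanges`, topic `cis-benchmark-alarms`, an email endpoint) that are illustrative assumptions rather than values from the benchmark:

```
# Assumed names; substitute the log group behind your multi-region CloudTrail.
LOG_GROUP="CloudTrail/DefaultLogGroup"
METRIC="SecurityGroupChanges"

# 1. Metric filter on the CloudTrail log group (pattern taken from the benchmark text).
aws logs put-metric-filter \
  --log-group-name "$LOG_GROUP" \
  --filter-name "$METRIC" \
  --metric-transformations metricName="$METRIC",metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }'

# 2. SNS topic and subscription that the alarm will notify (endpoint is hypothetical).
TOPIC_ARN=$(aws sns create-topic --name cis-benchmark-alarms --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email --notification-endpoint security@example.com

# 3. Alarm on the new metric, routed to the topic.
aws cloudwatch put-metric-alarm \
  --alarm-name "$METRIC" \
  --metric-name "$METRIC" \
  --namespace CISBenchmark \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 \
  --alarm-actions "$TOPIC_ARN"
```

Only the `--filter-pattern` value changes between the monitoring entries in this file; the topic and subscription can be created once and reused, as the benchmark notes.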
@@ -945,9 +945,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. NACLs are used as a stateless packet filter to control ingress and egress traffic for subnets within a VPC. It is recommended that a metric filter and alarm be established for changes made to NACLs.",
           "RationaleStatement": "Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for NACL changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for NACL changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
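The audit procedures in these entries all follow the same chain: list the metric filters on the trail's log group, find the alarm on the filter's metric, then confirm the alarm's SNS topic has at least one confirmed subscriber. A compact sketch of that chain for the NACL case, reusing the hypothetical names from the previous example:

```
LOG_GROUP="CloudTrail/DefaultLogGroup"                                 # assumed; from CloudWatchLogsLogGroupArn
TOPIC_ARN="arn:aws:sns:us-east-1:111122223333:cis-benchmark-alarms"    # assumed; from the alarm's AlarmActions

# 1. Metric filters attached to the log group: look for the NACL change pattern here.
aws logs describe-metric-filters --log-group-name "$LOG_GROUP"

# 2. Alarms wired to the metric name reported by step 1 (the name is hypothetical).
aws cloudwatch describe-alarms --query "MetricAlarms[?MetricName=='NetworkAclChanges']"

# 3. Subscribers on the alarm's topic; at least one SubscriptionArn should be a
#    full ARN rather than "PendingConfirmation".
aws sns list-subscriptions-by-topic --topic-arn "$TOPIC_ARN"
```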
@@ -966,9 +966,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Network gateways are required to send/receive traffic to a destination outside of a VPC. It is recommended that a metric filter and alarm be established for changes to network gateways.",
           "RationaleStatement": "Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for network gateways changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for network gateways changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -987,9 +987,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Routing tables are used to route network traffic between subnets and to network gateways. It is recommended that a metric filter and alarm be established for changes to route tables.",
           "RationaleStatement": "Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for route table changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for route table changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1008,9 +1008,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is possible to have more than 1 VPC within an account, in addition it is also possible to create a peer connection between 2 VPCs enabling network traffic to route between VPCs. It is recommended that a metric filter and alarm be established for changes made to VPCs.",
           "RationaleStatement": "Monitoring changes to VPC will help ensure VPC traffic flow is not getting impacted.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for VPC changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for VPC changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1029,8 +1029,8 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for AWS Organizations changes made in the master AWS Account.",
           "RationaleStatement": "Monitoring AWS Organizations changes can help you prevent any unwanted, accidental or intentional modifications that may lead to unauthorized access or other security breaches. This monitoring technique helps you to ensure that any unexpected changes performed within your AWS Organizations can be investigated and any unwanted changes can be rolled back.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Organizations changes and the `` taken from audit step 1:\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }'\n```\n**Note:** You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify:\n```\naws sns create-topic --name \n```\n**Note:** you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2:\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note:** you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2:\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "1. Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n- Identify the log group name configured for use with active multi-region CloudTrail:\n- List all CloudTrails: \n```\naws cloudtrail describe-trails\n```\n- Identify Multi region Cloudtrails, Trails with `\"IsMultiRegionTrail\"` set to true\n- From value associated with CloudWatchLogsLogGroupArn note \n **Example:** for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup\n\n- Ensure Identified Multi region CloudTrail is active:\n```\naws cloudtrail get-trail-status --name \n```\nEnsure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events:\n```\naws cloudtrail get-event-selectors --trail-name \n```\n- Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to true and `ReadWriteType` set to `All`.\n\n2. Get a list of all associated metric filters for this :\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }\"\n```\n4. Note the `` value associated with the filterPattern found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4:\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the AlarmActions value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic:\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\nExample of valid \"SubscriptionArn\": \n```\n\"arn:aws:sns::::\"\n```",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Organizations changes and the `` taken from audit step 1: ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }' ``` **Note:** You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify: ``` aws sns create-topic --name  ``` **Note:** you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2: ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note:** you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2: ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "1. Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: - Identify the log group name configured for use with active multi-region CloudTrail: - List all CloudTrails:  ``` aws cloudtrail describe-trails ``` - Identify Multi region Cloudtrails, Trails with `\"IsMultiRegionTrail\"` set to true - From value associated with CloudWatchLogsLogGroupArn note   **Example:** for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup  - Ensure Identified Multi region CloudTrail is active: ``` aws cloudtrail get-trail-status --name  ``` Ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events: ``` aws cloudtrail get-event-selectors --trail-name  ``` - Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to true and `ReadWriteType` set to `All`.  2. Get a list of all associated metric filters for this : ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }\" ``` 4. Note the `` value associated with the filterPattern found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4: ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the AlarmActions value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic: ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. Example of valid \"SubscriptionArn\":  ``` \"arn:aws:sns::::\" ```",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/organizations/latest/userguide/orgs_security_incident-response.html"
         }
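The Organizations filter pattern above embeds double-quoted event names, which makes inline shell quoting awkward. One way that should work is the AWS CLI's general `file://` parameter loading, writing the pattern to a file first; the file name, log group, and metric name below are assumptions for illustration:

```
# Write the Organizations filter pattern exactly as printed in the benchmark.
cat > pattern.txt <<'EOF'
{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy") || ($.eventName = "CreateAccount") || ($.eventName = "CreateOrganizationalUnit") || ($.eventName = "CreatePolicy") || ($.eventName = "DeclineHandshake") || ($.eventName = "DeleteOrganization") || ($.eventName = "DeleteOrganizationalUnit") || ($.eventName = "DeletePolicy") || ($.eventName = "DetachPolicy") || ($.eventName = "DisablePolicyType") || ($.eventName = "EnablePolicyType") || ($.eventName = "InviteAccountToOrganization") || ($.eventName = "LeaveOrganization") || ($.eventName = "MoveAccount") || ($.eventName = "RemoveAccountFromOrganization") || ($.eventName = "UpdatePolicy") || ($.eventName = "UpdateOrganizationalUnit")) }
EOF

# Load the pattern from the file instead of quoting it inline.
aws logs put-metric-filter \
  --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name OrganizationsChanges \
  --metric-transformations metricName=OrganizationsChanges,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern file://pattern.txt
```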
@@ -1050,8 +1050,8 @@
           "Description": "Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues. When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie. You can also enable integrations with AWS partner security products.",
           "RationaleStatement": "AWS Security Hub provides you with a comprehensive view of your security state in AWS and helps you check your environment against security industry standards and best practices - enabling you to quickly assess the security posture across your AWS accounts.",
           "ImpactStatement": "It is recommended AWS Security Hub be enabled in all regions. AWS Security Hub requires AWS Config to be enabled.",
-          "RemediationProcedure": "To grant the permissions required to enable Security Hub, attach the Security Hub managed policy AWSSecurityHubFullAccess to an IAM user, group, or role.\n\nEnabling Security Hub\n\n**From Console:**\n\n1. Use the credentials of the IAM identity to sign in to the Security Hub console.\n2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub.\n3. On the welcome page, Security standards list the security standards that Security Hub supports.\n4. Choose Enable Security Hub.\n\n**From Command Line:**\n\n1. Run the enable-security-hub command. To enable the default standards, include `--enable-default-standards`.\n```\naws securityhub enable-security-hub --enable-default-standards\n```\n\n2. To enable the security hub without the default standards, include `--no-enable-default-standards`.\n```\naws securityhub enable-security-hub --no-enable-default-standards\n```",
-          "AuditProcedure": "The process to evaluate AWS Security Hub configuration per region \n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the AWS Security Hub console at https://console.aws.amazon.com/securityhub/.\n2. On the top right of the console, select the target Region.\n3. If presented with the Security Hub > Summary page then Security Hub is set-up for the selected region.\n4. If presented with Setup Security Hub or Get Started With Security Hub - follow the online instructions.\n5. Repeat steps 2 to 4 for each region.",
+          "RemediationProcedure": "To grant the permissions required to enable Security Hub, attach the Security Hub managed policy AWSSecurityHubFullAccess to an IAM user, group, or role.  Enabling Security Hub  **From Console:**  1. Use the credentials of the IAM identity to sign in to the Security Hub console. 2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub. 3. On the welcome page, Security standards list the security standards that Security Hub supports. 4. Choose Enable Security Hub.  **From Command Line:**  1. Run the enable-security-hub command. To enable the default standards, include `--enable-default-standards`. ``` aws securityhub enable-security-hub --enable-default-standards ```  2. To enable the security hub without the default standards, include `--no-enable-default-standards`. ``` aws securityhub enable-security-hub --no-enable-default-standards ```",
+          "AuditProcedure": "The process to evaluate AWS Security Hub configuration per region   **From Console:**  1. Sign in to the AWS Management Console and open the AWS Security Hub console at https://console.aws.amazon.com/securityhub/. 2. On the top right of the console, select the target Region. 3. If presented with the Security Hub > Summary page then Security Hub is set-up for the selected region. 4. If presented with Setup Security Hub or Get Started With Security Hub - follow the online instructions. 5. Repeat steps 2 to 4 for each region.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-get-started.html:https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-enable.html#securityhub-enable-api:https://awscli.amazonaws.com/v2/documentation/api/latest/reference/securityhub/enable-security-hub.html"
         }
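The audit procedure above is console-only. For a scripted spot check, `aws securityhub describe-hub` should succeed (returning the hub ARN) only in regions where Security Hub is enabled; the region list below is an assumption to adapt to the account:

```
# Regions to check; adjust to the regions the account actually uses.
for region in us-east-1 us-west-2 eu-west-1; do
  if aws securityhub describe-hub --region "$region" >/dev/null 2>&1; then
    echo "$region: Security Hub enabled"
  else
    echo "$region: Security Hub NOT enabled"
  fi
done
```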
@@ -1071,9 +1071,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for console logins that are not protected by multi-factor authentication (MFA).",
           "RationaleStatement": "Monitoring for single-factor console logins will increase visibility into accounts that are not protected by MFA.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Management Console sign-in without MFA and the `` taken from audit step 1.\n\nUse Command: \n\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }'\n```\n\nOr (To reduce false positives incase Single Sign-On (SSO) is used in organization):\n\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all `CloudTrails`:\n\n```\naws cloudtrail describe-trails\n```\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region `CloudTrail` is active\n\n```\naws cloudtrail get-trail-status --name \n```\n\nEnsure in the output that `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region 'Cloudtrail' captures all Management Events\n\n```\naws cloudtrail get-event-selectors --trail-name \n```\n\nEnsure in the output there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }\"\n```\n\nOr (To reduce false positives incase Single Sign-On (SSO) is used in organization):\n\n```\n\"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored\n-Filter pattern set to `{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\"}` reduces false alarms raised when user logs in via SSO account.",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Management Console sign-in without MFA and the `` taken from audit step 1.  Use Command:   ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }' ```  Or (To reduce false positives incase Single Sign-On (SSO) is used in organization):  ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all `CloudTrails`:  ``` aws cloudtrail describe-trails ```  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region `CloudTrail` is active  ``` aws cloudtrail get-trail-status --name  ```  Ensure in the output that `IsLogging` is set to `TRUE`  - Ensure identified Multi-region 'Cloudtrail' captures all Management Events  ``` aws cloudtrail get-event-selectors --trail-name  ```  Ensure in the output there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }\" ```  Or (To reduce false positives incase Single Sign-On (SSO) is used in organization):  ``` \"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored -Filter pattern set to `{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\"}` reduces false alarms raised when user logs in via SSO account.",
           "References": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/viewing_metrics_with_cloudwatch.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
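The remediation commands above intentionally leave the log group, filter, metric, topic, and endpoint values blank. A minimal end-to-end sketch with those blanks filled in by illustrative names (none of them are prescribed by the benchmark), assuming a CloudWatch Logs group that already receives the multi-region trail:

```
# Illustrative names - substitute the values used in your own account.
LOG_GROUP="CloudTrail/DefaultLogGroup"

# SNS topic and e-mail subscription; the subscription must be confirmed from the mailbox.
TOPIC_ARN=$(aws sns create-topic --name cis-monitoring-alerts --query TopicArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol email --notification-endpoint ops@example.com

# Metric filter for console sign-ins that did not use MFA.
aws logs put-metric-filter \
  --log-group-name "$LOG_GROUP" \
  --filter-name ConsoleSigninWithoutMFA \
  --metric-transformations metricName=ConsoleSigninWithoutMFA,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }'

# Alarm on one or more occurrences within a 5-minute period.
aws cloudwatch put-metric-alarm \
  --alarm-name cis-console-signin-without-mfa \
  --metric-name ConsoleSigninWithoutMFA --namespace CISBenchmark \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 \
  --alarm-actions "$TOPIC_ARN"
```

Per the notes in the procedure, the same topic and subscription can be reused for the remaining monitoring controls in this section; only the filter pattern, filter name, metric name, and alarm name change.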
@@ -1092,9 +1092,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for 'root' login attempts.",
           "RationaleStatement": "Monitoring for 'root' account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce the use of it.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for 'Root' account usage and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails:\n\n`aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "**Configuring log metric filter and alarm on Multi-region (global) CloudTrail**\n\n- ensures that activities from all regions (used as well as unused) are monitored\n\n- ensures that activities on all supported global services are monitored\n\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for 'Root' account usage and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails:  `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "**Configuring log metric filter and alarm on Multi-region (global) CloudTrail**  - ensures that activities from all regions (used as well as unused) are monitored  - ensures that activities on all supported global services are monitored  - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
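Audit step 1 is identical for every control in this section and can be scripted. A rough sketch, assuming exactly one multi-region trail exists; the JMESPath expressions are illustrative:

```
# Name and CloudWatch Logs group of the (assumed single) multi-region trail.
TRAIL=$(aws cloudtrail describe-trails \
  --query 'trailList[?IsMultiRegionTrail==`true`].Name | [0]' --output text)
# The log group name is the token after "log-group:" in the returned ARN.
aws cloudtrail describe-trails \
  --query 'trailList[?IsMultiRegionTrail==`true`].CloudWatchLogsLogGroupArn | [0]' --output text

# The trail must be logging and must capture all management events (ReadWriteType = All).
aws cloudtrail get-trail-status --name "$TRAIL" --query IsLogging
aws cloudtrail get-event-selectors --trail-name "$TRAIL" \
  --query 'EventSelectors[].{ReadWrite:ReadWriteType,Management:IncludeManagementEvents}'
```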
@@ -1113,9 +1113,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established changes made to Identity and Access Management (IAM) policies.",
           "RationaleStatement": "Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for IAM policy changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails:\n\n`aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for IAM policy changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails:  `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
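Audit steps 2 and 3 amount to confirming that the prescribed filter pattern is attached to the trail's log group. A small sketch, with an illustrative log group name; the grep token only needs to match one of the IAM event names in the pattern:

```
LOG_GROUP="CloudTrail/DefaultLogGroup"   # illustrative - use the value found in audit step 1

# List every filter pattern on the log group and check that the IAM policy events are covered.
aws logs describe-metric-filters --log-group-name "$LOG_GROUP" \
  --query 'metricFilters[].filterPattern' --output json | grep -c 'AttachRolePolicy'
```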
@@ -1134,9 +1134,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations.",
           "RationaleStatement": "Monitoring changes to CloudTrail's configuration will help ensure sustained visibility to activities performed in the AWS account.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for cloudtrail configuration changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for cloudtrail configuration changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
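Audit steps 5 to 7 (the alarm and subscriber checks) are also the same for every filter in this section. A sketch, where the metric name is an illustrative value chosen during remediation:

```
METRIC="CloudTrailCfgChanges"   # illustrative metric name from the remediation step

# Find the alarm watching the metric and the SNS topic it notifies.
TOPIC_ARN=$(aws cloudwatch describe-alarms \
  --query "MetricAlarms[?MetricName=='$METRIC'].AlarmActions | [0] | [0]" --output text)

# At least one subscription should show a full ARN rather than "PendingConfirmation".
aws sns list-subscriptions-by-topic --topic-arn "$TOPIC_ARN" \
  --query 'Subscriptions[].SubscriptionArn'
```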
@@ -1155,9 +1155,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for failed console authentication attempts.",
           "RationaleStatement": "Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP, that can be used in other event correlation.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS management Console Login Failures and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }'\n```\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS management Console Login Failures and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }' ``` **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
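To verify end to end that a failed-login alarm would actually reach a subscriber, the alarm state can be forced temporarily. This step is not part of the benchmark procedure and it does send a real notification to the topic; the alarm name below is illustrative:

```
# Force the alarm into ALARM state to exercise the SNS notification path.
aws cloudwatch set-alarm-state \
  --alarm-name cis-console-signin-failures \
  --state-value ALARM \
  --state-reason "Testing CIS monitoring notification path"
# The alarm returns to its computed state at the next evaluation period.
```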
@@ -1176,9 +1176,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for customer created CMKs which have changed state to disabled or scheduled deletion.",
           "RationaleStatement": "Data encrypted with disabled or deleted keys will no longer be accessible.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for disabled or scheduled for deletion CMK's and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }'\n```\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for disabled or scheduled for deletion CMK's and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }' ``` **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
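Because the SNS topic and subscription from the earlier controls can be reused, each additional control only needs its own metric filter and alarm. A sketch for the CMK pattern above, with illustrative names and a placeholder topic ARN standing in for the one returned by the earlier create-topic call:

```
LOG_GROUP="CloudTrail/DefaultLogGroup"                                  # illustrative
TOPIC_ARN="arn:aws:sns:us-east-1:111122223333:cis-monitoring-alerts"    # illustrative / from earlier

aws logs put-metric-filter \
  --log-group-name "$LOG_GROUP" \
  --filter-name DisableOrDeleteCMK \
  --metric-transformations metricName=DisableOrDeleteCMK,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }'

aws cloudwatch put-metric-alarm \
  --alarm-name cis-disable-or-delete-cmk \
  --metric-name DisableOrDeleteCMK --namespace CISBenchmark \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 \
  --alarm-actions "$TOPIC_ARN"
```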
@@ -1197,9 +1197,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies.",
           "RationaleStatement": "Monitoring changes to S3 bucket policies may reduce time to detect and correct permissive policies on sensitive S3 buckets.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for S3 bucket policy changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for S3 bucket policy changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1218,9 +1218,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations.",
           "RationaleStatement": "Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Configuration changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Configuration changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1241,8 +1241,8 @@
           "Description": "The Network Access Control List (NACL) function provide stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH to port `22` and RDP to port `3389`.",
           "RationaleStatement": "Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following:\n1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home\n2. In the left pane, click `Network ACLs`\n3. For each network ACL to remediate, perform the following:\n - Select the network ACL\n - Click the `Inbound Rules` tab\n - Click `Edit inbound rules`\n - Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule\n - Click `Save`",
-          "AuditProcedure": "**From Console:**\n\nPerform the following to determine if the account is configured as prescribed:\n1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home\n2. In the left pane, click `Network ACLs`\n3. For each network ACL, perform the following:\n - Select the network ACL\n - Click the `Inbound Rules` tab\n - Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` and shows `ALLOW`\n\n**Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports",
+          "RemediationProcedure": "**From Console:**  Perform the following: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click `Network ACLs` 3. For each network ACL to remediate, perform the following:  - Select the network ACL  - Click the `Inbound Rules` tab  - Click `Edit inbound rules`  - Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule  - Click `Save`",
+          "AuditProcedure": "**From Console:**  Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click `Network ACLs` 3. For each network ACL, perform the following:  - Select the network ACL  - Click the `Inbound Rules` tab  - Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` and shows `ALLOW`  **Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html:https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html#VPC_Security_Comparison"
         }
@@ -1264,8 +1264,8 @@
           "Description": "Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port `22` and RDP to port `3389`.",
           "RationaleStatement": "Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.",
           "ImpactStatement": "When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the 0.0.0.0/0 inbound rule.",
-          "RemediationProcedure": "Perform the following to implement the prescribed state:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Click the `Edit inbound rules` button\n4. Identify the rules to be edited or removed\n5. Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule\n6. Click `Save rules`",
-          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` \n\n**Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
+          "RemediationProcedure": "Perform the following to implement the prescribed state:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Click the `Edit inbound rules` button 4. Identify the rules to be edited or removed 5. Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule 6. Click `Save rules`",
+          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0`   **Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#deleting-security-group-rule"
         }
@@ -1287,8 +1287,8 @@
           "Description": "Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port `22` and RDP to port `3389`.",
           "RationaleStatement": "Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.",
           "ImpactStatement": "When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the ::/0 inbound rule.",
-          "RemediationProcedure": "Perform the following to implement the prescribed state:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Click the `Edit inbound rules` button\n4. Identify the rules to be edited or removed\n5. Either A) update the Source field to a range other than ::/0, or, B) Click `Delete` to remove the offending inbound rule\n6. Click `Save rules`",
-          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `::/0` \n\n**Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
+          "RemediationProcedure": "Perform the following to implement the prescribed state:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Click the `Edit inbound rules` button 4. Identify the rules to be edited or removed 5. Either A) update the Source field to a range other than ::/0, or, B) Click `Delete` to remove the offending inbound rule 6. Click `Save rules`",
+          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `::/0`   **Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#deleting-security-group-rule"
         }
@@ -1305,11 +1305,11 @@
           "Section": "5. Networking",
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
-          "Description": "A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic.\n\nThe default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation.\n\n**NOTE:** When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups.",
+          "Description": "A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic.  The default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation.  **NOTE:** When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups.",
           "RationaleStatement": "Configuring all VPC default security groups to restrict all traffic will encourage least privilege security group development and mindful placement of AWS resources into security groups which will in-turn reduce the exposure of those resources.",
           "ImpactStatement": "Implementing this recommendation in an existing VPC containing operating resources requires extremely careful migration planning as the default security groups are likely to be enabling many ports that are unknown. Enabling VPC flow logging (of accepts) in an existing environment that is known to be breach free will reveal the current pattern of ports being used for each instance to communicate successfully.",
-          "RemediationProcedure": "Security Group Members\n\nPerform the following to implement the prescribed state:\n\n1. Identify AWS resources that exist within the default security group\n2. Create a set of least privilege security groups for those resources\n3. Place the resources in those security groups\n4. Remove the resources noted in #1 from the default security group\n\nSecurity Group State\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. For each default security group, perform the following:\n1. Select the `default` security group\n2. Click the `Inbound Rules` tab\n3. Remove any inbound rules\n4. Click the `Outbound Rules` tab\n5. Remove any Outbound rules\n\nRecommended:\n\nIAM groups allow you to edit the \"name\" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to \"DO NOT USE. DO NOT ADD RULES\"",
-          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:\n\nSecurity Group State\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. For each default security group, perform the following:\n1. Select the `default` security group\n2. Click the `Inbound Rules` tab\n3. Ensure no rule exist\n4. Click the `Outbound Rules` tab\n5. Ensure no rules exist\n\nSecurity Group Members\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. Copy the id of the default security group.\n5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home\n6. In the filter column type 'Security Group ID : < security group id from #4 >'",
+          "RemediationProcedure": "Security Group Members  Perform the following to implement the prescribed state:  1. Identify AWS resources that exist within the default security group 2. Create a set of least privilege security groups for those resources 3. Place the resources in those security groups 4. Remove the resources noted in #1 from the default security group  Security Group State  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. For each default security group, perform the following: 1. Select the `default` security group 2. Click the `Inbound Rules` tab 3. Remove any inbound rules 4. Click the `Outbound Rules` tab 5. Remove any Outbound rules  Recommended:  IAM groups allow you to edit the \"name\" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to \"DO NOT USE. DO NOT ADD RULES\"",
+          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:  Security Group State  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. For each default security group, perform the following: 1. Select the `default` security group 2. Click the `Inbound Rules` tab 3. Ensure no rule exist 4. Click the `Outbound Rules` tab 5. Ensure no rules exist  Security Group Members  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. Copy the id of the default security group. 5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home 6. In the filter column type 'Security Group ID : < security group id from #4 >'",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#default-security-group"
         }
@@ -1329,8 +1329,8 @@
           "Description": "Once a VPC peering connection is established, routing tables must be updated to establish any connections between the peered VPCs. These routes can be as specific as desired - even peering a VPC to only a single host on the other side of the connection.",
           "RationaleStatement": "Being highly selective in peering routing tables is a very effective way of minimizing the impact of breach as resources outside of these routes are inaccessible to the peered VPC.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Remove and add route table entries to ensure that the least number of subnets or hosts as is required to accomplish the purpose for peering are routable.\n\n**From Command Line:**\n\n1. For each __ containing routes non compliant with your routing policy (which grants more than desired \"least access\"), delete the non compliant route:\n```\naws ec2 delete-route --route-table-id  --destination-cidr-block \n```\n 2. Create a new compliant route:\n```\naws ec2 create-route --route-table-id  --destination-cidr-block  --vpc-peering-connection-id \n```",
-          "AuditProcedure": "Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs.\n\n**From Command Line:**\n\n1. List all the route tables from a VPC and check if \"GatewayId\" is pointing to a __ (e.g. pcx-1a2b3c4d) and if \"DestinationCidrBlock\" is as specific as desired.\n```\naws ec2 describe-route-tables --filter \"Name=vpc-id,Values=\" --query \"RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}\"\n```",
+          "RemediationProcedure": "Remove and add route table entries to ensure that the least number of subnets or hosts as is required to accomplish the purpose for peering are routable.  **From Command Line:**  1. For each __ containing routes non compliant with your routing policy (which grants more than desired \"least access\"), delete the non compliant route: ``` aws ec2 delete-route --route-table-id  --destination-cidr-block  ```  2. Create a new compliant route: ``` aws ec2 create-route --route-table-id  --destination-cidr-block  --vpc-peering-connection-id  ```",
+          "AuditProcedure": "Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs.  **From Command Line:**  1. List all the route tables from a VPC and check if \"GatewayId\" is pointing to a __ (e.g. pcx-1a2b3c4d) and if \"DestinationCidrBlock\" is as specific as desired. ``` aws ec2 describe-route-tables --filter \"Name=vpc-id,Values=\" --query \"RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}\" ```",
           "AdditionalInformation": "If an organization has AWS transit gateway implemented in their VPC architecture they should look to apply the recommendation above for \"least access\" routing architecture at the AWS transit gateway level in combination with what must be implemented at the standard VPC route table. More specifically, to route traffic between two or more VPCs via a transit gateway VPCs must have an attachment to a transit gateway route table as well as a route, therefore to avoid routing traffic between VPCs an attachment to the transit gateway route table should only be added where there is an intention to route traffic between the VPCs. As transit gateways are able to host multiple route tables it is possible to group VPCs by attaching them to a common route table.",
           "References": "https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-configurations-partial-access.html:https://docs.aws.amazon.com/cli/latest/reference/ec2/create-vpc-peering-connection.html"
         }
diff --git a/prowler/compliance/aws/cis_2.0_aws.json b/prowler/compliance/aws/cis_2.0_aws.json
index 8683bc0a..6ea5cdad 100644
--- a/prowler/compliance/aws/cis_2.0_aws.json
+++ b/prowler/compliance/aws/cis_2.0_aws.json
@@ -15,11 +15,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Manual",
-          "Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization.\n\nAn AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy or indicative of likely security compromise is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system.",
+          "Description": "Ensure contact email and telephone details for AWS accounts are current and map to more than one individual in your organization.  An AWS account supports a number of contact details, and AWS will use these to contact the account owner if activity judged to be in breach of Acceptable Use Policy or indicative of likely security compromise is observed by the AWS Abuse team. Contact details should not be for a single individual, as circumstances may arise where that individual is unavailable. Email contact details should point to a mail alias which forwards email to multiple individuals within the organization; where feasible, phone contact details should point to a PABX hunt group or other call-forwarding system.",
           "RationaleStatement": "If an AWS account is observed to be behaving in a prohibited or suspicious manner, AWS will attempt to contact the account owner by email and phone using the contact details listed. If this is unsuccessful and the account behavior needs urgent mitigation, proactive measures may be taken, including throttling of traffic between the account exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will result in impaired service to and from the account in question, so it is in both the customers' and AWS' best interests that prompt contact can be established. This is best achieved by setting AWS account contact details to point to resources which have multiple individuals as recipients, such as email aliases and PABX hunt groups.",
           "ImpactStatement": "",
-          "RemediationProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing ).\n\n1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/.\n2. On the navigation bar, choose your account name, and then choose `My Account`.\n3. On the `Account Settings` page, next to `Account Settings`, choose `Edit`.\n4. Next to the field that you need to update, choose `Edit`.\n5. After you have entered your changes, choose `Save changes`.\n6. After you have made your changes, choose `Done`.\n7. To edit your contact information, under `Contact Information`, choose `Edit`.\n8. For the fields that you want to change, type your updated information, and then choose `Update`.",
-          "AuditProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing )\n\n1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/.\n2. On the navigation bar, choose your account name, and then choose `My Account`.\n3. On the `Account Settings` page, review and verify the current details.\n4. Under `Contact Information`, review and verify the current details.",
+          "RemediationProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing ).  1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose `My Account`. 3. On the `Account Settings` page, next to `Account Settings`, choose `Edit`. 4. Next to the field that you need to update, choose `Edit`. 5. After you have entered your changes, choose `Save changes`. 6. After you have made your changes, choose `Done`. 7. To edit your contact information, under `Contact Information`, choose `Edit`. 8. For the fields that you want to change, type your updated information, and then choose `Update`.",
+          "AuditProcedure": "This activity can only be performed via the AWS Console, with a user who has permission to read and write Billing information (aws-portal:\\*Billing )  1. Sign in to the AWS Management Console and open the `Billing and Cost Management` console at https://console.aws.amazon.com/billing/home#/. 2. On the navigation bar, choose your account name, and then choose `My Account`. 3. On the `Account Settings` page, review and verify the current details. 4. Under `Contact Information`, review and verify the current details.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-account-payment.html#contact-info"
         }
@@ -39,9 +39,9 @@
           "Description": "Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance beyond traditional credentials. With MFA enabled, when a user signs in to the AWS Console, they will be prompted for their user name and password as well as for an authentication code from their physical or virtual MFA token. It is recommended that MFA be enabled for all accounts that have a console password.",
           "RationaleStatement": "Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that displays a time-sensitive key and have knowledge of a credential.",
           "ImpactStatement": "AWS will soon end support for SMS multi-factor authentication (MFA). New customers are not allowed to use this feature. We recommend that existing customers switch to one of the following alternative methods of MFA.",
-          "RemediationProcedure": "Perform the following to enable MFA:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/'\n2. In the left pane, select `Users`.\n3. In the `User Name` list, choose the name of the intended MFA user.\n4. Choose the `Security Credentials` tab, and then choose `Manage MFA Device`.\n5. In the `Manage MFA Device wizard`, choose `Virtual MFA` device, and then choose `Continue`.\n\n IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.\n\n6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).\n7. Determine whether the MFA app supports QR codes, and then do one of the following:\n\n - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.\n - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.\n\n When you are finished, the virtual MFA device starts generating one-time passwords.\n\n8. In the `Manage MFA Device wizard`, in the `MFA Code 1 box`, type the `one-time password` that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second `one-time password` into the `MFA Code 2 box`.\n\n9. Click `Assign MFA`.",
-          "AuditProcedure": "Perform the following to determine if a MFA device is enabled for all IAM users having a console password:\n\n**From Console:**\n\n1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the left pane, select `Users` \n3. If the `MFA` or `Password age` columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click `Close`.\n4. Ensure that for each user where the `Password age` column shows a password age, the `MFA` column shows `Virtual`, `U2F Security Key`, or `Hardware`.\n\n**From Command Line:**\n\n1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status:\n```\n aws iam generate-credential-report\n```\n```\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8 \n```\n2. The output of this command will produce a table similar to the following:\n```\n user,password_enabled,mfa_active\n elise,false,false\n brandon,true,true\n rakesh,false,false\n helene,false,false\n paras,true,true\n anitha,false,false \n```\n3. For any column having `password_enabled` set to `true` , ensure `mfa_active` is also set to `true.`",
-          "AdditionalInformation": "**Forced IAM User Self-Service Remediation**\n\nAmazon has published a pattern that forces users to self-service setup MFA before they have access to their complete permissions set. Until they complete this step, they cannot access their full permissions. This pattern can be used on new AWS accounts. It can also be used on existing accounts - it is recommended users are given instructions and a grace period to accomplish MFA enrollment before active enforcement on existing AWS accounts.",
+          "RemediationProcedure": "Perform the following to enable MFA:  **From Console:**  1. Sign in to the AWS Management Console and open the IAM console at 'https://console.aws.amazon.com/iam/' 2. In the left pane, select `Users`. 3. In the `User Name` list, choose the name of the intended MFA user. 4. Choose the `Security Credentials` tab, and then choose `Manage MFA Device`. 5. In the `Manage MFA Device wizard`, choose `Virtual MFA` device, and then choose `Continue`.   IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.  6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see Virtual MFA Applications at https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following:   - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.  - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.   When you are finished, the virtual MFA device starts generating one-time passwords.  8. In the `Manage MFA Device wizard`, in the `MFA Code 1 box`, type the `one-time password` that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second `one-time password` into the `MFA Code 2 box`.  9. Click `Assign MFA`.",
+          "AuditProcedure": "Perform the following to determine if a MFA device is enabled for all IAM users having a console password:  **From Console:**  1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the left pane, select `Users`  3. If the `MFA` or `Password age` columns are not visible in the table, click the gear icon at the upper right corner of the table and ensure a checkmark is next to both, then click `Close`. 4. Ensure that for each user where the `Password age` column shows a password age, the `MFA` column shows `Virtual`, `U2F Security Key`, or `Hardware`.  **From Command Line:**  1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their password and MFA status: ```  aws iam generate-credential-report ``` ```  aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,8  ``` 2. The output of this command will produce a table similar to the following: ```  user,password_enabled,mfa_active  elise,false,false  brandon,true,true  rakesh,false,false  helene,false,false  paras,true,true  anitha,false,false  ``` 3. For any column having `password_enabled` set to `true` , ensure `mfa_active` is also set to `true.`",
+          "AdditionalInformation": "**Forced IAM User Self-Service Remediation**  Amazon has published a pattern that forces users to self-service setup MFA before they have access to their complete permissions set. Until they complete this step, they cannot access their full permissions. This pattern can be used on new AWS accounts. It can also be used on existing accounts - it is recommended users are given instructions and a grace period to accomplish MFA enrollment before active enforcement on existing AWS accounts.",
           "References": "https://tools.ietf.org/html/rfc6238:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#enable-mfa-for-privileged-users:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html:https://blogs.aws.amazon.com/security/post/Tx2SJJYE082KBUK/How-to-Delegate-Management-of-Multi-Factor-Authentication-to-AWS-IAM-Users"
         }
       ]
@@ -57,11 +57,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "AWS console defaults to no check boxes selected when creating a new IAM user. When cerating the IAM User credentials you have to determine what type of access they require. \n\nProgrammatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user. \n\nAWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.",
-          "RationaleStatement": "Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization.\n\n**Note**: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation.",
+          "Description": "AWS console defaults to no check boxes selected when creating a new IAM user. When cerating the IAM User credentials you have to determine what type of access they require.   Programmatic access: The IAM user might need to make API calls, use the AWS CLI, or use the Tools for Windows PowerShell. In that case, create an access key (access key ID and a secret access key) for that user.   AWS Management Console access: If the user needs to access the AWS Management Console, create a password for the user.",
+          "RationaleStatement": "Requiring the additional steps be taken by the user for programmatic access after their profile has been created will give a stronger indication of intent that access keys are [a] necessary for their work and [b] once the access key is established on an account that the keys may be in use somewhere in the organization.  **Note**: Even if it is known the user will need access keys, require them to create the keys themselves or put in a support ticket to have them created as a separate step from user creation.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to delete access keys that do not pass the audit:\n\n**From Console:**\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. As an Administrator \n - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used.\n7. As an IAM User\n - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used.\n\n**From Command Line:**\n```\naws iam delete-access-key --access-key-id  --user-name \n```",
-          "AuditProcedure": "Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on a User where column `Password age` and `Access key age` is not set to `None`\n5. Click on `Security credentials` Tab\n6. Compare the user 'Creation time` to the Access Key `Created` date.\n6. For any that match, the key was created during initial user setup.\n\n- Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below.\n\n**From Command Line:**\n\n1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access keys utilization:\n```\n aws iam generate-credential-report\n```\n```\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16\n```\n2. The output of this command will produce a table similar to the following:\n```\nuser,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date\n elise,false,true,2015-04-16T15:14:00+00:00,false,N/A\n brandon,true,true,N/A,false,N/A\n rakesh,false,false,N/A,false,N/A\n helene,false,true,2015-11-18T17:47:00+00:00,false,N/A\n paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00\n anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A \n```\n3. For any user having `password_enabled` set to `true` AND `access_key_last_used_date` set to `N/A` refer to the remediation below.",
+          "RemediationProcedure": "Perform the following to delete access keys that do not pass the audit:  **From Console:**  1. Login to the AWS Management Console: 2. Click `Services`  3. Click `IAM`  4. Click on `Users`  5. Click on `Security Credentials`  6. As an Administrator   - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used. 7. As an IAM User  - Click on the X `(Delete)` for keys that were created at the same time as the user profile but have not been used.  **From Command Line:** ``` aws iam delete-access-key --access-key-id  --user-name  ```",
+          "AuditProcedure": "Perform the following to determine if access keys were created upon user creation and are being used and rotated as prescribed:  **From Console:**  1. Login to the AWS Management Console 2. Click `Services`  3. Click `IAM`  4. Click on a User where column `Password age` and `Access key age` is not set to `None` 5. Click on `Security credentials` Tab 6. Compare the user 'Creation time` to the Access Key `Created` date. 6. For any that match, the key was created during initial user setup.  - Keys that were created at the same time as the user profile and do not have a last used date should be deleted. Refer to the remediation below.  **From Command Line:**  1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users along with their access keys utilization: ```  aws iam generate-credential-report ``` ```  aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,9,11,14,16 ``` 2. The output of this command will produce a table similar to the following: ``` user,password_enabled,access_key_1_active,access_key_1_last_used_date,access_key_2_active,access_key_2_last_used_date  elise,false,true,2015-04-16T15:14:00+00:00,false,N/A  brandon,true,true,N/A,false,N/A  rakesh,false,false,N/A,false,N/A  helene,false,true,2015-11-18T17:47:00+00:00,false,N/A  paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00  anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A  ``` 3. For any user having `password_enabled` set to `true` AND `access_key_last_used_date` set to `N/A` refer to the remediation below.",
           "AdditionalInformation": "Credential report does not appear to contain \"Key Creation Date\"",
           "References": "https://docs.aws.amazon.com/cli/latest/reference/iam/delete-access-key.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html"
         }
@@ -82,8 +82,8 @@
           "Description": "AWS IAM users can access AWS resources using different types of credentials, such as passwords or access keys. It is recommended that all credentials that have been unused in 45 or greater days be deactivated or removed.",
           "RationaleStatement": "Disabling or removing unnecessary credentials will reduce the window of opportunity for credentials associated with a compromised or abandoned account to be used.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following to manage Unused Password (IAM user console access)\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. Select user whose `Console last sign-in` is greater than 45 days\n7. Click `Security credentials`\n8. In section `Sign-in credentials`, `Console password` click `Manage` \n9. Under Console Access select `Disable`\n10.Click `Apply`\n\nPerform the following to deactivate Access Keys:\n\n1. Login to the AWS Management Console:\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Users` \n5. Click on `Security Credentials` \n6. Select any access keys that are over 45 days old and that have been used and \n - Click on `Make Inactive`\n7. Select any access keys that are over 45 days old and that have not been used and \n - Click the X to `Delete`",
-          "AuditProcedure": "Perform the following to determine if unused credentials exist:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM`\n4. Click on `Users`\n5. Click the `Settings` (gear) icon.\n6. Select `Console last sign-in`, `Access key last used`, and `Access Key Id`\n7. Click on `Close` \n8. Check and ensure that `Console last sign-in` is less than 45 days ago.\n\n**Note** - `Never` means the user has never logged in.\n\n9. Check and ensure that `Access key age` is less than 45 days and that `Access key last used` does not say `None`\n\nIf the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old refer to the remediation.\n\n**From Command Line:**\n\n**Download Credential Report:**\n\n1. Run the following commands:\n```\n aws iam generate-credential-report\n\n aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^'\n```\n\n**Ensure unused credentials do not exist:**\n\n2. For each user having `password_enabled` set to `TRUE` , ensure `password_last_used_date` is less than `45` days ago.\n\n- When `password_enabled` is set to `TRUE` and `password_last_used` is set to `No_Information` , ensure `password_last_changed` is less than 45 days ago.\n\n3. For each user having an `access_key_1_active` or `access_key_2_active` to `TRUE` , ensure the corresponding `access_key_n_last_used_date` is less than `45` days ago.\n\n- When a user having an `access_key_x_active` (where x is 1 or 2) to `TRUE` and corresponding access_key_x_last_used_date is set to `N/A', ensure `access_key_x_last_rotated` is less than 45 days ago.",
+          "RemediationProcedure": "**From Console:**  Perform the following to manage Unused Password (IAM user console access)  1. Login to the AWS Management Console: 2. Click `Services`  3. Click `IAM`  4. Click on `Users`  5. Click on `Security Credentials`  6. Select the user whose `Console last sign-in` is greater than 45 days 7. Click `Security credentials` 8. In section `Sign-in credentials`, `Console password` click `Manage`  9. Under Console Access select `Disable` 10. Click `Apply`  Perform the following to deactivate Access Keys:  1. Login to the AWS Management Console: 2. Click `Services`  3. Click `IAM`  4. Click on `Users`  5. Click on `Security Credentials`  6. Select any access keys that are over 45 days old and that have been used and   - Click on `Make Inactive` 7. Select any access keys that are over 45 days old and that have not been used and   - Click the X to `Delete`",
+          "AuditProcedure": "Perform the following to determine if unused credentials exist:  **From Console:**  1. Login to the AWS Management Console 2. Click `Services`  3. Click `IAM` 4. Click on `Users` 5. Click the `Settings` (gear) icon. 6. Select `Console last sign-in`, `Access key last used`, and `Access Key Id` 7. Click on `Close`  8. Check and ensure that `Console last sign-in` is less than 45 days ago.  **Note** - `Never` means the user has never logged in.  9. Check and ensure that `Access key age` is less than 45 days and that `Access key last used` does not say `None`  If the user hasn't signed into the Console in the last 45 days or Access keys are over 45 days old, refer to the remediation.  **From Command Line:**  **Download Credential Report:**  1. Run the following commands: ```  aws iam generate-credential-report   aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,4,5,6,9,10,11,14,15,16 | grep -v '^' ```  **Ensure unused credentials do not exist:**  2. For each user having `password_enabled` set to `TRUE` , ensure `password_last_used_date` is less than `45` days ago.  - When `password_enabled` is set to `TRUE` and `password_last_used` is set to `No_Information` , ensure `password_last_changed` is less than 45 days ago.  3. For each user having `access_key_1_active` or `access_key_2_active` set to `TRUE` , ensure the corresponding `access_key_n_last_used_date` is less than `45` days ago.  - When a user has `access_key_x_active` (where x is 1 or 2) set to `TRUE` and the corresponding `access_key_x_last_used_date` is set to `N/A`, ensure `access_key_x_last_rotated` is less than 45 days ago.",
           "AdditionalInformation": " is excluded in the audit since the root account should not be used for day to day business and would likely be unused for more than 45 days.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_admin-change-user.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html"
         }
@@ -103,8 +103,8 @@
           "Description": "Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK)",
           "RationaleStatement": "Access keys are long-term credentials for an IAM user or the AWS account 'root' user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API. One of the best ways to protect your account is to not allow users to have multiple access keys.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`.\n2. In the left navigation panel, choose `Users`.\n3. Click on the IAM user name that you want to examine.\n4. On the IAM user configuration page, select `Security Credentials` tab.\n5. In `Access Keys` section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.\n6. In the same `Access Keys` section, identify your non-operational access keys (other than the chosen one) and deactivate it by clicking the `Make Inactive` link.\n7. If you receive the `Change Key Status` confirmation box, click `Deactivate` to switch off the selected key.\n8. Repeat steps no. 3 – 7 for each IAM user in your AWS account.\n\n**From Command Line:**\n\n1. Using the IAM user and access key information provided in the `Audit CLI`, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.\n\n2. Run the `update-access-key` command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user\n\n**Note** - the command does not return any output:\n```\naws iam update-access-key --access-key-id  --status Inactive --user-name \n```\n3. To confirm that the selected access key pair has been successfully `deactivated` run the `list-access-keys` audit command again for that IAM User:\n```\naws iam list-access-keys --user-name \n```\n- The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) `Status` is set to `Inactive`, the key has been successfully deactivated and the IAM user access configuration adheres now to this recommendation.\n\n4. Repeat steps no. 1 – 3 for each IAM user in your AWS account.",
-          "AuditProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`.\n2. In the left navigation panel, choose `Users`.\n3. Click on the IAM user name that you want to examine.\n4. On the IAM user configuration page, select `Security Credentials` tab.\n5. Under `Access Keys` section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated then the users access configuration does not adhere to security best practices and the risk of accidental exposures increases.\n- Repeat steps no. 3 – 5 for each IAM user in your AWS account.\n\n**From Command Line:**\n\n1. Run `list-users` command to list all IAM users within your account:\n```\naws iam list-users --query \"Users[*].UserName\"\n```\nThe command output should return an array that contains all your IAM user names.\n\n2. Run `list-access-keys` command using the IAM user name list to return the current status of each access key associated with the selected IAM user:\n```\naws iam list-access-keys --user-name \n```\nThe command output should expose the metadata `(\"Username\", \"AccessKeyId\", \"Status\", \"CreateDate\")` for each access key on that user account.\n\n3. Check the `Status` property value for each key returned to determine each keys current state. If the `Status` property value for more than one IAM access key is set to `Active`, the user access configuration does not adhere to this recommendation, refer to the remediation below.\n\n- Repeat steps no. 2 and 3 for each IAM user in your AWS account.",
+          "RemediationProcedure": "**From Console:**  1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`. 2. In the left navigation panel, choose `Users`. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select `Security Credentials` tab. 5. In `Access Keys` section, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working. 6. In the same `Access Keys` section, identify your non-operational access keys (other than the chosen one) and deactivate them by clicking the `Make Inactive` link. 7. If you receive the `Change Key Status` confirmation box, click `Deactivate` to switch off the selected key. 8. Repeat steps no. 3 – 7 for each IAM user in your AWS account.  **From Command Line:**  1. Using the IAM user and access key information provided in the `Audit CLI`, choose one access key that is less than 90 days old. This should be the only active key used by this IAM user to access AWS resources programmatically. Test your application(s) to make sure that the chosen access key is working.  2. Run the `update-access-key` command below using the IAM user name and the non-operational access key IDs to deactivate the unnecessary key(s). Refer to the Audit section to identify the unnecessary access key ID for the selected IAM user.  **Note** - the command does not return any output: ``` aws iam update-access-key --access-key-id  --status Inactive --user-name  ``` 3. To confirm that the selected access key pair has been successfully `deactivated`, run the `list-access-keys` audit command again for that IAM User: ``` aws iam list-access-keys --user-name  ``` - The command output should expose the metadata for each access key associated with the IAM user. If the non-operational key pair(s) `Status` is set to `Inactive`, the key has been successfully deactivated and the IAM user access configuration now adheres to this recommendation.  4. Repeat steps no. 1 – 3 for each IAM user in your AWS account.",
+          "AuditProcedure": "**From Console:**  1. Sign in to the AWS Management Console and navigate to IAM dashboard at `https://console.aws.amazon.com/iam/`. 2. In the left navigation panel, choose `Users`. 3. Click on the IAM user name that you want to examine. 4. On the IAM user configuration page, select `Security Credentials` tab. 5. Under `Access Keys` section, in the Status column, check the current status for each access key associated with the IAM user. If the selected IAM user has more than one access key activated, then the user's access configuration does not adhere to security best practices and the risk of accidental exposures increases. - Repeat steps no. 3 – 5 for each IAM user in your AWS account.  **From Command Line:**  1. Run `list-users` command to list all IAM users within your account: ``` aws iam list-users --query \"Users[*].UserName\" ``` The command output should return an array that contains all your IAM user names.  2. Run `list-access-keys` command using the IAM user name list to return the current status of each access key associated with the selected IAM user: ``` aws iam list-access-keys --user-name  ``` The command output should expose the metadata `(\"Username\", \"AccessKeyId\", \"Status\", \"CreateDate\")` for each access key on that user account.  3. Check the `Status` property value for each key returned to determine each key's current state. If the `Status` property value for more than one IAM access key is set to `Active`, the user access configuration does not adhere to this recommendation; refer to the remediation below.  - Repeat steps no. 2 and 3 for each IAM user in your AWS account.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html"
         }
@@ -122,10 +122,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to AWS. AWS users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. It is recommended that all access keys be regularly rotated.",
-          "RationaleStatement": "Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.\n\nAccess keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen.",
+          "RationaleStatement": "Rotating access keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.  Access keys should be rotated to ensure that data cannot be accessed with an old key which might have been lost, cracked, or stolen.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to rotate access keys:\n\n**From Console:**\n\n1. Go to Management Console (https://console.aws.amazon.com/iam)\n2. Click on `Users`\n3. Click on `Security Credentials` \n4. As an Administrator \n - Click on `Make Inactive` for keys that have not been rotated in `90` Days\n5. As an IAM User\n - Click on `Make Inactive` or `Delete` for keys which have not been rotated or used in `90` Days\n6. Click on `Create Access Key` \n7. Update programmatic call with new Access Key credentials\n\n**From Command Line:**\n\n1. While the first access key is still active, create a second access key, which is active by default. Run the following command:\n```\naws iam create-access-key\n```\n\nAt this point, the user has two active access keys.\n\n2. Update all applications and tools to use the new access key.\n3. Determine whether the first access key is still in use by using this command:\n```\naws iam get-access-key-last-used\n```\n4. One approach is to wait several days and then check the old access key for any use before proceeding.\n\nEven if step Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command:\n```\naws iam update-access-key\n```\n5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to reenable the first access key. Then return to step Step 2 and update this application to use the new key.\n\n6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command:\n```\naws iam delete-access-key\n```",
-          "AuditProcedure": "Perform the following to determine if access keys are rotated as prescribed:\n\n**From Console:**\n\n1. Go to Management Console (https://console.aws.amazon.com/iam)\n2. Click on `Users`\n3. Click `setting` icon\n4. Select `Console last sign-in`\n5. Click `Close`\n6. Ensure that `Access key age` is less than 90 days ago. note) `None` in the `Access key age` means the user has not used the access key.\n\n**From Command Line:**\n\n```\naws iam generate-credential-report\naws iam get-credential-report --query 'Content' --output text | base64 -d\n```\nThe `access_key_1_last_rotated` field in this file notes The date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable).",
+          "RemediationProcedure": "Perform the following to rotate access keys:  **From Console:**  1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on `Users` 3. Click on `Security Credentials`  4. As an Administrator   - Click on `Make Inactive` for keys that have not been rotated in `90` Days 5. As an IAM User  - Click on `Make Inactive` or `Delete` for keys which have not been rotated or used in `90` Days 6. Click on `Create Access Key`  7. Update programmatic calls with the new Access Key credentials  **From Command Line:**  1. While the first access key is still active, create a second access key, which is active by default. Run the following command: ``` aws iam create-access-key ```  At this point, the user has two active access keys.  2. Update all applications and tools to use the new access key. 3. Determine whether the first access key is still in use by using this command: ``` aws iam get-access-key-last-used ``` 4. One approach is to wait several days and then check the old access key for any use before proceeding.  Even if Step 3 indicates no use of the old key, it is recommended that you do not immediately delete the first access key. Instead, change the state of the first access key to Inactive using this command: ``` aws iam update-access-key ``` 5. Use only the new access key to confirm that your applications are working. Any applications and tools that still use the original access key will stop working at this point because they no longer have access to AWS resources. If you find such an application or tool, you can switch its state back to Active to re-enable the first access key. Then return to Step 2 and update this application to use the new key.  6. After you wait some period of time to ensure that all applications and tools have been updated, you can delete the first access key with this command: ``` aws iam delete-access-key ```",
+          "AuditProcedure": "Perform the following to determine if access keys are rotated as prescribed:  **From Console:**  1. Go to Management Console (https://console.aws.amazon.com/iam) 2. Click on `Users` 3. Click the `Settings` icon 4. Select `Console last sign-in` 5. Click `Close` 6. Ensure that `Access key age` is less than 90 days ago. **Note** - `None` in the `Access key age` means the user has not used the access key.  **From Command Line:**  ``` aws iam generate-credential-report aws iam get-credential-report --query 'Content' --output text | base64 -d ``` The `access_key_1_last_rotated` field in this file notes the date and time, in ISO 8601 date-time format, when the user's access key was created or last changed. If the user does not have an active access key, the value in this field is N/A (not applicable).",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#rotate-credentials:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html:https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html"
         }
@@ -142,11 +142,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "IAM users are granted access to services, functions, and data through IAM policies. There are three ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy. \n\nOnly the third implementation is recommended.",
+          "Description": "IAM users are granted access to services, functions, and data through IAM policies. There are three ways to define policies for a user: 1) Edit the user policy directly, aka an inline, or user, policy; 2) attach a policy directly to a user; 3) add the user to an IAM group that has an attached policy.   Only the third implementation is recommended.",
           "RationaleStatement": "Assigning IAM policy only through groups unifies permissions management to a single, flexible layer consistent with organizational functional roles. By unifying permissions management, the likelihood of excessive permissions is reduced.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to create an IAM group and assign a policy to it:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click `Groups` and then click `Create New Group` .\n3. In the `Group Name` box, type the name of the group and then click `Next Step` .\n4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click `Next Step` .\n5. Click `Create Group` \n\nPerform the following to add a user to a given group:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click `Groups` \n3. Select the group to add a user to\n4. Click `Add Users To Group` \n5. Select the users to be added to the group\n6. Click `Add Users` \n\nPerform the following to remove a direct association between a user and policy:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the left navigation pane, click on Users\n3. For each user:\n - Select the user\n - Click on the `Permissions` tab\n - Expand `Permissions policies` \n - Click `X` for each policy; then click Detach or Remove (depending on policy type)",
-          "AuditProcedure": "Perform the following to determine if an inline policy is set or a policy is directly attached to users:\n\n1. Run the following to get a list of IAM users:\n```\n aws iam list-users --query 'Users[*].UserName' --output text \n```\n2. For each user returned, run the following command to determine if any policies are attached to them:\n```\n aws iam list-attached-user-policies --user-name \n aws iam list-user-policies --user-name  \n```\n3. If any policies are returned, the user has an inline policy or direct policy attachment.",
+          "RemediationProcedure": "Perform the following to create an IAM group and assign a policy to it:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click `Groups` and then click `Create New Group` . 3. In the `Group Name` box, type the name of the group and then click `Next Step` . 4. In the list of policies, select the check box for each policy that you want to apply to all members of the group. Then click `Next Step` . 5. Click `Create Group`   Perform the following to add a user to a given group:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click `Groups`  3. Select the group to add a user to 4. Click `Add Users To Group`  5. Select the users to be added to the group 6. Click `Add Users`   Perform the following to remove a direct association between a user and policy:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the left navigation pane, click on Users 3. For each user:  - Select the user  - Click on the `Permissions` tab  - Expand `Permissions policies`   - Click `X` for each policy; then click Detach or Remove (depending on policy type)",
+          "AuditProcedure": "Perform the following to determine if an inline policy is set or a policy is directly attached to users:  1. Run the following to get a list of IAM users: ```  aws iam list-users --query 'Users[*].UserName' --output text  ``` 2. For each user returned, run the following command to determine if any policies are attached to them: ```  aws iam list-attached-user-policies --user-name   aws iam list-user-policies --user-name   ``` 3. If any policies are returned, the user has an inline policy or direct policy attachment.",
           "AdditionalInformation": "",
           "References": "http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html"
         }
@@ -165,10 +165,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "IAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended and considered a standard security advice to grant _least privilege_ -that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform _only_ those tasks, instead of allowing full administrative privileges.",
-          "RationaleStatement": "It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later.\n\nProviding full administrative privileges instead of restricting to the minimum set of permissions that the user is required to do exposes the resources to potentially unwanted actions.\n\nIAM policies that have a statement with \"Effect\": \"Allow\" with \"Action\": \"\\*\" over \"Resource\": \"\\*\" should be removed.",
+          "RationaleStatement": "It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later.  Providing full administrative privileges instead of restricting to the minimum set of permissions that the user is required to do exposes the resources to potentially unwanted actions.  IAM policies that have a statement with \"Effect\": \"Allow\" with \"Action\": \"\\*\" over \"Resource\": \"\\*\" should be removed.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following to detach the policy that has full administrative privileges:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. In the navigation pane, click Policies and then search for the policy name found in the audit step.\n3. Select the policy that needs to be deleted.\n4. In the policy action menu, select first `Detach` \n5. Select all Users, Groups, Roles that have this policy attached\n6. Click `Detach Policy` \n7. In the policy action menu, select `Detach` \n\n**From Command Line:**\n\nPerform the following to detach the policy that has full administrative privileges as found in the audit step:\n\n1. Lists all IAM users, groups, and roles that the specified managed policy is attached to.\n\n```\n aws iam list-entities-for-policy --policy-arn \n```\n2. Detach the policy from all IAM Users:\n```\n aws iam detach-user-policy --user-name  --policy-arn \n```\n3. Detach the policy from all IAM Groups:\n```\n aws iam detach-group-policy --group-name  --policy-arn \n```\n4. Detach the policy from all IAM Roles:\n```\n aws iam detach-role-policy --role-name  --policy-arn \n```",
-          "AuditProcedure": "Perform the following to determine what policies are created:\n\n**From Command Line:**\n\n1. Run the following to get a list of IAM policies:\n```\n aws iam list-policies --only-attached --output text\n```\n2. For each policy returned, run the following command to determine if any policies is allowing full administrative privileges on the account:\n```\n aws iam get-policy-version --policy-arn  --version-id \n```\n3. In output ensure policy should not have any Statement block with `\"Effect\": \"Allow\"` and `Action` set to `\"*\"` and `Resource` set to `\"*\"`",
+          "RemediationProcedure": "**From Console:**  Perform the following to detach the policy that has full administrative privileges:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. In the navigation pane, click Policies and then search for the policy name found in the audit step. 3. Select the policy that needs to be deleted. 4. In the policy action menu, select first `Detach`  5. Select all Users, Groups, Roles that have this policy attached 6. Click `Detach Policy`  7. In the policy action menu, select `Detach`   **From Command Line:**  Perform the following to detach the policy that has full administrative privileges as found in the audit step:  1. List all IAM users, groups, and roles that the specified managed policy is attached to.  ```  aws iam list-entities-for-policy --policy-arn  ``` 2. Detach the policy from all IAM Users: ```  aws iam detach-user-policy --user-name  --policy-arn  ``` 3. Detach the policy from all IAM Groups: ```  aws iam detach-group-policy --group-name  --policy-arn  ``` 4. Detach the policy from all IAM Roles: ```  aws iam detach-role-policy --role-name  --policy-arn  ```",
+          "AuditProcedure": "Perform the following to determine what policies are created:  **From Command Line:**  1. Run the following to get a list of IAM policies: ```  aws iam list-policies --only-attached --output text ``` 2. For each policy returned, run the following command to determine if the policy allows full administrative privileges on the account: ```  aws iam get-policy-version --policy-arn  --version-id  ``` 3. In the output, ensure the policy does not have any Statement block with `\"Effect\": \"Allow\"` and `Action` set to `\"*\"` and `Resource` set to `\"*\"`",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html:https://docs.aws.amazon.com/cli/latest/reference/iam/index.html#cli-aws-iam"
         }
@@ -188,8 +188,8 @@
           "Description": "AWS provides a support center that can be used for incident notification and response, as well as technical support and customer services. Create an IAM Role to allow authorized users to manage incidents with AWS Support.",
           "RationaleStatement": "By implementing least privilege for access control, an IAM Role will require an appropriate IAM Policy to allow Support Center Access in order to manage Incidents with AWS Support.",
           "ImpactStatement": "All AWS Support plans include an unlimited number of account and billing support cases, with no long-term contracts. Support billing calculations are performed on a per-account basis for all plans. Enterprise Support plan customers have the option to include multiple enabled accounts in an aggregated monthly billing calculation. Monthly charges for the Business and Enterprise support plans are based on each month's AWS usage charges, subject to a monthly minimum, billed in advance.",
-          "RemediationProcedure": "**From Command Line:**\n\n1. Create an IAM role for managing incidents with AWS:\n - Create a trust relationship policy document that allows  to manage AWS incidents, and save it locally as /tmp/TrustPolicy.json:\n```\n {\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"\"\n },\n \"Action\": \"sts:AssumeRole\"\n }\n ]\n }\n```\n2. Create the IAM role using the above trust policy:\n```\naws iam create-role --role-name  --assume-role-policy-document file:///tmp/TrustPolicy.json\n```\n3. Attach 'AWSSupportAccess' managed policy to the created IAM role:\n```\naws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name \n```",
-          "AuditProcedure": "**From Command Line:**\n\n1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the \"Arn\" element value:\n```\naws iam list-policies --query \"Policies[?PolicyName == 'AWSSupportAccess']\"\n```\n2. Check if the 'AWSSupportAccess' policy is attached to any role:\n\n```\naws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess\n```\n\n3. In Output, Ensure `PolicyRoles` does not return empty. 'Example: Example: PolicyRoles: [ ]'\n\nIf it returns empty refer to the remediation below.",
+          "RemediationProcedure": "**From Command Line:**  1. Create an IAM role for managing incidents with AWS:  - Create a trust relationship policy document that allows  to manage AWS incidents, and save it locally as /tmp/TrustPolicy.json: ```  {  \"Version\": \"2012-10-17\",  \"Statement\": [  {  \"Effect\": \"Allow\",  \"Principal\": {  \"AWS\": \"\"  },  \"Action\": \"sts:AssumeRole\"  }  ]  } ``` 2. Create the IAM role using the above trust policy: ``` aws iam create-role --role-name  --assume-role-policy-document file:///tmp/TrustPolicy.json ``` 3. Attach 'AWSSupportAccess' managed policy to the created IAM role: ``` aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess --role-name  ```",
+          "AuditProcedure": "**From Command Line:**  1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the \"Arn\" element value: ``` aws iam list-policies --query \"Policies[?PolicyName == 'AWSSupportAccess']\" ``` 2. Check if the 'AWSSupportAccess' policy is attached to any role:  ``` aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess ```  3. In the output, ensure `PolicyRoles` does not return empty. 'Example: PolicyRoles: [ ]'  If it returns empty, refer to the remediation below.",
           "AdditionalInformation": "AWSSupportAccess policy is a global AWS resource. It has same ARN as `arn:aws:iam::aws:policy/AWSSupportAccess` for every account.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html:https://aws.amazon.com/premiumsupport/pricing/:https://docs.aws.amazon.com/cli/latest/reference/iam/list-policies.html:https://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html:https://docs.aws.amazon.com/cli/latest/reference/iam/list-entities-for-policy.html"
         }
@@ -207,10 +207,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Manual",
           "Description": "AWS access from within AWS instances can be done by either encoding AWS keys into AWS API calls or by assigning the instance to a role which has an appropriate permissions policy for the required access. \"AWS Access\" means accessing the APIs of AWS in order to access AWS resources or manage AWS account resources.",
-          "RationaleStatement": "AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it.\n\nAdditionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials.",
+          "RationaleStatement": "AWS IAM roles reduce the risks associated with sharing and rotating credentials that can be used outside of AWS itself. If credentials are compromised, they can be used from outside of the AWS account they give access to. In contrast, in order to leverage role permissions an attacker would need to gain and maintain access to a specific instance to use the privileges associated with it.  Additionally, if credentials are encoded into compiled applications or other hard to change mechanisms, then they are even more unlikely to be properly rotated due to service disruption risks. As time goes on, credentials that cannot be rotated are more likely to be known by an increasing number of individuals who no longer work for the organization owning the credentials.",
           "ImpactStatement": "",
-          "RemediationProcedure": "IAM roles can only be associated at the launch of an instance. To remediate an instance to add it to a role you must create a new instance.\n\nIf the instance has no external dependencies on its current private ip or public addresses are elastic IPs:\n\n1. In AWS IAM create a new role. Assign a permissions policy if needed permissions are already known.\n2. In the AWS console launch a new instance with identical settings to the existing instance, and ensure that the newly created role is selected.\n3. Shutdown both the existing instance and the new instance.\n4. Detach disks from both instances.\n5. Attach the existing instance disks to the new instance.\n6. Boot the new instance and you should have the same machine, but with the associated role.\n\n**Note:** if your environment has dependencies on a dynamically assigned PRIVATE IP address you can create an AMI from the existing instance, destroy the old one and then when launching from the AMI, manually assign the previous private IP address.\n\n**Note: **if your environment has dependencies on a dynamically assigned PUBLIC IP address there is not a way ensure the address is retained and assign an instance role. Dependencies on dynamically assigned public IP addresses are a bad practice and, if possible, you may wish to rebuild the instance with a new elastic IP address and make the investment to remediate affected systems while assigning the system to a role.",
-          "AuditProcedure": "Where an instance is associated with a Role:\n\nFor instances that are known to perform AWS actions, ensure that they belong to an instance role that has the necessary permissions:\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Open the EC2 Dashboard and choose \"Instances\"\n3. Click the EC2 instance that performs AWS actions, in the lower pane details find \"IAM Role\"\n4. If the Role is blank, the instance is not assigned to one.\n5. If the Role is filled in, it does not mean the instance might not \\*also\\* have credentials encoded on it for some activities.\n\nWhere an Instance Contains Embedded Credentials:\n\n- On the instance that is known to perform AWS actions, audit all scripts and environment variables to ensure that none of them contain AWS credentials.\n\nWhere an Instance Application Contains Embedded Credentials:\n\n- Applications that run on an instance may also have credentials embedded. This is a bad practice, but even worse if the source code is stored in a public code repository such as github. When an application contains credentials can be determined by eliminating all other sources of credentials and if the application can still access AWS resources - it likely contains embedded credentials. Another method is to examine all source code and configuration files of the application.",
+          "RemediationProcedure": "IAM roles can only be associated at the launch of an instance. To remediate an instance to add it to a role, you must create a new instance.  If the instance has no external dependencies on its current private IP or its public addresses are elastic IPs:  1. In AWS IAM create a new role. Assign a permissions policy if needed permissions are already known. 2. In the AWS console launch a new instance with identical settings to the existing instance, and ensure that the newly created role is selected. 3. Shut down both the existing instance and the new instance. 4. Detach disks from both instances. 5. Attach the existing instance disks to the new instance. 6. Boot the new instance and you should have the same machine, but with the associated role.  **Note:** if your environment has dependencies on a dynamically assigned PRIVATE IP address you can create an AMI from the existing instance, destroy the old one and then when launching from the AMI, manually assign the previous private IP address.  **Note:** if your environment has dependencies on a dynamically assigned PUBLIC IP address there is not a way to ensure the address is retained and assign an instance role. Dependencies on dynamically assigned public IP addresses are a bad practice and, if possible, you may wish to rebuild the instance with a new elastic IP address and make the investment to remediate affected systems while assigning the system to a role.",
+          "AuditProcedure": "Where an instance is associated with a Role:  For instances that are known to perform AWS actions, ensure that they belong to an instance role that has the necessary permissions:  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Open the EC2 Dashboard and choose \"Instances\" 3. Click the EC2 instance that performs AWS actions; in the lower pane details, find \"IAM Role\" 4. If the Role is blank, the instance is not assigned to one. 5. If the Role is filled in, it does not mean the instance might not \\*also\\* have credentials encoded on it for some activities.  Where an Instance Contains Embedded Credentials:  - On the instance that is known to perform AWS actions, audit all scripts and environment variables to ensure that none of them contain AWS credentials.  Where an Instance Application Contains Embedded Credentials:  - Applications that run on an instance may also have credentials embedded. This is a bad practice, but even worse if the source code is stored in a public code repository such as GitHub. Whether an application contains credentials can be determined by eliminating all other sources of credentials; if the application can still access AWS resources, it likely contains embedded credentials. Another method is to examine all source code and configuration files of the application.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html"
         }
@@ -227,11 +227,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. \nUse IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console.",
+          "Description": "To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates.  Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM Console.",
           "RationaleStatement": "Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. As a best practice, it is recommended to delete expired certificates.",
-          "ImpactStatement": "Deleting the certificate could have implications for your application if you are using an expired server certificate with Elastic Load Balancing, CloudFront, etc.\nOne has to make configurations at respective services to ensure there is no interruption in application functionality.",
-          "RemediationProcedure": "**From Console:**\n\nRemoving expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\n**From Command Line:**\n\nTo delete Expired Certificate run following command by replacing  with the name of the certificate to delete:\n\n```\naws iam delete-server-certificate --server-certificate-name \n```\n\nWhen the preceding command is successful, it does not return any output.",
-          "AuditProcedure": "**From Console:**\n\nGetting the certificates expiration information via AWS Management Console is not currently supported. \nTo request information about the SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\n**From Command Line:**\n\nRun list-server-certificates command to list all the IAM-stored server certificates:\n\n```\naws iam list-server-certificates\n```\n\nThe command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc):\n\n```\n{\n \"ServerCertificateMetadataList\": [\n {\n \"ServerCertificateId\": \"EHDGFRW7EJFYTE88D\",\n \"ServerCertificateName\": \"MyServerCertificate\",\n \"Expiration\": \"2018-07-10T23:59:59Z\",\n \"Path\": \"/\",\n \"Arn\": \"arn:aws:iam::012345678910:server-certificate/MySSLCertificate\",\n \"UploadDate\": \"2018-06-10T11:56:08Z\"\n }\n ]\n}\n```\n\nVerify the `ServerCertificateName` and `Expiration` parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them.\n\nIf this command returns:\n```\n{ { \"ServerCertificateMetadataList\": [] }\n```\nThis means that there are no expired certificates, It DOES NOT mean that no certificates exist.",
+          "ImpactStatement": "Deleting the certificate could have implications for your application if you are using an expired server certificate with Elastic Load Balancing, CloudFront, etc. One has to make configurations at respective services to ensure there is no interruption in application functionality.",
+          "RemediationProcedure": "**From Console:**  Removing expired certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API, use the Command Line Interface (CLI).  **From Command Line:**  To delete an expired certificate, run the following command, replacing  with the name of the certificate to delete:  ``` aws iam delete-server-certificate --server-certificate-name  ```  When the preceding command is successful, it does not return any output.",
+          "AuditProcedure": "**From Console:**  Getting the certificates' expiration information via AWS Management Console is not currently supported.  To request information about the SSL/TLS certificates stored in IAM via the AWS API, use the Command Line Interface (CLI).  **From Command Line:**  Run the list-server-certificates command to list all the IAM-stored server certificates:  ``` aws iam list-server-certificates ```  The command output should return an array that contains all the SSL/TLS certificates currently stored in IAM and their metadata (name, ID, expiration date, etc):  ``` {  \"ServerCertificateMetadataList\": [  {  \"ServerCertificateId\": \"EHDGFRW7EJFYTE88D\",  \"ServerCertificateName\": \"MyServerCertificate\",  \"Expiration\": \"2018-07-10T23:59:59Z\",  \"Path\": \"/\",  \"Arn\": \"arn:aws:iam::012345678910:server-certificate/MySSLCertificate\",  \"UploadDate\": \"2018-06-10T11:56:08Z\"  }  ] } ```  Verify the `ServerCertificateName` and `Expiration` parameter value (expiration date) for each SSL/TLS certificate returned by the list-server-certificates command and determine if there are any expired server certificates currently stored in AWS IAM. If so, use the AWS API to remove them.  If this command returns: ``` { \"ServerCertificateMetadataList\": [] } ``` This means that there are no expired certificates; it DOES NOT mean that no certificates exist.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html:https://docs.aws.amazon.com/cli/latest/reference/iam/delete-server-certificate.html"
         }
@@ -251,8 +251,8 @@
           "Description": "AWS provides customers with the option of specifying the contact information for account's security team. It is recommended that this information be provided.",
           "RationaleStatement": "Specifying security-specific contact information will help ensure that security advisories sent by AWS reach the team in your organization that is best equipped to respond to them.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to establish security contact information:\n\n**From Console:**\n\n1. Click on your account name at the top right corner of the console.\n2. From the drop-down menu Click `My Account` \n3. Scroll down to the `Alternate Contacts` section\n4. Enter contact information in the `Security` section\n\n**Note:** Consider specifying an internal email distribution list to ensure emails are regularly monitored by more than one individual.",
-          "AuditProcedure": "Perform the following to determine if security contact information is present:\n\n**From Console:**\n\n1. Click on your account name at the top right corner of the console\n2. From the drop-down menu Click `My Account` \n3. Scroll down to the `Alternate Contacts` section\n4. Ensure contact information is specified in the `Security` section",
+          "RemediationProcedure": "Perform the following to establish security contact information:  **From Console:**  1. Click on your account name at the top right corner of the console. 2. From the drop-down menu Click `My Account`  3. Scroll down to the `Alternate Contacts` section 4. Enter contact information in the `Security` section  **Note:** Consider specifying an internal email distribution list to ensure emails are regularly monitored by more than one individual.",
+          "AuditProcedure": "Perform the following to determine if security contact information is present:  **From Console:**  1. Click on your account name at the top right corner of the console 2. From the drop-down menu Click `My Account`  3. Scroll down to the `Alternate Contacts` section 4. Ensure contact information is specified in the `Security` section",
           "AdditionalInformation": "",
           "References": ""
         }
@@ -269,11 +269,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "Enable IAM Access analyzer for IAM policies about all resources in each region.\n\nIAM Access Analyzer is a technology introduced at AWS reinvent 2019. After the Analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles. So the results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least privileges access.\nAccess Analyzer analyzes only policies that are applied to resources in the same AWS Region.",
+          "Description": "Enable IAM Access Analyzer for IAM policies about all resources in each region.  IAM Access Analyzer is a technology introduced at AWS re:Invent 2019. After the Analyzer is enabled in IAM, scan results are displayed on the console showing the accessible resources. Scans show resources that other accounts and federated users can access, such as KMS keys and IAM roles. So the results allow you to determine if an unintended user is allowed, making it easier for administrators to monitor least-privilege access. Access Analyzer analyzes only policies that are applied to resources in the same AWS Region.",
           "RationaleStatement": "AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data. Access Analyzer identifies resources that are shared with external principals by using logic-based reasoning to analyze the resource-based policies in your AWS environment. IAM Access Analyzer continuously monitors all policies for S3 bucket, IAM roles, KMS(Key Management Service) keys, AWS Lambda functions, and Amazon SQS(Simple Queue Service) queues.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following to enable IAM Access analyzer for IAM policies:\n\n1. Open the IAM console at `https://console.aws.amazon.com/iam/.`\n2. Choose `Access analyzer`.\n3. Choose `Create analyzer`.\n4. On the `Create analyzer` page, confirm that the `Region` displayed is the Region where you want to enable Access Analyzer.\n5. Enter a name for the analyzer. `Optional as it will generate a name for you automatically`.\n6. Add any tags that you want to apply to the analyzer. `Optional`. \n7. Choose `Create Analyzer`.\n8. Repeat these step for each active region\n\n**From Command Line:**\n\nRun the following command:\n```\naws accessanalyzer create-analyzer --analyzer-name  --type \n```\nRepeat this command above for each active region.\n\n**Note:** The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions.",
-          "AuditProcedure": "**From Console:**\n\n1. Open the IAM console at `https://console.aws.amazon.com/iam/`\n2. Choose `Access analyzer`\n3. Click 'Analyzers'\n4. Ensure that at least one analyzer is present\n5. Ensure that the `STATUS` is set to `Active`\n6. Repeat these step for each active region\n\n**From Command Line:**\n\n1. Run the following command:\n```\naws accessanalyzer list-analyzers | grep status\n```\n2. Ensure that at least one Analyzer the `status` is set to `ACTIVE`\n\n3. Repeat the steps above for each active region.\n\nIf an Access analyzer is not listed for each region or the status is not set to active refer to the remediation procedure below.",
+          "RemediationProcedure": "**From Console:**  Perform the following to enable IAM Access analyzer for IAM policies:  1. Open the IAM console at `https://console.aws.amazon.com/iam/.` 2. Choose `Access analyzer`. 3. Choose `Create analyzer`. 4. On the `Create analyzer` page, confirm that the `Region` displayed is the Region where you want to enable Access Analyzer. 5. Enter a name for the analyzer. `Optional as it will generate a name for you automatically`. 6. Add any tags that you want to apply to the analyzer. `Optional`.  7. Choose `Create Analyzer`. 8. Repeat these steps for each active region  **From Command Line:**  Run the following command: ``` aws accessanalyzer create-analyzer --analyzer-name  --type  ``` Repeat the command above for each active region.  **Note:** The IAM Access Analyzer is successfully configured only when the account you use has the necessary permissions.",
+          "AuditProcedure": "**From Console:**  1. Open the IAM console at `https://console.aws.amazon.com/iam/` 2. Choose `Access analyzer` 3. Click 'Analyzers' 4. Ensure that at least one analyzer is present 5. Ensure that the `STATUS` is set to `Active` 6. Repeat these steps for each active region  **From Command Line:**  1. Run the following command: ``` aws accessanalyzer list-analyzers | grep status ``` 2. Ensure that for at least one Analyzer the `status` is set to `ACTIVE`  3. Repeat the steps above for each active region.  If an Access analyzer is not listed for each region or the status is not set to active, refer to the remediation procedure below.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html:https://docs.aws.amazon.com/cli/latest/reference/accessanalyzer/get-analyzer.html:https://docs.aws.amazon.com/cli/latest/reference/accessanalyzer/create-analyzer.html"
         }
@@ -294,7 +294,7 @@
           "RationaleStatement": "Centralizing IAM user management to a single identity store reduces complexity and thus the likelihood of access management errors.",
           "ImpactStatement": "",
           "RemediationProcedure": "The remediation procedure will vary based on the individual organization's implementation of identity federation and/or AWS Organizations with the acceptance criteria that no non-service IAM users, and non-root accounts, are present outside the account providing centralized IAM user management.",
-          "AuditProcedure": "For multi-account AWS environments with an external identity provider... \n\n1. Determine the master account for identity federation or IAM user management\n2. Login to that account through the AWS Management Console\n3. Click `Services` \n4. Click `IAM` \n5. Click `Identity providers`\n6. Verify the configuration\n\nThen..., determine all accounts that should not have local users present. For each account...\n\n1. Determine all accounts that should not have local users present\n2. Log into the AWS Management Console\n3. Switch role into each identified account\n4. Click `Services` \n5. Click `IAM` \n6. Click `Users`\n7. Confirm that no IAM users representing individuals are present\n\nFor multi-account AWS environments implementing AWS Organizations without an external identity provider... \n\n1. Determine all accounts that should not have local users present\n2. Log into the AWS Management Console\n3. Switch role into each identified account\n4. Click `Services` \n5. Click `IAM` \n6. Click `Users`\n7. Confirm that no IAM users representing individuals are present",
+          "AuditProcedure": "For multi-account AWS environments with an external identity provider...   1. Determine the master account for identity federation or IAM user management 2. Login to that account through the AWS Management Console 3. Click `Services`  4. Click `IAM`  5. Click `Identity providers` 6. Verify the configuration  Then..., determine all accounts that should not have local users present. For each account...  1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click `Services`  5. Click `IAM`  6. Click `Users` 7. Confirm that no IAM users representing individuals are present  For multi-account AWS environments implementing AWS Organizations without an external identity provider...   1. Determine all accounts that should not have local users present 2. Log into the AWS Management Console 3. Switch role into each identified account 4. Click `Services`  5. Click `IAM`  6. Click `Users` 7. Confirm that no IAM users representing individuals are present",
           "AdditionalInformation": "",
           "References": ""
         }
@@ -312,8 +312,8 @@
           "Description": "AWS CloudShell is a convenient way of running CLI commands against AWS services; a managed IAM policy ('AWSCloudShellFullAccess') provides full access to CloudShell, which allows file upload and download capability between a user's local system and the CloudShell environment. Within the CloudShell environment a user has sudo permissions, and can access the internet. So it is feasible to install file transfer software (for example) and move data from CloudShell to external internet servers.",
           "RationaleStatement": "Access to this policy should be restricted as it presents a potential channel for data exfiltration by malicious cloud admins that are given full permissions to the service. AWS documentation describes how to create a more restrictive IAM policy which denies file transfer permissions.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console**\n\n1. Open the IAM console at https://console.aws.amazon.com/iam/\n\n2. In the left pane, select Policies 3. Search for and select AWSCloudShellFullAccess\n\n4. On the Entities attached tab, for each item, check the box and select Detach",
-          "AuditProcedure": "**From Console**\n1. Open the IAM console at https://console.aws.amazon.com/iam/\n 2. In the left pane, select Policies\n 3. Search for and select AWSCloudShellFullAccess\n 4. On the Entities attached tab, ensure that there are no entities using this policy\n\n **From Command Line**\n 1. List IAM policies, filter for the 'AWSCloudShellFullAccess' managed policy, and note the \"\"Arn\"\" element value:\n ```\n aws iam list-policies --query \"\"Policies[?PolicyName == 'AWSCloudShellFullAccess']\"\"\n ```\n  2. Check if the 'AWSCloudShellFullAccess' policy is attached to any role:\n  ```\n aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess\n ```\n 3. In Output, Ensure PolicyRoles returns empty. 'Example: Example: PolicyRoles: [ ]'\n If it does not return empty refer to the remediation below.\n Note: Keep in mind that other policies may grant access.",
+          "RemediationProcedure": "**From Console**  1. Open the IAM console at https://console.aws.amazon.com/iam/  2. In the left pane, select Policies 3. Search for and select AWSCloudShellFullAccess  4. On the Entities attached tab, for each item, check the box and select Detach",
+          "AuditProcedure": "**From Console** 1. Open the IAM console at https://console.aws.amazon.com/iam/  2. In the left pane, select Policies  3. Search for and select AWSCloudShellFullAccess  4. On the Entities attached tab, ensure that there are no entities using this policy   **From Command Line**  1. List IAM policies, filter for the 'AWSCloudShellFullAccess' managed policy, and note the \"Arn\" element value:  ```  aws iam list-policies --query \"Policies[?PolicyName == 'AWSCloudShellFullAccess']\"  ```   2. Check if the 'AWSCloudShellFullAccess' policy is attached to any role:   ```  aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess  ```  3. In the output, ensure PolicyRoles returns empty. 'Example: PolicyRoles: [ ]'  If it does not return empty, refer to the remediation below.  Note: Keep in mind that other policies may grant access.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/cloudshell/latest/userguide/sec-auth-with-identities.html"
         }
@@ -333,8 +333,8 @@
           "Description": "The AWS support portal allows account owners to establish security questions that can be used to authenticate individuals calling AWS customer service for support. It is recommended that security questions be established.",
           "RationaleStatement": "When creating a new AWS account, a default super user is automatically created. This account is referred to as the 'root user' or 'root' account. It is recommended that the use of this account be limited and highly controlled. During events in which the 'root' password is no longer accessible or the MFA token associated with 'root' is lost/destroyed it is possible, through authentication using secret questions and associated answers, to recover 'root' user login access.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Account as the 'root' user\n2. Click on the __ from the top right of the console\n3. From the drop-down menu Click _My Account_\n4. Scroll down to the `Configure Security Questions` section\n5. Click on `Edit` \n6. Click on each `Question` \n - From the drop-down select an appropriate question\n - Click on the `Answer` section\n - Enter an appropriate answer \n - Follow process for all 3 questions\n7. Click `Update` when complete\n8. Save Questions and Answers and place in a secure physical location",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS account as the 'root' user\n2. On the top right you will see the __\n3. Click on the __\n4. From the drop-down menu Click `My Account` \n5. In the `Configure Security Challenge Questions` section on the `Personal Information` page, configure three security challenge questions.\n6. Click `Save questions` .",
+          "RemediationProcedure": "**From Console:**  1. Login to the AWS Account as the 'root' user 2. Click on the __ from the top right of the console 3. From the drop-down menu Click _My Account_ 4. Scroll down to the `Configure Security Questions` section 5. Click on `Edit`  6. Click on each `Question`   - From the drop-down select an appropriate question  - Click on the `Answer` section  - Enter an appropriate answer   - Follow process for all 3 questions 7. Click `Update` when complete 8. Save Questions and Answers and place in a secure physical location",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS account as the 'root' user 2. On the top right you will see the __ 3. Click on the __ 4. From the drop-down menu Click `My Account`  5. In the `Configure Security Challenge Questions` section on the `Personal Information` page, configure three security challenge questions. 6. Click `Save questions` .",
           "AdditionalInformation": "",
           "References": ""
         }
@@ -354,8 +354,8 @@
           "Description": "The 'root' user account is the most privileged user in an AWS account. AWS Access Keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the 'root' user account be removed.",
           "RationaleStatement": "Removing access keys associated with the 'root' user account limits vectors by which the account can be compromised. Additionally, removing the 'root' access keys encourages the creation and use of role based accounts that are least privileged.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to delete or disable active 'root' user access keys\n\n**From Console:**\n\n1. Sign in to the AWS Management Console as 'root' and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n2. Click on __ at the top right and select `My Security Credentials` from the drop down list\n3. On the pop out screen Click on `Continue to Security Credentials` \n4. Click on `Access Keys` _(Access Key ID and Secret Access Key)_\n5. Under the `Status` column if there are any Keys which are Active\n - Click on `Make Inactive` - (Temporarily disable Key - may be needed again)\n - Click `Delete` - (Deleted keys cannot be recovered)",
-          "AuditProcedure": "Perform the following to determine if the 'root' user account has access keys:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Credential Report` \n5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file\n6. For the `` user, ensure the `access_key_1_active` and `access_key_2_active` fields are set to `FALSE` .\n\n**From Command Line:**\n\nRun the following command:\n```\n aws iam get-account-summary | grep \"AccountAccessKeysPresent\" \n```\nIf no 'root' access keys exist the output will show \"AccountAccessKeysPresent\": 0,. \n\nIf the output shows a \"1\" than 'root' keys exist, refer to the remediation procedure below.",
+          "RemediationProcedure": "Perform the following to delete or disable active 'root' user access keys  **From Console:**  1. Sign in to the AWS Management Console as 'root' and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 2. Click on __ at the top right and select `My Security Credentials` from the drop down list 3. On the pop out screen Click on `Continue to Security Credentials`  4. Click on `Access Keys` _(Access Key ID and Secret Access Key)_ 5. Under the `Status` column if there are any Keys which are Active  - Click on `Make Inactive` - (Temporarily disable Key - may be needed again)  - Click `Delete` - (Deleted keys cannot be recovered)",
+          "AuditProcedure": "Perform the following to determine if the 'root' user account has access keys:  **From Console:**  1. Login to the AWS Management Console 2. Click `Services`  3. Click `IAM`  4. Click on `Credential Report`  5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the `` user, ensure the `access_key_1_active` and `access_key_2_active` fields are set to `FALSE` .  **From Command Line:**  Run the following command: ```  aws iam get-account-summary | grep \"AccountAccessKeysPresent\"  ``` If no 'root' access keys exist, the output will show \"AccountAccessKeysPresent\": 0,.   If the output shows a \"1\" then 'root' keys exist, refer to the remediation procedure below.",
           "AdditionalInformation": "IAM User account \"root\" for us-gov cloud regions is not enabled by default. However, on request to AWS support enables 'root' access only through access-keys (CLI, API methods) for us-gov cloud region.",
           "References": "http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html:http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html:http://docs.aws.amazon.com/IAM/latest/APIReference/API_GetAccountSummary.html:https://aws.amazon.com/blogs/security/an-easier-way-to-determine-the-presence-of-aws-account-access-keys/"
         }
@@ -372,11 +372,11 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.\n\n**Note:** When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. (\"non-personal virtual MFA\") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company.",
+          "Description": "The 'root' user account is the most privileged user in an AWS account. Multi-factor Authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password as well as for an authentication code from their AWS MFA device.  **Note:** When virtual MFA is used for 'root' accounts, it is recommended that the device used is NOT a personal device, but rather a dedicated mobile device (tablet or phone) that is managed to be kept charged and secured independent of any individual personal devices. (\"non-personal virtual MFA\") This lessens the risks of losing access to the MFA due to device loss, device trade-in or if the individual owning the device is no longer employed at the company.",
           "RationaleStatement": "Enabling MFA provides increased security for console access as it requires the authenticating principal to possess a device that emits a time-sensitive key and have knowledge of a credential.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to establish MFA for the 'root' user account:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\n\n Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.\n\n2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account.\n3. Choose `Activate MFA` \n4. In the wizard, choose `A virtual MFA` device and then choose `Next Step` .\n5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes.\n6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see [Virtual MFA Applications](http://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications).) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device).\n7. Determine whether the MFA app supports QR codes, and then do one of the following:\n\n - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.\n - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.\n\nWhen you are finished, the virtual MFA device starts generating one-time passwords.\n\nIn the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA.",
-          "AuditProcedure": "Perform the following to determine if the 'root' user account has MFA setup:\n\n**From Console:**\n\n1. Login to the AWS Management Console\n2. Click `Services` \n3. Click `IAM` \n4. Click on `Credential Report` \n5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file\n6. For the `` user, ensure the `mfa_active` field is set to `TRUE` .\n\n**From Command Line:**\n\n1. Run the following command:\n```\n aws iam get-account-summary | grep \"AccountMFAEnabled\"\n```\n2. Ensure the AccountMFAEnabled property is set to 1",
+          "RemediationProcedure": "Perform the following to establish MFA for the 'root' user account:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).   Note: to manage MFA devices for the 'root' AWS account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.  2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account. 3. Choose `Activate MFA`  4. In the wizard, choose `A virtual MFA` device and then choose `Next Step` . 5. IAM generates and displays configuration information for the virtual MFA device, including a QR code graphic. The graphic is a representation of the 'secret configuration key' that is available for manual entry on devices that do not support QR codes. 6. Open your virtual MFA application. (For a list of apps that you can use for hosting virtual MFA devices, see [Virtual MFA Applications](http://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications).) If the virtual MFA application supports multiple accounts (multiple virtual MFA devices), choose the option to create a new account (a new virtual MFA device). 7. Determine whether the MFA app supports QR codes, and then do one of the following:   - Use the app to scan the QR code. For example, you might choose the camera icon or choose an option similar to Scan code, and then use the device's camera to scan the code.  - In the Manage MFA Device wizard, choose Show secret key for manual configuration, and then type the secret configuration key into your MFA application.  When you are finished, the virtual MFA device starts generating one-time passwords.  In the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password into the Authentication Code 2 box. Choose Assign Virtual MFA.",
+          "AuditProcedure": "Perform the following to determine if the 'root' user account has MFA setup:  **From Console:**  1. Login to the AWS Management Console 2. Click `Services`  3. Click `IAM`  4. Click on `Credential Report`  5. This will download a `.csv` file which contains credential usage for all IAM users within an AWS Account - open this file 6. For the `` user, ensure the `mfa_active` field is set to `TRUE` .  **From Command Line:**  1. Run the following command: ```  aws iam get-account-summary | grep \"AccountMFAEnabled\" ``` 2. Ensure the AccountMFAEnabled property is set to 1",
           "AdditionalInformation": "IAM User account \"root\" for us-gov cloud regions does not have console access. This recommendation is not applicable for us-gov cloud regions.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#id_root-user_manage_mfa:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root"
         }
@@ -394,10 +394,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "The 'root' user account is the most privileged user in an AWS account. MFA adds an extra layer of protection on top of a user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password as well as for an authentication code from their AWS MFA device. For Level 2, it is recommended that the 'root' user account be protected with a hardware MFA.",
-          "RationaleStatement": "A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides.\n\n**Note**: Using hardware MFA for many, many AWS accounts may create a logistical device management issue. If this is the case, consider implementing this Level 2 recommendation selectively to the highest security AWS accounts and the Level 1 recommendation applied to the remaining accounts.",
+          "RationaleStatement": "A hardware MFA has a smaller attack surface than a virtual MFA. For example, a hardware MFA does not suffer the attack surface introduced by the mobile smartphone on which a virtual MFA resides.  **Note**: Using hardware MFA for many, many AWS accounts may create a logistical device management issue. If this is the case, consider implementing this Level 2 recommendation selectively to the highest security AWS accounts and the Level 1 recommendation applied to the remaining accounts.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to establish a hardware MFA for the 'root' user account:\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).\nNote: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials.\n2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account.\n3. Choose `Activate MFA` \n4. In the wizard, choose `A hardware MFA` device and then choose `Next Step` .\n5. In the `Serial Number` box, enter the serial number that is found on the back of the MFA device.\n6. In the `Authentication Code 1` box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number.\n7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the `Authentication Code 2` box. You might need to press the button on the front of the device again to display the second number.\n8. Choose `Next Step` . The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device.\n\nRemediation for this recommendation is not available through AWS CLI.",
-          "AuditProcedure": "Perform the following to determine if the 'root' user account has a hardware MFA setup:\n\n1. Run the following command to determine if the 'root' account has MFA setup:\n```\n aws iam get-account-summary | grep \"AccountMFAEnabled\"\n```\n\nThe `AccountMFAEnabled` property is set to `1` will ensure that the 'root' user account has MFA (Virtual or Hardware) Enabled.\nIf `AccountMFAEnabled` property is set to `0` the account is not compliant with this recommendation.\n\n2. If `AccountMFAEnabled` property is set to `1`, determine 'root' account has Hardware MFA enabled.\nRun the following command to list all virtual MFA devices:\n```\n aws iam list-virtual-mfa-devices \n```\nIf the output contains one MFA with the following Serial Number, it means the MFA is virtual, not hardware and the account is not compliant with this recommendation:\n\n `\"SerialNumber\": \"arn:aws:iam::__:mfa/root-account-mfa-device\"`",
+          "RemediationProcedure": "Perform the following to establish a hardware MFA for the 'root' user account:  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). Note: to manage MFA devices for the AWS 'root' user account, you must use your 'root' account credentials to sign in to AWS. You cannot manage MFA devices for the 'root' account using other credentials. 2. Choose `Dashboard` , and under `Security Status` , expand `Activate MFA` on your root account. 3. Choose `Activate MFA`  4. In the wizard, choose `A hardware MFA` device and then choose `Next Step` . 5. In the `Serial Number` box, enter the serial number that is found on the back of the MFA device. 6. In the `Authentication Code 1` box, enter the six-digit number displayed by the MFA device. You might need to press the button on the front of the device to display the number. 7. Wait 30 seconds while the device refreshes the code, and then enter the next six-digit number into the `Authentication Code 2` box. You might need to press the button on the front of the device again to display the second number. 8. Choose `Next Step` . The MFA device is now associated with the AWS account. The next time you use your AWS account credentials to sign in, you must type a code from the hardware MFA device.  Remediation for this recommendation is not available through AWS CLI.",
+          "AuditProcedure": "Perform the following to determine if the 'root' user account has a hardware MFA setup:  1. Run the following command to determine if the 'root' account has MFA setup: ```  aws iam get-account-summary | grep \"AccountMFAEnabled\" ```  If the `AccountMFAEnabled` property is set to `1`, the 'root' user account has MFA (Virtual or Hardware) enabled. If the `AccountMFAEnabled` property is set to `0`, the account is not compliant with this recommendation.  2. If the `AccountMFAEnabled` property is set to `1`, determine whether the 'root' account has hardware MFA enabled. Run the following command to list all virtual MFA devices: ```  aws iam list-virtual-mfa-devices  ``` If the output contains one MFA with the following Serial Number, it means the MFA is virtual, not hardware, and the account is not compliant with this recommendation:   `\"SerialNumber\": \"arn:aws:iam::__:mfa/root-account-mfa-device\"`",
           "AdditionalInformation": "IAM User account 'root' for us-gov cloud regions does not have console access. This control is not applicable for us-gov cloud regions.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_physical.html#enable-hw-mfa-for-root"
         }
@@ -417,9 +417,9 @@
           "Description": "With the creation of an AWS account, a 'root user' is created that cannot be disabled or deleted. That user has unrestricted access to and control over all resources in the AWS account. It is highly recommended that the use of this account be avoided for everyday tasks.",
           "RationaleStatement": "The 'root user' has unrestricted access to and control over all account resources. Use of it is inconsistent with the principles of least privilege and separation of duties, and can lead to unnecessary harm due to error or account compromise.",
           "ImpactStatement": "",
-          "RemediationProcedure": "If you find that the 'root' user account is being used for daily activity to include administrative tasks that do not require the 'root' user:\n\n1. Change the 'root' user password.\n2. Deactivate or delete any access keys associate with the 'root' user.\n\n**Remember, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console at `https://console.aws.amazon.com/iam/`\n2. In the left pane, click `Credential Report`\n3. Click on `Download Report`\n4. Open of Save the file locally\n5. Locate the `` under the user column\n6. Review `password_last_used, access_key_1_last_used_date, access_key_2_last_used_date` to determine when the 'root user' was last used.\n\n**From Command Line:**\n\nRun the following CLI commands to provide a credential report for determining the last time the 'root user' was used:\n```\naws iam generate-credential-report\n```\n```\naws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 ''\n```\n\nReview `password_last_used`, `access_key_1_last_used_date`, `access_key_2_last_used_date` to determine when the _root user_ was last used.\n\n**Note:** There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user.",
-          "AdditionalInformation": "The 'root' user for us-gov cloud regions is not enabled by default. However, on request to AWS support, they can enable the 'root' user and grant access only through access-keys (CLI, API methods) for us-gov cloud region. If the 'root' user for us-gov cloud regions is enabled, this recommendation is applicable.\n\nMonitoring usage of the 'root' user can be accomplished by implementing recommendation 3.3 Ensure a log metric filter and alarm exist for usage of the 'root' user.",
+          "RemediationProcedure": "If you find that the 'root' user account is being used for daily activity, including administrative tasks that do not require the 'root' user:  1. Change the 'root' user password. 2. Deactivate or delete any access keys associated with the 'root' user.  **Remember**, anyone who has 'root' user credentials for your AWS account has unrestricted access to and control of all the resources in your account, including billing information.",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS Management Console at `https://console.aws.amazon.com/iam/` 2. In the left pane, click `Credential Report` 3. Click on `Download Report` 4. Open or save the file locally 5. Locate the `` under the user column 6. Review `password_last_used, access_key_1_last_used_date, access_key_2_last_used_date` to determine when the 'root user' was last used.  **From Command Line:**  Run the following CLI commands to provide a credential report for determining the last time the 'root user' was used: ``` aws iam generate-credential-report ``` ``` aws iam get-credential-report --query 'Content' --output text | base64 -d | cut -d, -f1,5,11,16 | grep -B1 '' ```  Review `password_last_used`, `access_key_1_last_used_date`, `access_key_2_last_used_date` to determine when the _root user_ was last used.  **Note:** There are a few conditions under which the use of the 'root' user account is required. Please see the reference links for all of the tasks that require use of the 'root' user.",
+          "AdditionalInformation": "The 'root' user for us-gov cloud regions is not enabled by default. However, on request to AWS support, they can enable the 'root' user and grant access only through access-keys (CLI, API methods) for us-gov cloud region. If the 'root' user for us-gov cloud regions is enabled, this recommendation is applicable.  Monitoring usage of the 'root' user can be accomplished by implementing recommendation 3.3 Ensure a log metric filter and alarm exist for usage of the 'root' user.",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html:https://docs.aws.amazon.com/general/latest/gr/aws_tasks-that-require-root.html"
         }
       ]
@@ -438,8 +438,8 @@
           "Description": "Password policies are, in part, used to enforce password complexity requirements. IAM password policies can be used to ensure password are at least a given length. It is recommended that the password policy require a minimum password length 14.",
           "RationaleStatement": "Setting a password complexity policy increases account resiliency against brute force login attempts.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to set the password policy as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Set \"Minimum password length\" to `14` or greater.\n5. Click \"Apply password policy\"\n\n**From Command Line:**\n```\n aws iam update-account-password-policy --minimum-password-length 14\n```\nNote: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.",
-          "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Ensure \"Minimum password length\" is set to 14 or greater.\n\n**From Command Line:**\n```\naws iam get-account-password-policy\n```\nEnsure the output of the above command includes \"MinimumPasswordLength\": 14 (or higher)",
+          "RemediationProcedure": "Perform the following to set the password policy as prescribed:  **From Console:**  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Set \"Minimum password length\" to `14` or greater. 5. Click \"Apply password policy\"  **From Command Line:** ```  aws iam update-account-password-policy --minimum-password-length 14 ``` Note: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.",
+          "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:  **From Console:**  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure \"Minimum password length\" is set to 14 or greater.  **From Command Line:** ``` aws iam get-account-password-policy ``` Ensure the output of the above command includes \"MinimumPasswordLength\": 14 (or higher)",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#configure-strong-password-policy"
         }
@@ -459,8 +459,8 @@
           "Description": "IAM password policies can prevent the reuse of a given password by the same user. It is recommended that the password policy prevent the reuse of passwords.",
           "RationaleStatement": "Preventing password reuse increases account resiliency against brute force login attempts.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to set the password policy as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Check \"Prevent password reuse\"\n5. Set \"Number of passwords to remember\" is set to `24` \n\n**From Command Line:**\n```\n aws iam update-account-password-policy --password-reuse-prevention 24\n```\nNote: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.",
-          "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:\n\n**From Console:**\n\n1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings)\n2. Go to IAM Service on the AWS Console\n3. Click on Account Settings on the Left Pane\n4. Ensure \"Prevent password reuse\" is checked\n5. Ensure \"Number of passwords to remember\" is set to 24\n\n**From Command Line:**\n```\naws iam get-account-password-policy \n```\nEnsure the output of the above command includes \"PasswordReusePrevention\": 24",
+          "RemediationProcedure": "Perform the following to set the password policy as prescribed:  **From Console:**  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Check \"Prevent password reuse\" 5. Set \"Number of passwords to remember\" to `24`   **From Command Line:** ```  aws iam update-account-password-policy --password-reuse-prevention 24 ``` Note: All commands starting with \"aws iam update-account-password-policy\" can be combined into a single command.",
+          "AuditProcedure": "Perform the following to ensure the password policy is configured as prescribed:  **From Console:**  1. Login to AWS Console (with appropriate permissions to View Identity Access Management Account Settings) 2. Go to IAM Service on the AWS Console 3. Click on Account Settings on the Left Pane 4. Ensure \"Prevent password reuse\" is checked 5. Ensure \"Number of passwords to remember\" is set to 24  **From Command Line:** ``` aws iam get-account-password-policy  ``` Ensure the output of the above command includes \"PasswordReusePrevention\": 24",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html:https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#configure-strong-password-policy"
         }
@@ -480,8 +480,8 @@
           "Description": "Amazon S3 provides a variety of no, or low, cost encryption options to protect data at rest.",
           "RationaleStatement": "Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.",
           "ImpactStatement": "Amazon S3 buckets with default bucket encryption using SSE-KMS cannot be used as destination buckets for Amazon S3 server access logging. Only SSE-S3 default encryption is supported for server access log destination buckets.",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select a Bucket.\n3. Click on 'Properties'.\n4. Click edit on `Default Encryption`.\n5. Select either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`.\n6. Click `Save`\n7. Repeat for all the buckets in your AWS account lacking encryption.\n\n**From Command Line:**\n\nRun either \n```\naws s3api put-bucket-encryption --bucket  --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}'\n```\n or \n```\naws s3api put-bucket-encryption --bucket  --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"aws:kms\",\"KMSMasterKeyID\": \"aws/s3\"}}]}'\n```\n\n**Note:** the KMSMasterKeyID can be set to the master key of your choosing; aws/s3 is an AWS preconfigured default.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select a Bucket.\n3. Click on 'Properties'.\n4. Verify that `Default Encryption` is enabled, and displays either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`.\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. Run command to list buckets\n```\naws s3 ls\n```\n2. For each bucket, run \n```\naws s3api get-bucket-encryption --bucket \n```\n3. Verify that either \n```\n\"SSEAlgorithm\": \"AES256\"\n```\n or \n```\n\"SSEAlgorithm\": \"aws:kms\"```\n is displayed.",
+          "RemediationProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Select a Bucket. 3. Click on 'Properties'. 4. Click edit on `Default Encryption`. 5. Select either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 6. Click `Save` 7. Repeat for all the buckets in your AWS account lacking encryption.  **From Command Line:**  Run either  ``` aws s3api put-bucket-encryption --bucket  --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}' ```  or  ``` aws s3api put-bucket-encryption --bucket  --server-side-encryption-configuration '{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"aws:kms\",\"KMSMasterKeyID\": \"aws/s3\"}}]}' ```  **Note:** the KMSMasterKeyID can be set to the master key of your choosing; aws/s3 is an AWS preconfigured default.",
+          "AuditProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Select a Bucket. 3. Click on 'Properties'. 4. Verify that `Default Encryption` is enabled, and displays either `AES-256`, `AWS-KMS`, `SSE-KMS` or `SSE-S3`. 5. Repeat for all the buckets in your AWS account.  **From Command Line:**  1. Run command to list buckets ``` aws s3 ls ``` 2. For each bucket, run  ``` aws s3api get-bucket-encryption --bucket  ``` 3. Verify that either  ``` \"SSEAlgorithm\": \"AES256\" ```  or  ``` \"SSEAlgorithm\": \"aws:kms\"```  is displayed.",
           "AdditionalInformation": "S3 bucket encryption only applies to objects as they are placed in the bucket. Enabling S3 bucket encryption does **not** encrypt objects previously stored within the bucket.",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/default-bucket-encryption.html:https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-related-resources"
         }
@@ -501,8 +501,8 @@
           "Description": "At the Amazon S3 bucket level, you can configure permissions through a bucket policy making the objects accessible only through HTTPS.",
           "RationaleStatement": "By default, Amazon S3 allows both HTTP and HTTPS requests. To achieve only allowing access to Amazon S3 objects through HTTPS you also have to explicitly deny access to HTTP requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP requests will not comply with this recommendation.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/\n2. Select the Check box next to the Bucket.\n3. Click on 'Permissions'.\n4. Click 'Bucket Policy'\n5. Add this to the existing policy filling in the required information\n```\n{\n \"Sid\": \",\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }\n }\n }\n```\n6. Save\n7. Repeat for all the buckets in your AWS account that contain sensitive data.\n\n**From Console** \n\nusing AWS Policy Generator:\n\n1. Repeat steps 1-4 above.\n2. Click on `Policy Generator` at the bottom of the Bucket Policy Editor\n3. Select Policy Type\n`S3 Bucket Policy`\n4. Add Statements\n- `Effect` = Deny\n- `Principal` = *\n- `AWS Service` = Amazon S3\n- `Actions` = *\n- `Amazon Resource Name` = \n5. Generate Policy\n6. Copy the text and add it to the Bucket Policy.\n\n**From Command Line:**\n\n1. Export the bucket policy to a json file.\n```\naws s3api get-bucket-policy --bucket  --query Policy --output text > policy.json\n```\n\n2. Modify the policy.json file by adding in this statement:\n```\n{\n \"Sid\": \",\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }\n }\n }\n```\n3. Apply this modified policy back to the S3 bucket:\n```\naws s3api put-bucket-policy --bucket  --policy file://policy.json\n```",
-          "AuditProcedure": "To allow access to HTTPS you can use a condition that checks for the key `\"aws:SecureTransport: true\"`. This means that the request is sent through HTTPS but that HTTP can still be used. So to make sure you do not allow HTTP access confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key \"aws:SecureTransport\": \"false\".\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/\n2. Select the Check box next to the Bucket.\n3. Click on 'Permissions', then Click on `Bucket Policy`.\n4. Ensure that a policy is listed that matches:\n```\n'{\n \"Sid\": ,\n \"Effect\": \"Deny\",\n \"Principal\": \"*\",\n \"Action\": \"s3:*\",\n \"Resource\": \"arn:aws:s3:::/*\",\n \"Condition\": {\n \"Bool\": {\n \"aws:SecureTransport\": \"false\"\n }'\n```\n`` and `` will be specific to your account\n\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets \n```\naws s3 ls\n```\n2. Using the list of buckets run this command on each of them:\n```\naws s3api get-bucket-policy --bucket  | grep aws:SecureTransport\n```\n3. Confirm that `aws:SecureTransport` is set to false `aws:SecureTransport:false`\n4. Confirm that the policy line has Effect set to Deny 'Effect:Deny'",
+          "RemediationProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions'. 4. Click 'Bucket Policy' 5. Add this to the existing policy filling in the required information ``` {  \"Sid\": \",  \"Effect\": \"Deny\",  \"Principal\": \"*\",  \"Action\": \"s3:*\",  \"Resource\": \"arn:aws:s3:::/*\",  \"Condition\": {  \"Bool\": {  \"aws:SecureTransport\": \"false\"  }  }  } ``` 6. Save 7. Repeat for all the buckets in your AWS account that contain sensitive data.  **From Console**   using AWS Policy Generator:  1. Repeat steps 1-4 above. 2. Click on `Policy Generator` at the bottom of the Bucket Policy Editor 3. Select Policy Type `S3 Bucket Policy` 4. Add Statements - `Effect` = Deny - `Principal` = * - `AWS Service` = Amazon S3 - `Actions` = * - `Amazon Resource Name` =  5. Generate Policy 6. Copy the text and add it to the Bucket Policy.  **From Command Line:**  1. Export the bucket policy to a json file. ``` aws s3api get-bucket-policy --bucket  --query Policy --output text > policy.json ```  2. Modify the policy.json file by adding in this statement: ``` {  \"Sid\": \",  \"Effect\": \"Deny\",  \"Principal\": \"*\",  \"Action\": \"s3:*\",  \"Resource\": \"arn:aws:s3:::/*\",  \"Condition\": {  \"Bool\": {  \"aws:SecureTransport\": \"false\"  }  }  } ``` 3. Apply this modified policy back to the S3 bucket: ``` aws s3api put-bucket-policy --bucket  --policy file://policy.json ```",
+          "AuditProcedure": "To allow access to HTTPS you can use a condition that checks for the key `\"aws:SecureTransport: true\"`. This means that the request is sent through HTTPS but that HTTP can still be used. So to make sure you do not allow HTTP access confirm that there is a bucket policy that explicitly denies access for HTTP requests and that it contains the key \"aws:SecureTransport\": \"false\".  **From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ 2. Select the Check box next to the Bucket. 3. Click on 'Permissions', then Click on `Bucket Policy`. 4. Ensure that a policy is listed that matches: ``` '{  \"Sid\": ,  \"Effect\": \"Deny\",  \"Principal\": \"*\",  \"Action\": \"s3:*\",  \"Resource\": \"arn:aws:s3:::/*\",  \"Condition\": {  \"Bool\": {  \"aws:SecureTransport\": \"false\"  }' ``` `` and `` will be specific to your account  5. Repeat for all the buckets in your AWS account.  **From Command Line:**  1. List all of the S3 Buckets  ``` aws s3 ls ``` 2. Using the list of buckets run this command on each of them: ``` aws s3api get-bucket-policy --bucket  | grep aws:SecureTransport ``` 3. Confirm that `aws:SecureTransport` is set to false `aws:SecureTransport:false` 4. Confirm that the policy line has Effect set to Deny 'Effect:Deny'",
           "AdditionalInformation": "",
           "References": "https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/:https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/:https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-policy.html"
         }
@@ -522,8 +522,8 @@
           "Description": "Once MFA Delete is enabled on your sensitive and classified S3 bucket it requires the user to have two forms of authentication.",
           "RationaleStatement": "Adding MFA delete to an S3 bucket, requires additional authentication when you change the version state of your bucket or you delete and object version adding another layer of security in the event your security credentials are compromised or unauthorized access is granted.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the steps below to enable MFA delete on an S3 bucket.\n\nNote:\n-You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API.\n-You must use your 'root' account to enable MFA Delete on S3 buckets.\n\n**From Command line:**\n\n1. Run the s3api put-bucket-versioning command\n\n```\naws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa “arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode”\n```",
-          "AuditProcedure": "Perform the steps below to confirm MFA delete is configured on an S3 Bucket\n\n**From Console:**\n\n1. Login to the S3 console at `https://console.aws.amazon.com/s3/`\n\n2. Click the `Check` box next to the Bucket name you want to confirm\n\n3. In the window under `Properties`\n\n4. Confirm that Versioning is `Enabled`\n\n5. Confirm that MFA Delete is `Enabled`\n\n**From Command Line:**\n\n1. Run the `get-bucket-versioning`\n```\naws s3api get-bucket-versioning --bucket my-bucket\n```\n\nOutput example:\n```\n \n Enabled\n Enabled \n\n```\n\nIf the Console or the CLI output does not show Versioning and MFA Delete `enabled` refer to the remediation below.",
+          "RemediationProcedure": "Perform the steps below to enable MFA delete on an S3 bucket.  Note: -You cannot enable MFA Delete using the AWS Management Console. You must use the AWS CLI or API. -You must use your 'root' account to enable MFA Delete on S3 buckets.  **From Command line:**  1. Run the s3api put-bucket-versioning command  ``` aws s3api put-bucket-versioning --profile my-root-profile --bucket Bucket_Name --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa \"arn:aws:iam::aws_account_id:mfa/root-account-mfa-device passcode\" ```",
+          "AuditProcedure": "Perform the steps below to confirm MFA delete is configured on an S3 Bucket  **From Console:**  1. Login to the S3 console at `https://console.aws.amazon.com/s3/`  2. Click the `Check` box next to the Bucket name you want to confirm  3. In the window under `Properties`  4. Confirm that Versioning is `Enabled`  5. Confirm that MFA Delete is `Enabled`  **From Command Line:**  1. Run the `get-bucket-versioning` ``` aws s3api get-bucket-versioning --bucket my-bucket ```  Output example: ```    Enabled  Enabled   ```  If the Console or the CLI output does not show Versioning and MFA Delete `enabled` refer to the remediation below.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete:https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html:https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-broken.html"
         }
@@ -541,10 +541,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Manual",
           "Description": "Amazon S3 buckets can contain sensitive data, that for security purposes should be discovered, monitored, classified and protected. Macie along with other 3rd party tools can automatically provide an inventory of Amazon S3 buckets.",
-          "RationaleStatement": "Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information.\n\nAmazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.",
+          "RationaleStatement": "Using a Cloud service or 3rd Party software to continuously monitor and automate the process of data discovery and classification for S3 buckets using machine learning and pattern matching is a strong defense in protecting that information.  Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.",
           "ImpactStatement": "There is a cost associated with using Amazon Macie. There is also typically a cost associated with 3rd Party tools that perform similar processes and protection.",
-          "RemediationProcedure": "Perform the steps below to enable and configure Amazon Macie\n\n**From Console:**\n\n1. Log on to the Macie console at `https://console.aws.amazon.com/macie/`\n\n2. Click `Get started`.\n\n3. Click `Enable Macie`.\n\nSetup a repository for sensitive data discovery results\n\n1. In the Left pane, under Settings, click `Discovery results`.\n\n2. Make sure `Create bucket` is selected.\n\n3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. In addition, the name must start with a lowercase letter or a number.\n\n4. Click on `Advanced`.\n\n5. Block all public access, make sure `Yes` is selected.\n\n6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket.\n\n7. Click on `Save`\n\nCreate a job to discover sensitive data\n\n1. In the left pane, click `S3 buckets`. Macie displays a list of all the S3 buckets for your account.\n\n2. Select the `check box` for each bucket that you want Macie to analyze as part of the job\n\n3. Click `Create job`.\n\n3. Click `Quick create`.\n\n4. For the Name and description step, enter a name and, optionally, a description of the job.\n\n5. Then click `Next`.\n\n6. For the Review and create step, click `Submit`.\n\nReview your findings\n\n1. In the left pane, click `Findings`.\n\n2. To view the details of a specific finding, choose any field other than the check box for the finding.\n\nIf you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool.",
-          "AuditProcedure": "Perform the following steps to determine if Macie is running:\n\n**From Console:**\n\n 1. Login to the Macie console at https://console.aws.amazon.com/macie/\n\n 2. In the left hand pane click on By job under findings.\n\n 3. Confirm that you have a Job setup for your S3 Buckets\n\nWhen you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below.\n\nIf you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation.",
+          "RemediationProcedure": "Perform the steps below to enable and configure Amazon Macie  **From Console:**  1. Log on to the Macie console at `https://console.aws.amazon.com/macie/`  2. Click `Get started`.  3. Click `Enable Macie`.  Setup a repository for sensitive data discovery results  1. In the Left pane, under Settings, click `Discovery results`.  2. Make sure `Create bucket` is selected.  3. Create a bucket, enter a name for the bucket. The name must be unique across all S3 buckets. In addition, the name must start with a lowercase letter or a number.  4. Click on `Advanced`.  5. Block all public access, make sure `Yes` is selected.  6. KMS encryption, specify the AWS KMS key that you want to use to encrypt the results. The key must be a symmetric, customer master key (CMK) that's in the same Region as the S3 bucket.  7. Click on `Save`  Create a job to discover sensitive data  1. In the left pane, click `S3 buckets`. Macie displays a list of all the S3 buckets for your account.  2. Select the `check box` for each bucket that you want Macie to analyze as part of the job  3. Click `Create job`.  3. Click `Quick create`.  4. For the Name and description step, enter a name and, optionally, a description of the job.  5. Then click `Next`.  6. For the Review and create step, click `Submit`.  Review your findings  1. In the left pane, click `Findings`.  2. To view the details of a specific finding, choose any field other than the check box for the finding.  If you are using a 3rd Party tool to manage and protect your s3 data, follow the Vendor documentation for implementing and configuring that tool.",
+          "AuditProcedure": "Perform the following steps to determine if Macie is running:  **From Console:**   1. Login to the Macie console at https://console.aws.amazon.com/macie/   2. In the left hand pane click on By job under findings.   3. Confirm that you have a Job setup for your S3 Buckets  When you log into the Macie console if you aren't taken to the summary page and you don't have a job setup and running then refer to the remediation procedure below.  If you are using a 3rd Party tool to manage and protect your s3 data you meet this recommendation.",
           "AdditionalInformation": "",
           "References": "https://aws.amazon.com/macie/getting-started/:https://docs.aws.amazon.com/workspaces/latest/adminguide/data-protection.html:https://docs.aws.amazon.com/macie/latest/user/data-classification.html"
         }
@@ -563,10 +563,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "Amazon S3 provides `Block public access (bucket settings)` and `Block public access (account settings)` to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principal with sufficient S3 permissions can enable public access at the bucket and/or object level. While enabled, `Block public access (bucket settings)` prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, `Block public access (account settings)` prevents all buckets, and contained objects, from becoming publicly accessible across the entire account.",
-          "RationaleStatement": "Amazon S3 `Block public access (bucket settings)` prevents the accidental or malicious public exposure of data contained within the respective bucket(s). \n\nAmazon S3 `Block public access (account settings)` prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account.\n\nWhether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case.",
+          "RationaleStatement": "Amazon S3 `Block public access (bucket settings)` prevents the accidental or malicious public exposure of data contained within the respective bucket(s).   Amazon S3 `Block public access (account settings)` prevents the accidental or malicious public exposure of data contained within all buckets of the respective AWS account.  Whether blocking public access to all or some buckets is an organizational decision that should be based on data sensitivity, least privilege, and use case.",
           "ImpactStatement": "When you apply Block Public Access settings to an account, the settings apply to all AWS Regions globally. The settings might not take effect in all Regions immediately or simultaneously, but they eventually propagate to all Regions.",
-          "RemediationProcedure": "**If utilizing Block Public Access (bucket settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select the Check box next to the Bucket.\n3. Click on 'Edit public access settings'.\n4. Click 'Block all public access'\n5. Repeat for all the buckets in your AWS account that contain sensitive data.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets\n```\naws s3 ls\n```\n2. Set the Block Public Access to true on that bucket\n```\naws s3api put-public-access-block --bucket  --public-access-block-configuration \"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true\"\n```\n\n**If utilizing Block Public Access (account settings)**\n\n**From Console:**\n\nIf the output reads `true` for the separate configuration settings then it is set on the account.\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Choose `Block Public Access (account settings)`\n3. Choose `Edit` to change the block public access settings for all the buckets in your AWS account\n4. Choose the settings you want to change, and then choose `Save`. For details about each setting, pause on the `i` icons.\n5. When you're asked for confirmation, enter `confirm`. Then Click `Confirm` to save your changes.\n\n**From Command Line:**\n\nTo set Block Public access settings for this account, run the following command:\n```\naws s3control put-public-access-block\n--public-access-block-configuration BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true\n--account-id \n```",
-          "AuditProcedure": "**If utilizing Block Public Access (bucket settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Select the Check box next to the Bucket.\n3. Click on 'Edit public access settings'.\n4. Ensure that block public access settings are set appropriately for this bucket\n5. Repeat for all the buckets in your AWS account.\n\n**From Command Line:**\n\n1. List all of the S3 Buckets\n```\naws s3 ls\n```\n2. Find the public access setting on that bucket\n```\naws s3api get-public-access-block --bucket \n```\nOutput if Block Public access is enabled:\n\n```\n{\n \"PublicAccessBlockConfiguration\": {\n \"BlockPublicAcls\": true,\n \"IgnorePublicAcls\": true,\n \"BlockPublicPolicy\": true,\n \"RestrictPublicBuckets\": true\n }\n}\n```\n\nIf the output reads `false` for the separate configuration settings then proceed to the remediation.\n\n**If utilizing Block Public Access (account settings)**\n\n**From Console:**\n\n1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/ \n2. Choose `Block public access (account settings)`\n3. Ensure that block public access settings are set appropriately for your AWS account.\n\n**From Command Line:**\n\nTo check Public access settings for this account status, run the following command,\n`aws s3control get-public-access-block --account-id  --region `\n\nOutput if Block Public access is enabled:\n\n```\n{\n \"PublicAccessBlockConfiguration\": {\n \"IgnorePublicAcls\": true, \n \"BlockPublicPolicy\": true, \n \"BlockPublicAcls\": true, \n \"RestrictPublicBuckets\": true\n }\n}\n```\n\nIf the output reads `false` for the separate configuration settings then proceed to the remediation.",
+          "RemediationProcedure": "**If utilizing Block Public Access (bucket settings)**  **From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Click 'Block all public access' 5. Repeat for all the buckets in your AWS account that contain sensitive data.  **From Command Line:**  1. List all of the S3 Buckets ``` aws s3 ls ``` 2. Set the Block Public Access to true on that bucket ``` aws s3api put-public-access-block --bucket  --public-access-block-configuration \"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true\" ```  **If utilizing Block Public Access (account settings)**  **From Console:**  If the output reads `true` for the separate configuration settings then it is set on the account.  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Choose `Block Public Access (account settings)` 3. Choose `Edit` to change the block public access settings for all the buckets in your AWS account 4. Choose the settings you want to change, and then choose `Save`. For details about each setting, pause on the `i` icons. 5. When you're asked for confirmation, enter `confirm`. Then Click `Confirm` to save your changes.  **From Command Line:**  To set Block Public access settings for this account, run the following command: ``` aws s3control put-public-access-block --public-access-block-configuration BlockPublicAcls=true, IgnorePublicAcls=true, BlockPublicPolicy=true, RestrictPublicBuckets=true --account-id  ```",
+          "AuditProcedure": "**If utilizing Block Public Access (bucket settings)**  **From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Select the Check box next to the Bucket. 3. Click on 'Edit public access settings'. 4. Ensure that block public access settings are set appropriately for this bucket 5. Repeat for all the buckets in your AWS account.  **From Command Line:**  1. List all of the S3 Buckets ``` aws s3 ls ``` 2. Find the public access setting on that bucket ``` aws s3api get-public-access-block --bucket  ``` Output if Block Public access is enabled:  ``` {  \"PublicAccessBlockConfiguration\": {  \"BlockPublicAcls\": true,  \"IgnorePublicAcls\": true,  \"BlockPublicPolicy\": true,  \"RestrictPublicBuckets\": true  } } ```  If the output reads `false` for the separate configuration settings then proceed to the remediation.  **If utilizing Block Public Access (account settings)**  **From Console:**  1. Login to AWS Management Console and open the Amazon S3 console using https://console.aws.amazon.com/s3/  2. Choose `Block public access (account settings)` 3. Ensure that block public access settings are set appropriately for your AWS account.  **From Command Line:**  To check Public access settings for this account status, run the following command, `aws s3control get-public-access-block --account-id  --region `  Output if Block Public access is enabled:  ``` {  \"PublicAccessBlockConfiguration\": {  \"IgnorePublicAcls\": true,   \"BlockPublicPolicy\": true,   \"BlockPublicAcls\": true,   \"RestrictPublicBuckets\": true  } } ```  If the output reads `false` for the separate configuration settings then proceed to the remediation.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-account.html"
         }
@@ -586,8 +586,8 @@
           "Description": "Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block Store (EBS) service. While disabled by default, forcing encryption at EBS volume creation is supported.",
           "RationaleStatement": "Encrypting data at rest reduces the likelihood that it is unintentionally exposed and can nullify the impact of disclosure if the encryption remains unbroken.",
           "ImpactStatement": "Losing access or removing the KMS key in use by the EBS volumes will result in no longer being able to access the volumes.",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ \n2. Under `Account attributes`, click `EBS encryption`.\n3. Click `Manage`.\n4. Click the `Enable` checkbox.\n5. Click `Update EBS encryption`\n6. Repeat for every region requiring the change.\n\n**Note:** EBS volume encryption is configured per region.\n\n**From Command Line:**\n\n1. Run \n```\naws --region  ec2 enable-ebs-encryption-by-default\n```\n2. Verify that `\"EbsEncryptionByDefault\": true` is displayed.\n3. Repeat every region requiring the change.\n\n**Note:** EBS volume encryption is configured per region.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ \n2. Under `Account attributes`, click `EBS encryption`.\n3. Verify `Always encrypt new EBS volumes` displays `Enabled`.\n4. Review every region in-use.\n\n**Note:** EBS volume encryption is configured per region.\n\n**From Command Line:**\n\n1. Run \n```\naws --region  ec2 get-ebs-encryption-by-default\n```\n2. Verify that `\"EbsEncryptionByDefault\": true` is displayed.\n3. Review every region in-use.\n\n**Note:** EBS volume encryption is configured per region.",
+          "RemediationProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/  2. Under `Account attributes`, click `EBS encryption`. 3. Click `Manage`. 4. Click the `Enable` checkbox. 5. Click `Update EBS encryption` 6. Repeat for every region requiring the change.  **Note:** EBS volume encryption is configured per region.  **From Command Line:**  1. Run  ``` aws --region  ec2 enable-ebs-encryption-by-default ``` 2. Verify that `\"EbsEncryptionByDefault\": true` is displayed. 3. Repeat every region requiring the change.  **Note:** EBS volume encryption is configured per region.",
+          "AuditProcedure": "**From Console:**  1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/  2. Under `Account attributes`, click `EBS encryption`. 3. Verify `Always encrypt new EBS volumes` displays `Enabled`. 4. Review every region in-use.  **Note:** EBS volume encryption is configured per region.  **From Command Line:**  1. Run  ``` aws --region  ec2 get-ebs-encryption-by-default ``` 2. Verify that `\"EbsEncryptionByDefault\": true` is displayed. 3. Review every region in-use.  **Note:** EBS volume encryption is configured per region.",
           "AdditionalInformation": "Default EBS volume encryption only applies to newly created EBS volumes. Existing EBS volumes are **not** converted automatically.",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html:https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/"
         }
@@ -607,8 +607,8 @@
           "Description": "Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance.",
           "RationaleStatement": "Databases are likely to hold sensitive and critical data, it is highly recommended to implement encryption in order to protect your data from unauthorized access or disclosure. With RDS encryption enabled, the data stored on the instance's underlying storage, the automated backups, read replicas, and snapshots, are all encrypted.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. In the left navigation panel, click on `Databases`\n3. Select the Database instance that needs to be encrypted.\n4. Click on `Actions` button placed at the top right and select `Take Snapshot`.\n5. On the Take Snapshot page, enter a database name of which you want to take a snapshot in the `Snapshot Name` field and click on `Take Snapshot`.\n6. Select the newly created snapshot and click on the `Action` button placed at the top right and select `Copy snapshot` from the Action menu.\n7. On the Make Copy of DB Snapshot page, perform the following:\n\n- In the New DB Snapshot Identifier field, Enter a name for the `new snapshot`.\n- Check `Copy Tags`, New snapshot must have the same tags as the source snapshot.\n- Select `Yes` from the `Enable Encryption` dropdown list to enable encryption, You can choose to use the AWS default encryption key or custom key from Master Key dropdown list.\n\n8. Click `Copy Snapshot` to create an encrypted copy of the selected instance snapshot.\n9. Select the new Snapshot Encrypted Copy and click on the `Action` button placed at the top right and select `Restore Snapshot` button from the Action menu, This will restore the encrypted snapshot to a new database instance.\n10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field.\n11. Review the instance configuration details and click `Restore DB Instance`.\n12. As the new instance provisioning process is completed can update application configuration to refer to the endpoint of the new Encrypted database instance Once the database endpoint is changed at the application level, can remove the unencrypted instance.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names available in the selected AWS region, The command output should return the database instance identifier.\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. Run `create-db-snapshot` command to create a snapshot for the selected database instance, The command output will return the `new snapshot` with name DB Snapshot Name.\n```\naws rds create-db-snapshot --region  --db-snapshot-identifier  --db-instance-identifier \n```\n3. Now run `list-aliases` command to list the KMS keys aliases available in a specified region, The command output should return each `key alias currently available`. For our RDS encryption activation process, locate the ID of the AWS default KMS key.\n```\naws kms list-aliases --region \n```\n4. Run `copy-db-snapshot` command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot, The command output will return the `encrypted instance snapshot configuration`.\n```\naws rds copy-db-snapshot --region  --source-db-snapshot-identifier  --target-db-snapshot-identifier  --copy-tags --kms-key-id \n```\n5. Run `restore-db-instance-from-db-snapshot` command to restore the encrypted snapshot created at the previous step to a new database instance, If successful, the command output should return the new encrypted database instance configuration.\n```\naws rds restore-db-instance-from-db-snapshot --region  --db-instance-identifier  --db-snapshot-identifier \n```\n6. 
Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region, Output will return database instance identifier name Select encrypted database name that we just created DB-Name-Encrypted.\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n7. Run again `describe-db-instances` command using the RDS instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True`.\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].StorageEncrypted'\n```",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/\n2. In the navigation pane, under RDS dashboard, click `Databases`.\n3. Select the RDS Instance that you want to examine\n4. Click `Instance Name` to see details, then click on `Configuration` tab.\n5. Under Configuration Details section, In Storage pane search for the `Encryption Enabled` Status.\n6. If the current status is set to `Disabled`, Encryption is not enabled for the selected RDS Instance database instance.\n7. Repeat steps 3 to 7 to verify encryption status of other RDS Instance in same region.\n8. Change region from the top of the navigation bar and repeat audit for other regions.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS Instance database names, available in the selected AWS region, Output will return each Instance database identifier-name.\n ```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. Run again `describe-db-instances` command using the RDS Instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True` Or `False`.\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].StorageEncrypted'\n```\n3. If the StorageEncrypted parameter value is `False`, Encryption is not enabled for the selected RDS database instance.\n4. Repeat steps 1 to 3 for auditing each RDS Instance and change Region to verify for other regions",
+          "RemediationProcedure": "**From Console:**  1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on `Databases` 3. Select the Database instance that needs to be encrypted. 4. Click on `Actions` button placed at the top right and select `Take Snapshot`. 5. On the Take Snapshot page, enter a database name of which you want to take a snapshot in the `Snapshot Name` field and click on `Take Snapshot`. 6. Select the newly created snapshot and click on the `Action` button placed at the top right and select `Copy snapshot` from the Action menu. 7. On the Make Copy of DB Snapshot page, perform the following:  - In the New DB Snapshot Identifier field, Enter a name for the `new snapshot`. - Check `Copy Tags`, New snapshot must have the same tags as the source snapshot. - Select `Yes` from the `Enable Encryption` dropdown list to enable encryption, You can choose to use the AWS default encryption key or custom key from Master Key dropdown list.  8. Click `Copy Snapshot` to create an encrypted copy of the selected instance snapshot. 9. Select the new Snapshot Encrypted Copy and click on the `Action` button placed at the top right and select `Restore Snapshot` button from the Action menu, This will restore the encrypted snapshot to a new database instance. 10. On the Restore DB Instance page, enter a unique name for the new database instance in the DB Instance Identifier field. 11. Review the instance configuration details and click `Restore DB Instance`. 12. As the new instance provisioning process is completed can update application configuration to refer to the endpoint of the new Encrypted database instance Once the database endpoint is changed at the application level, can remove the unencrypted instance.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database names available in the selected AWS region, The command output should return the database instance identifier. ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. Run `create-db-snapshot` command to create a snapshot for the selected database instance, The command output will return the `new snapshot` with name DB Snapshot Name. ``` aws rds create-db-snapshot --region  --db-snapshot-identifier  --db-instance-identifier  ``` 3. Now run `list-aliases` command to list the KMS keys aliases available in a specified region, The command output should return each `key alias currently available`. For our RDS encryption activation process, locate the ID of the AWS default KMS key. ``` aws kms list-aliases --region  ``` 4. Run `copy-db-snapshot` command using the default KMS key ID for RDS instances returned earlier to create an encrypted copy of the database instance snapshot, The command output will return the `encrypted instance snapshot configuration`. ``` aws rds copy-db-snapshot --region  --source-db-snapshot-identifier  --target-db-snapshot-identifier  --copy-tags --kms-key-id  ``` 5. Run `restore-db-instance-from-db-snapshot` command to restore the encrypted snapshot created at the previous step to a new database instance, If successful, the command output should return the new encrypted database instance configuration. ``` aws rds restore-db-instance-from-db-snapshot --region  --db-instance-identifier  --db-snapshot-identifier  ``` 6. 
Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region, Output will return database instance identifier name Select encrypted database name that we just created DB-Name-Encrypted. ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 7. Run again `describe-db-instances` command using the RDS instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True`. ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].StorageEncrypted' ```",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS Management Console and open the RDS dashboard at https://console.aws.amazon.com/rds/ 2. In the navigation pane, under RDS dashboard, click `Databases`. 3. Select the RDS Instance that you want to examine 4. Click `Instance Name` to see details, then click on `Configuration` tab. 5. Under Configuration Details section, In Storage pane search for the `Encryption Enabled` Status. 6. If the current status is set to `Disabled`, Encryption is not enabled for the selected RDS Instance database instance. 7. Repeat steps 3 to 7 to verify encryption status of other RDS Instance in same region. 8. Change region from the top of the navigation bar and repeat audit for other regions.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS Instance database names, available in the selected AWS region, Output will return each Instance database identifier-name.  ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. Run again `describe-db-instances` command using the RDS Instance identifier returned earlier, to determine if the selected database instance is encrypted, The command output should return the encryption status `True` Or `False`. ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].StorageEncrypted' ``` 3. If the StorageEncrypted parameter value is `False`, Encryption is not enabled for the selected RDS database instance. 4. Repeat steps 1 to 3 for auditing each RDS Instance and change Region to verify for other regions",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html:https://aws.amazon.com/blogs/database/selecting-the-right-encryption-options-for-amazon-rds-and-amazon-aurora-database-engines/#:~:text=With%20RDS%2Dencrypted%20resources%2C%20data,transparent%20to%20your%20database%20engine.:https://aws.amazon.com/rds/features/security/"
         }
@@ -628,8 +628,8 @@
           "Description": "Ensure that RDS database instances have the Auto Minor Version Upgrade flag enabled in order to receive automatically minor engine upgrades during the specified maintenance window. So, RDS instances can get the new features, bug fixes, and security patches for their database engines.",
           "RationaleStatement": "AWS RDS will occasionally deprecate minor engine versions and provide new ones for an upgrade. When the last version number within the release is replaced, the version changed is considered minor. With Auto Minor Version Upgrade feature enabled, the version upgrades will occur automatically during the specified maintenance window so your RDS instances can get the new features, bug fixes, and security patches for their database engines.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. In the left navigation panel, click on `Databases`.\n3. Select the RDS instance that wants to update.\n4. Click on the `Modify` button placed on the top right side.\n5. On the `Modify DB Instance: ` page, In the `Maintenance` section, select `Auto minor version upgrade` click on the `Yes` radio button.\n6. At the bottom of the page click on `Continue`, check to Apply Immediately to apply the changes immediately, or select `Apply during the next scheduled maintenance window` to avoid any downtime.\n7. Review the changes and click on `Modify DB Instance`. The instance status should change from available to modifying and back to available. Once the feature is enabled, the `Auto Minor Version Upgrade` status should change to `Yes`.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database instance names, available in the selected AWS region:\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. The command output should return each database instance identifier.\n3. Run the `modify-db-instance` command to modify the selected RDS instance configuration this command will apply the changes immediately, Remove `--apply-immediately` to apply changes during the next scheduled maintenance window and avoid any downtime:\n```\naws rds modify-db-instance --region  --db-instance-identifier  --auto-minor-version-upgrade --apply-immediately\n```\n4. The command output should reveal the new configuration metadata for the RDS instance and check `AutoMinorVersionUpgrade` parameter value.\n5. Run `describe-db-instances` command to check if the Auto Minor Version Upgrade feature has been successfully enable:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].AutoMinorVersionUpgrade'\n```\n6. The command output should return the feature current status set to `true`, the feature is `enabled` and the minor engine upgrades will be applied to the selected RDS instance.",
-          "AuditProcedure": "**From Console:**\n\n1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. In the left navigation panel, click on `Databases`.\n3. Select the RDS instance that wants to examine.\n4. Click on the `Maintenance and backups` panel.\n5. Under the `Maintenance` section, search for the Auto Minor Version Upgrade status.\n- If the current status is set to `Disabled`, means the feature is not set and the minor engine upgrades released will not be applied to the selected RDS instance\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region:\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. The command output should return each database instance identifier.\n3. Run again `describe-db-instances` command using the RDS instance identifier returned earlier to determine the Auto Minor Version Upgrade status for the selected instance:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].AutoMinorVersionUpgrade'\n```\n4. The command output should return the feature current status. If the current status is set to `true`, the feature is enabled and the minor engine upgrades will be applied to the selected RDS instance.",
+          "RemediationProcedure": "**From Console:**  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on `Databases`. 3. Select the RDS instance that wants to update. 4. Click on the `Modify` button placed on the top right side. 5. On the `Modify DB Instance: ` page, In the `Maintenance` section, select `Auto minor version upgrade` click on the `Yes` radio button. 6. At the bottom of the page click on `Continue`, check to Apply Immediately to apply the changes immediately, or select `Apply during the next scheduled maintenance window` to avoid any downtime. 7. Review the changes and click on `Modify DB Instance`. The instance status should change from available to modifying and back to available. Once the feature is enabled, the `Auto Minor Version Upgrade` status should change to `Yes`.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database instance names, available in the selected AWS region: ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. The command output should return each database instance identifier. 3. Run the `modify-db-instance` command to modify the selected RDS instance configuration this command will apply the changes immediately, Remove `--apply-immediately` to apply changes during the next scheduled maintenance window and avoid any downtime: ``` aws rds modify-db-instance --region  --db-instance-identifier  --auto-minor-version-upgrade --apply-immediately ``` 4. The command output should reveal the new configuration metadata for the RDS instance and check `AutoMinorVersionUpgrade` parameter value. 5. Run `describe-db-instances` command to check if the Auto Minor Version Upgrade feature has been successfully enable: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].AutoMinorVersionUpgrade' ``` 6. The command output should return the feature current status set to `true`, the feature is `enabled` and the minor engine upgrades will be applied to the selected RDS instance.",
+          "AuditProcedure": "**From Console:**  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. In the left navigation panel, click on `Databases`. 3. Select the RDS instance that wants to examine. 4. Click on the `Maintenance and backups` panel. 5. Under the `Maintenance` section, search for the Auto Minor Version Upgrade status. - If the current status is set to `Disabled`, means the feature is not set and the minor engine upgrades released will not be applied to the selected RDS instance  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region: ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. The command output should return each database instance identifier. 3. Run again `describe-db-instances` command using the RDS instance identifier returned earlier to determine the Auto Minor Version Upgrade status for the selected instance: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].AutoMinorVersionUpgrade' ``` 4. The command output should return the feature current status. If the current status is set to `true`, the feature is enabled and the minor engine upgrades will be applied to the selected RDS instance.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_RDS_Managing.html:https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html:https://aws.amazon.com/rds/faqs/"
         }
@@ -649,8 +649,8 @@
           "Description": "Ensure and verify that RDS database instances provisioned in your AWS account do restrict unauthorized access in order to minimize security risks. To restrict access to any publicly accessible RDS database instance, you must disable the database Publicly Accessible flag and update the VPC security group associated with the instance.",
           "RationaleStatement": "Ensure that no public-facing RDS database instances are provisioned in your AWS account and restrict unauthorized access in order to minimize security risks. When the RDS instance allows unrestricted access (0.0.0.0/0), everyone and everything on the Internet can establish a connection to your database and this can increase the opportunity for malicious activities such as brute force attacks, PostgreSQL injections, or DoS/DDoS attacks.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. Under the navigation panel, On RDS Dashboard, click `Databases`.\n3. Select the RDS instance that you want to update.\n4. Click `Modify` from the dashboard top menu.\n5. On the Modify DB Instance panel, under the `Connectivity` section, click on `Additional connectivity configuration` and update the value for `Publicly Accessible` to Not publicly accessible to restrict public access. Follow the below steps to update subnet configurations:\n- Select the `Connectivity and security` tab, and click on the VPC attribute value inside the `Networking` section.\n- Select the `Details` tab from the VPC dashboard bottom panel and click on Route table configuration attribute value.\n- On the Route table details page, select the Routes tab from the dashboard bottom panel and click on `Edit routes`.\n- On the Edit routes page, update the Destination of Target which is set to `igw-xxxxx` and click on `Save` routes.\n6. On the Modify DB Instance panel Click on `Continue` and In the Scheduling of modifications section, perform one of the following actions based on your requirements:\n- Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window.\n- Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application.\n7. Repeat steps 3 to 6 for each RDS instance available in the current region.\n8. Change the AWS region from the navigation bar to repeat the process for other regions.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names identifiers, available in the selected AWS region:\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. The command output should return each database instance identifier.\n3. Run `modify-db-instance` command to modify the selected RDS instance configuration. Then use the following command to disable the `Publicly Accessible` flag for the selected RDS instances. This command use the apply-immediately flag. If you want `to avoid any downtime --no-apply-immediately flag can be used`:\n```\naws rds modify-db-instance --region  --db-instance-identifier  --no-publicly-accessible --apply-immediately\n```\n4. The command output should reveal the `PubliclyAccessible` configuration under pending values and should get applied at the specified time.\n5. Updating the Internet Gateway Destination via AWS CLI is not currently supported To update information about Internet Gateway use the AWS Console Procedure.\n6. Repeat steps 1 to 5 for each RDS instance provisioned in the current region.\n7. Change the AWS region by using the --region filter to repeat the process for other regions.",
-          "AuditProcedure": "**From Console:**\n\n1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/.\n2. Under the navigation panel, On RDS Dashboard, click `Databases`.\n3. Select the RDS instance that you want to examine.\n4. Click `Instance Name` from the dashboard, Under `Connectivity and Security.\n5. On the `Security`, check if the Publicly Accessible flag status is set to `Yes`, follow the below-mentioned steps to check database subnet access.\n- In the `networking` section, click the subnet link available under `Subnets`\n- The link will redirect you to the VPC Subnets page.\n- Select the subnet listed on the page and click the `Route Table` tab from the dashboard bottom panel. If the route table contains any entries with the destination `CIDR block set to 0.0.0.0/0` and with an `Internet Gateway` attached.\n- The selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and can be accessible from the Internet.\n6. Repeat steps no. 4 and 5 to determine the type (public or private) and subnet for other RDS database instances provisioned in the current region.\n8. Change the AWS region from the navigation bar and repeat the audit process for other regions.\n\n**From Command Line:**\n\n1. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region:\n```\naws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier'\n```\n2. The command output should return each database instance `identifier`.\n3. Run again `describe-db-instances` command using the `PubliclyAccessible` parameter as query filter to reveal the database instance Publicly Accessible flag status:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].PubliclyAccessible'\n```\n4. Check for the Publicly Accessible parameter status, If the Publicly Accessible flag is set to `Yes`. Then selected RDS database instance is publicly accessible and insecure, follow the below-mentioned steps to check database subnet access\n5. Run again `describe-db-instances` command using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC subnet(s) associated with the selected instance:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].DBSubnetGroup.Subnets[]'\n```\n- The command output should list the subnets available in the selected database subnet group.\n6. Run `describe-route-tables` command using the ID of the subnet returned at the previous step to describe the routes of the VPC route table associated with the selected subnet:\n```\naws ec2 describe-route-tables --region  --filters \"Name=association.subnet-id,Values=\" --query 'RouteTables[*].Routes[]'\n```\n- If the command returns the route table associated with database instance subnet ID. Check the `GatewayId` and `DestinationCidrBlock` attributes values returned in the output. If the route table contains any entries with the `GatewayId` value set to `igw-xxxxxxxx` and the `DestinationCidrBlock` value set to `0.0.0.0/0`, the selected RDS database instance was provisioned inside a public subnet.\n- Or\n- If the command returns empty results, the route table is implicitly associated with subnet, therefore the audit process continues with the next step\n7. 
Run again `describe-db-instances` command using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC ID associated with the selected instance:\n```\naws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].DBSubnetGroup.VpcId'\n```\n- The command output should show the VPC ID in the selected database subnet group\n8. Now run `describe-route-tables` command using the ID of the VPC returned at the previous step to describe the routes of the VPC main route table implicitly associated with the selected subnet:\n```\naws ec2 describe-route-tables --region  --filters \"Name=vpc-id,Values=\" \"Name=association.main,Values=true\" --query 'RouteTables[*].Routes[]'\n```\n- The command output returns the VPC main route table implicitly associated with database instance subnet ID. Check the `GatewayId` and `DestinationCidrBlock` attributes values returned in the output. If the route table contains any entries with the `GatewayId` value set to `igw-xxxxxxxx` and the `DestinationCidrBlock` value set to `0.0.0.0/0`, the selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and does not adhere to AWS security best practices.",
+          "RemediationProcedure": "**From Console:**  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. Under the navigation panel, On RDS Dashboard, click `Databases`. 3. Select the RDS instance that you want to update. 4. Click `Modify` from the dashboard top menu. 5. On the Modify DB Instance panel, under the `Connectivity` section, click on `Additional connectivity configuration` and update the value for `Publicly Accessible` to Not publicly accessible to restrict public access. Follow the below steps to update subnet configurations: - Select the `Connectivity and security` tab, and click on the VPC attribute value inside the `Networking` section. - Select the `Details` tab from the VPC dashboard bottom panel and click on Route table configuration attribute value. - On the Route table details page, select the Routes tab from the dashboard bottom panel and click on `Edit routes`. - On the Edit routes page, update the Destination of Target which is set to `igw-xxxxx` and click on `Save` routes. 6. On the Modify DB Instance panel Click on `Continue` and In the Scheduling of modifications section, perform one of the following actions based on your requirements: - Select Apply during the next scheduled maintenance window to apply the changes automatically during the next scheduled maintenance window. - Select Apply immediately to apply the changes right away. With this option, any pending modifications will be asynchronously applied as soon as possible, regardless of the maintenance window setting for this RDS database instance. Note that any changes available in the pending modifications queue are also applied. If any of the pending modifications require downtime, choosing this option can cause unexpected downtime for the application. 7. Repeat steps 3 to 6 for each RDS instance available in the current region. 8. Change the AWS region from the navigation bar to repeat the process for other regions.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database names identifiers, available in the selected AWS region: ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. The command output should return each database instance identifier. 3. Run `modify-db-instance` command to modify the selected RDS instance configuration. Then use the following command to disable the `Publicly Accessible` flag for the selected RDS instances. This command use the apply-immediately flag. If you want `to avoid any downtime --no-apply-immediately flag can be used`: ``` aws rds modify-db-instance --region  --db-instance-identifier  --no-publicly-accessible --apply-immediately ``` 4. The command output should reveal the `PubliclyAccessible` configuration under pending values and should get applied at the specified time. 5. Updating the Internet Gateway Destination via AWS CLI is not currently supported To update information about Internet Gateway use the AWS Console Procedure. 6. Repeat steps 1 to 5 for each RDS instance provisioned in the current region. 7. Change the AWS region by using the --region filter to repeat the process for other regions.",
+          "AuditProcedure": "**From Console:**  1. Log in to the AWS management console and navigate to the RDS dashboard at https://console.aws.amazon.com/rds/. 2. Under the navigation panel, On RDS Dashboard, click `Databases`. 3. Select the RDS instance that you want to examine. 4. Click `Instance Name` from the dashboard, Under `Connectivity and Security. 5. On the `Security`, check if the Publicly Accessible flag status is set to `Yes`, follow the below-mentioned steps to check database subnet access. - In the `networking` section, click the subnet link available under `Subnets` - The link will redirect you to the VPC Subnets page. - Select the subnet listed on the page and click the `Route Table` tab from the dashboard bottom panel. If the route table contains any entries with the destination `CIDR block set to 0.0.0.0/0` and with an `Internet Gateway` attached. - The selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and can be accessible from the Internet. 6. Repeat steps no. 4 and 5 to determine the type (public or private) and subnet for other RDS database instances provisioned in the current region. 8. Change the AWS region from the navigation bar and repeat the audit process for other regions.  **From Command Line:**  1. Run `describe-db-instances` command to list all RDS database names, available in the selected AWS region: ``` aws rds describe-db-instances --region  --query 'DBInstances[*].DBInstanceIdentifier' ``` 2. The command output should return each database instance `identifier`. 3. Run again `describe-db-instances` command using the `PubliclyAccessible` parameter as query filter to reveal the database instance Publicly Accessible flag status: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].PubliclyAccessible' ``` 4. Check for the Publicly Accessible parameter status, If the Publicly Accessible flag is set to `Yes`. Then selected RDS database instance is publicly accessible and insecure, follow the below-mentioned steps to check database subnet access 5. Run again `describe-db-instances` command using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC subnet(s) associated with the selected instance: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].DBSubnetGroup.Subnets[]' ``` - The command output should list the subnets available in the selected database subnet group. 6. Run `describe-route-tables` command using the ID of the subnet returned at the previous step to describe the routes of the VPC route table associated with the selected subnet: ``` aws ec2 describe-route-tables --region  --filters \"Name=association.subnet-id,Values=\" --query 'RouteTables[*].Routes[]' ``` - If the command returns the route table associated with database instance subnet ID. Check the `GatewayId` and `DestinationCidrBlock` attributes values returned in the output. If the route table contains any entries with the `GatewayId` value set to `igw-xxxxxxxx` and the `DestinationCidrBlock` value set to `0.0.0.0/0`, the selected RDS database instance was provisioned inside a public subnet. - Or - If the command returns empty results, the route table is implicitly associated with subnet, therefore the audit process continues with the next step 7. 
Run again `describe-db-instances` command using the RDS database instance identifier that you want to check and appropriate filtering to describe the VPC ID associated with the selected instance: ``` aws rds describe-db-instances --region  --db-instance-identifier  --query 'DBInstances[*].DBSubnetGroup.VpcId' ``` - The command output should show the VPC ID in the selected database subnet group 8. Now run `describe-route-tables` command using the ID of the VPC returned at the previous step to describe the routes of the VPC main route table implicitly associated with the selected subnet: ``` aws ec2 describe-route-tables --region  --filters \"Name=vpc-id,Values=\" \"Name=association.main,Values=true\" --query 'RouteTables[*].Routes[]' ``` - The command output returns the VPC main route table implicitly associated with database instance subnet ID. Check the `GatewayId` and `DestinationCidrBlock` attributes values returned in the output. If the route table contains any entries with the `GatewayId` value set to `igw-xxxxxxxx` and the `DestinationCidrBlock` value set to `0.0.0.0/0`, the selected RDS database instance was provisioned inside a public subnet, therefore is not running within a logically isolated environment and does not adhere to AWS security best practices.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.html:https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html:https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html:https://aws.amazon.com/rds/faqs/"
         }
@@ -670,8 +670,8 @@
           "Description": "EFS data should be encrypted at rest using AWS KMS (Key Management Service).",
           "RationaleStatement": "Data should be encrypted at rest to reduce the risk of a data breach via direct access to the storage device.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**It is important to note that EFS file system data at rest encryption must be turned on when creating the file system.**\n\nIf an EFS file system has been created without data at rest encryption enabled then you must create another EFS file system with the correct configuration and transfer the data.\n\n**Steps to create an EFS file system with data encrypted at rest:**\n\n**From Console:**\n1. Login to the AWS Management Console and Navigate to `Elastic File System (EFS)` dashboard.\n2. Select `File Systems` from the left navigation panel.\n3. Click `Create File System` button from the dashboard top menu to start the file system setup process.\n4. On the `Configure file system access` configuration page, perform the following actions.\n- Choose the right VPC from the VPC dropdown list.\n- Within Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets.\n- Click `Next step` to continue.\n\n5. Perform the following on the `Configure optional settings` page.\n- Create `tags` to describe your new file system.\n- Choose `performance mode` based on your requirements.\n- Check `Enable encryption` checkbox and choose `aws/elasticfilesystem` from Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS.\n- Click `Next step` to continue.\n\n6. Review the file system configuration details on the `review and create` page and then click `Create File System` to create your new AWS EFS file system.\n7. Copy the data from the old unencrypted EFS file system onto the newly create encrypted file system.\n8. Remove the unencrypted file system as soon as your data migration to the newly create encrypted file system is completed.\n9. Change the AWS region from the navigation bar and repeat the entire process for other aws regions.\n\n**From CLI:**\n1. Run describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see Audit section to identify the right resource):\n```\naws efs describe-file-systems --region  --file-system-id \n```\n2. The command output should return the requested configuration information.\n3. To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from \"https://www.uuidgenerator.net\".\n4. Run create-file-system command using the unique token created at the previous step.\n```\naws efs create-file-system --region  --creation-token  --performance-mode generalPurpose --encrypted\n```\n5. The command output should return the new file system configuration metadata.\n6. Run create-mount-target command using the newly created EFS file system ID returned at the previous step as identifier and the ID of the Availability Zone (AZ) that will represent the mount target:\n```\naws efs create-mount-target --region  --file-system-id  --subnet-id \n```\n7. The command output should return the new mount target metadata.\n8. Now you can mount your file system from an EC2 instance.\n9. Copy the data from the old unencrypted EFS file system onto the newly create encrypted file system.\n10. 
Remove the unencrypted file system as soon as your data migration to the newly create encrypted file system is completed.\n```\naws efs delete-file-system --region  --file-system-id \n```\n11. Change the AWS region by updating the --region and repeat the entire process for other aws regions.",
-          "AuditProcedure": "**From Console:**\n1. Login to the AWS Management Console and Navigate to `Elastic File System (EFS) dashboard.\n2. Select `File Systems` from the left navigation panel.\n3. Each item on the list has a visible Encrypted field that displays data at rest encryption status.\n4. Validate that this field reads `Encrypted` for all EFS file systems in all AWS regions.\n\n**From CLI:**\n1. Run describe-file-systems command using custom query filters to list the identifiers of all AWS EFS file systems currently available within the selected region:\n```\naws efs describe-file-systems --region  --output table --query 'FileSystems[*].FileSystemId'\n```\n2. The command output should return a table with the requested file system IDs.\n3. Run describe-file-systems command using the ID of the file system that you want to examine as identifier and the necessary query filters:\n```\naws efs describe-file-systems --region  --file-system-id  --query 'FileSystems[*].Encrypted'\n```\n4. The command output should return the file system encryption status true or false. If the returned value is `false`, the selected AWS EFS file system is not encrypted and if the returned value is `true`, the selected AWS EFS file system is encrypted.",
+          "RemediationProcedure": "**It is important to note that EFS file system data at rest encryption must be turned on when creating the file system.**  If an EFS file system has been created without data at rest encryption enabled then you must create another EFS file system with the correct configuration and transfer the data.  **Steps to create an EFS file system with data encrypted at rest:**  **From Console:** 1. Login to the AWS Management Console and Navigate to `Elastic File System (EFS)` dashboard. 2. Select `File Systems` from the left navigation panel. 3. Click `Create File System` button from the dashboard top menu to start the file system setup process. 4. On the `Configure file system access` configuration page, perform the following actions. - Choose the right VPC from the VPC dropdown list. - Within Create mount targets section, select the checkboxes for all of the Availability Zones (AZs) within the selected VPC. These will be your mount targets. - Click `Next step` to continue.  5. Perform the following on the `Configure optional settings` page. - Create `tags` to describe your new file system. - Choose `performance mode` based on your requirements. - Check `Enable encryption` checkbox and choose `aws/elasticfilesystem` from Select KMS master key dropdown list to enable encryption for the new file system using the default master key provided and managed by AWS KMS. - Click `Next step` to continue.  6. Review the file system configuration details on the `review and create` page and then click `Create File System` to create your new AWS EFS file system. 7. Copy the data from the old unencrypted EFS file system onto the newly create encrypted file system. 8. Remove the unencrypted file system as soon as your data migration to the newly create encrypted file system is completed. 9. Change the AWS region from the navigation bar and repeat the entire process for other aws regions.  **From CLI:** 1. Run describe-file-systems command to describe the configuration information available for the selected (unencrypted) file system (see Audit section to identify the right resource): ``` aws efs describe-file-systems --region  --file-system-id  ``` 2. The command output should return the requested configuration information. 3. To provision a new AWS EFS file system, you need to generate a universally unique identifier (UUID) in order to create the token required by the create-file-system command. To create the required token, you can use a randomly generated UUID from \"https://www.uuidgenerator.net\". 4. Run create-file-system command using the unique token created at the previous step. ``` aws efs create-file-system --region  --creation-token  --performance-mode generalPurpose --encrypted ``` 5. The command output should return the new file system configuration metadata. 6. Run create-mount-target command using the newly created EFS file system ID returned at the previous step as identifier and the ID of the Availability Zone (AZ) that will represent the mount target: ``` aws efs create-mount-target --region  --file-system-id  --subnet-id  ``` 7. The command output should return the new mount target metadata. 8. Now you can mount your file system from an EC2 instance. 9. Copy the data from the old unencrypted EFS file system onto the newly create encrypted file system. 10. Remove the unencrypted file system as soon as your data migration to the newly create encrypted file system is completed. ``` aws efs delete-file-system --region  --file-system-id  ``` 11. 
Change the AWS region by updating the --region and repeat the entire process for other aws regions.",
+          "AuditProcedure": "**From Console:** 1. Login to the AWS Management Console and Navigate to `Elastic File System (EFS) dashboard. 2. Select `File Systems` from the left navigation panel. 3. Each item on the list has a visible Encrypted field that displays data at rest encryption status. 4. Validate that this field reads `Encrypted` for all EFS file systems in all AWS regions.  **From CLI:** 1. Run describe-file-systems command using custom query filters to list the identifiers of all AWS EFS file systems currently available within the selected region: ``` aws efs describe-file-systems --region  --output table --query 'FileSystems[*].FileSystemId' ``` 2. The command output should return a table with the requested file system IDs. 3. Run describe-file-systems command using the ID of the file system that you want to examine as identifier and the necessary query filters: ``` aws efs describe-file-systems --region  --file-system-id  --query 'FileSystems[*].Encrypted' ``` 4. The command output should return the file system encryption status true or false. If the returned value is `false`, the selected AWS EFS file system is not encrypted and if the returned value is `true`, the selected AWS EFS file system is encrypted.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/efs/latest/ug/encryption-at-rest.html:https://awscli.amazonaws.com/v2/documentation/api/latest/reference/efs/index.html#efs"
         }
@@ -689,10 +689,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail provides a history of AWS API calls for an account, including API calls made via the Management Console, SDKs, command line tools, and higher-level AWS services (such as CloudFormation).",
-          "RationaleStatement": "The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally, \n\n- ensuring that a multi-regions trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected\n\n- ensuring that a multi-regions trail exists will ensure that `Global Service Logging` is enabled for a trail by default to capture recording of events generated on \nAWS global services\n\n- for a multi-regions trail, ensuring that management events configured for all type of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account",
-          "ImpactStatement": "S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features:\n\n1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html",
-          "RemediationProcedure": "Perform the following to enable global (Multi-region) CloudTrail logging:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on _Trails_ on the left navigation pane\n3. Click `Get Started Now` , if presented\n - Click `Add new trail` \n - Enter a trail name in the `Trail name` box\n - Set the `Apply trail to all regions` option to `Yes` \n - Specify an S3 bucket name in the `S3 bucket` box\n - Click `Create` \n4. If 1 or more trails already exist, select the target trail to enable for global logging\n5. Click the edit icon (pencil) next to `Apply trail to all regions` , Click `Yes` and Click `Save`.\n6. Click the edit icon (pencil) next to `Management Events` click `All` for setting `Read/Write Events` and Click `Save`.\n\n**From Command Line:**\n```\naws cloudtrail create-trail --name  --bucket-name  --is-multi-region-trail \naws cloudtrail update-trail --name  --is-multi-region-trail\n```\n\nNote: Creating CloudTrail via CLI without providing any overriding options configures `Management Events` to set `All` type of `Read/Writes` by default.",
-          "AuditProcedure": "Perform the following to determine if CloudTrail is enabled for all regions:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n - You will be presented with a list of trails across all regions\n3. Ensure at least one Trail has `All` specified in the `Region` column\n4. Click on a trail via the link in the _Name_ column\n5. Ensure `Logging` is set to `ON` \n6. Ensure `Apply trail to all regions` is set to `Yes`\n7. In section `Management Events` ensure `Read/Write Events` set to `ALL`\n\n**From Command Line:**\n```\n aws cloudtrail describe-trails\n```\nEnsure `IsMultiRegionTrail` is set to `true` \n```\naws cloudtrail get-trail-status --name \n```\nEnsure `IsLogging` is set to `true`\n```\naws cloudtrail get-event-selectors --trail-name \n```\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`",
+          "RationaleStatement": "The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Additionally,   - ensuring that a multi-regions trail exists will ensure that unexpected activity occurring in otherwise unused regions is detected  - ensuring that a multi-regions trail exists will ensure that `Global Service Logging` is enabled for a trail by default to capture recording of events generated on  AWS global services  - for a multi-regions trail, ensuring that management events configured for all type of Read/Writes ensures recording of management operations that are performed on all resources in an AWS account",
+          "ImpactStatement": "S3 lifecycle features can be used to manage the accumulation and management of logs over time. See the following AWS resource for more information on these features:  1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html",
+          "RemediationProcedure": "Perform the following to enable global (Multi-region) CloudTrail logging:  **From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on _Trails_ on the left navigation pane 3. Click `Get Started Now` , if presented  - Click `Add new trail`   - Enter a trail name in the `Trail name` box  - Set the `Apply trail to all regions` option to `Yes`   - Specify an S3 bucket name in the `S3 bucket` box  - Click `Create`  4. If 1 or more trails already exist, select the target trail to enable for global logging 5. Click the edit icon (pencil) next to `Apply trail to all regions` , Click `Yes` and Click `Save`. 6. Click the edit icon (pencil) next to `Management Events` click `All` for setting `Read/Write Events` and Click `Save`.  **From Command Line:** ``` aws cloudtrail create-trail --name  --bucket-name  --is-multi-region-trail  aws cloudtrail update-trail --name  --is-multi-region-trail ```  Note: Creating CloudTrail via CLI without providing any overriding options configures `Management Events` to set `All` type of `Read/Writes` by default.",
+          "AuditProcedure": "Perform the following to determine if CloudTrail is enabled for all regions:  **From Console:**  1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane  - You will be presented with a list of trails across all regions 3. Ensure at least one Trail has `All` specified in the `Region` column 4. Click on a trail via the link in the _Name_ column 5. Ensure `Logging` is set to `ON`  6. Ensure `Apply trail to all regions` is set to `Yes` 7. In section `Management Events` ensure `Read/Write Events` set to `ALL`  **From Command Line:** ```  aws cloudtrail describe-trails ``` Ensure `IsMultiRegionTrail` is set to `true`  ``` aws cloudtrail get-trail-status --name  ``` Ensure `IsLogging` is set to `true` ``` aws cloudtrail get-event-selectors --trail-name  ``` Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-management-events:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html?icmpid=docs_cloudtrail_console#logging-management-events:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-supported-services.html#cloud-trail-supported-services-data-events"
         }
@@ -712,8 +712,8 @@
           "Description": "S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets.",
           "RationaleStatement": "Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity within your S3 Buckets using Amazon CloudWatch Events.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/`\n5. Once the Cloudtrail is selected, check the `Write` event checkbox, so that `object-level` logging for Write events is enabled.\n6. Repeat steps 2 to 5 to enable object-level logging of write events for other S3 buckets.\n\n**From Command Line:**\n\n1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier:\n```\naws cloudtrail put-event-selectors --region  --trail-name  --event-selectors '[{ \"ReadWriteType\": \"WriteOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]'\n```\n2. The command output will be `object-level` event trail configuration.\n3. If you want to enable it for all buckets at once then change Values parameter to `[\"arn:aws:s3\"]` in command given above.\n4. Repeat step 1 for each s3 bucket to update `object-level` logging of write events.\n5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to CloudTrail dashboard at `https://console.aws.amazon.com/cloudtrail/`\n2. In the left panel, click `Trails` and then click on the CloudTrail Name that you want to examine.\n3. Review `General details`\n4. Confirm that `Multi-region trail` is set to `Yes`\n5. Scroll down to `Data events`\n6. Confirm that it reads:\nData events: S3\nBucket Name: All current and future S3 buckets\nRead: Enabled\nWrite: Enabled\n7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets in CloudTrail.\nIf the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below.\n\n**From Command Line:**\n\n1. Run `list-trails` command to list the names of all Amazon CloudTrail trails currently available in all AWS regions:\n```\naws cloudtrail list-trails\n```\n2. The command output will be a list of all the trail names to include.\n\"TrailARN\": \"arn:aws:cloudtrail:::trail/\",\n\"Name\": \"\",\n\"HomeRegion\": \"\"\n3. Next run 'get-trail- command to determine Multi-region.\n```\naws cloudtrail get-trail --name  --region \n```\n4. The command output should include:\n\"IsMultiRegionTrail\": true,\n5. Next run `get-event-selectors` command using the `Name` of the trail and the `region` returned in step 2 to determine if Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets:\n```\naws cloudtrail get-event-selectors --region  --trail-name  --query EventSelectors[*].DataResources[]\n```\n6. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector.\n\"Type\": \"AWS::S3::Object\",\n \"Values\": [\n \"arn:aws:s3\"\n7. If the `get-event-selectors` command returns an empty array '[]', the Data events are not included in the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded.\n8. Repeat steps 1 to 5 for auditing each CloudTrail to determine if Data events for S3 are covered.\nIf Multi-region is not set to true and the Data events does not show S3 defined as shown refer to the remediation procedure below.",
+          "RemediationProcedure": "**From Console:**  1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/` 5. Once the Cloudtrail is selected, check the `Write` event checkbox, so that `object-level` logging for Write events is enabled. 6. Repeat steps 2 to 5 to enable object-level logging of write events for other S3 buckets.  **From Command Line:**  1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier: ``` aws cloudtrail put-event-selectors --region  --trail-name  --event-selectors '[{ \"ReadWriteType\": \"WriteOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]' ``` 2. The command output will be `object-level` event trail configuration. 3. If you want to enable it for all buckets at once then change Values parameter to `[\"arn:aws:s3\"]` in command given above. 4. Repeat step 1 for each s3 bucket to update `object-level` logging of write events. 5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS Management Console and navigate to CloudTrail dashboard at `https://console.aws.amazon.com/cloudtrail/` 2. In the left panel, click `Trails` and then click on the CloudTrail Name that you want to examine. 3. Review `General details` 4. Confirm that `Multi-region trail` is set to `Yes` 5. Scroll down to `Data events` 6. Confirm that it reads: Data events: S3 Bucket Name: All current and future S3 buckets Read: Enabled Write: Enabled 7. Repeat steps 2 to 6 to verify that Multi-region trail and Data events logging of S3 buckets in CloudTrail. If the CloudTrails do not have multi-region and data events configured for S3 refer to the remediation below.  **From Command Line:**  1. Run `list-trails` command to list the names of all Amazon CloudTrail trails currently available in all AWS regions: ``` aws cloudtrail list-trails ``` 2. The command output will be a list of all the trail names to include. \"TrailARN\": \"arn:aws:cloudtrail:::trail/\", \"Name\": \"\", \"HomeRegion\": \"\" 3. Next run 'get-trail- command to determine Multi-region. ``` aws cloudtrail get-trail --name  --region  ``` 4. The command output should include: \"IsMultiRegionTrail\": true, 5. Next run `get-event-selectors` command using the `Name` of the trail and the `region` returned in step 2 to determine if Data events logging feature is enabled within the selected CloudTrail trail for all S3 buckets: ``` aws cloudtrail get-event-selectors --region  --trail-name  --query EventSelectors[*].DataResources[] ``` 6. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector. \"Type\": \"AWS::S3::Object\",  \"Values\": [  \"arn:aws:s3\" 7. If the `get-event-selectors` command returns an empty array '[]', the Data events are not included in the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 8. Repeat steps 1 to 5 for auditing each CloudTrail to determine if Data events for S3 are covered. If Multi-region is not set to true and the Data events does not show S3 defined as shown refer to the remediation procedure below.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html"
         }
@@ -733,8 +733,8 @@
           "Description": "S3 object-level API operations such as GetObject, DeleteObject, and PutObject are called data events. By default, CloudTrail trails don't log data events and so it is recommended to enable Object-level logging for S3 buckets.",
           "RationaleStatement": "Enabling object-level logging will help you meet data compliance requirements within your organization, perform comprehensive security analysis, monitor specific patterns of user behavior in your AWS account or take immediate actions on any object-level API activity using Amazon CloudWatch Events.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/`\n5. Once the Cloudtrail is selected, check the Read event checkbox, so that `object-level` logging for `Read` events is enabled.\n6. Repeat steps 2 to 5 to enable `object-level` logging of read events for other S3 buckets.\n\n**From Command Line:**\n1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier:\n```\naws cloudtrail put-event-selectors --region  --trail-name  --event-selectors '[{ \"ReadWriteType\": \"ReadOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]'\n```\n2. The command output will be `object-level` event trail configuration.\n3. If you want to enable it for all buckets at ones then change Values parameter to `[\"arn:aws:s3\"]` in command given above.\n4. Repeat step 1 for each s3 bucket to update `object-level` logging of read events.\n5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.",
-          "AuditProcedure": "**From Console:**\n\n1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/`\n2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine.\n3. Click `Properties` tab to see in detail bucket configuration.\n4. If the current status for `Object-level` logging is set to `Disabled`, then object-level logging of read events for the selected s3 bucket is not set.\n5. If the current status for `Object-level` logging is set to `Enabled`, but the Read event check-box is unchecked, then object-level logging of read events for the selected s3 bucket is not set.\n6. Repeat steps 2 to 5 to verify `object-level` logging for `read` events of your other S3 buckets.\n\n**From Command Line:**\n1. Run `describe-trails` command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region:\n```\naws cloudtrail describe-trails --region  --output table --query trailList[*].Name\n```\n2. The command output will be table of the requested trail names.\n3. Run `get-event-selectors` command using the name of the trail returned at the previous step and custom query filters to determine if Data events logging feature is enabled within the selected CloudTrail trail configuration for s3 bucket resources:\n```\naws cloudtrail get-event-selectors --region  --trail-name  --query EventSelectors[*].DataResources[]\n```\n4. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector.\n5. If the `get-event-selectors` command returns an empty array, the Data events are not included into the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded.\n6. Repeat steps 1 to 5 for auditing each s3 bucket to identify other trails that are missing the capability to log Data events.\n7. Change the AWS region by updating the `--region` command parameter and perform the audit process for other regions.",
+          "RemediationProcedure": "**From Console:**  1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. Click on the `Object-level` logging setting, enter the CloudTrail name for the recording activity. You can choose an existing Cloudtrail or create a new one by navigating to the Cloudtrail console link `https://console.aws.amazon.com/cloudtrail/` 5. Once the Cloudtrail is selected, check the Read event checkbox, so that `object-level` logging for `Read` events is enabled. 6. Repeat steps 2 to 5 to enable `object-level` logging of read events for other S3 buckets.  **From Command Line:** 1. To enable `object-level` data events logging for S3 buckets within your AWS account, run `put-event-selectors` command using the name of the trail that you want to reconfigure as identifier: ``` aws cloudtrail put-event-selectors --region  --trail-name  --event-selectors '[{ \"ReadWriteType\": \"ReadOnly\", \"IncludeManagementEvents\":true, \"DataResources\": [{ \"Type\": \"AWS::S3::Object\", \"Values\": [\"arn:aws:s3:::/\"] }] }]' ``` 2. The command output will be `object-level` event trail configuration. 3. If you want to enable it for all buckets at ones then change Values parameter to `[\"arn:aws:s3\"]` in command given above. 4. Repeat step 1 for each s3 bucket to update `object-level` logging of read events. 5. Change the AWS region by updating the `--region` command parameter and perform the process for other regions.",
+          "AuditProcedure": "**From Console:**  1. Login to the AWS Management Console and navigate to S3 dashboard at `https://console.aws.amazon.com/s3/` 2. In the left navigation panel, click `buckets` and then click on the S3 Bucket Name that you want to examine. 3. Click `Properties` tab to see in detail bucket configuration. 4. If the current status for `Object-level` logging is set to `Disabled`, then object-level logging of read events for the selected s3 bucket is not set. 5. If the current status for `Object-level` logging is set to `Enabled`, but the Read event check-box is unchecked, then object-level logging of read events for the selected s3 bucket is not set. 6. Repeat steps 2 to 5 to verify `object-level` logging for `read` events of your other S3 buckets.  **From Command Line:** 1. Run `describe-trails` command to list the names of all Amazon CloudTrail trails currently available in the selected AWS region: ``` aws cloudtrail describe-trails --region  --output table --query trailList[*].Name ``` 2. The command output will be table of the requested trail names. 3. Run `get-event-selectors` command using the name of the trail returned at the previous step and custom query filters to determine if Data events logging feature is enabled within the selected CloudTrail trail configuration for s3 bucket resources: ``` aws cloudtrail get-event-selectors --region  --trail-name  --query EventSelectors[*].DataResources[] ``` 4. The command output should be an array that contains the configuration of the AWS resource(S3 bucket) defined for the Data events selector. 5. If the `get-event-selectors` command returns an empty array, the Data events are not included into the selected AWS Cloudtrail trail logging configuration, therefore the S3 object-level API operations performed within your AWS account are not recorded. 6. Repeat steps 1 to 5 for auditing each s3 bucket to identify other trails that are missing the capability to log Data events. 7. Change the AWS region by updating the `--region` command parameter and perform the audit process for other regions.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html"
         }
@@ -754,8 +754,8 @@
           "Description": "CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails.",
           "RationaleStatement": "Enabling log file validation will provide additional integrity checking of CloudTrail logs.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to enable log file validation on a given trail:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n3. Click on target trail\n4. Within the `General details` section click `edit`\n5. Under the `Advanced settings` section\n6. Check the enable box under `Log file validation` \n7. Click `Save changes` \n\n**From Command Line:**\n```\naws cloudtrail update-trail --name  --enable-log-file-validation\n```\nNote that periodic validation of logs using these digests can be performed by running the following command:\n```\naws cloudtrail validate-logs --trail-arn  --start-time  --end-time \n```",
-          "AuditProcedure": "Perform the following on each trail to determine if log file validation is enabled:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. Click on `Trails` on the left navigation pane\n3. For Every Trail:\n- Click on a trail via the link in the _Name_ column\n- Under the `General details` section, ensure `Log file validation` is set to `Enabled` \n\n**From Command Line:**\n```\naws cloudtrail describe-trails\n```\nEnsure `LogFileValidationEnabled` is set to `true` for each trail",
+          "RemediationProcedure": "Perform the following to enable log file validation on a given trail:  **From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane 3. Click on target trail 4. Within the `General details` section click `edit` 5. Under the `Advanced settings` section 6. Check the enable box under `Log file validation`  7. Click `Save changes`   **From Command Line:** ``` aws cloudtrail update-trail --name  --enable-log-file-validation ``` Note that periodic validation of logs using these digests can be performed by running the following command: ``` aws cloudtrail validate-logs --trail-arn  --start-time  --end-time  ```",
+          "AuditProcedure": "Perform the following on each trail to determine if log file validation is enabled:  **From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. Click on `Trails` on the left navigation pane 3. For Every Trail: - Click on a trail via the link in the _Name_ column - Under the `General details` section, ensure `Log file validation` is set to `Enabled`   **From Command Line:** ``` aws cloudtrail describe-trails ``` Ensure `LogFileValidationEnabled` is set to `true` for each trail",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-enabling.html"
         }
@@ -775,8 +775,8 @@
           "Description": "CloudTrail logs a record of every API call made in your AWS account. These logs file are stored in an S3 bucket. It is recommended that the bucket policy or access control list (ACL) applied to the S3 bucket that CloudTrail logs to prevent public access to the CloudTrail logs.",
           "RationaleStatement": "Allowing public access to CloudTrail log content may aid an adversary in identifying weaknesses in the affected account's use or configuration.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to remove any public access that has been granted to the bucket via an ACL or S3 bucket policy:\n\n1. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home)\n2. Right-click on the bucket and click Properties\n3. In the `Properties` pane, click the `Permissions` tab.\n4. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted.\n5. Select the row that grants permission to `Everyone` or `Any Authenticated User` \n6. Uncheck all the permissions granted to `Everyone` or `Any Authenticated User` (click `x` to delete the row).\n7. Click `Save` to save the ACL.\n8. If the `Edit bucket policy` button is present, click it.\n9. Remove any `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}.",
-          "AuditProcedure": "Perform the following to determine if any public access is granted to an S3 bucket via an ACL or S3 bucket policy:\n\n**From Console:**\n\n1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home)\n2. In the `API activity history` pane on the left, click `Trails` \n3. In the `Trails` pane, note the bucket names in the `S3 bucket` column\n4. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home)\n5. For each bucket noted in step 3, right-click on the bucket and click `Properties` \n6. In the `Properties` pane, click the `Permissions` tab.\n7. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted.\n8. Ensure no rows exists that have the `Grantee` set to `Everyone` or the `Grantee` set to `Any Authenticated User.` \n9. If the `Edit bucket policy` button is present, click it to review the bucket policy.\n10. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n```\n aws cloudtrail describe-trails --query 'trailList[*].S3BucketName'\n```\n2. Ensure the `AllUsers` principal is not granted privileges to that `` :\n```\n aws s3api get-bucket-acl --bucket  --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/AllUsers` ]'\n```\n3. Ensure the `AuthenticatedUsers` principal is not granted privileges to that ``:\n```\n aws s3api get-bucket-acl --bucket  --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/Authenticated Users` ]'\n```\n4. Get the S3 Bucket Policy\n```\n aws s3api get-bucket-policy --bucket  \n```\n5. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}\n\n**Note:** Principal set to \"\\*\" or {\"AWS\" : \"\\*\"} allows anonymous access.",
+          "RemediationProcedure": "Perform the following to remove any public access that has been granted to the bucket via an ACL or S3 bucket policy:  1. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home) 2. Right-click on the bucket and click Properties 3. In the `Properties` pane, click the `Permissions` tab. 4. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted. 5. Select the row that grants permission to `Everyone` or `Any Authenticated User`  6. Uncheck all the permissions granted to `Everyone` or `Any Authenticated User` (click `x` to delete the row). 7. Click `Save` to save the ACL. 8. If the `Edit bucket policy` button is present, click it. 9. Remove any `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}.",
+          "AuditProcedure": "Perform the following to determine if any public access is granted to an S3 bucket via an ACL or S3 bucket policy:  **From Console:**  1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home) 2. In the `API activity history` pane on the left, click `Trails`  3. In the `Trails` pane, note the bucket names in the `S3 bucket` column 4. Go to Amazon S3 console at [https://console.aws.amazon.com/s3/home](https://console.aws.amazon.com/s3/home) 5. For each bucket noted in step 3, right-click on the bucket and click `Properties`  6. In the `Properties` pane, click the `Permissions` tab. 7. The tab shows a list of grants, one row per grant, in the bucket ACL. Each row identifies the grantee and the permissions granted. 8. Ensure no rows exists that have the `Grantee` set to `Everyone` or the `Grantee` set to `Any Authenticated User.`  9. If the `Edit bucket policy` button is present, click it to review the bucket policy. 10. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}  **From Command Line:**  1. Get the name of the S3 bucket that CloudTrail is logging to: ```  aws cloudtrail describe-trails --query 'trailList[*].S3BucketName' ``` 2. Ensure the `AllUsers` principal is not granted privileges to that `` : ```  aws s3api get-bucket-acl --bucket  --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/AllUsers` ]' ``` 3. Ensure the `AuthenticatedUsers` principal is not granted privileges to that ``: ```  aws s3api get-bucket-acl --bucket  --query 'Grants[?Grantee.URI== `https://acs.amazonaws.com/groups/global/Authenticated Users` ]' ``` 4. Get the S3 Bucket Policy ```  aws s3api get-bucket-policy --bucket   ``` 5. Ensure the policy does not contain a `Statement` having an `Effect` set to `Allow` and a `Principal` set to \"\\*\" or {\"AWS\" : \"\\*\"}  **Note:** Principal set to \"\\*\" or {\"AWS\" : \"\\*\"} allows anonymous access.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html"
         }
@@ -793,11 +793,11 @@
           "Section": "3. Logging",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long term analysis, realtime analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs.\n\nNote: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. CloudWatch Logs is a native way to accomplish this using AWS services but does not preclude the use of an alternate solution.",
+          "Description": "AWS CloudTrail is a web service that records AWS API calls made in a given AWS account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long term analysis, realtime analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs.  Note: The intent of this recommendation is to ensure AWS account activity is being captured, monitored, and appropriately alarmed on. CloudWatch Logs is a native way to accomplish this using AWS services but does not preclude the use of an alternate solution.",
           "RationaleStatement": "Sending CloudTrail logs to CloudWatch Logs will facilitate real-time and historic activity logging based on user, API, resource, and IP address, and provides opportunity to establish alarms and notifications for anomalous or sensitivity account activity.",
-          "ImpactStatement": "Note: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:\n\n1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
-          "RemediationProcedure": "Perform the following to establish the prescribed state:\n\n**From Console:**\n\n1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/`\n2. Select the `Trail` the needs to be updated.\n3. Scroll down to `CloudWatch Logs`\n4. Click `Edit`\n5. Under `CloudWatch Logs` click the box `Enabled`\n6. Under `Log Group` pick new or select an existing log group\n7. Edit the `Log group name` to match the CloudTrail or pick the existing CloudWatch Group.\n8. Under `IAM Role` pick new or select an existing.\n9. Edit the `Role name` to match the CloudTrail or pick the existing IAM Role.\n10. Click `Save changes.\n\n**From Command Line:**\n```\naws cloudtrail update-trail --name  --cloudwatch-logs-log-group-arn  --cloudwatch-logs-role-arn \n```",
-          "AuditProcedure": "Perform the following to ensure CloudTrail is configured as prescribed:\n\n**From Console:**\n\n1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/`\n2. Under `Trails` , click on the CloudTrail you wish to evaluate\n3. Under the `CloudWatch Logs` section.\n4. Ensure a `CloudWatch Logs` log group is configured and listed.\n5. Under `General details` confirm `Last log file delivered` has a recent (~one day old) timestamp.\n\n**From Command Line:**\n\n1. Run the following command to get a listing of existing trails:\n```\n aws cloudtrail describe-trails\n```\n2. Ensure `CloudWatchLogsLogGroupArn` is not empty and note the value of the `Name` property.\n3. Using the noted value of the `Name` property, run the following command:\n```\n aws cloudtrail get-trail-status --name \n```\n4. Ensure the `LatestcloudwatchLogdDeliveryTime` property is set to a recent (~one day old) timestamp.\n\nIf the `CloudWatch Logs` log group is not setup and the delivery time is not recent refer to the remediation below.",
+          "ImpactStatement": "Note: By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:  1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
+          "RemediationProcedure": "Perform the following to establish the prescribed state:  **From Console:**  1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/` 2. Select the `Trail` the needs to be updated. 3. Scroll down to `CloudWatch Logs` 4. Click `Edit` 5. Under `CloudWatch Logs` click the box `Enabled` 6. Under `Log Group` pick new or select an existing log group 7. Edit the `Log group name` to match the CloudTrail or pick the existing CloudWatch Group. 8. Under `IAM Role` pick new or select an existing. 9. Edit the `Role name` to match the CloudTrail or pick the existing IAM Role. 10. Click `Save changes.  **From Command Line:** ``` aws cloudtrail update-trail --name  --cloudwatch-logs-log-group-arn  --cloudwatch-logs-role-arn  ```",
+          "AuditProcedure": "Perform the following to ensure CloudTrail is configured as prescribed:  **From Console:**  1. Login to the CloudTrail console at `https://console.aws.amazon.com/cloudtrail/` 2. Under `Trails` , click on the CloudTrail you wish to evaluate 3. Under the `CloudWatch Logs` section. 4. Ensure a `CloudWatch Logs` log group is configured and listed. 5. Under `General details` confirm `Last log file delivered` has a recent (~one day old) timestamp.  **From Command Line:**  1. Run the following command to get a listing of existing trails: ```  aws cloudtrail describe-trails ``` 2. Ensure `CloudWatchLogsLogGroupArn` is not empty and note the value of the `Name` property. 3. Using the noted value of the `Name` property, run the following command: ```  aws cloudtrail get-trail-status --name  ``` 4. Ensure the `LatestcloudwatchLogdDeliveryTime` property is set to a recent (~one day old) timestamp.  If the `CloudWatch Logs` log group is not setup and the delivery time is not recent refer to the remediation below.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html"
         }
@@ -817,8 +817,8 @@
           "Description": "AWS Config is a web service that performs configuration management of supported AWS resources within your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items (AWS resources), any configuration changes between resources. It is recommended AWS Config be enabled in all regions.",
           "RationaleStatement": "The AWS configuration item history captured by AWS Config enables security analysis, resource change tracking, and compliance auditing.",
           "ImpactStatement": "It is recommended AWS Config be enabled in all regions.",
-          "RemediationProcedure": "To implement AWS Config configuration:\n\n**From Console:**\n\n1. Select the region you want to focus on in the top right of the console\n2. Click `Services` \n3. Click `Config` \n4. Define which resources you want to record in the selected region\n5. Choose to include global resources (IAM resources)\n6. Specify an S3 bucket in the same account or in another managed AWS account\n7. Create an SNS Topic from the same AWS account or another managed AWS account\n\n**From Command Line:**\n\n1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the [AWS Config Service prerequisites](http://docs.aws.amazon.com/config/latest/developerguide/gs-cli-prereq.html).\n2. Run this command to set up the configuration recorder\n```\naws configservice subscribe --s3-bucket my-config-bucket --sns-topic arn:aws:sns:us-east-1:012345678912:my-config-notice --iam-role arn:aws:iam::012345678912:role/myConfigRole\n```\n3. Run this command to start the configuration recorder:\n```\nstart-configuration-recorder --configuration-recorder-name \n```",
-          "AuditProcedure": "Process to evaluate AWS Config configuration per region\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the AWS Config console at [https://console.aws.amazon.com/config/](https://console.aws.amazon.com/config/).\n2. On the top right of the console select target Region.\n3. If presented with Setup AWS Config - follow remediation procedure:\n4. On the Resource inventory page, Click on edit (the gear icon). The Set Up AWS Config page appears.\n5. Ensure 1 or both check-boxes under \"All Resources\" is checked.\n - Include global resources related to IAM resources - which needs to be enabled in 1 region only\n6. Ensure the correct S3 bucket has been defined.\n7. Ensure the correct SNS topic has been defined.\n8. Repeat steps 2 to 7 for each region.\n\n**From Command Line:**\n\n1. Run this command to show all AWS Config recorders and their properties:\n```\naws configservice describe-configuration-recorders\n```\n2. Evaluate the output to ensure that there's at least one recorder for which `recordingGroup` object includes `\"allSupported\": true` AND `\"includeGlobalResourceTypes\": true`\n\nNote: There is one more parameter \"ResourceTypes\" in recordingGroup object. We don't need to check the same as whenever we set \"allSupported\": true, AWS enforces resource types to be empty (\"ResourceTypes\":[])\n\nSample Output:\n\n```\n{\n \"ConfigurationRecorders\": [\n {\n \"recordingGroup\": {\n \"allSupported\": true,\n \"resourceTypes\": [],\n \"includeGlobalResourceTypes\": true\n },\n \"roleARN\": \"arn:aws:iam:::role/service-role/\",\n \"name\": \"default\"\n }\n ]\n}\n```\n\n3. Run this command to show the status for all AWS Config recorders:\n```\naws configservice describe-configuration-recorder-status\n```\n4. In the output, find recorders with `name` key matching the recorders that met criteria in step 2. Ensure that at least one of them includes `\"recording\": true` and `\"lastStatus\": \"SUCCESS\"`",
+          "RemediationProcedure": "To implement AWS Config configuration:  **From Console:**  1. Select the region you want to focus on in the top right of the console 2. Click `Services`  3. Click `Config`  4. Define which resources you want to record in the selected region 5. Choose to include global resources (IAM resources) 6. Specify an S3 bucket in the same account or in another managed AWS account 7. Create an SNS Topic from the same AWS account or another managed AWS account  **From Command Line:**  1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the [AWS Config Service prerequisites](http://docs.aws.amazon.com/config/latest/developerguide/gs-cli-prereq.html). 2. Run this command to set up the configuration recorder ``` aws configservice subscribe --s3-bucket my-config-bucket --sns-topic arn:aws:sns:us-east-1:012345678912:my-config-notice --iam-role arn:aws:iam::012345678912:role/myConfigRole ``` 3. Run this command to start the configuration recorder: ``` start-configuration-recorder --configuration-recorder-name  ```",
+          "AuditProcedure": "Process to evaluate AWS Config configuration per region  **From Console:**  1. Sign in to the AWS Management Console and open the AWS Config console at [https://console.aws.amazon.com/config/](https://console.aws.amazon.com/config/). 2. On the top right of the console select target Region. 3. If presented with Setup AWS Config - follow remediation procedure: 4. On the Resource inventory page, Click on edit (the gear icon). The Set Up AWS Config page appears. 5. Ensure 1 or both check-boxes under \"All Resources\" is checked.  - Include global resources related to IAM resources - which needs to be enabled in 1 region only 6. Ensure the correct S3 bucket has been defined. 7. Ensure the correct SNS topic has been defined. 8. Repeat steps 2 to 7 for each region.  **From Command Line:**  1. Run this command to show all AWS Config recorders and their properties: ``` aws configservice describe-configuration-recorders ``` 2. Evaluate the output to ensure that there's at least one recorder for which `recordingGroup` object includes `\"allSupported\": true` AND `\"includeGlobalResourceTypes\": true`  Note: There is one more parameter \"ResourceTypes\" in recordingGroup object. We don't need to check the same as whenever we set \"allSupported\": true, AWS enforces resource types to be empty (\"ResourceTypes\":[])  Sample Output:  ``` {  \"ConfigurationRecorders\": [  {  \"recordingGroup\": {  \"allSupported\": true,  \"resourceTypes\": [],  \"includeGlobalResourceTypes\": true  },  \"roleARN\": \"arn:aws:iam:::role/service-role/\",  \"name\": \"default\"  }  ] } ```  3. Run this command to show the status for all AWS Config recorders: ``` aws configservice describe-configuration-recorder-status ``` 4. In the output, find recorders with `name` key matching the recorders that met criteria in step 2. Ensure that at least one of them includes `\"recording\": true` and `\"lastStatus\": \"SUCCESS\"`",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/cli/latest/reference/configservice/describe-configuration-recorder-status.html"
         }
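For context on the command-line steps in this recommendation: below is a minimal sketch of how the per-region audit could be scripted with the current AWS CLI. The region loop, the default recorder name, and the `--query` expressions are illustrative assumptions, not part of the benchmark text; note also that the start command needs the `aws configservice` prefix that the flattened text above omits.

```bash
# Sketch: check the AWS Config recorder in every region (names and queries are illustrative).
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "== ${region} =="
  # Recorder definition: allSupported and includeGlobalResourceTypes should both be true.
  aws configservice describe-configuration-recorders --region "${region}" \
    --query 'ConfigurationRecorders[].recordingGroup'
  # Recorder status: recording should be true and lastStatus SUCCESS.
  aws configservice describe-configuration-recorder-status --region "${region}" \
    --query 'ConfigurationRecordersStatus[].{name:name,recording:recording,lastStatus:lastStatus}'
done

# Remediation for a stopped recorder (assumes the recorder is named "default"):
# aws configservice start-configuration-recorder --configuration-recorder-name default
```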
@@ -838,8 +838,8 @@
           "Description": "S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request worked, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket.",
           "RationaleStatement": "By enabling S3 bucket logging on target S3 buckets, it is possible to capture all events which may affect objects within any target buckets. Configuring logs to be placed in a separate bucket allows access to log information which can be useful in security and incident response workflows.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to enable S3 bucket logging:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3).\n2. Under `All Buckets` click on the target S3 bucket\n3. Click on `Properties` in the top right of the console\n4. Under `Bucket:`  click on `Logging` \n5. Configure bucket logging\n - Click on the `Enabled` checkbox\n - Select Target Bucket from list\n - Enter a Target Prefix\n6. Click `Save`.\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n```\naws cloudtrail describe-trails --region  --query trailList[*].S3BucketName\n```\n2. Copy and add target bucket name at ``, Prefix for logfile at `` and optionally add an email address in the following template and save it as ``:\n```\n{\n \"LoggingEnabled\": {\n \"TargetBucket\": \"\",\n \"TargetPrefix\": \"\",\n \"TargetGrants\": [\n {\n \"Grantee\": {\n \"Type\": \"AmazonCustomerByEmail\",\n \"EmailAddress\": \"\"\n },\n \"Permission\": \"FULL_CONTROL\"\n }\n ]\n } \n}\n```\n3. Run the `put-bucket-logging` command with bucket name and `` as input, for more information refer at [put-bucket-logging](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-logging.html):\n```\naws s3api put-bucket-logging --bucket  --bucket-logging-status file://\n```",
-          "AuditProcedure": "Perform the following ensure the CloudTrail S3 bucket has access logging is enabled:\n\n**From Console:**\n\n1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home)\n2. In the API activity history pane on the left, click Trails\n3. In the Trails pane, note the bucket names in the S3 bucket column\n4. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3).\n5. Under `All Buckets` click on a target S3 bucket\n6. Click on `Properties` in the top right of the console\n7. Under `Bucket:` _ `` _ click on `Logging` \n8. Ensure `Enabled` is checked.\n\n**From Command Line:**\n\n1. Get the name of the S3 bucket that CloudTrail is logging to:\n``` \naws cloudtrail describe-trails --query 'trailList[*].S3BucketName' \n```\n2. Ensure Bucket Logging is enabled:\n```\naws s3api get-bucket-logging --bucket \n```\nEnsure command does not returns empty output.\n\nSample Output for a bucket with logging enabled:\n\n```\n{\n \"LoggingEnabled\": {\n \"TargetPrefix\": \"\",\n \"TargetBucket\": \"\"\n }\n}\n```",
+          "RemediationProcedure": "Perform the following to enable S3 bucket logging:  **From Console:**  1. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3). 2. Under `All Buckets` click on the target S3 bucket 3. Click on `Properties` in the top right of the console 4. Under `Bucket:`  click on `Logging`  5. Configure bucket logging  - Click on the `Enabled` checkbox  - Select Target Bucket from list  - Enter a Target Prefix 6. Click `Save`.  **From Command Line:**  1. Get the name of the S3 bucket that CloudTrail is logging to: ``` aws cloudtrail describe-trails --region  --query trailList[*].S3BucketName ``` 2. Copy and add target bucket name at ``, Prefix for logfile at `` and optionally add an email address in the following template and save it as ``: ``` {  \"LoggingEnabled\": {  \"TargetBucket\": \"\",  \"TargetPrefix\": \"\",  \"TargetGrants\": [  {  \"Grantee\": {  \"Type\": \"AmazonCustomerByEmail\",  \"EmailAddress\": \"\"  },  \"Permission\": \"FULL_CONTROL\"  }  ]  }  } ``` 3. Run the `put-bucket-logging` command with bucket name and `` as input, for more information refer at [put-bucket-logging](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-logging.html): ``` aws s3api put-bucket-logging --bucket  --bucket-logging-status file:// ```",
+          "AuditProcedure": "Perform the following ensure the CloudTrail S3 bucket has access logging is enabled:  **From Console:**  1. Go to the Amazon CloudTrail console at [https://console.aws.amazon.com/cloudtrail/home](https://console.aws.amazon.com/cloudtrail/home) 2. In the API activity history pane on the left, click Trails 3. In the Trails pane, note the bucket names in the S3 bucket column 4. Sign in to the AWS Management Console and open the S3 console at [https://console.aws.amazon.com/s3](https://console.aws.amazon.com/s3). 5. Under `All Buckets` click on a target S3 bucket 6. Click on `Properties` in the top right of the console 7. Under `Bucket:` _ `` _ click on `Logging`  8. Ensure `Enabled` is checked.  **From Command Line:**  1. Get the name of the S3 bucket that CloudTrail is logging to: ```  aws cloudtrail describe-trails --query 'trailList[*].S3BucketName'  ``` 2. Ensure Bucket Logging is enabled: ``` aws s3api get-bucket-logging --bucket  ``` Ensure command does not returns empty output.  Sample Output for a bucket with logging enabled:  ``` {  \"LoggingEnabled\": {  \"TargetPrefix\": \"\",  \"TargetBucket\": \"\"  } } ```",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html"
         }
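As a hedged sketch of the command-line remediation above: enabling server access logging on the CloudTrail bucket with `put-bucket-logging` and verifying with `get-bucket-logging`. The bucket names and target prefix are placeholders, and the optional `TargetGrants` block from the benchmark template is omitted here.

```bash
# Sketch: enable S3 server access logging on the CloudTrail bucket (names are placeholders).
TRAIL_BUCKET=my-cloudtrail-bucket   # bucket that CloudTrail delivers logs to
cat > logging.json <<'EOF'
{
  "LoggingEnabled": {
    "TargetBucket": "my-access-log-bucket",
    "TargetPrefix": "cloudtrail-access/"
  }
}
EOF
aws s3api put-bucket-logging --bucket "${TRAIL_BUCKET}" --bucket-logging-status file://logging.json

# Audit: a compliant bucket returns a LoggingEnabled block; an empty response is a finding.
aws s3api get-bucket-logging --bucket "${TRAIL_BUCKET}"
```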
@@ -859,9 +859,9 @@
           "Description": "AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS.",
           "RationaleStatement": "Configuring CloudTrail to use SSE-KMS provides additional confidentiality controls on log data as a given user must have S3 read permission on the corresponding log bucket and must be granted decrypt permission by the CMK policy.",
           "ImpactStatement": "Customer created keys incur an additional cost. See https://aws.amazon.com/kms/pricing/ for more information.",
-          "RemediationProcedure": "Perform the following to configure CloudTrail to use SSE-KMS:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. In the left navigation pane, choose `Trails` .\n3. Click on a Trail\n4. Under the `S3` section click on the edit button (pencil icon)\n5. Click `Advanced` \n6. Select an existing CMK from the `KMS key Id` drop-down menu\n - Note: Ensure the CMK is located in the same region as the S3 bucket\n - Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided [here](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-kms-key-policy-for-cloudtrail.html) for editing the selected CMK Key policy\n7. Click `Save` \n8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files.\n9. Click `Yes` \n\n**From Command Line:**\n```\naws cloudtrail update-trail --name  --kms-id \naws kms put-key-policy --key-id  --policy \n```",
-          "AuditProcedure": "Perform the following to determine if CloudTrail is configured to use SSE-KMS:\n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail)\n2. In the left navigation pane, choose `Trails` .\n3. Select a Trail\n4. Under the `S3` section, ensure `Encrypt log files` is set to `Yes` and a KMS key ID is specified in the `KSM Key Id` field.\n\n**From Command Line:**\n\n1. Run the following command:\n```\n aws cloudtrail describe-trails \n```\n2. For each trail listed, SSE-KMS is enabled if the trail has a `KmsKeyId` property defined.",
-          "AdditionalInformation": "3 statements which need to be added to the CMK policy:\n\n1\\. Enable Cloudtrail to describe CMK properties\n```\n
{\n \"Sid\": \"Allow CloudTrail access\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"cloudtrail.amazonaws.com\"\n },\n \"Action\": \"kms:DescribeKey\",\n \"Resource\": \"*\"\n}\n```\n2\\. Granting encrypt permissions\n```\n
{\n \"Sid\": \"Allow CloudTrail to encrypt logs\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"cloudtrail.amazonaws.com\"\n },\n \"Action\": \"kms:GenerateDataKey*\",\n \"Resource\": \"*\",\n \"Condition\": {\n \"StringLike\": {\n \"kms:EncryptionContext:aws:cloudtrail:arn\": [\n \"arn:aws:cloudtrail:*:aws-account-id:trail/*\"\n ]\n }\n }\n}\n```\n3\\. Granting decrypt permissions\n```\n
{\n \"Sid\": \"Enable CloudTrail log decrypt permissions\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"arn:aws:iam::aws-account-id:user/username\"\n },\n \"Action\": \"kms:Decrypt\",\n \"Resource\": \"*\",\n \"Condition\": {\n \"Null\": {\n \"kms:EncryptionContext:aws:cloudtrail:arn\": \"false\"\n }\n }\n}\n```",
+          "RemediationProcedure": "Perform the following to configure CloudTrail to use SSE-KMS:  **From Console:**  1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. In the left navigation pane, choose `Trails` . 3. Click on a Trail 4. Under the `S3` section click on the edit button (pencil icon) 5. Click `Advanced`  6. Select an existing CMK from the `KMS key Id` drop-down menu  - Note: Ensure the CMK is located in the same region as the S3 bucket  - Note: You will need to apply a KMS Key policy on the selected CMK in order for CloudTrail as a service to encrypt and decrypt log files using the CMK provided. Steps are provided [here](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-kms-key-policy-for-cloudtrail.html) for editing the selected CMK Key policy 7. Click `Save`  8. You will see a notification message stating that you need to have decrypt permissions on the specified KMS key to decrypt log files. 9. Click `Yes`   **From Command Line:** ``` aws cloudtrail update-trail --name  --kms-id  aws kms put-key-policy --key-id  --policy  ```",
+          "AuditProcedure": "Perform the following to determine if CloudTrail is configured to use SSE-KMS:  **From Console:**  1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail](https://console.aws.amazon.com/cloudtrail) 2. In the left navigation pane, choose `Trails` . 3. Select a Trail 4. Under the `S3` section, ensure `Encrypt log files` is set to `Yes` and a KMS key ID is specified in the `KSM Key Id` field.  **From Command Line:**  1. Run the following command: ```  aws cloudtrail describe-trails  ``` 2. For each trail listed, SSE-KMS is enabled if the trail has a `KmsKeyId` property defined.",
+          "AdditionalInformation": "3 statements which need to be added to the CMK policy:  1\\. Enable Cloudtrail to describe CMK properties ``` 
{  \"Sid\": \"Allow CloudTrail access\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"cloudtrail.amazonaws.com\"  },  \"Action\": \"kms:DescribeKey\",  \"Resource\": \"*\" } ``` 2\\. Granting encrypt permissions ``` 
{  \"Sid\": \"Allow CloudTrail to encrypt logs\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"cloudtrail.amazonaws.com\"  },  \"Action\": \"kms:GenerateDataKey*\",  \"Resource\": \"*\",  \"Condition\": {  \"StringLike\": {  \"kms:EncryptionContext:aws:cloudtrail:arn\": [  \"arn:aws:cloudtrail:*:aws-account-id:trail/*\"  ]  }  } } ``` 3\\. Granting decrypt permissions ``` 
{  \"Sid\": \"Enable CloudTrail log decrypt permissions\",  \"Effect\": \"Allow\",  \"Principal\": {  \"AWS\": \"arn:aws:iam::aws-account-id:user/username\"  },  \"Action\": \"kms:Decrypt\",  \"Resource\": \"*\",  \"Condition\": {  \"Null\": {  \"kms:EncryptionContext:aws:cloudtrail:arn\": \"false\"  }  } } ```",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html:https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html"
         }
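A small sketch of the command-line remediation and audit for this recommendation. The trail name and key alias are placeholders; note that in current AWS CLI releases the `update-trail` option is `--kms-key-id` rather than the `--kms-id` spelling shown in the flattened command above, and the key policy must already contain the CloudTrail statements listed under AdditionalInformation.

```bash
# Sketch: encrypt CloudTrail logs with a customer managed key (identifiers are placeholders).
aws cloudtrail update-trail --name my-trail --kms-key-id alias/cloudtrail-logs

# Audit: every trail should report a KmsKeyId.
aws cloudtrail describe-trails --query 'trailList[].{Name:Name,KmsKeyId:KmsKeyId}'
```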
       ]
@@ -878,10 +878,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "AWS Key Management Service (KMS) allows customers to rotate the backing key which is key material stored within the KMS which is tied to the key ID of the Customer Created customer master key (CMK). It is the backing key that is used to perform cryptographic operations such as encryption and decryption. Automated key rotation currently retains all prior backing keys so that decryption of encrypted data can take place transparently. It is recommended that CMK key rotation be enabled for symmetric keys. Key rotation can not be enabled for any asymmetric CMK.",
-          "RationaleStatement": "Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed.\nKeys should be rotated every year, or upon event that would result in the compromise of that key.",
+          "RationaleStatement": "Rotating encryption keys helps reduce the potential impact of a compromised key as data encrypted with a new key cannot be accessed with a previous key that may have been exposed. Keys should be rotated every year, or upon event that would result in the compromise of that key.",
           "ImpactStatement": "Creation, management, and storage of CMKs may require additional time from and administrator.",
-          "RemediationProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam).\n2. In the left navigation pane, choose `Customer managed keys` .\n3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT`\n4. Underneath the \"General configuration\" panel open the tab \"Key rotation\"\n5. Check the \"Automatically rotate this KMS key every year.\" checkbox\n\n**From Command Line:**\n\n1. Run the following command to enable key rotation:\n```\n aws kms enable-key-rotation --key-id \n```",
-          "AuditProcedure": "**From Console:**\n\n1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam).\n2. In the left navigation pane, choose `Customer managed keys`\n3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT`\n4. Underneath the `General configuration` panel open the tab `Key rotation`\n5. Ensure that the checkbox `Automatically rotate this KMS key every year.` is activated\n6. Repeat steps 3 - 5 for all customer managed CMKs where \"Key spec = SYMMETRIC_DEFAULT\"\n\n**From Command Line:**\n\n1. Run the following command to get a list of all keys and their associated `KeyIds` \n```\n aws kms list-keys\n```\n2. For each key, note the KeyId and run the following command\n```\ndescribe-key --key-id \n```\n3. If the response contains \"KeySpec = SYMMETRIC_DEFAULT\" run the following command\n```\n aws kms get-key-rotation-status --key-id \n```\n4. Ensure `KeyRotationEnabled` is set to `true`\n5. Repeat steps 2 - 4 for all remaining CMKs",
+          "RemediationProcedure": "**From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam). 2. In the left navigation pane, choose `Customer managed keys` . 3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT` 4. Underneath the \"General configuration\" panel open the tab \"Key rotation\" 5. Check the \"Automatically rotate this KMS key every year.\" checkbox  **From Command Line:**  1. Run the following command to enable key rotation: ```  aws kms enable-key-rotation --key-id  ```",
+          "AuditProcedure": "**From Console:**  1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam). 2. In the left navigation pane, choose `Customer managed keys` 3. Select a customer managed CMK where `Key spec = SYMMETRIC_DEFAULT` 4. Underneath the `General configuration` panel open the tab `Key rotation` 5. Ensure that the checkbox `Automatically rotate this KMS key every year.` is activated 6. Repeat steps 3 - 5 for all customer managed CMKs where \"Key spec = SYMMETRIC_DEFAULT\"  **From Command Line:**  1. Run the following command to get a list of all keys and their associated `KeyIds`  ```  aws kms list-keys ``` 2. For each key, note the KeyId and run the following command ``` describe-key --key-id  ``` 3. If the response contains \"KeySpec = SYMMETRIC_DEFAULT\" run the following command ```  aws kms get-key-rotation-status --key-id  ``` 4. Ensure `KeyRotationEnabled` is set to `true` 5. Repeat steps 2 - 4 for all remaining CMKs",
           "AdditionalInformation": "",
           "References": "https://aws.amazon.com/kms/pricing/:https://csrc.nist.gov/publications/detail/sp/800-57-part-1/rev-5/final"
         }
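A sketch of how the per-key audit loop described above could be scripted; the use of `jq` and the client-side filtering on `KeyManager`/`KeySpec` are assumptions of this example, not steps mandated by the benchmark.

```bash
# Sketch: list symmetric customer managed keys whose annual rotation is disabled.
for key in $(aws kms list-keys --query 'Keys[].KeyId' --output text); do
  meta=$(aws kms describe-key --key-id "${key}" --query 'KeyMetadata' --output json)
  manager=$(echo "${meta}" | jq -r '.KeyManager')
  spec=$(echo "${meta}" | jq -r '.KeySpec')
  # Only customer managed symmetric keys are in scope for this recommendation.
  if [ "${manager}" = "CUSTOMER" ] && [ "${spec}" = "SYMMETRIC_DEFAULT" ]; then
    rotation=$(aws kms get-key-rotation-status --key-id "${key}" \
      --query 'KeyRotationEnabled' --output text)
    [ "${rotation}" = "True" ] || echo "Rotation disabled for key ${key}"
  fi
done

# Remediation for a non-compliant key (key id is a placeholder):
# aws kms enable-key-rotation --key-id <key-id>
```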
@@ -900,9 +900,9 @@
           "AssessmentStatus": "Automated",
           "Description": "VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet \"Rejects\" for VPCs.",
           "RationaleStatement": "VPC Flow Logs provide visibility into network traffic that traverses the VPC and can be used to detect anomalous traffic or insight during security workflows.",
-          "ImpactStatement": "By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:\n\n1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
-          "RemediationProcedure": "Perform the following to determine if VPC Flow logs is enabled:\n\n**From Console:**\n\n1. Sign into the management console\n2. Select `Services` then `VPC` \n3. In the left navigation pane, select `Your VPCs` \n4. Select a VPC\n5. In the right pane, select the `Flow Logs` tab.\n6. If no Flow Log exists, click `Create Flow Log` \n7. For Filter, select `Reject`\n8. Enter in a `Role` and `Destination Log Group` \n9. Click `Create Log Flow` \n10. Click on `CloudWatch Logs Group` \n\n**Note:** Setting the filter to \"Reject\" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting this the filter to \"All\" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment.\n\n**From Command Line:**\n\n1. Create a policy document and name it as `role_policy_document.json` and paste the following content:\n```\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"test\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": \"ec2.amazonaws.com\"\n },\n \"Action\": \"sts:AssumeRole\"\n }\n ]\n}\n```\n2. Create another policy document and name it as `iam_policy.json` and paste the following content:\n```\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\":[\n \"logs:CreateLogGroup\",\n \"logs:CreateLogStream\",\n \"logs:DescribeLogGroups\",\n \"logs:DescribeLogStreams\",\n \"logs:PutLogEvents\",\n \"logs:GetLogEvents\",\n \"logs:FilterLogEvents\"\n ],\n \"Resource\": \"*\"\n }\n ]\n}\n```\n3. Run the below command to create an IAM role:\n```\naws iam create-role --role-name  --assume-role-policy-document file://role_policy_document.json \n```\n4. Run the below command to create an IAM policy:\n```\naws iam create-policy --policy-name  --policy-document file://iam-policy.json\n```\n5. Run `attach-group-policy` command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned):\n```\naws iam attach-group-policy --policy-arn arn:aws:iam:::policy/ --group-name \n```\n6. Run `describe-vpcs` to get the VpcId available in the selected region:\n```\naws ec2 describe-vpcs --region \n```\n7. The command output should return the VPC Id available in the selected region.\n8. Run `create-flow-logs` to create a flow log for the vpc:\n```\naws ec2 create-flow-logs --resource-type VPC --resource-ids  --traffic-type REJECT --log-group-name  --deliver-logs-permission-arn \n```\n9. Repeat step 8 for other vpcs available in the selected region.\n10. Change the region by updating --region and repeat remediation procedure for other vpcs.",
-          "AuditProcedure": "Perform the following to determine if VPC Flow logs are enabled:\n\n**From Console:**\n\n1. Sign into the management console\n2. Select `Services` then `VPC` \n3. In the left navigation pane, select `Your VPCs` \n4. Select a VPC\n5. In the right pane, select the `Flow Logs` tab.\n6. Ensure a Log Flow exists that has `Active` in the `Status` column.\n\n**From Command Line:**\n\n1. Run `describe-vpcs` command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region:\n```\naws ec2 describe-vpcs --region  --query Vpcs[].VpcId\n```\n2. The command output returns the `VpcId` available in the selected region.\n3. Run `describe-flow-logs` command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled:\n```\naws ec2 describe-flow-logs --filter \"Name=resource-id,Values=\"\n```\n4. If there are no Flow Logs created for the selected VPC, the command output will return an `empty list []`.\n5. Repeat step 3 for other VPCs available in the same region.\n6. Change the region by updating `--region` and repeat steps 1 - 5 for all the VPCs.",
+          "ImpactStatement": "By default, CloudWatch Logs will store Logs indefinitely unless a specific retention period is defined for the log group. When choosing the number of days to retain, keep in mind the average days it takes an organization to realize they have been breached is 210 days (at the time of this writing). Since additional time is required to research a breach, a minimum 365 day retention policy allows time for detection and research. You may also wish to archive the logs to a cheaper storage service rather than simply deleting them. See the following AWS resource to manage CloudWatch Logs retention periods:  1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SettingLogRetention.html",
+          "RemediationProcedure": "Perform the following to determine if VPC Flow logs is enabled:  **From Console:**  1. Sign into the management console 2. Select `Services` then `VPC`  3. In the left navigation pane, select `Your VPCs`  4. Select a VPC 5. In the right pane, select the `Flow Logs` tab. 6. If no Flow Log exists, click `Create Flow Log`  7. For Filter, select `Reject` 8. Enter in a `Role` and `Destination Log Group`  9. Click `Create Log Flow`  10. Click on `CloudWatch Logs Group`   **Note:** Setting the filter to \"Reject\" will dramatically reduce the logging data accumulation for this recommendation and provide sufficient information for the purposes of breach detection, research and remediation. However, during periods of least privilege security group engineering, setting this the filter to \"All\" can be very helpful in discovering existing traffic flows required for proper operation of an already running environment.  **From Command Line:**  1. Create a policy document and name it as `role_policy_document.json` and paste the following content: ``` {  \"Version\": \"2012-10-17\",  \"Statement\": [  {  \"Sid\": \"test\",  \"Effect\": \"Allow\",  \"Principal\": {  \"Service\": \"ec2.amazonaws.com\"  },  \"Action\": \"sts:AssumeRole\"  }  ] } ``` 2. Create another policy document and name it as `iam_policy.json` and paste the following content: ``` {  \"Version\": \"2012-10-17\",  \"Statement\": [  {  \"Effect\": \"Allow\",  \"Action\":[  \"logs:CreateLogGroup\",  \"logs:CreateLogStream\",  \"logs:DescribeLogGroups\",  \"logs:DescribeLogStreams\",  \"logs:PutLogEvents\",  \"logs:GetLogEvents\",  \"logs:FilterLogEvents\"  ],  \"Resource\": \"*\"  }  ] } ``` 3. Run the below command to create an IAM role: ``` aws iam create-role --role-name  --assume-role-policy-document file://role_policy_document.json  ``` 4. Run the below command to create an IAM policy: ``` aws iam create-policy --policy-name  --policy-document file://iam-policy.json ``` 5. Run `attach-group-policy` command using the IAM policy ARN returned at the previous step to attach the policy to the IAM role (if the command succeeds, no output is returned): ``` aws iam attach-group-policy --policy-arn arn:aws:iam:::policy/ --group-name  ``` 6. Run `describe-vpcs` to get the VpcId available in the selected region: ``` aws ec2 describe-vpcs --region  ``` 7. The command output should return the VPC Id available in the selected region. 8. Run `create-flow-logs` to create a flow log for the vpc: ``` aws ec2 create-flow-logs --resource-type VPC --resource-ids  --traffic-type REJECT --log-group-name  --deliver-logs-permission-arn  ``` 9. Repeat step 8 for other vpcs available in the selected region. 10. Change the region by updating --region and repeat remediation procedure for other vpcs.",
+          "AuditProcedure": "Perform the following to determine if VPC Flow logs are enabled:  **From Console:**  1. Sign into the management console 2. Select `Services` then `VPC`  3. In the left navigation pane, select `Your VPCs`  4. Select a VPC 5. In the right pane, select the `Flow Logs` tab. 6. Ensure a Log Flow exists that has `Active` in the `Status` column.  **From Command Line:**  1. Run `describe-vpcs` command (OSX/Linux/UNIX) to list the VPC networks available in the current AWS region: ``` aws ec2 describe-vpcs --region  --query Vpcs[].VpcId ``` 2. The command output returns the `VpcId` available in the selected region. 3. Run `describe-flow-logs` command (OSX/Linux/UNIX) using the VPC ID to determine if the selected virtual network has the Flow Logs feature enabled: ``` aws ec2 describe-flow-logs --filter \"Name=resource-id,Values=\" ``` 4. If there are no Flow Logs created for the selected VPC, the command output will return an `empty list []`. 5. Repeat step 3 for other VPCs available in the same region. 6. Change the region by updating `--region` and repeat steps 1 - 5 for all the VPCs.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html"
         }
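A sketch of the command-line remediation, looping over the VPCs in one region and creating a REJECT-only flow log delivered to CloudWatch Logs. The region, log group name, role ARN, and example VPC ID are placeholders; the role must be able to write to the log group (see the policy documents in the steps above).

```bash
# Sketch: REJECT-only VPC Flow Logs for every VPC in one region (all names are placeholders).
REGION=us-east-1
LOG_GROUP=vpc-flow-logs
ROLE_ARN=arn:aws:iam::111122223333:role/flow-logs-role

for vpc in $(aws ec2 describe-vpcs --region "${REGION}" --query 'Vpcs[].VpcId' --output text); do
  aws ec2 create-flow-logs --region "${REGION}" \
    --resource-type VPC --resource-ids "${vpc}" \
    --traffic-type REJECT \
    --log-group-name "${LOG_GROUP}" \
    --deliver-logs-permission-arn "${ROLE_ARN}"
done

# Audit: an empty FlowLogs list for a VPC is a finding (the VPC ID below is a placeholder).
aws ec2 describe-flow-logs --region "${REGION}" \
  --filter "Name=resource-id,Values=vpc-0abc1234" \
  --query 'FlowLogs[].{Id:FlowLogId,Status:FlowLogStatus,Type:TrafficType}'
```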
@@ -921,10 +921,10 @@
           "AssessmentStatus": "Automated",
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for unauthorized API calls.",
           "RationaleStatement": "Monitoring unauthorized API calls will help reveal application errors and may reduce time to detect malicious activity.",
-          "ImpactStatement": "This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information, but gracefully fail if they don't have permissions.\n\nIf an excessive number of alerts are being generated then an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts.\n\nIn some cases doing this may allow the users to actually view some areas of the system - any additional access given should be reviewed for alignment with the original limited IAM user intent.",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for unauthorized API calls and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name \"cloudtrail_log_group_name\" --filter-name \"\" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern \"{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") || ($.sourceIPAddress!=\"delivery.logs.amazonaws.com\") || ($.eventName!=\"HeadBucket\") }\"\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n**Note**: Capture the TopicArn displayed when creating the SNS Topic in Step 2.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name \"unauthorized_api_calls_alarm\" --metric-name \"unauthorized_api_calls_metric\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with \"Name\":` note ``\n\n- From value associated with \"CloudWatchLogsLogGroupArn\" note \n\nExample: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name <\"Name\" as shown in describe-trails>`\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this `` that you captured in step 1:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.errorCode = *UnauthorizedOperation) || ($.errorCode = AccessDenied*) || ($.sourceIPAddress!=delivery.logs.amazonaws.com) || ($.eventName!=HeadBucket) }\",\n```\n\n4. Note the \"filterName\" `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]\"\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "ImpactStatement": "This alert may be triggered by normal read-only console activities that attempt to opportunistically gather optional information, but gracefully fail if they don't have permissions.  If an excessive number of alerts are being generated then an organization may wish to consider adding read access to the limited IAM user permissions simply to quiet the alerts.  In some cases doing this may allow the users to actually view some areas of the system - any additional access given should be reviewed for alignment with the original limited IAM user intent.",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for unauthorized API calls and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name \"cloudtrail_log_group_name\" --filter-name \"\" --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1 --filter-pattern \"{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") || ($.sourceIPAddress!=\"delivery.logs.amazonaws.com\") || ($.eventName!=\"HeadBucket\") }\" ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms. **Note**: Capture the TopicArn displayed when creating the SNS Topic in Step 2.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name \"unauthorized_api_calls_alarm\" --metric-name \"unauthorized_api_calls_metric\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with \"Name\":` note ``  - From value associated with \"CloudWatchLogsLogGroupArn\" note   Example: for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name <\"Name\" as shown in describe-trails>`  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this `` that you captured in step 1:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.errorCode = *UnauthorizedOperation) || ($.errorCode = AccessDenied*) || ($.sourceIPAddress!=delivery.logs.amazonaws.com) || ($.eventName!=HeadBucket) }\", ```  4. Note the \"filterName\" `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName == `unauthorized_api_calls_metric`]\" ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://aws.amazon.com/sns/:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -943,9 +943,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Security Groups are a stateful packet filter that controls ingress and egress traffic within a VPC. It is recommended that a metric filter and alarm be established for detecting changes to Security Groups.",
           "RationaleStatement": "Monitoring changes to security group will help ensure that resources and services are not unintentionally exposed.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for security groups changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name \"\" --filter-name \"\" --metric-transformations metricName= \"\" ,metricNamespace=\"CISBenchmark\",metricValue=1 --filter-pattern \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\"\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \"\"\n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn \"\" --protocol  --notification-endpoint \"\"\n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name \"\" --metric-name \"\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \"\"\n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName== '']\"\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for security groups changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name \"\" --filter-name \"\" --metric-transformations metricName= \"\" ,metricNamespace=\"CISBenchmark\",metricValue=1 --filter-pattern \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\" ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name \"\" ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn \"\" --protocol  --notification-endpoint \"\" ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name \"\" --metric-name \"\" --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace \"CISBenchmark\" --alarm-actions \"\" ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query \"MetricAlarms[?MetricName== '']\" ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -964,9 +964,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. NACLs are used as a stateless packet filter to control ingress and egress traffic for subnets within a VPC. It is recommended that a metric filter and alarm be established for changes made to NACLs.",
           "RationaleStatement": "Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for NACL changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for NACL changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
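For reference, the audit flow described in the hunk above (find the multi-region trail, confirm it is logging all management events, then inspect the log group's metric filters) can be scripted roughly as below. This is a sketch only: `TRAIL_NAME` and the grepped event name are illustrative assumptions, not part of the benchmark text.

```
#!/usr/bin/env bash
# Sketch of the audit steps above for the NACL-change filter.
TRAIL_NAME="my-multi-region-trail"   # assumption: substitute the trail found via describe-trails

# Confirm the trail is multi-region, currently logging, and capturing all management events.
aws cloudtrail describe-trails --query "trailList[?Name=='${TRAIL_NAME}'].IsMultiRegionTrail"
aws cloudtrail get-trail-status --name "${TRAIL_NAME}" --query 'IsLogging'
aws cloudtrail get-event-selectors --trail-name "${TRAIL_NAME}"

# Derive the log group name from CloudWatchLogsLogGroupArn, then look for the NACL-change pattern.
LOG_GROUP=$(aws cloudtrail describe-trails \
  --query "trailList[?Name=='${TRAIL_NAME}'].CloudWatchLogsLogGroupArn" --output text \
  | awk -F'log-group:' '{print $2}' | cut -d: -f1)
aws logs describe-metric-filters --log-group-name "${LOG_GROUP}" \
  --query 'metricFilters[].filterPattern' --output text | grep -F 'CreateNetworkAcl'
```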
@@ -985,9 +985,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Network gateways are required to send/receive traffic to a destination outside of a VPC. It is recommended that a metric filter and alarm be established for changes to network gateways.",
           "RationaleStatement": "Monitoring changes to network gateways will help ensure that all ingress/egress traffic traverses the VPC border via a controlled path.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for network gateways changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for network gateways changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
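A minimal shell sketch of the four remediation steps above (metric filter, SNS topic, subscription, alarm) for the network-gateway pattern; the log group, filter and metric names, topic name, and email endpoint are placeholder assumptions.

```
#!/usr/bin/env bash
LOG_GROUP="CloudTrail/DefaultLogGroup"        # assumption: the trail's log group from audit step 1
FILTER_NAME="network-gateway-changes"          # assumption
METRIC_NAME="NetworkGatewayChanges"            # assumption

# 1. Metric filter for network gateway changes.
aws logs put-metric-filter \
  --log-group-name "${LOG_GROUP}" \
  --filter-name "${FILTER_NAME}" \
  --metric-transformations metricName="${METRIC_NAME}",metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }'

# 2-3. SNS topic and subscription (reusable across all monitoring alarms).
TOPIC_ARN=$(aws sns create-topic --name cis-monitoring-alarms --query 'TopicArn' --output text)
aws sns subscribe --topic-arn "${TOPIC_ARN}" --protocol email --notification-endpoint security@example.com

# 4. Alarm tied to the metric filter and the topic.
aws cloudwatch put-metric-alarm \
  --alarm-name "${METRIC_NAME}-alarm" \
  --metric-name "${METRIC_NAME}" \
  --namespace CISBenchmark \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions "${TOPIC_ARN}"
```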
@@ -1006,9 +1006,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Routing tables are used to route network traffic between subnets and to network gateways. It is recommended that a metric filter and alarm be established for changes to route tables.",
           "RationaleStatement": "Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for route table changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for route table changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
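Audit steps 5-7 above (alarm, then its SNS topic, then an active subscriber) can be chained as in this sketch; `METRIC_NAME` stands in for the metric name noted in step 4 and assumes a single matching alarm.

```
#!/usr/bin/env bash
METRIC_NAME="RouteTableChanges"   # assumption: the metric name recorded in audit step 4

# Follow the alarm's action to the SNS topic ARN.
TOPIC_ARN=$(aws cloudwatch describe-alarms \
  --query "MetricAlarms[?MetricName=='${METRIC_NAME}'].AlarmActions[] | [0]" --output text)

# At least one SubscriptionArn should be a real ARN rather than "PendingConfirmation".
aws sns list-subscriptions-by-topic --topic-arn "${TOPIC_ARN}" \
  --query 'Subscriptions[].SubscriptionArn' --output text
```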
@@ -1027,9 +1027,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is possible to have more than 1 VPC within an account, in addition it is also possible to create a peer connection between 2 VPCs enabling network traffic to route between VPCs. It is recommended that a metric filter and alarm be established for changes made to VPCs.",
           "RationaleStatement": "Monitoring changes to VPC will help ensure VPC traffic flow is not getting impacted.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for VPC changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for VPC changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
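A sketch of remediation step 1 above, showing how the VPC-change filter pattern is typically quoted on the command line (single quotes keep the `$.eventName` selectors from being expanded by the shell); the log group and the filter/metric names are placeholder assumptions.

```
#!/usr/bin/env bash
aws logs put-metric-filter \
  --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name "vpc-changes" \
  --metric-transformations metricName=VpcChanges,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }'
```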
@@ -1048,8 +1048,8 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for AWS Organizations changes made in the master AWS Account.",
           "RationaleStatement": "Monitoring AWS Organizations changes can help you prevent any unwanted, accidental or intentional modifications that may lead to unauthorized access or other security breaches. This monitoring technique helps you to ensure that any unexpected changes performed within your AWS Organizations can be investigated and any unwanted changes can be rolled back.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Organizations changes and the `` taken from audit step 1:\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }'\n```\n**Note:** You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify:\n```\naws sns create-topic --name \n```\n**Note:** you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2:\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note:** you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2:\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "1. Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n- Identify the log group name configured for use with active multi-region CloudTrail:\n- List all CloudTrails: \n```\naws cloudtrail describe-trails\n```\n- Identify Multi region Cloudtrails, Trails with `\"IsMultiRegionTrail\"` set to true\n- From value associated with CloudWatchLogsLogGroupArn note \n **Example:** for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup\n\n- Ensure Identified Multi region CloudTrail is active:\n```\naws cloudtrail get-trail-status --name \n```\nEnsure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events:\n```\naws cloudtrail get-event-selectors --trail-name \n```\n- Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to true and `ReadWriteType` set to `All`.\n\n2. Get a list of all associated metric filters for this :\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }\"\n```\n4. Note the `` value associated with the filterPattern found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4:\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the AlarmActions value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic:\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\nExample of valid \"SubscriptionArn\": \n```\n\"arn:aws:sns::::\"\n```",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Organizations changes and the `` taken from audit step 1: ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }' ``` **Note:** You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify: ``` aws sns create-topic --name  ``` **Note:** you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2: ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note:** you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2: ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "1. Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured: - Identify the log group name configured for use with active multi-region CloudTrail: - List all CloudTrails:  ``` aws cloudtrail describe-trails ``` - Identify Multi region Cloudtrails, Trails with `\"IsMultiRegionTrail\"` set to true - From value associated with CloudWatchLogsLogGroupArn note   **Example:** for CloudWatchLogsLogGroupArn that looks like arn:aws:logs:::log-group:NewGroup:*,  would be NewGroup  - Ensure Identified Multi region CloudTrail is active: ``` aws cloudtrail get-trail-status --name  ``` Ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events: ``` aws cloudtrail get-event-selectors --trail-name  ``` - Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to true and `ReadWriteType` set to `All`.  2. Get a list of all associated metric filters for this : ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = \"AcceptHandshake\") || ($.eventName = \"AttachPolicy\") || ($.eventName = \"CreateAccount\") || ($.eventName = \"CreateOrganizationalUnit\") || ($.eventName = \"CreatePolicy\") || ($.eventName = \"DeclineHandshake\") || ($.eventName = \"DeleteOrganization\") || ($.eventName = \"DeleteOrganizationalUnit\") || ($.eventName = \"DeletePolicy\") || ($.eventName = \"DetachPolicy\") || ($.eventName = \"DisablePolicyType\") || ($.eventName = \"EnablePolicyType\") || ($.eventName = \"InviteAccountToOrganization\") || ($.eventName = \"LeaveOrganization\") || ($.eventName = \"MoveAccount\") || ($.eventName = \"RemoveAccountFromOrganization\") || ($.eventName = \"UpdatePolicy\") || ($.eventName = \"UpdateOrganizationalUnit\")) }\" ``` 4. Note the `` value associated with the filterPattern found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4: ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the AlarmActions value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic: ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. Example of valid \"SubscriptionArn\":  ``` \"arn:aws:sns::::\" ```",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/organizations/latest/userguide/orgs_security_incident-response.html"
         }
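Audit steps 2-3 above reduce to checking that the log group already carries a filter scoped to the organizations.amazonaws.com event source; a rough sketch, with `LOG_GROUP` as a placeholder assumption:

```
#!/usr/bin/env bash
LOG_GROUP="CloudTrail/DefaultLogGroup"   # assumption: the trail's log group from audit step 1

aws logs describe-metric-filters --log-group-name "${LOG_GROUP}" \
  --query 'metricFilters[].filterPattern' --output text \
  | grep -F 'organizations.amazonaws.com' \
  && echo "Organizations change filter present" \
  || echo "Organizations change filter MISSING"
```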
@@ -1069,8 +1069,8 @@
           "Description": "Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and helps you analyze your security trends and identify the highest priority security issues. When you enable Security Hub, it begins to consume, aggregate, organize, and prioritize findings from AWS services that you have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie. You can also enable integrations with AWS partner security products.",
           "RationaleStatement": "AWS Security Hub provides you with a comprehensive view of your security state in AWS and helps you check your environment against security industry standards and best practices - enabling you to quickly assess the security posture across your AWS accounts.",
           "ImpactStatement": "It is recommended AWS Security Hub be enabled in all regions. AWS Security Hub requires AWS Config to be enabled.",
-          "RemediationProcedure": "To grant the permissions required to enable Security Hub, attach the Security Hub managed policy AWSSecurityHubFullAccess to an IAM user, group, or role.\n\nEnabling Security Hub\n\n**From Console:**\n\n1. Use the credentials of the IAM identity to sign in to the Security Hub console.\n2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub.\n3. On the welcome page, Security standards list the security standards that Security Hub supports.\n4. Choose Enable Security Hub.\n\n**From Command Line:**\n\n1. Run the enable-security-hub command. To enable the default standards, include `--enable-default-standards`.\n```\naws securityhub enable-security-hub --enable-default-standards\n```\n\n2. To enable the security hub without the default standards, include `--no-enable-default-standards`.\n```\naws securityhub enable-security-hub --no-enable-default-standards\n```",
-          "AuditProcedure": "The process to evaluate AWS Security Hub configuration per region \n\n**From Console:**\n\n1. Sign in to the AWS Management Console and open the AWS Security Hub console at https://console.aws.amazon.com/securityhub/.\n2. On the top right of the console, select the target Region.\n3. If presented with the Security Hub > Summary page then Security Hub is set-up for the selected region.\n4. If presented with Setup Security Hub or Get Started With Security Hub - follow the online instructions.\n5. Repeat steps 2 to 4 for each region.",
+          "RemediationProcedure": "To grant the permissions required to enable Security Hub, attach the Security Hub managed policy AWSSecurityHubFullAccess to an IAM user, group, or role.  Enabling Security Hub  **From Console:**  1. Use the credentials of the IAM identity to sign in to the Security Hub console. 2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub. 3. On the welcome page, Security standards list the security standards that Security Hub supports. 4. Choose Enable Security Hub.  **From Command Line:**  1. Run the enable-security-hub command. To enable the default standards, include `--enable-default-standards`. ``` aws securityhub enable-security-hub --enable-default-standards ```  2. To enable the security hub without the default standards, include `--no-enable-default-standards`. ``` aws securityhub enable-security-hub --no-enable-default-standards ```",
+          "AuditProcedure": "The process to evaluate AWS Security Hub configuration per region   **From Console:**  1. Sign in to the AWS Management Console and open the AWS Security Hub console at https://console.aws.amazon.com/securityhub/. 2. On the top right of the console, select the target Region. 3. If presented with the Security Hub > Summary page then Security Hub is set-up for the selected region. 4. If presented with Setup Security Hub or Get Started With Security Hub - follow the online instructions. 5. Repeat steps 2 to 4 for each region.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-get-started.html:https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-enable.html#securityhub-enable-api:https://awscli.amazonaws.com/v2/documentation/api/latest/reference/securityhub/enable-security-hub.html"
         }
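Since the impact statement above recommends Security Hub in every region, one possible way to apply the CLI remediation per region is the loop below. This is a sketch under two assumptions: the default standards are wanted everywhere, and regions where Security Hub is already enabled are allowed to fail with a conflict error.

```
#!/usr/bin/env bash
# Enable Security Hub with default standards in every region enabled for the account.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  aws securityhub enable-security-hub --enable-default-standards --region "${region}" \
    || echo "Security Hub may already be enabled in ${region}"
done
```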
@@ -1090,9 +1090,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for console logins that are not protected by multi-factor authentication (MFA).",
           "RationaleStatement": "Monitoring for single-factor console logins will increase visibility into accounts that are not protected by MFA.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Management Console sign-in without MFA and the `` taken from audit step 1.\n\nUse Command: \n\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }'\n```\n\nOr (To reduce false positives incase Single Sign-On (SSO) is used in organization):\n\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all `CloudTrails`:\n\n```\naws cloudtrail describe-trails\n```\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region `CloudTrail` is active\n\n```\naws cloudtrail get-trail-status --name \n```\n\nEnsure in the output that `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region 'Cloudtrail' captures all Management Events\n\n```\naws cloudtrail get-event-selectors --trail-name \n```\n\nEnsure in the output there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }\"\n```\n\nOr (To reduce false positives incase Single Sign-On (SSO) is used in organization):\n\n```\n\"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored\n-Filter pattern set to `{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\"}` reduces false alarms raised when user logs in via SSO account.",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Management Console sign-in without MFA and the `` taken from audit step 1.  Use Command:   ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }' ```  Or (To reduce false positives incase Single Sign-On (SSO) is used in organization):  ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all `CloudTrails`:  ``` aws cloudtrail describe-trails ```  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region `CloudTrail` is active  ``` aws cloudtrail get-trail-status --name  ```  Ensure in the output that `IsLogging` is set to `TRUE`  - Ensure identified Multi-region 'Cloudtrail' captures all Management Events  ``` aws cloudtrail get-event-selectors --trail-name  ```  Ensure in the output there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") }\" ```  Or (To reduce false positives incase Single Sign-On (SSO) is used in organization):  ``` \"filterPattern\": \"{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\") }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored -Filter pattern set to `{ ($.eventName = \"ConsoleLogin\") && ($.additionalEventData.MFAUsed != \"Yes\") && ($.userIdentity.type = \"IAMUser\") && ($.responseElements.ConsoleLogin = \"Success\"}` reduces false alarms raised when user logs in via SSO account.",
           "References": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/viewing_metrics_with_cloudwatch.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
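A sketch of remediation step 1 above using the SSO-aware variant of the filter pattern, which cuts false positives from federated sign-ins; the log group and the filter/metric names are placeholder assumptions.

```
#!/usr/bin/env bash
aws logs put-metric-filter \
  --log-group-name "CloudTrail/DefaultLogGroup" \
  --filter-name "console-signin-without-mfa" \
  --metric-transformations metricName=ConsoleSigninWithoutMFA,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }'
```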
@@ -1111,9 +1111,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for 'root' login attempts.",
           "RationaleStatement": "Monitoring for 'root' account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce the use of it.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for 'Root' account usage and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails:\n\n`aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "**Configuring log metric filter and alarm on Multi-region (global) CloudTrail**\n\n- ensures that activities from all regions (used as well as unused) are monitored\n\n- ensures that activities on all supported global services are monitored\n\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for 'Root' account usage and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails:  `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "**Configuring log metric filter and alarm on Multi-region (global) CloudTrail**  - ensures that activities from all regions (used as well as unused) are monitored  - ensures that activities on all supported global services are monitored  - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
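For reference, the remediation commands above map one-to-one onto boto3 calls. The sketch below is a hypothetical illustration only (not part of the benchmark JSON): the log group name, filter/metric/alarm/topic names, and the e-mail endpoint are placeholders to substitute with your own values.

```python
# Hypothetical remediation sketch for the 'root' account usage alarm (illustration only).
import boto3

LOG_GROUP_NAME = "CloudTrail/DefaultLogGroup"   # taken from audit step 1 (placeholder)
METRIC_NAME = "RootAccountUsage"                # placeholder
NAMESPACE = "CISBenchmark"
FILTER_PATTERN = (
    '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS '
    '&& $.eventType != "AwsServiceEvent" }'
)

logs = boto3.client("logs")
sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# 1. Metric filter on the CloudTrail log group.
logs.put_metric_filter(
    logGroupName=LOG_GROUP_NAME,
    filterName="cis-root-account-usage",
    filterPattern=FILTER_PATTERN,
    metricTransformations=[
        {"metricName": METRIC_NAME, "metricNamespace": NAMESPACE, "metricValue": "1"}
    ],
)

# 2./3. SNS topic plus a subscriber; the e-mail subscription stays pending until confirmed.
topic_arn = sns.create_topic(Name="cis-benchmark-alarms")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="security-team@example.com")

# 4. Alarm that fires on a single matching event within a 5-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="cis-root-account-usage",
    MetricName=METRIC_NAME,
    Namespace=NAMESPACE,
    Statistic="Sum",
    Period=300,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    EvaluationPeriods=1,
    AlarmActions=[topic_arn],
)
```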
@@ -1132,9 +1132,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established changes made to Identity and Access Management (IAM) policies.",
           "RationaleStatement": "Monitoring changes to IAM policies will help ensure authentication and authorization controls remain intact.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for IAM policy changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails:\n\n`aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for IAM policy changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name `` --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails:  `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1153,9 +1153,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations.",
           "RationaleStatement": "Monitoring changes to CloudTrail's configuration will help ensure sustained visibility to activities performed in the AWS account.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for cloudtrail configuration changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n\n3. Ensure the output from the above command contains the following:\n\n```\n\"filterPattern\": \"{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for cloudtrail configuration changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``:  ``` aws logs describe-metric-filters --log-group-name \"\" ```  3. Ensure the output from the above command contains the following:  ``` \"filterPattern\": \"{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.  ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ```  6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic  ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN.  ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1174,9 +1174,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for failed console authentication attempts.",
           "RationaleStatement": "Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP, that can be used in other event correlation.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS management Console Login Failures and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }'\n```\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }\"\n```\n\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS management Console Login Failures and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }' ``` **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventName = ConsoleLogin) && ($.errorMessage = \"Failed authentication\") }\" ```  4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
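Before a filter is deployed, the prescribed pattern can be sanity-checked against a representative CloudTrail event with the CloudWatch Logs `TestMetricFilter` API. The snippet below is a minimal hypothetical sketch using the failed-console-login pattern from the check above; the sample event is trimmed to just the fields the pattern references.

```python
# Hypothetical pattern sanity check (illustration only).
import json
import boto3

logs = boto3.client("logs")

pattern = '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }'
sample_event = json.dumps({
    "eventName": "ConsoleLogin",
    "errorMessage": "Failed authentication",
})

# TestMetricFilter evaluates the pattern against sample log messages without
# touching any log group; an empty "matches" list means the pattern is wrong.
result = logs.test_metric_filter(filterPattern=pattern, logEventMessages=[sample_event])
assert result["matches"], "pattern did not match the sample failed-login event"
```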
@@ -1195,9 +1195,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for customer created CMKs which have changed state to disabled or scheduled deletion.",
           "RationaleStatement": "Data encrypted with disabled or deleted keys will no longer be accessible.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for disabled or scheduled for deletion CMK's and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }'\n```\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for disabled or scheduled for deletion CMK's and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }' ``` **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ``` **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ``` **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
@@ -1216,9 +1216,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for changes to S3 bucket policies.",
           "RationaleStatement": "Monitoring changes to S3 bucket policies may reduce time to detect and correct permissive policies on sensitive S3 buckets.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for S3 bucket policy changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to the topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for S3 bucket policy changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to the topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
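Because the remediation steps in these checks differ only in the filter pattern and metric name, they can also be driven from a small table once the SNS topic exists. The sketch below is a hypothetical illustration only (not part of the benchmark JSON): the log group name and topic ARN are placeholders, and the patterns are copied from the corresponding audit procedures above.

```python
# Hypothetical table-driven deployment of several prescribed filters and alarms (illustration only).
import boto3

LOG_GROUP_NAME = "CloudTrail/DefaultLogGroup"                          # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:cis-benchmark-alarms"  # placeholder

FILTER_PATTERNS = {
    "CloudTrailConfigChanges": (
        "{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || "
        "($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }"
    ),
    "ConsoleAuthFailures": '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }',
    "CMKDisabledOrScheduledDeletion": (
        "{($.eventSource = kms.amazonaws.com) && "
        "(($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }"
    ),
}

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

for metric_name, pattern in FILTER_PATTERNS.items():
    # One metric filter and one alarm per prescribed pattern.
    logs.put_metric_filter(
        logGroupName=LOG_GROUP_NAME,
        filterName=f"cis-{metric_name}",
        filterPattern=pattern,
        metricTransformations=[
            {"metricName": metric_name, "metricNamespace": "CISBenchmark", "metricValue": "1"}
        ],
    )
    cloudwatch.put_metric_alarm(
        AlarmName=f"cis-{metric_name}",
        MetricName=metric_name,
        Namespace="CISBenchmark",
        Statistic="Sum",
        Period=300,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        EvaluationPeriods=1,
        AlarmActions=[TOPIC_ARN],
    )
```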
@@ -1237,9 +1237,9 @@
           "Description": "Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations.",
           "RationaleStatement": "Monitoring changes to AWS Config configuration will help ensure sustained visibility of configuration items within the AWS account.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:\n\n1. Create a metric filter based on filter pattern provided which checks for AWS Configuration changes and the `` taken from audit step 1.\n```\naws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }'\n```\n\n**Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.\n\n2. Create an SNS topic that the alarm will notify\n```\naws sns create-topic --name \n```\n\n**Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.\n\n3. Create an SNS subscription to topic created in step 2\n```\naws sns subscribe --topic-arn  --protocol  --notification-endpoint \n```\n\n**Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.\n\n4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2\n```\naws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions \n```",
-          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:\n\n1. Identify the log group name configured for use with active multi-region CloudTrail:\n\n- List all CloudTrails: `aws cloudtrail describe-trails`\n\n- Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`\n\n- From value associated with CloudWatchLogsLogGroupArn note ``\n\nExample: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`\n\n- Ensure Identified Multi region CloudTrail is active\n\n`aws cloudtrail get-trail-status --name `\n\nensure `IsLogging` is set to `TRUE`\n\n- Ensure identified Multi-region Cloudtrail captures all Management Events\n\n`aws cloudtrail get-event-selectors --trail-name `\n\nEnsure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`\n\n2. Get a list of all associated metric filters for this ``:\n```\naws logs describe-metric-filters --log-group-name \"\"\n```\n3. Ensure the output from the above command contains the following:\n```\n\"filterPattern\": \"{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }\"\n```\n4. Note the `` value associated with the `filterPattern` found in step 3.\n\n5. Get a list of CloudWatch alarms and filter on the `` captured in step 4.\n```\naws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]'\n```\n6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.\n\n7. Ensure there is at least one active subscriber to the SNS topic\n```\naws sns list-subscriptions-by-topic --topic-arn  \n```\nat least one subscription should have \"SubscriptionArn\" with valid aws ARN.\n```\nExample of valid \"SubscriptionArn\": \"arn:aws:sns::::\"\n```",
-          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail\n- ensures that activities from all regions (used as well as unused) are monitored\n- ensures that activities on all supported global services are monitored\n- ensures that all management events across all regions are monitored",
+          "RemediationProcedure": "Perform the following to setup the metric filter, alarm, SNS topic, and subscription:  1. Create a metric filter based on filter pattern provided which checks for AWS Configuration changes and the `` taken from audit step 1. ``` aws logs put-metric-filter --log-group-name  --filter-name `` --metric-transformations metricName= `` ,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }' ```  **Note**: You can choose your own metricName and metricNamespace strings. Using the same metricNamespace for all Foundations Benchmark metrics will group them together.  2. Create an SNS topic that the alarm will notify ``` aws sns create-topic --name  ```  **Note**: you can execute this command once and then re-use the same topic for all monitoring alarms.  3. Create an SNS subscription to topic created in step 2 ``` aws sns subscribe --topic-arn  --protocol  --notification-endpoint  ```  **Note**: you can execute this command once and then re-use the SNS subscription for all monitoring alarms.  4. Create an alarm that is associated with the CloudWatch Logs Metric Filter created in step 1 and an SNS topic created in step 2 ``` aws cloudwatch put-metric-alarm --alarm-name `` --metric-name `` --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --namespace 'CISBenchmark' --alarm-actions  ```",
+          "AuditProcedure": "Perform the following to ensure that there is at least one active multi-region CloudTrail with prescribed metric filters and alarms configured:  1. Identify the log group name configured for use with active multi-region CloudTrail:  - List all CloudTrails: `aws cloudtrail describe-trails`  - Identify Multi region Cloudtrails: `Trails with \"IsMultiRegionTrail\" set to true`  - From value associated with CloudWatchLogsLogGroupArn note ``  Example: for CloudWatchLogsLogGroupArn that looks like `arn:aws:logs:::log-group:NewGroup:*`, `` would be `NewGroup`  - Ensure Identified Multi region CloudTrail is active  `aws cloudtrail get-trail-status --name `  ensure `IsLogging` is set to `TRUE`  - Ensure identified Multi-region Cloudtrail captures all Management Events  `aws cloudtrail get-event-selectors --trail-name `  Ensure there is at least one Event Selector for a Trail with `IncludeManagementEvents` set to `true` and `ReadWriteType` set to `All`  2. Get a list of all associated metric filters for this ``: ``` aws logs describe-metric-filters --log-group-name \"\" ``` 3. Ensure the output from the above command contains the following: ``` \"filterPattern\": \"{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }\" ``` 4. Note the `` value associated with the `filterPattern` found in step 3.  5. Get a list of CloudWatch alarms and filter on the `` captured in step 4. ``` aws cloudwatch describe-alarms --query 'MetricAlarms[?MetricName== ``]' ``` 6. Note the `AlarmActions` value - this will provide the SNS topic ARN value.  7. Ensure there is at least one active subscriber to the SNS topic ``` aws sns list-subscriptions-by-topic --topic-arn   ``` at least one subscription should have \"SubscriptionArn\" with valid aws ARN. ``` Example of valid \"SubscriptionArn\": \"arn:aws:sns::::\" ```",
+          "AdditionalInformation": "Configuring log metric filter and alarm on Multi-region (global) CloudTrail - ensures that activities from all regions (used as well as unused) are monitored - ensures that activities on all supported global services are monitored - ensures that all management events across all regions are monitored",
           "References": "https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html:https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html:https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html"
         }
       ]
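The flattened RemediationProcedure above lost its placeholder tokens (log group, filter, metric, topic and alarm names). A minimal end-to-end sketch of the same four steps, using hypothetical names, region, and account number (`CloudTrail/DefaultLogGroup`, `ConfigChangesFilter`, `ConfigChangesMetric`, `cis-alarms`, `111122223333`) that would need to be replaced with real values:

```bash
# Step 1: metric filter for AWS Config changes on the CloudTrail log group (hypothetical names throughout)
aws logs put-metric-filter \
  --log-group-name CloudTrail/DefaultLogGroup \
  --filter-name ConfigChangesFilter \
  --metric-transformations metricName=ConfigChangesMetric,metricNamespace=CISBenchmark,metricValue=1 \
  --filter-pattern '{ ($.eventSource = config.amazonaws.com) && (($.eventName=StopConfigurationRecorder)||($.eventName=DeleteDeliveryChannel)||($.eventName=PutDeliveryChannel)||($.eventName=PutConfigurationRecorder)) }'

# Steps 2 and 3: SNS topic and a subscription the alarm will notify
aws sns create-topic --name cis-alarms
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:111122223333:cis-alarms \
  --protocol email --notification-endpoint security-team@example.com

# Step 4: alarm tied to the metric filter and topic above
aws cloudwatch put-metric-alarm --alarm-name ConfigChangesAlarm \
  --metric-name ConfigChangesMetric --namespace CISBenchmark \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:cis-alarms
```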
@@ -1260,8 +1260,8 @@
           "Description": "The Network Access Control List (NACL) function provide stateless filtering of ingress and egress network traffic to AWS resources. It is recommended that no NACL allows unrestricted ingress access to remote server administration ports, such as SSH to port `22` and RDP to port `3389`.",
           "RationaleStatement": "Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Console:**\n\nPerform the following:\n1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home\n2. In the left pane, click `Network ACLs`\n3. For each network ACL to remediate, perform the following:\n - Select the network ACL\n - Click the `Inbound Rules` tab\n - Click `Edit inbound rules`\n - Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule\n - Click `Save`",
-          "AuditProcedure": "**From Console:**\n\nPerform the following to determine if the account is configured as prescribed:\n1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home\n2. In the left pane, click `Network ACLs`\n3. For each network ACL, perform the following:\n - Select the network ACL\n - Click the `Inbound Rules` tab\n - Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` and shows `ALLOW`\n\n**Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports",
+          "RemediationProcedure": "**From Console:**  Perform the following: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click `Network ACLs` 3. For each network ACL to remediate, perform the following:  - Select the network ACL  - Click the `Inbound Rules` tab  - Click `Edit inbound rules`  - Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule  - Click `Save`",
+          "AuditProcedure": "**From Console:**  Perform the following to determine if the account is configured as prescribed: 1. Login to the AWS Management Console at https://console.aws.amazon.com/vpc/home 2. In the left pane, click `Network ACLs` 3. For each network ACL, perform the following:  - Select the network ACL  - Click the `Inbound Rules` tab  - Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` and shows `ALLOW`  **Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html:https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html#VPC_Security_Comparison"
         }
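The NACL procedure above is console-only. As a hedged CLI spot-check (not part of the benchmark text), one could first list ingress ALLOW entries open to 0.0.0.0/0 and then review their port ranges manually:

```bash
# Coarse first pass: ingress ALLOW entries open to the world; port ranges still need manual review
aws ec2 describe-network-acls \
  --query 'NetworkAcls[].{Id: NetworkAclId, OpenIngress: Entries[?Egress==`false` && RuleAction==`allow` && CidrBlock==`0.0.0.0/0`]}'
```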
@@ -1283,8 +1283,8 @@
           "Description": "Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port `22` and RDP to port `3389`.",
           "RationaleStatement": "Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.",
           "ImpactStatement": "When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the 0.0.0.0/0 inbound rule.",
-          "RemediationProcedure": "Perform the following to implement the prescribed state:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Click the `Edit inbound rules` button\n4. Identify the rules to be edited or removed\n5. Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule\n6. Click `Save rules`",
-          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0` \n\n**Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
+          "RemediationProcedure": "Perform the following to implement the prescribed state:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Click the `Edit inbound rules` button 4. Identify the rules to be edited or removed 5. Either A) update the Source field to a range other than 0.0.0.0/0, or, B) Click `Delete` to remove the offending inbound rule 6. Click `Save rules`",
+          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `0.0.0.0/0`   **Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#deleting-security-group-rule"
         }
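A possible CLI complement to the console steps above, with the caveat that describe-security-groups filters match at the group level (the three filters may be satisfied by different rules), so the flagged groups still need manual confirmation. The IPv6 variant in the next recommendation works the same way with `Name=ip-permission.ipv6-cidr,Values=::/0`. The group ID below is hypothetical:

```bash
# Coarse search for groups referencing port 22 and 0.0.0.0/0 (verify the matching rule manually)
aws ec2 describe-security-groups \
  --filters Name=ip-permission.from-port,Values=22 \
            Name=ip-permission.to-port,Values=22 \
            Name=ip-permission.cidr,Values=0.0.0.0/0 \
  --query 'SecurityGroups[].{Id:GroupId,Name:GroupName}'

# Remove an offending rule once an alternative admin access path is in place
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
```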
@@ -1306,8 +1306,8 @@
           "Description": "Security groups provide stateful filtering of ingress and egress network traffic to AWS resources. It is recommended that no security group allows unrestricted ingress access to remote server administration ports, such as SSH to port `22` and RDP to port `3389`.",
           "RationaleStatement": "Public access to remote server administration ports, such as 22 and 3389, increases resource attack surface and unnecessarily raises the risk of resource compromise.",
           "ImpactStatement": "When updating an existing environment, ensure that administrators have access to remote server administration ports through another mechanism before removing access by deleting the ::/0 inbound rule.",
-          "RemediationProcedure": "Perform the following to implement the prescribed state:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Click the `Edit inbound rules` button\n4. Identify the rules to be edited or removed\n5. Either A) update the Source field to a range other than ::/0, or, B) Click `Delete` to remove the offending inbound rule\n6. Click `Save rules`",
-          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. In the left pane, click `Security Groups` \n3. For each security group, perform the following:\n1. Select the security group\n2. Click the `Inbound Rules` tab\n3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `::/0` \n\n**Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
+          "RemediationProcedure": "Perform the following to implement the prescribed state:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Click the `Edit inbound rules` button 4. Identify the rules to be edited or removed 5. Either A) update the Source field to a range other than ::/0, or, B) Click `Delete` to remove the offending inbound rule 6. Click `Save rules`",
+          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. In the left pane, click `Security Groups`  3. For each security group, perform the following: 1. Select the security group 2. Click the `Inbound Rules` tab 3. Ensure no rule exists that has a port range that includes port `22`, `3389`, or other remote server administration ports for your environment and has a `Source` of `::/0`   **Note:** A Port value of `ALL` or a port range such as `0-1024` are inclusive of port `22`, `3389`, and other remote server administration ports.",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#deleting-security-group-rule"
         }
@@ -1324,11 +1324,11 @@
           "Section": "5. Networking",
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
-          "Description": "A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic.\n\nThe default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation.\n\n**NOTE:** When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups.",
+          "Description": "A VPC comes with a default security group whose initial settings deny all inbound traffic, allow all outbound traffic, and allow all traffic between instances assigned to the security group. If you don't specify a security group when you launch an instance, the instance is automatically assigned to this default security group. Security groups provide stateful filtering of ingress/egress network traffic to AWS resources. It is recommended that the default security group restrict all traffic.  The default VPC in every region should have its default security group updated to comply. Any newly created VPCs will automatically contain a default security group that will need remediation to comply with this recommendation.  **NOTE:** When implementing this recommendation, VPC flow logging is invaluable in determining the least privilege port access required by systems to work properly because it can log all packet acceptances and rejections occurring under the current security groups. This dramatically reduces the primary barrier to least privilege engineering - discovering the minimum ports required by systems in the environment. Even if the VPC flow logging recommendation in this benchmark is not adopted as a permanent security measure, it should be used during any period of discovery and engineering for least privileged security groups.",
           "RationaleStatement": "Configuring all VPC default security groups to restrict all traffic will encourage least privilege security group development and mindful placement of AWS resources into security groups which will in-turn reduce the exposure of those resources.",
           "ImpactStatement": "Implementing this recommendation in an existing VPC containing operating resources requires extremely careful migration planning as the default security groups are likely to be enabling many ports that are unknown. Enabling VPC flow logging (of accepts) in an existing environment that is known to be breach free will reveal the current pattern of ports being used for each instance to communicate successfully.",
-          "RemediationProcedure": "Security Group Members\n\nPerform the following to implement the prescribed state:\n\n1. Identify AWS resources that exist within the default security group\n2. Create a set of least privilege security groups for those resources\n3. Place the resources in those security groups\n4. Remove the resources noted in #1 from the default security group\n\nSecurity Group State\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. For each default security group, perform the following:\n1. Select the `default` security group\n2. Click the `Inbound Rules` tab\n3. Remove any inbound rules\n4. Click the `Outbound Rules` tab\n5. Remove any Outbound rules\n\nRecommended:\n\nIAM groups allow you to edit the \"name\" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to \"DO NOT USE. DO NOT ADD RULES\"",
-          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:\n\nSecurity Group State\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. For each default security group, perform the following:\n1. Select the `default` security group\n2. Click the `Inbound Rules` tab\n3. Ensure no rule exist\n4. Click the `Outbound Rules` tab\n5. Ensure no rules exist\n\nSecurity Group Members\n\n1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home)\n2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region:\n3. In the left pane, click `Security Groups` \n4. Copy the id of the default security group.\n5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home\n6. In the filter column type 'Security Group ID : < security group id from #4 >'",
+          "RemediationProcedure": "Security Group Members  Perform the following to implement the prescribed state:  1. Identify AWS resources that exist within the default security group 2. Create a set of least privilege security groups for those resources 3. Place the resources in those security groups 4. Remove the resources noted in #1 from the default security group  Security Group State  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. For each default security group, perform the following: 1. Select the `default` security group 2. Click the `Inbound Rules` tab 3. Remove any inbound rules 4. Click the `Outbound Rules` tab 5. Remove any Outbound rules  Recommended:  IAM groups allow you to edit the \"name\" field. After remediating default groups rules for all VPCs in all regions, edit this field to add text similar to \"DO NOT USE. DO NOT ADD RULES\"",
+          "AuditProcedure": "Perform the following to determine if the account is configured as prescribed:  Security Group State  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. For each default security group, perform the following: 1. Select the `default` security group 2. Click the `Inbound Rules` tab 3. Ensure no rule exist 4. Click the `Outbound Rules` tab 5. Ensure no rules exist  Security Group Members  1. Login to the AWS Management Console at [https://console.aws.amazon.com/vpc/home](https://console.aws.amazon.com/vpc/home) 2. Repeat the next steps for all default groups in all VPCs - including the default VPC in each AWS region: 3. In the left pane, click `Security Groups`  4. Copy the id of the default security group. 5. Change to the EC2 Management Console at https://console.aws.amazon.com/ec2/v2/home 6. In the filter column type 'Security Group ID : < security group id from #4 >'",
           "AdditionalInformation": "",
           "References": "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#default-security-group"
         }
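For the default-security-group check above, a hedged audit helper (run once per region) that lists default groups still carrying any inbound or outbound rules:

```bash
# Default security groups that still carry rules; an empty result means the region is compliant
aws ec2 describe-security-groups \
  --filters Name=group-name,Values=default \
  --query 'SecurityGroups[?length(IpPermissions) > `0` || length(IpPermissionsEgress) > `0`].GroupId'
```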
@@ -1348,8 +1348,8 @@
           "Description": "Once a VPC peering connection is established, routing tables must be updated to establish any connections between the peered VPCs. These routes can be as specific as desired - even peering a VPC to only a single host on the other side of the connection.",
           "RationaleStatement": "Being highly selective in peering routing tables is a very effective way of minimizing the impact of breach as resources outside of these routes are inaccessible to the peered VPC.",
           "ImpactStatement": "",
-          "RemediationProcedure": "Remove and add route table entries to ensure that the least number of subnets or hosts as is required to accomplish the purpose for peering are routable.\n\n**From Command Line:**\n\n1. For each __ containing routes non compliant with your routing policy (which grants more than desired \"least access\"), delete the non compliant route:\n```\naws ec2 delete-route --route-table-id  --destination-cidr-block \n```\n 2. Create a new compliant route:\n```\naws ec2 create-route --route-table-id  --destination-cidr-block  --vpc-peering-connection-id \n```",
-          "AuditProcedure": "Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs.\n\n**From Command Line:**\n\n1. List all the route tables from a VPC and check if \"GatewayId\" is pointing to a __ (e.g. pcx-1a2b3c4d) and if \"DestinationCidrBlock\" is as specific as desired.\n```\naws ec2 describe-route-tables --filter \"Name=vpc-id,Values=\" --query \"RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}\"\n```",
+          "RemediationProcedure": "Remove and add route table entries to ensure that the least number of subnets or hosts as is required to accomplish the purpose for peering are routable.  **From Command Line:**  1. For each __ containing routes non compliant with your routing policy (which grants more than desired \"least access\"), delete the non compliant route: ``` aws ec2 delete-route --route-table-id  --destination-cidr-block  ```  2. Create a new compliant route: ``` aws ec2 create-route --route-table-id  --destination-cidr-block  --vpc-peering-connection-id  ```",
+          "AuditProcedure": "Review routing tables of peered VPCs for whether they route all subnets of each VPC and whether that is necessary to accomplish the intended purposes for peering the VPCs.  **From Command Line:**  1. List all the route tables from a VPC and check if \"GatewayId\" is pointing to a __ (e.g. pcx-1a2b3c4d) and if \"DestinationCidrBlock\" is as specific as desired. ``` aws ec2 describe-route-tables --filter \"Name=vpc-id,Values=\" --query \"RouteTables[*].{RouteTableId:RouteTableId, VpcId:VpcId, Routes:Routes, AssociatedSubnets:Associations[*].SubnetId}\" ```",
           "AdditionalInformation": "If an organization has AWS transit gateway implemented in their VPC architecture they should look to apply the recommendation above for \"least access\" routing architecture at the AWS transit gateway level in combination with what must be implemented at the standard VPC route table. More specifically, to route traffic between two or more VPCs via a transit gateway VPCs must have an attachment to a transit gateway route table as well as a route, therefore to avoid routing traffic between VPCs an attachment to the transit gateway route table should only be added where there is an intention to route traffic between the VPCs. As transit gateways are able to host multiple route tables it is possible to group VPCs by attaching them to a common route table.",
           "References": "https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-configurations-partial-access.html:https://docs.aws.amazon.com/cli/latest/reference/ec2/create-vpc-peering-connection.html"
         }
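The delete-route/create-route commands in the flattened text above lost their arguments. A sketch with hypothetical route table, CIDR, and peering connection values:

```bash
# Replace a broad peering route with a narrower one (all IDs and CIDRs are hypothetical)
aws ec2 delete-route --route-table-id rtb-0a1b2c3d4e5f67890 --destination-cidr-block 10.0.0.0/16
aws ec2 create-route --route-table-id rtb-0a1b2c3d4e5f67890 --destination-cidr-block 10.0.5.0/24 \
  --vpc-peering-connection-id pcx-1a2b3c4d
```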
@@ -1369,8 +1369,8 @@
           "Description": "When enabling the Metadata Service on AWS EC2 instances, users have the option of using either Instance Metadata Service Version 1 (IMDSv1; a request/response method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method).",
           "RationaleStatement": "Allowing Version 1 of the service may open EC2 instances to Server-Side Request Forgery (SSRF) attacks, so Amazon recommends utilizing Version 2 for better instance security.",
           "ImpactStatement": "",
-          "RemediationProcedure": "From Console:\n 1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/\n2. Under the Instances menu, select Instances.\n 3. For each Instance, select the instance, then choose Actions > Modify instance metadata options.\n 4. If the Instance metadata service is enabled, set IMDSv2 to Required.\n\n From Command Line:\n ```\n aws ec2 modify-instance-metadata-options --instance-id  --http-tokens required\n ```\n",
-          "AuditProcedure": "From Console:\n 1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/\n 2. Under the Instances menu, select Instances.\n 3. For each Instance, select the instance, then choose Actions > Modify instance metadata options.\n 4. If the Instance metadata service is enabled, verify whether IMDSv2 is set to required.\n\n From Command Line:\n 1. Use the describe-instances CLI command\n 2. Ensure for all ec2 instances that the metadata-options.http-tokens setting is set to required.\n 3. Repeat for all active regions.\n ```\n aws ec2 describe-instances --filters \"\"Name=metadata-options.http-tokens\",\"Values=optional\" \"\"Name=metadata-options.state\"\",\"\"Values=applied\"\" --query \"\"Reservations[*].Instances[*].\"\"\n```\n",
+          "RemediationProcedure": "From Console:  1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/ 2. Under the Instances menu, select Instances.  3. For each Instance, select the instance, then choose Actions > Modify instance metadata options.  4. If the Instance metadata service is enabled, set IMDSv2 to Required.   From Command Line:  ```  aws ec2 modify-instance-metadata-options --instance-id  --http-tokens required  ``` ",
+          "AuditProcedure": "From Console:  1. Login to AWS Management Console and open the Amazon EC2 console using https://console.aws.amazon.com/ec2/  2. Under the Instances menu, select Instances.  3. For each Instance, select the instance, then choose Actions > Modify instance metadata options.  4. If the Instance metadata service is enabled, verify whether IMDSv2 is set to required.   From Command Line:  1. Use the describe-instances CLI command  2. Ensure for all ec2 instances that the metadata-options.http-tokens setting is set to required.  3. Repeat for all active regions.  ```  aws ec2 describe-instances --filters \"\"Name=metadata-options.http-tokens\",\"Values=optional\" \"\"Name=metadata-options.state\"\",\"\"Values=applied\"\" --query \"\"Reservations[*].Instances[*].\"\" ``` ",
           "AdditionalInformation": "",
           "References": "https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/:https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html"
         }
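The describe-instances invocation embedded in the IMDSv2 audit text carries doubled quotation marks from the source benchmark and a truncated --query expression. A cleaned-up sketch, assuming the intent is to list the IDs of instances still accepting IMDSv1; the instance ID in the remediation step is hypothetical:

```bash
# Instances whose metadata service still accepts IMDSv1 (query field assumed to be InstanceId)
aws ec2 describe-instances \
  --filters "Name=metadata-options.http-tokens,Values=optional" \
            "Name=metadata-options.state,Values=applied" \
  --query 'Reservations[*].Instances[*].InstanceId' --output text

# Enforce IMDSv2 on a single instance
aws ec2 modify-instance-metadata-options --instance-id i-0123456789abcdef0 --http-tokens required
```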
diff --git a/prowler/compliance/gcp/cis_2.0_gcp.json b/prowler/compliance/gcp/cis_2.0_gcp.json
index bf11b8f4..694539e3 100644
--- a/prowler/compliance/gcp/cis_2.0_gcp.json
+++ b/prowler/compliance/gcp/cis_2.0_gcp.json
@@ -16,8 +16,8 @@
           "Description": "Use corporate login credentials instead of personal accounts, such as Gmail accounts.",
           "RationaleStatement": "It is recommended fully-managed corporate Google accounts be used for increased visibility, auditing, and controlling access to Cloud Platform resources. Email accounts based outside of the user's organization, such as personal accounts, should not be used for business purposes.",
           "ImpactStatement": "There will be increased overhead as maintaining accounts will now be required. For smaller organizations, this will not be an issue, but will balloon with size.",
-          "RemediationProcedure": "Follow the documentation and setup corporate login accounts.\n\n**Prevention:**\nTo ensure that no email addresses outside the organization can be granted IAM permissions to its Google Cloud projects, folders or organization, turn on the Organization Policy for `Domain Restricted Sharing`. Learn more at: https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains(https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains)",
-          "AuditProcedure": "For each Google Cloud Platform project, list the accounts that have been granted access to that project:\n\n**From Google Cloud CLI**\n\n```\ngcloud projects get-iam-policy PROJECT_ID\n```\n\nAlso list the accounts added on each folder: \n\n```\ngcloud resource-manager folders get-iam-policy FOLDER_ID \n```\n\nAnd list your organization's IAM policy: \n\n```\ngcloud organizations get-iam-policy ORGANIZATION_ID\n```\n\nNo email accounts outside the organization domain should be granted permissions in the IAM policies. This excludes Google-owned service accounts.",
+          "RemediationProcedure": "Follow the documentation and setup corporate login accounts.  **Prevention:** To ensure that no email addresses outside the organization can be granted IAM permissions to its Google Cloud projects, folders or organization, turn on the Organization Policy for `Domain Restricted Sharing`. Learn more at: https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains(https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains)",
+          "AuditProcedure": "For each Google Cloud Platform project, list the accounts that have been granted access to that project:  **From Google Cloud CLI**  ``` gcloud projects get-iam-policy PROJECT_ID ```  Also list the accounts added on each folder:   ``` gcloud resource-manager folders get-iam-policy FOLDER_ID  ```  And list your organization's IAM policy:   ``` gcloud organizations get-iam-policy ORGANIZATION_ID ```  No email accounts outside the organization domain should be granted permissions in the IAM policies. This excludes Google-owned service accounts.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#manage-identities:https://support.google.com/work/android/answer/6371476:https://cloud.google.com/sdk/gcloud/reference/organizations/get-iam-policy:https://cloud.google.com/sdk/gcloud/reference/beta/resource-manager/folders/get-iam-policy:https://cloud.google.com/sdk/gcloud/reference/projects/get-iam-policy:https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints:https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains"
         }
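Beyond the raw get-iam-policy dumps listed above, a hedged helper that flattens a project policy to one member per row so out-of-domain accounts stand out during review (PROJECT_ID as used in the procedure):

```bash
# One binding member per row; any member outside the corporate domain (other than
# Google-managed service accounts) is a finding
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --format="table(bindings.role, bindings.members)"
```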
@@ -35,8 +35,8 @@
           "Description": "Setup multi-factor authentication for Google Cloud Platform accounts.",
           "RationaleStatement": "Multi-factor authentication requires more than one mechanism to authenticate a user. This secures user logins from attackers exploiting stolen or weak credentials.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Google Cloud Console**\n\nFor each Google Cloud Platform project:\n\n1. Identify non-service accounts.\n\n1. Setup multi-factor authentication for each account.",
-          "AuditProcedure": "**From Google Cloud Console**\n\nFor each Google Cloud Platform project, folder, or organization:\n\n1. Identify non-service accounts.\n\n1. Manually verify that multi-factor authentication for each account is set.",
+          "RemediationProcedure": "**From Google Cloud Console**  For each Google Cloud Platform project:  1. Identify non-service accounts.  1. Setup multi-factor authentication for each account.",
+          "AuditProcedure": "**From Google Cloud Console**  For each Google Cloud Platform project, folder, or organization:  1. Identify non-service accounts.  1. Manually verify that multi-factor authentication for each account is set.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/solutions/securing-gcp-account-u2f:https://support.google.com/accounts/answer/185839"
         }
@@ -54,8 +54,8 @@
           "Description": "Setup Security Key Enforcement for Google Cloud Platform admin accounts.",
           "RationaleStatement": "Google Cloud Platform users with Organization Administrator roles have the highest level of privilege in the organization. These accounts should be protected with the strongest form of two-factor authentication: Security Key Enforcement. Ensure that admins use Security Keys to log in instead of weaker second factors like SMS or one-time passwords (OTP). Security Keys are actual physical keys used to access Google Organization Administrator Accounts. They send an encrypted signature rather than a code, ensuring that logins cannot be phished.",
           "ImpactStatement": "If an organization administrator loses access to their security key, the user could lose access to their account. For this reason, it is important to set up backup security keys.",
-          "RemediationProcedure": "1. Identify users with the Organization Administrator role.\n\n2. Setup Security Key Enforcement for each account. Learn more at: https://cloud.google.com/security-key/(https://cloud.google.com/security-key/)",
-          "AuditProcedure": "1. Identify users with Organization Administrator privileges:\n\n```\ngcloud organizations get-iam-policy ORGANIZATION_ID\n```\n\nLook for members granted the role \"roles/resourcemanager.organizationAdmin\".\n\n2. Manually verify that Security Key Enforcement has been enabled for each account.",
+          "RemediationProcedure": "1. Identify users with the Organization Administrator role.  2. Setup Security Key Enforcement for each account. Learn more at: https://cloud.google.com/security-key/(https://cloud.google.com/security-key/)",
+          "AuditProcedure": "1. Identify users with Organization Administrator privileges:  ``` gcloud organizations get-iam-policy ORGANIZATION_ID ```  Look for members granted the role \"roles/resourcemanager.organizationAdmin\".  2. Manually verify that Security Key Enforcement has been enabled for each account.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/security-key/:https://gsuite.google.com/learn-more/key_for_working_smarter_faster_and_more_securely.html"
         }
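A hedged variant of the audit command above that narrows the policy output to just the Organization Administrator members, using standard gcloud output flags:

```bash
# List only the members holding roles/resourcemanager.organizationAdmin
gcloud organizations get-iam-policy ORGANIZATION_ID \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/resourcemanager.organizationAdmin" \
  --format="value(bindings.members)"
```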
@@ -73,10 +73,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "API Keys should only be used for services in cases where other authentication methods are unavailable. API keys are always at risk because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API keys to use (call) only APIs required by an application.",
-          "RationaleStatement": "Security risks involved in using API-Keys are below:\n\n- API keys are simple encrypted strings\n\n- API keys do not identify the user or the application making the API request\n\n- API keys are typically accessible to clients, making it easy to discover and steal an API key\n\nIn light of these potential risks, Google recommends using the standard authentication flow instead of API-Keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.\n\nIn order to reduce attack surfaces by providing `least privileges`, API-Keys can be restricted to use (call) only APIs required by an application.",
+          "RationaleStatement": "Security risks involved in using API-Keys are below:  - API keys are simple encrypted strings  - API keys do not identify the user or the application making the API request  - API keys are typically accessible to clients, making it easy to discover and steal an API key  In light of these potential risks, Google recommends using the standard authentication flow instead of API-Keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.  In order to reduce attack surfaces by providing `least privileges`, API-Keys can be restricted to use (call) only APIs required by an application.",
           "ImpactStatement": "Setting `API restrictions` may break existing application functioning, if not done carefully.",
-          "RemediationProcedure": "**From Console:**\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.\n\n3. In the `Key restrictions` section go to `API restrictions`.\n\n4. Click the `Select API` drop-down to choose an API.\n\n5. Click `Save`.\n\n6. Repeat steps 2,3,4,5 for every unrestricted API key\n\n**Note:** Do not set `API restrictions` to `Google Cloud APIs`, as this option allows access to all services offered by Google cloud.\n\n**From Google Cloud CLI**\n\n1. List all API keys.\n```\ngcloud services api-keys list\n```\n2. Note the `UID` of the key to add restrictions to.\n3. Run the update command with the appropriate flags to add the required restrictions.\n```\ngcloud alpha services api-keys update  \n```\nNote- Flags can be found by running\n```\ngcloud alpha services api-keys update --help\n```\nor in this documentation\nhttps://cloud.google.com/sdk/gcloud/reference/alpha/services/api-keys/update",
-          "AuditProcedure": "**From Console:**\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.\n\n3. For every API Key, ensure the section `Key restrictions` parameter `API restrictions` is not set to `None`.\n\nOr, \n\nEnsure `API restrictions` is not set to `Google Cloud APIs`\n\n**Note:** `Google Cloud APIs` represents the API collection of all cloud services/APIs offered by Google cloud.\n\n**From Google Cloud CLI**\n\n1. List all API Keys.\n```\ngcloud services api-keys list\n```\nEach key should have a line that says `restrictions:` followed by varying parameters and NOT have a line saying `- service: cloudapis.googleapis.com` as shown here\n```\n restrictions:\n apiTargets:\n - service: cloudapis.googleapis.com\n\n```",
+          "RemediationProcedure": "**From Console:**  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.  3. In the `Key restrictions` section go to `API restrictions`.  4. Click the `Select API` drop-down to choose an API.  5. Click `Save`.  6. Repeat steps 2,3,4,5 for every unrestricted API key  **Note:** Do not set `API restrictions` to `Google Cloud APIs`, as this option allows access to all services offered by Google cloud.  **From Google Cloud CLI**  1. List all API keys. ``` gcloud services api-keys list ``` 2. Note the `UID` of the key to add restrictions to. 3. Run the update command with the appropriate flags to add the required restrictions. ``` gcloud alpha services api-keys update   ``` Note- Flags can be found by running ``` gcloud alpha services api-keys update --help ``` or in this documentation https://cloud.google.com/sdk/gcloud/reference/alpha/services/api-keys/update",
+          "AuditProcedure": "**From Console:**  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.  3. For every API Key, ensure the section `Key restrictions` parameter `API restrictions` is not set to `None`.  Or,   Ensure `API restrictions` is not set to `Google Cloud APIs`  **Note:** `Google Cloud APIs` represents the API collection of all cloud services/APIs offered by Google cloud.  **From Google Cloud CLI**  1. List all API Keys. ``` gcloud services api-keys list ``` Each key should have a line that says `restrictions:` followed by varying parameters and NOT have a line saying `- service: cloudapis.googleapis.com` as shown here ```  restrictions:  apiTargets:  - service: cloudapis.googleapis.com  ```",
           "AdditionalInformation": "Some of the gcloud commands listed are currently in alpha and might change without notice.",
           "References": "https://cloud.google.com/docs/authentication/api-keys:https://cloud.google.com/apis/docs/overview"
         }
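The `gcloud alpha services api-keys update` call in the flattened remediation text lost its key UID and restriction flags. One plausible form, assuming the `--api-target` flag described in the linked gcloud reference and a hypothetical key UID and target service:

```bash
# Restrict a key (by its UID) to a single API, e.g. the Cloud Translation API
gcloud alpha services api-keys update 1234ab56-7890-cdef-1234-567890abcdef \
  --api-target=service=translate.googleapis.com
```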
@@ -94,10 +94,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "API Keys should only be used for services in cases where other authentication methods are unavailable. If they are in use it is recommended to rotate API keys every 90 days.",
-          "RationaleStatement": "Security risks involved in using API-Keys are listed below:\n\n- API keys are simple encrypted strings\n\n- API keys do not identify the user or the application making the API request\n\n- API keys are typically accessible to clients, making it easy to discover and steal an API key\n\nBecause of these potential risks, Google recommends using the standard authentication flow instead of API Keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.\n\nOnce a key is stolen, it has no expiration, meaning it may be used indefinitely unless the project owner revokes or regenerates the key. \nRotating API keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. \n\nAPI keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen.",
+          "RationaleStatement": "Security risks involved in using API-Keys are listed below:  - API keys are simple encrypted strings  - API keys do not identify the user or the application making the API request  - API keys are typically accessible to clients, making it easy to discover and steal an API key  Because of these potential risks, Google recommends using the standard authentication flow instead of API Keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.  Once a key is stolen, it has no expiration, meaning it may be used indefinitely unless the project owner revokes or regenerates the key.  Rotating API keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used.   API keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen.",
           "ImpactStatement": "`Regenerating Key` may break existing client connectivity as the client will try to connect with older API keys they have stored on devices.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.\n\n3. Click `REGENERATE KEY` to rotate API key.\n\n4. Click `Save`.\n\n5. Repeat steps 2,3,4 for every API key that has not been rotated in the last 90 days.\n\n**Note:** Do not set `HTTP referrers` to wild-cards (* or *.TLD or *.TLD/*) allowing access to any/wide HTTP referrer(s)\nDo not set `IP addresses` and referrer to `any host (0.0.0.0 or 0.0.0.0/0 or ::0)`\n\n**From Google Cloud CLI**\n\nThere is not currently a way to regenerate and API key using gcloud commands. To 'regenerate' a key you will need to create a new one, duplicate the restrictions from the key being rotated, and delete the old key.\n\n1. List existing keys.\n```\ngcloud services api-keys list\n```\n2. Note the `UID` and restrictions of the key to regenerate.\n\n3. Run this command to create a new API key.  is the display name of the new key.\n````\ngcloud alpha services api-keys create --display-name=\"\"\n````\nNote the `UID` of the newly created key\n\n4. Run the update command to add required restrictions. \n\nNote - the restriction may vary for each key. Refer to this documentation for the appropriate flags.\nhttps://cloud.google.com/sdk/gcloud/reference/alpha/services/api-keys/update\n```\ngcloud alpha services api-keys update \n```\n5. Delete the old key.\n```\ngcloud alpha services api-keys delete \n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. In the section `API Keys`, for every key ensure the `creation date` is less than 90 days.\n\n**From Google Cloud CLI**\n\nTo list keys, use the command\n\n```\ngcloud services api-keys list\n```\nEnsure the date in `createTime` is within 90 days.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.  3. Click `REGENERATE KEY` to rotate API key.  4. Click `Save`.  5. Repeat steps 2,3,4 for every API key that has not been rotated in the last 90 days.  **Note:** Do not set `HTTP referrers` to wild-cards (* or *.TLD or *.TLD/*) allowing access to any/wide HTTP referrer(s) Do not set `IP addresses` and referrer to `any host (0.0.0.0 or 0.0.0.0/0 or ::0)`  **From Google Cloud CLI**  There is not currently a way to regenerate and API key using gcloud commands. To 'regenerate' a key you will need to create a new one, duplicate the restrictions from the key being rotated, and delete the old key.  1. List existing keys. ``` gcloud services api-keys list ``` 2. Note the `UID` and restrictions of the key to regenerate.  3. Run this command to create a new API key.  is the display name of the new key. ```` gcloud alpha services api-keys create --display-name=\"\" ```` Note the `UID` of the newly created key  4. Run the update command to add required restrictions.   Note - the restriction may vary for each key. Refer to this documentation for the appropriate flags. https://cloud.google.com/sdk/gcloud/reference/alpha/services/api-keys/update ``` gcloud alpha services api-keys update  ``` 5. Delete the old key. ``` gcloud alpha services api-keys delete  ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. In the section `API Keys`, for every key ensure the `creation date` is less than 90 days.  **From Google Cloud CLI**  To list keys, use the command  ``` gcloud services api-keys list ``` Ensure the date in `createTime` is within 90 days.",
           "AdditionalInformation": "There is no option to automatically regenerate (rotate) API keys periodically.",
           "References": "https://developers.google.com/maps/api-security-best-practices#regenerate-apikey:https://cloud.google.com/sdk/gcloud/reference/alpha/services/api-keys"
         }
@@ -115,11 +115,11 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "API Keys should only be used for services in cases where other authentication methods are unavailable. Unused keys with their permissions in tact may still exist within a project. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to use standard authentication flow instead.",
-          "RationaleStatement": "To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead. Security risks involved in using API-Keys appear below:\n\n- API keys are simple encrypted strings\n\n- API keys do not identify the user or the application making the API request\n\n- API keys are typically accessible to clients, making it easy to discover and steal an API key",
+          "RationaleStatement": "To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead. Security risks involved in using API-Keys appear below:  - API keys are simple encrypted strings  - API keys do not identify the user or the application making the API request  - API keys are typically accessible to clients, making it easy to discover and steal an API key",
           "ImpactStatement": "Deleting an API key will break dependent applications (if any).",
-          "RemediationProcedure": "**From Console:**\n\n1. Go to `APIs & Services\\Credentials` using\n\n1. In the section `API Keys`, to delete API Keys: Click the `Delete Bin Icon` in front of every `API Key Name`.\n\n**From Google Cloud Command Line**\n\n1. Run the following from within the project you wish to audit **`gcloud services api-keys list --filter`**\n\n1. **Pipe the results into ** \n``gcloud alpha services api-keys delete``",
-          "AuditProcedure": "**From Console:**\n\n1. From within the Project you wish to audit Go to `APIs & Services\\Credentials`. \n\n1. In the section `API Keys`, no API key should be listed.\n\n**From Google Cloud Command Line**\n\n1. Run the following from within the project you wish to audit **`gcloud services api-keys list --filter`**.\n\n1. There should be no keys listed at the project level.",
-          "AdditionalInformation": "Google recommends using the standard authentication flow instead of using API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.\n\nIf a business requires API keys to be used, then the API keys should be secured properly.",
+          "RemediationProcedure": "**From Console:**  1. Go to `APIs & Services\\Credentials` using  1. In the section `API Keys`, to delete API Keys: Click the `Delete Bin Icon` in front of every `API Key Name`.  **From Google Cloud Command Line**  1. Run the following from within the project you wish to audit **`gcloud services api-keys list --filter`**  1. **Pipe the results into **  ``gcloud alpha services api-keys delete``",
+          "AuditProcedure": "**From Console:**  1. From within the Project you wish to audit Go to `APIs & Services\\Credentials`.   1. In the section `API Keys`, no API key should be listed.  **From Google Cloud Command Line**  1. Run the following from within the project you wish to audit **`gcloud services api-keys list --filter`**.  1. There should be no keys listed at the project level.",
+          "AdditionalInformation": "Google recommends using the standard authentication flow instead of using API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.  If a business requires API keys to be used, then the API keys should be secured properly.",
           "References": "https://cloud.google.com/docs/authentication/api-keys:https://cloud.google.com/sdk/gcloud/reference/services/api-keys/list:https://cloud.google.com/docs/authentication:https://cloud.google.com/sdk/gcloud/reference/alpha/services/api-keys/delete"
         }
       ]
@@ -138,8 +138,8 @@
           "Description": "It is recommended that Essential Contacts is configured to designate email addresses for Google Cloud services to notify of important technical or security information.",
           "RationaleStatement": "Many Google Cloud services, such as Cloud Billing, send out notifications to share important information with Google Cloud users. By default, these notifications are sent to members with certain Identity and Access Management (IAM) roles. With Essential Contacts, you can customize who receives notifications by providing your own list of contacts.",
           "ImpactStatement": "There is no charge for Essential Contacts.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `Essential Contacts` by visiting https://console.cloud.google.com/iam-admin/essential-contacts\n2. Make sure the organization appears in the resource selector at the top of the page. The resource selector tells you what project, folder, or organization you are currently managing contacts for.\n3. Click `+Add contact`\n4. In the `Email` and `Confirm Email` fields, enter the email address of the contact.\n5. From the `Notification categories` drop-down menu, select the notification categories that you want the contact to receive communications for.\n6. Click `Save`\n\n**From Google Cloud CLI**\n\n1. To add an organization Essential Contacts run a command:\n```\ngcloud essential-contacts create --email=\"\" \\\n --notification-categories=\"\" \\\n --organization=\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `Essential Contacts` by visiting https://console.cloud.google.com/iam-admin/essential-contacts\n2. Make sure the organization appears in the resource selector at the top of the page. The resource selector tells you what project, folder, or organization you are currently managing contacts for.\n3. Ensure that appropriate email addresses are configured for each of the following notification categories:\n- `Legal`\n- `Security`\n- `Suspension`\n- `Technical`\n- `Technical Incidents`\n\nAlternatively, appropriate email addresses can be configured for the `All` notification category to receive all possible important notifications.\n\n**From Google Cloud CLI**\n\n1. To list all configured organization Essential Contacts run a command:\n```\ngcloud essential-contacts list --organization=\n``` \n2. Ensure at least one appropriate email address is configured for each of the following notification categories:\n- `LEGAL`\n- `SECURITY`\n- `SUSPENSION`\n- `TECHNICAL`\n- `TECHNICAL_INCIDENTS`\n\nAlternatively, appropriate email addresses can be configured for the `ALL` notification category to receive all possible important notifications.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `Essential Contacts` by visiting https://console.cloud.google.com/iam-admin/essential-contacts 2. Make sure the organization appears in the resource selector at the top of the page. The resource selector tells you what project, folder, or organization you are currently managing contacts for. 3. Click `+Add contact` 4. In the `Email` and `Confirm Email` fields, enter the email address of the contact. 5. From the `Notification categories` drop-down menu, select the notification categories that you want the contact to receive communications for. 6. Click `Save`  **From Google Cloud CLI**  1. To add an organization Essential Contacts run a command: ``` gcloud essential-contacts create --email=\"\" \\  --notification-categories=\"\" \\  --organization= ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `Essential Contacts` by visiting https://console.cloud.google.com/iam-admin/essential-contacts 2. Make sure the organization appears in the resource selector at the top of the page. The resource selector tells you what project, folder, or organization you are currently managing contacts for. 3. Ensure that appropriate email addresses are configured for each of the following notification categories: - `Legal` - `Security` - `Suspension` - `Technical` - `Technical Incidents`  Alternatively, appropriate email addresses can be configured for the `All` notification category to receive all possible important notifications.  **From Google Cloud CLI**  1. To list all configured organization Essential Contacts run a command: ``` gcloud essential-contacts list --organization= ```  2. Ensure at least one appropriate email address is configured for each of the following notification categories: - `LEGAL` - `SECURITY` - `SUSPENSION` - `TECHNICAL` - `TECHNICAL_INCIDENTS`  Alternatively, appropriate email addresses can be configured for the `ALL` notification category to receive all possible important notifications.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/resource-manager/docs/managing-notification-contacts"
         }
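The CLI audit above reduces to checking that every required notification category has at least one contact. A minimal sketch in Python, assuming a hypothetical organization ID and that each contact in the list output carries a `notificationCategorySubscriptions` field (an assumption about the Essential Contacts resource shape):

```python
import json
import subprocess

ORG_ID = "123456789012"  # hypothetical organization ID
REQUIRED = {"LEGAL", "SECURITY", "SUSPENSION", "TECHNICAL", "TECHNICAL_INCIDENTS"}

out = subprocess.run(
    ["gcloud", "essential-contacts", "list",
     f"--organization={ORG_ID}", "--format=json"],
    capture_output=True, text=True, check=True)
contacts = json.loads(out.stdout)

covered = set()
for contact in contacts:
    # Field name assumed from the Essential Contacts resource shape.
    covered.update(contact.get("notificationCategorySubscriptions", []))

if "ALL" in covered or REQUIRED <= covered:
    print("PASS: every required notification category has a contact")
else:
    print(f"FAIL: missing categories: {sorted(REQUIRED - covered)}")
```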
@@ -147,7 +147,7 @@
     },
     {
       "Id": "1.10",
-      "Description": "Google Cloud Key Management Service stores cryptographic keys in a hierarchical structure designed for useful and elegant access control management. \n\nThe format for the rotation schedule depends on the client library that is used. For the gcloud command-line tool, the next rotation time must be in `ISO` or `RFC3339` format, and the rotation period must be in the form `INTEGERUNIT`, where units can be one of seconds (s), minutes (m), hours (h) or days (d).",
+      "Description": "Google Cloud Key Management Service stores cryptographic keys in a hierarchical structure designed for useful and elegant access control management.   The format for the rotation schedule depends on the client library that is used. For the gcloud command-line tool, the next rotation time must be in `ISO` or `RFC3339` format, and the rotation period must be in the form `INTEGERUNIT`, where units can be one of seconds (s), minutes (m), hours (h) or days (d).",
       "Checks": [
         "kms_key_rotation_enabled"
       ],
@@ -156,12 +156,12 @@
           "Section": "1. Identity and Access Management",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "Google Cloud Key Management Service stores cryptographic keys in a hierarchical structure designed for useful and elegant access control management. \n\nThe format for the rotation schedule depends on the client library that is used. For the gcloud command-line tool, the next rotation time must be in `ISO` or `RFC3339` format, and the rotation period must be in the form `INTEGERUNIT`, where units can be one of seconds (s), minutes (m), hours (h) or days (d).",
-          "RationaleStatement": "Set a key rotation period and starting time. A key can be created with a specified `rotation period`, which is the time between when new key versions are generated automatically. A key can also be created with a specified next rotation time. A key is a named object representing a `cryptographic key` used for a specific purpose. The key material, the actual bits used for `encryption`, can change over time as new key versions are created.\n\nA key is used to protect some `corpus of data`. A collection of files could be encrypted with the same key and people with `decrypt` permissions on that key would be able to decrypt those files. Therefore, it's necessary to make sure the `rotation period` is set to a specific time.",
+          "Description": "Google Cloud Key Management Service stores cryptographic keys in a hierarchical structure designed for useful and elegant access control management.   The format for the rotation schedule depends on the client library that is used. For the gcloud command-line tool, the next rotation time must be in `ISO` or `RFC3339` format, and the rotation period must be in the form `INTEGERUNIT`, where units can be one of seconds (s), minutes (m), hours (h) or days (d).",
+          "RationaleStatement": "Set a key rotation period and starting time. A key can be created with a specified `rotation period`, which is the time between when new key versions are generated automatically. A key can also be created with a specified next rotation time. A key is a named object representing a `cryptographic key` used for a specific purpose. The key material, the actual bits used for `encryption`, can change over time as new key versions are created.  A key is used to protect some `corpus of data`. A collection of files could be encrypted with the same key and people with `decrypt` permissions on that key would be able to decrypt those files. Therefore, it's necessary to make sure the `rotation period` is set to a specific time.",
           "ImpactStatement": "After a successful key rotation, the older key version is required in order to decrypt the data encrypted by that previous key version.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `Cryptographic Keys` by visiting: https://console.cloud.google.com/security/kms(https://console.cloud.google.com/security/kms).\n2. Click on the specific key ring\n3. From the list of keys, choose the specific key and Click on `Right side pop up the blade (3 dots)`.\n4. Click on `Edit rotation period`.\n5. On the pop-up window, `Select a new rotation period` in days which should be less than 90 and then choose `Starting on` date (date from which the rotation period begins).\n\n**From Google Cloud CLI**\n\n1. Update and schedule rotation by `ROTATION_PERIOD` and `NEXT_ROTATION_TIME` for each key:\n\n```\ngcloud kms keys update new --keyring=KEY_RING --location=LOCATION --next-rotation-time=NEXT_ROTATION_TIME --rotation-period=ROTATION_PERIOD\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `Cryptographic Keys` by visiting: https://console.cloud.google.com/security/kms(https://console.cloud.google.com/security/kms).\n2. Click on each key ring, then ensure each key in the keyring has `Next Rotation` set for less than 90 days from the current date.\n\n**From Google Cloud CLI**\n\n1. Ensure rotation is scheduled by `ROTATION_PERIOD` and `NEXT_ROTATION_TIME` for each key :\n\n```\ngcloud kms keys list --keyring= --location= --format=json'(rotationPeriod)'\n```\n\nEnsure outcome values for `rotationPeriod` and `nextRotationTime` satisfy the below criteria:\n\n`rotationPeriod is <= 129600m` \n`rotationPeriod is <= 7776000s` \n`rotationPeriod is <= 2160h` \n`rotationPeriod is <= 90d` \n`nextRotationTime is <= 90days` from current DATE",
-          "AdditionalInformation": "'- Key rotation does NOT re-encrypt already encrypted data with the newly generated key version. If you suspect unauthorized use of a key, you should re-encrypt the data protected by that key and then disable or schedule destruction of the prior key version.\n- It is not recommended to rely solely on irregular rotation, but rather to use irregular rotation if needed in conjunction with a regular rotation schedule.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `Cryptographic Keys` by visiting: https://console.cloud.google.com/security/kms(https://console.cloud.google.com/security/kms). 2. Click on the specific key ring 3. From the list of keys, choose the specific key and Click on `Right side pop up the blade (3 dots)`. 4. Click on `Edit rotation period`. 5. On the pop-up window, `Select a new rotation period` in days which should be less than 90 and then choose `Starting on` date (date from which the rotation period begins).  **From Google Cloud CLI**  1. Update and schedule rotation by `ROTATION_PERIOD` and `NEXT_ROTATION_TIME` for each key:  ``` gcloud kms keys update new --keyring=KEY_RING --location=LOCATION --next-rotation-time=NEXT_ROTATION_TIME --rotation-period=ROTATION_PERIOD ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `Cryptographic Keys` by visiting: https://console.cloud.google.com/security/kms(https://console.cloud.google.com/security/kms). 2. Click on each key ring, then ensure each key in the keyring has `Next Rotation` set for less than 90 days from the current date.  **From Google Cloud CLI**  1. Ensure rotation is scheduled by `ROTATION_PERIOD` and `NEXT_ROTATION_TIME` for each key :  ``` gcloud kms keys list --keyring= --location= --format=json'(rotationPeriod)' ```  Ensure outcome values for `rotationPeriod` and `nextRotationTime` satisfy the below criteria:  `rotationPeriod is <= 129600m`  `rotationPeriod is <= 7776000s`  `rotationPeriod is <= 2160h`  `rotationPeriod is <= 90d`  `nextRotationTime is <= 90days` from current DATE",
+          "AdditionalInformation": "'- Key rotation does NOT re-encrypt already encrypted data with the newly generated key version. If you suspect unauthorized use of a key, you should re-encrypt the data protected by that key and then disable or schedule destruction of the prior key version. - It is not recommended to rely solely on irregular rotation, but rather to use irregular rotation if needed in conjunction with a regular rotation schedule.",
           "References": "https://cloud.google.com/kms/docs/key-rotation#frequency_of_key_rotation:https://cloud.google.com/kms/docs/re-encrypt-data"
         }
       ]
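The rotation criteria above can also be evaluated programmatically. A minimal sketch in Python, using the same `gcloud kms keys list` command and the `rotationPeriod` / `nextRotationTime` fields named in the audit procedure (the key ring name is a placeholder):

```python
import datetime as dt
import json
import subprocess

KEY_RING = "key_ring_name"   # hypothetical key ring name
LOCATION = "global"
MAX_PERIOD = dt.timedelta(days=90)

out = subprocess.run(
    ["gcloud", "kms", "keys", "list", f"--keyring={KEY_RING}",
     f"--location={LOCATION}", "--format=json"],
    capture_output=True, text=True, check=True)

now = dt.datetime.now(dt.timezone.utc)
for key in json.loads(out.stdout):
    period = key.get("rotationPeriod")       # e.g. "7776000s"
    next_time = key.get("nextRotationTime")  # RFC 3339 timestamp
    ok = (
        period is not None
        and dt.timedelta(seconds=float(period.rstrip("s"))) <= MAX_PERIOD
        and next_time is not None
        and dt.datetime.fromisoformat(next_time.replace("Z", "+00:00")) <= now + MAX_PERIOD
    )
    print(("PASS" if ok else "FAIL"), key.get("name"))
```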
@@ -180,9 +180,9 @@
           "Description": "It is recommended that the IAM policy on Cloud KMS `cryptokeys` should restrict anonymous and/or public access.",
           "RationaleStatement": "Granting permissions to `allUsers` or `allAuthenticatedUsers` allows anyone to access the dataset. Such access might not be desirable if sensitive data is stored at the location. In this case, ensure that anonymous and/or public access to a Cloud KMS `cryptokey` is not allowed.",
           "ImpactStatement": "Removing the binding for `allUsers` and `allAuthenticatedUsers` members denies accessing `cryptokeys` to anonymous or public users.",
-          "RemediationProcedure": "**From Google Cloud CLI**\n\n1. List all Cloud KMS `Cryptokeys`.\n\n```\ngcloud kms keys list --keyring=key_ring_name --location=global --format=json | jq '..name'\n```\n2. Remove IAM policy binding for a KMS key to remove access to `allUsers` and `allAuthenticatedUsers` using the below command.\n\n```\ngcloud kms keys remove-iam-policy-binding key_name --keyring=key_ring_name --location=global --member='allAuthenticatedUsers' --role='role'\n\ngcloud kms keys remove-iam-policy-binding key_name --keyring=key_ring_name --location=global --member='allUsers' --role='role'\n```",
-          "AuditProcedure": "**From Google Cloud CLI**\n\n1. List all Cloud KMS `Cryptokeys`.\n```\ngcloud kms keys list --keyring=key_ring_name --location=global --format=json | jq '..name'\n```\n2. Ensure the below command's output does not contain `allUsers` or `allAuthenticatedUsers`.\n```\ngcloud kms keys get-iam-policy key_name --keyring=key_ring_name --location=global --format=json | jq '.bindings.members'\n```",
-          "AdditionalInformation": "key_ring_name : Is the resource ID of the key ring, which is the fully-qualified Key ring name. This value is case-sensitive and in the form: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING\n\nYou can retrieve the key ring resource ID using the Cloud Console:\n\n1. Open the `Cryptographic Keys` page in the Cloud Console.\n2. For the key ring whose resource ID you are retrieving, click the `More icon (3 vertical dots)`.\n3. Click `Copy Resource ID`. The resource ID for the key ring is copied to your clipboard.\n\nkey_name : Is the resource ID of the key, which is the fully-qualified CryptoKey name. This value is case-sensitive and in the form: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY\n\nYou can retrieve the key resource ID using the Cloud Console:\n1. Open the `Cryptographic Keys` page in the Cloud Console.\n2. Click the name of the key ring that contains the key.\n3. For the key whose resource ID you are retrieving, click the `More icon (3 vertical dots)`.\n4. Click `Copy Resource ID`. The resource ID for the key is copied to your clipboard.\n\nrole : The role to remove the member from.",
+          "RemediationProcedure": "**From Google Cloud CLI**  1. List all Cloud KMS `Cryptokeys`.  ``` gcloud kms keys list --keyring=key_ring_name --location=global --format=json | jq '..name' ``` 2. Remove IAM policy binding for a KMS key to remove access to `allUsers` and `allAuthenticatedUsers` using the below command.  ``` gcloud kms keys remove-iam-policy-binding key_name --keyring=key_ring_name --location=global --member='allAuthenticatedUsers' --role='role'  gcloud kms keys remove-iam-policy-binding key_name --keyring=key_ring_name --location=global --member='allUsers' --role='role' ```",
+          "AuditProcedure": "**From Google Cloud CLI**  1. List all Cloud KMS `Cryptokeys`. ``` gcloud kms keys list --keyring=key_ring_name --location=global --format=json | jq '..name' ``` 2. Ensure the below command's output does not contain `allUsers` or `allAuthenticatedUsers`. ``` gcloud kms keys get-iam-policy key_name --keyring=key_ring_name --location=global --format=json | jq '.bindings.members' ```",
+          "AdditionalInformation": "key_ring_name : Is the resource ID of the key ring, which is the fully-qualified Key ring name. This value is case-sensitive and in the form: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING  You can retrieve the key ring resource ID using the Cloud Console:  1. Open the `Cryptographic Keys` page in the Cloud Console. 2. For the key ring whose resource ID you are retrieving, click the `More icon (3 vertical dots)`. 3. Click `Copy Resource ID`. The resource ID for the key ring is copied to your clipboard.  key_name : Is the resource ID of the key, which is the fully-qualified CryptoKey name. This value is case-sensitive and in the form: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY  You can retrieve the key resource ID using the Cloud Console: 1. Open the `Cryptographic Keys` page in the Cloud Console. 2. Click the name of the key ring that contains the key. 3. For the key whose resource ID you are retrieving, click the `More icon (3 vertical dots)`. 4. Click `Copy Resource ID`. The resource ID for the key is copied to your clipboard.  role : The role to remove the member from.",
           "References": "https://cloud.google.com/sdk/gcloud/reference/kms/keys/remove-iam-policy-binding:https://cloud.google.com/sdk/gcloud/reference/kms/keys/set-iam-policy:https://cloud.google.com/sdk/gcloud/reference/kms/keys/get-iam-policy:https://cloud.google.com/kms/docs/object-hierarchy#key_resource_id"
         }
       ]
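The intent of the jq filters above is simply to confirm that no IAM binding on a key contains `allUsers` or `allAuthenticatedUsers`. A minimal sketch in Python over the same `get-iam-policy` output, with placeholder key and key ring names:

```python
import json
import subprocess

KEY = "key_name"            # hypothetical CryptoKey name
KEY_RING = "key_ring_name"  # hypothetical key ring name
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

out = subprocess.run(
    ["gcloud", "kms", "keys", "get-iam-policy", KEY,
     f"--keyring={KEY_RING}", "--location=global", "--format=json"],
    capture_output=True, text=True, check=True)
policy = json.loads(out.stdout)

offending = [(b.get("role"), m)
             for b in policy.get("bindings", [])
             for m in b.get("members", [])
             if m in PUBLIC_MEMBERS]
print("PASS: no public bindings" if not offending
      else f"FAIL: public bindings found: {offending}")
```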
@@ -201,8 +201,8 @@
           "Description": "When you use Dataproc, cluster and job data is stored on Persistent Disks (PDs) associated with the Compute Engine VMs in your cluster and in a Cloud Storage staging bucket. This PD and bucket data is encrypted using a Google-generated data encryption key (DEK) and key encryption key (KEK). The CMEK feature allows you to create, use, and revoke the key encryption key (KEK). Google still controls the data encryption key (DEK).",
           "RationaleStatement": "\"Cloud services offer the ability to protect data related to those services using encryption keys managed by the customer within Cloud KMS. These encryption keys are called customer-managed encryption keys (CMEK). When you protect data in Google Cloud services with CMEK, the CMEK key is within your control.",
           "ImpactStatement": "Using Customer Managed Keys involves additional overhead in maintenance by administrators.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Login to the GCP Console and navigate to the Dataproc Cluster page by visiting https://console.cloud.google.com/dataproc/clusters.\n1. Select the project from the projects dropdown list.\n1. On the `Dataproc Cluster` page, click on the `Create Cluster` to create a new cluster with Customer managed encryption keys.\n1. On `Create a cluster` page, perform below steps:\n - Inside `Set up cluster` section perform below steps:\n -In the `Name` textbox, provide a name for your cluster.\n - From `Location` select the location in which you want to deploy a cluster.\n - Configure other configurations as per your requirements.\n - Inside `Configure Nodes` and `Customize cluster` section configure the settings as per your requirements.\n - Inside `Manage security` section, perform below steps:\n - From `Encryption`, select `Customer-managed key`.\n - Select a customer-managed key from dropdown list.\n - Ensure that the selected KMS Key have Cloud KMS CryptoKey Encrypter/Decrypter role assign to Dataproc Cluster service account (\"serviceAccount:service-@compute-system.iam.gserviceaccount.com\").\n - Click on `Create` to create a cluster.\n - Once the cluster is created migrate all your workloads from the older cluster to the new cluster and delete the old cluster by performing the below steps:\n - On the `Clusters` page, select the old cluster and click on `Delete cluster`.\n - On the `Confirm deletion` window, click on `Confirm` to delete the cluster.\n - Repeat step above for other Dataproc clusters available in the selected project.\n - Change the project from the project dropdown list and repeat the remediation procedure for other Dataproc clusters available in other projects.\n\n**From Google Cloud CLI**\n\nBefore creating cluster ensure that the selected KMS Key have Cloud KMS CryptoKey Encrypter/Decrypter role assign to Dataproc Cluster service account (\"serviceAccount:service-@compute-system.iam.gserviceaccount.com\").\nRun clusters create command to create new cluster with customer-managed key:\n```\ngcloud dataproc clusters create  --region=us-central1 --gce-pd-kms-key=\n```\nThe above command will create a new cluster in the selected region.\n\nOnce the cluster is created migrate all your workloads from the older cluster to the new cluster and Run clusters delete command to delete cluster:\n```\ngcloud dataproc clusters delete  --region=us-central1\n```\nRepeat step no. 1 to create a new Dataproc cluster.\nChange the project by running the below command and repeat the remediation procedure for other projects:\n```\ngcloud config set project \"\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Login to the GCP Console and navigate to the Dataproc Cluster page by visiting https://console.cloud.google.com/dataproc/clusters.\n1. Select the project from the project dropdown list.\n1. On the `Dataproc Clusters` page, select the cluster and click on the Name attribute value that you want to examine.\n1. On the `details` page, select the `Configurations` tab.\n1. On the `Configurations` tab, check the `Encryption type` configuration attribute value. If the value is set to `Google-managed key`, then Dataproc Cluster is not encrypted with Customer managed encryption keys.\n\nRepeat step no. 3 - 5 for other Dataproc Clusters available in the selected project.\n\n6. Change the project from the project dropdown list and repeat the audit procedure for other projects.\n\n**From Google Cloud CLI**\n\n1. Run clusters list command to list all the Dataproc Clusters available in the region:\n```\ngcloud dataproc clusters list --region='us-central1'\n```\n2. Run clusters describe command to get the key details of the selected cluster:\n```\ngcloud dataproc clusters describe  --region=us-central1 --flatten=config.encryptionConfig.gcePdKmsKeyName\n```\n3. If the above command output return \"null\", then the selected cluster is not encrypted with Customer managed encryption keys.\n4. Repeat step no. 2 and 3 for other Dataproc Clusters available in the selected region. Change the region by updating --region and repeat step no. 2 for other clusters available in the project. Change the project by running the below command and repeat the audit procedure for other Dataproc clusters available in other projects:\n```\ngcloud config set project \"\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Login to the GCP Console and navigate to the Dataproc Cluster page by visiting https://console.cloud.google.com/dataproc/clusters. 1. Select the project from the projects dropdown list. 1. On the `Dataproc Cluster` page, click on the `Create Cluster` to create a new cluster with Customer managed encryption keys. 1. On `Create a cluster` page, perform below steps:  - Inside `Set up cluster` section perform below steps:  -In the `Name` textbox, provide a name for your cluster.  - From `Location` select the location in which you want to deploy a cluster.  - Configure other configurations as per your requirements.  - Inside `Configure Nodes` and `Customize cluster` section configure the settings as per your requirements.  - Inside `Manage security` section, perform below steps:  - From `Encryption`, select `Customer-managed key`.  - Select a customer-managed key from dropdown list.  - Ensure that the selected KMS Key have Cloud KMS CryptoKey Encrypter/Decrypter role assign to Dataproc Cluster service account (\"serviceAccount:service-@compute-system.iam.gserviceaccount.com\").  - Click on `Create` to create a cluster.  - Once the cluster is created migrate all your workloads from the older cluster to the new cluster and delete the old cluster by performing the below steps:  - On the `Clusters` page, select the old cluster and click on `Delete cluster`.  - On the `Confirm deletion` window, click on `Confirm` to delete the cluster.  - Repeat step above for other Dataproc clusters available in the selected project.  - Change the project from the project dropdown list and repeat the remediation procedure for other Dataproc clusters available in other projects.  **From Google Cloud CLI**  Before creating cluster ensure that the selected KMS Key have Cloud KMS CryptoKey Encrypter/Decrypter role assign to Dataproc Cluster service account (\"serviceAccount:service-@compute-system.iam.gserviceaccount.com\"). Run clusters create command to create new cluster with customer-managed key: ``` gcloud dataproc clusters create  --region=us-central1 --gce-pd-kms-key= ``` The above command will create a new cluster in the selected region.  Once the cluster is created migrate all your workloads from the older cluster to the new cluster and Run clusters delete command to delete cluster: ``` gcloud dataproc clusters delete  --region=us-central1 ``` Repeat step no. 1 to create a new Dataproc cluster. Change the project by running the below command and repeat the remediation procedure for other projects: ``` gcloud config set project \" ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Login to the GCP Console and navigate to the Dataproc Cluster page by visiting https://console.cloud.google.com/dataproc/clusters. 1. Select the project from the project dropdown list. 1. On the `Dataproc Clusters` page, select the cluster and click on the Name attribute value that you want to examine. 1. On the `details` page, select the `Configurations` tab. 1. On the `Configurations` tab, check the `Encryption type` configuration attribute value. If the value is set to `Google-managed key`, then Dataproc Cluster is not encrypted with Customer managed encryption keys.  Repeat step no. 3 - 5 for other Dataproc Clusters available in the selected project.  6. Change the project from the project dropdown list and repeat the audit procedure for other projects.  **From Google Cloud CLI**  1. Run clusters list command to list all the Dataproc Clusters available in the region: ``` gcloud dataproc clusters list --region='us-central1' ``` 2. Run clusters describe command to get the key details of the selected cluster: ``` gcloud dataproc clusters describe  --region=us-central1 --flatten=config.encryptionConfig.gcePdKmsKeyName ``` 3. If the above command output return \"null\", then the selected cluster is not encrypted with Customer managed encryption keys. 4. Repeat step no. 2 and 3 for other Dataproc Clusters available in the selected region. Change the region by updating --region and repeat step no. 2 for other clusters available in the project. Change the project by running the below command and repeat the audit procedure for other Dataproc clusters available in other projects: ``` gcloud config set project \" ```",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/docs/security/encryption/default-encryption"
         }
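To audit many clusters at once, the describe step above can be applied to every cluster returned by the list command. A minimal sketch in Python, reading the `config.encryptionConfig.gcePdKmsKeyName` field named in the audit procedure (the region is a placeholder and should be iterated over in practice):

```python
import json
import subprocess

REGION = "us-central1"  # audit one region at a time, as in the procedure above

out = subprocess.run(
    ["gcloud", "dataproc", "clusters", "list",
     f"--region={REGION}", "--format=json"],
    capture_output=True, text=True, check=True)

for cluster in json.loads(out.stdout):
    kms_key = (cluster.get("config", {})
                      .get("encryptionConfig", {})
                      .get("gcePdKmsKeyName"))
    verdict = "PASS (CMEK)" if kms_key else "FAIL (Google-managed key)"
    print(f"{cluster.get('clusterName')}: {verdict}")
```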
@@ -220,11 +220,11 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended to assign the `Service Account User (iam.serviceAccountUser)` and `Service Account Token Creator (iam.serviceAccountTokenCreator)` roles to a user for a specific service account rather than assigning the role to a user at project level.",
-          "RationaleStatement": "A service account is a special Google account that belongs to an application or a virtual machine (VM), instead of to an individual end-user. Application/VM-Instance uses the service account to call the service's Google API so that users aren't directly involved.\nIn addition to being an identity, a service account is a resource that has IAM policies attached to it. These policies determine who can use the service account.\n\nUsers with IAM roles to update the App Engine and Compute Engine instances (such as App Engine Deployer or Compute Instance Admin) can effectively run code as the service accounts used to run these instances, and indirectly gain access to all the resources for which the service accounts have access. Similarly, SSH access to a Compute Engine instance may also provide the ability to execute code as that instance/Service account.\n\nBased on business needs, there could be multiple user-managed service accounts configured for a project. Granting the `iam.serviceAccountUser` or `iam.serviceAccountTokenCreator` roles to a user for a project gives the user access to all service accounts in the project, including service accounts that may be created in the future. This can result in elevation of privileges by using service accounts and corresponding `Compute Engine instances`.\n\nIn order to implement `least privileges` best practices, IAM users should not be assigned the `Service Account User` or `Service Account Token Creator` roles at the project level. Instead, these roles should be assigned to a user for a specific service account, giving that user access to the service account. The `Service Account User` allows a user to bind a service account to a long-running job service, whereas the `Service Account Token Creator` role allows a user to directly impersonate (or assert) the identity of a service account.",
+          "RationaleStatement": "A service account is a special Google account that belongs to an application or a virtual machine (VM), instead of to an individual end-user. Application/VM-Instance uses the service account to call the service's Google API so that users aren't directly involved. In addition to being an identity, a service account is a resource that has IAM policies attached to it. These policies determine who can use the service account.  Users with IAM roles to update the App Engine and Compute Engine instances (such as App Engine Deployer or Compute Instance Admin) can effectively run code as the service accounts used to run these instances, and indirectly gain access to all the resources for which the service accounts have access. Similarly, SSH access to a Compute Engine instance may also provide the ability to execute code as that instance/Service account.  Based on business needs, there could be multiple user-managed service accounts configured for a project. Granting the `iam.serviceAccountUser` or `iam.serviceAccountTokenCreator` roles to a user for a project gives the user access to all service accounts in the project, including service accounts that may be created in the future. This can result in elevation of privileges by using service accounts and corresponding `Compute Engine instances`.  In order to implement `least privileges` best practices, IAM users should not be assigned the `Service Account User` or `Service Account Token Creator` roles at the project level. Instead, these roles should be assigned to a user for a specific service account, giving that user access to the service account. The `Service Account User` allows a user to bind a service account to a long-running job service, whereas the `Service Account Token Creator` role allows a user to directly impersonate (or assert) the identity of a service account.",
           "ImpactStatement": "After revoking `Service Account User` or `Service Account Token Creator` roles at the project level from all impacted user account(s), these roles should be assigned to a user(s) for specific service account(s) according to business needs.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the IAM page in the GCP Console by visiting: https://console.cloud.google.com/iam-admin/iam(https://console.cloud.google.com/iam-admin/iam).\n\n2. Click on the filter table text bar. Type `Role: Service Account User`\n\n3. Click the `Delete Bin` icon in front of the role `Service Account User` for every user listed as a result of a filter.\n\n4. Click on the filter table text bar. Type `Role: Service Account Token Creator`\n\n5. Click the `Delete Bin` icon in front of the role `Service Account Token Creator` for every user listed as a result of a filter.\n\n**From Google Cloud CLI**\n\n1. Using a text editor, remove the bindings with the `roles/iam.serviceAccountUser` or `roles/iam.serviceAccountTokenCreator`. \n\nFor example, you can use the iam.json file shown below as follows:\n\n {\n \"bindings\": \n {\n \"members\": \n \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",\n ,\n \"role\": \"roles/appengine.appViewer\"\n },\n {\n \"members\": \n \"user:email1@gmail.com\"\n ,\n \"role\": \"roles/owner\"\n },\n {\n \"members\": \n \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",\n \"serviceAccount:123456789012-compute@developer.gserviceaccount.com\"\n ,\n \"role\": \"roles/editor\"\n }\n ,\n \"etag\": \"BwUjMhCsNvY=\"\n }\n\n2. Update the project's IAM policy:\n\n```\ngcloud projects set-iam-policy PROJECT_ID iam.json\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the IAM page in the GCP Console by visiting https://console.cloud.google.com/iam-admin/iam(https://console.cloud.google.com/iam-admin/iam)\n\n2. Click on the filter table text bar, Type `Role: Service Account User`.\n\n3. Ensure no user is listed as a result of the filter.\n\n4. Click on the filter table text bar, Type `Role: Service Account Token Creator`.\n\n3. Ensure no user is listed as a result of the filter.\n\n**From Google Cloud CLI**\n\nTo ensure IAM users are not assigned Service Account User role at the project level:\n\n```\ngcloud projects get-iam-policy PROJECT_ID --format json | jq '.bindings.role' | grep \"roles/iam.serviceAccountUser\"\n\ngcloud projects get-iam-policy PROJECT_ID --format json | jq '.bindings.role' | grep \"roles/iam.serviceAccountTokenCreator\"\n```\n\nThese commands should not return any output.",
-          "AdditionalInformation": "To assign the role `roles/iam.serviceAccountUser` or `roles/iam.serviceAccountTokenCreator` to a user role on a service account instead of a project:\n\n1. Go to https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts(https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts)\n\n2. Select ` Target Project`\n\n3. Select `target service account`. Click `Permissions` on the top bar. It will open permission pane on right side of the page\n\n4. Add desired members with `Service Account User` or `Service Account Token Creator` role.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the IAM page in the GCP Console by visiting: https://console.cloud.google.com/iam-admin/iam(https://console.cloud.google.com/iam-admin/iam).  2. Click on the filter table text bar. Type `Role: Service Account User`  3. Click the `Delete Bin` icon in front of the role `Service Account User` for every user listed as a result of a filter.  4. Click on the filter table text bar. Type `Role: Service Account Token Creator`  5. Click the `Delete Bin` icon in front of the role `Service Account Token Creator` for every user listed as a result of a filter.  **From Google Cloud CLI**  1. Using a text editor, remove the bindings with the `roles/iam.serviceAccountUser` or `roles/iam.serviceAccountTokenCreator`.   For example, you can use the iam.json file shown below as follows:   {  \"bindings\":   {  \"members\":   \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",  ,  \"role\": \"roles/appengine.appViewer\"  },  {  \"members\":   \"user:email1@gmail.com\"  ,  \"role\": \"roles/owner\"  },  {  \"members\":   \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",  \"serviceAccount:123456789012-compute@developer.gserviceaccount.com\"  ,  \"role\": \"roles/editor\"  }  ,  \"etag\": \"BwUjMhCsNvY=\"  }  2. Update the project's IAM policy:  ``` gcloud projects set-iam-policy PROJECT_ID iam.json ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the IAM page in the GCP Console by visiting https://console.cloud.google.com/iam-admin/iam(https://console.cloud.google.com/iam-admin/iam)  2. Click on the filter table text bar, Type `Role: Service Account User`.  3. Ensure no user is listed as a result of the filter.  4. Click on the filter table text bar, Type `Role: Service Account Token Creator`.  3. Ensure no user is listed as a result of the filter.  **From Google Cloud CLI**  To ensure IAM users are not assigned Service Account User role at the project level:  ``` gcloud projects get-iam-policy PROJECT_ID --format json | jq '.bindings.role' | grep \"roles/iam.serviceAccountUser\"  gcloud projects get-iam-policy PROJECT_ID --format json | jq '.bindings.role' | grep \"roles/iam.serviceAccountTokenCreator\" ```  These commands should not return any output.",
+          "AdditionalInformation": "To assign the role `roles/iam.serviceAccountUser` or `roles/iam.serviceAccountTokenCreator` to a user role on a service account instead of a project:  1. Go to https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts(https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts)  2. Select ` Target Project`  3. Select `target service account`. Click `Permissions` on the top bar. It will open permission pane on right side of the page  4. Add desired members with `Service Account User` or `Service Account Token Creator` role.",
           "References": "https://cloud.google.com/iam/docs/service-accounts:https://cloud.google.com/iam/docs/granting-roles-to-service-accounts:https://cloud.google.com/iam/docs/understanding-roles:https://cloud.google.com/iam/docs/granting-changing-revoking-access:https://console.cloud.google.com/iam-admin/iam"
         }
       ]
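The CLI audit above amounts to scanning the project IAM policy for bindings on the two flagged roles. A minimal sketch in Python, using the same `gcloud projects get-iam-policy` output and a placeholder project ID:

```python
import json
import subprocess

PROJECT_ID = "my-project"  # hypothetical project ID
FLAGGED_ROLES = {"roles/iam.serviceAccountUser",
                 "roles/iam.serviceAccountTokenCreator"}

out = subprocess.run(
    ["gcloud", "projects", "get-iam-policy", PROJECT_ID, "--format=json"],
    capture_output=True, text=True, check=True)
policy = json.loads(out.stdout)

findings = [(m, b["role"])
            for b in policy.get("bindings", [])
            if b.get("role") in FLAGGED_ROLES
            for m in b.get("members", [])]
print("PASS: no project-level grants" if not findings
      else "\n".join(f"FAIL: {m} holds {r} at the project level" for m, r in findings))
```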
@@ -241,10 +241,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that the principle of 'Separation of Duties' is enforced while assigning KMS related roles to users.",
-          "RationaleStatement": "The built-in/predefined IAM role `Cloud KMS Admin` allows the user/identity to create, delete, and manage service account(s).\nThe built-in/predefined IAM role `Cloud KMS CryptoKey Encrypter/Decrypter` allows the user/identity (with adequate privileges on concerned resources) to encrypt and decrypt data at rest using an encryption key(s).\n\nThe built-in/predefined IAM role `Cloud KMS CryptoKey Encrypter` allows the user/identity (with adequate privileges on concerned resources) to encrypt data at rest using an encryption key(s).\nThe built-in/predefined IAM role `Cloud KMS CryptoKey Decrypter` allows the user/identity (with adequate privileges on concerned resources) to decrypt data at rest using an encryption key(s).\n\nSeparation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action. In Cloud KMS, this could be an action such as using a key to access and decrypt data a user should not normally have access to. Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice.\n\nNo user(s) should have `Cloud KMS Admin` and any of the `Cloud KMS CryptoKey Encrypter/Decrypter`, `Cloud KMS CryptoKey Encrypter`, `Cloud KMS CryptoKey Decrypter` roles assigned at the same time.",
+          "RationaleStatement": "The built-in/predefined IAM role `Cloud KMS Admin` allows the user/identity to create, delete, and manage service account(s). The built-in/predefined IAM role `Cloud KMS CryptoKey Encrypter/Decrypter` allows the user/identity (with adequate privileges on concerned resources) to encrypt and decrypt data at rest using an encryption key(s).  The built-in/predefined IAM role `Cloud KMS CryptoKey Encrypter` allows the user/identity (with adequate privileges on concerned resources) to encrypt data at rest using an encryption key(s). The built-in/predefined IAM role `Cloud KMS CryptoKey Decrypter` allows the user/identity (with adequate privileges on concerned resources) to decrypt data at rest using an encryption key(s).  Separation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action. In Cloud KMS, this could be an action such as using a key to access and decrypt data a user should not normally have access to. Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice.  No user(s) should have `Cloud KMS Admin` and any of the `Cloud KMS CryptoKey Encrypter/Decrypter`, `Cloud KMS CryptoKey Encrypter`, `Cloud KMS CryptoKey Decrypter` roles assigned at the same time.",
           "ImpactStatement": "Removed roles should be assigned to another user based on business needs.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `IAM & Admin/IAM` using `https://console.cloud.google.com/iam-admin/iam`\n\n2. For any member having `Cloud KMS Admin` and any of the `Cloud KMS CryptoKey Encrypter/Decrypter`, `Cloud KMS CryptoKey Encrypter`, `Cloud KMS CryptoKey Decrypter` roles granted/assigned, click the `Delete Bin` icon to remove the role from the member.\n\nNote: Removing a role should be done based on the business requirement.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `IAM & Admin/IAM` by visiting: https://console.cloud.google.com/iam-admin/iam(https://console.cloud.google.com/iam-admin/iam)\n\n2. Ensure no member has the roles `Cloud KMS Admin` and any of the `Cloud KMS CryptoKey Encrypter/Decrypter`, `Cloud KMS CryptoKey Encrypter`, `Cloud KMS CryptoKey Decrypter` assigned.\n\n**From Google Cloud CLI**\n\n1. List all users and role assignments:\n\n```\ngcloud projects get-iam-policy PROJECT_ID\n```\n\n2. Ensure that there are no common users found in the member section for roles `cloudkms.admin` and any one of `Cloud KMS CryptoKey Encrypter/Decrypter`, `Cloud KMS CryptoKey Encrypter`, `Cloud KMS CryptoKey Decrypter`",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `IAM & Admin/IAM` using `https://console.cloud.google.com/iam-admin/iam`  2. For any member having `Cloud KMS Admin` and any of the `Cloud KMS CryptoKey Encrypter/Decrypter`, `Cloud KMS CryptoKey Encrypter`, `Cloud KMS CryptoKey Decrypter` roles granted/assigned, click the `Delete Bin` icon to remove the role from the member.  Note: Removing a role should be done based on the business requirement.",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `IAM & Admin/IAM` by visiting: https://console.cloud.google.com/iam-admin/iam(https://console.cloud.google.com/iam-admin/iam)  2. Ensure no member has the roles `Cloud KMS Admin` and any of the `Cloud KMS CryptoKey Encrypter/Decrypter`, `Cloud KMS CryptoKey Encrypter`, `Cloud KMS CryptoKey Decrypter` assigned.  **From Google Cloud CLI**  1. List all users and role assignments:  ``` gcloud projects get-iam-policy PROJECT_ID ```  2. Ensure that there are no common users found in the member section for roles `cloudkms.admin` and any one of `Cloud KMS CryptoKey Encrypter/Decrypter`, `Cloud KMS CryptoKey Encrypter`, `Cloud KMS CryptoKey Decrypter`",
           "AdditionalInformation": "Users granted with Owner (roles/owner) and Editor (roles/editor) have privileges equivalent to `Cloud KMS Admin` and `Cloud KMS CryptoKey Encrypter/Decrypter`. To avoid misuse, Owner and Editor roles should be granted to a very limited group of users. Use of these primitive privileges should be minimal. These requirements are addressed in separate recommendations.",
           "References": "https://cloud.google.com/kms/docs/separation-of-duties"
         }
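Checking for "common users" across those roles is easier with a member-to-roles index built from the IAM policy. A minimal sketch in Python, assuming the predefined Cloud KMS role IDs shown below and a placeholder project ID:

```python
import json
import subprocess
from collections import defaultdict

PROJECT_ID = "my-project"  # hypothetical project ID
KMS_ADMIN = "roles/cloudkms.admin"
CRYPTO_ROLES = {"roles/cloudkms.cryptoKeyEncrypterDecrypter",
                "roles/cloudkms.cryptoKeyEncrypter",
                "roles/cloudkms.cryptoKeyDecrypter"}

out = subprocess.run(
    ["gcloud", "projects", "get-iam-policy", PROJECT_ID, "--format=json"],
    capture_output=True, text=True, check=True)

roles_by_member = defaultdict(set)
for binding in json.loads(out.stdout).get("bindings", []):
    for member in binding.get("members", []):
        roles_by_member[member].add(binding.get("role"))

for member, roles in sorted(roles_by_member.items()):
    if KMS_ADMIN in roles and roles & CRYPTO_ROLES:
        print(f"FAIL: {member} holds {KMS_ADMIN} together with {sorted(roles & CRYPTO_ROLES)}")
```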
@@ -260,10 +260,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Manual",
           "Description": "API Keys should only be used for services in cases where other authentication methods are unavailable. In this case, unrestricted keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. It is recommended to restrict API key usage to trusted hosts, HTTP referrers and apps. It is recommended to use the more secure standard authentication flow instead.",
-          "RationaleStatement": "Security risks involved in using API-Keys appear below:\n\n- API keys are simple encrypted strings\n\n- API keys do not identify the user or the application making the API request\n\n- API keys are typically accessible to clients, making it easy to discover and steal an API key\n\nIn light of these potential risks, Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.\n\nIn order to reduce attack vectors, API-Keys can be restricted only to trusted hosts, HTTP referrers and applications.",
+          "RationaleStatement": "Security risks involved in using API-Keys appear below:  - API keys are simple encrypted strings  - API keys do not identify the user or the application making the API request  - API keys are typically accessible to clients, making it easy to discover and steal an API key  In light of these potential risks, Google recommends using the standard authentication flow instead of API keys. However, there are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API.  In order to reduce attack vectors, API-Keys can be restricted only to trusted hosts, HTTP referrers and applications.",
           "ImpactStatement": "Setting `Application Restrictions` may break existing application functioning, if not done carefully.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n***Leaving Keys in Place***\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.\n\n3. In the `Key restrictions` section, set the application restrictions to any of `HTTP referrers, IP addresses, Android apps, iOS apps`.\n\n4. Click `Save`.\n\n1. Repeat steps 2,3,4 for every unrestricted API key.\n**Note:** Do not set `HTTP referrers` to wild-cards (* or *.TLD or *.TLD/*) allowing access to any/wide HTTP referrer(s)\nDo not set `IP addresses` and referrer to `any host (0.0.0.0 or 0.0.0.0/0 or ::0)`\n\n***Removing Keys***\n\nAnother option is to remove the keys entirely.\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. In the section `API Keys`, select the checkbox next to each key you wish to remove\n\n3. Select `Delete` and confirm.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n1. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.\n\n1. For every API Key, ensure the section `Key restrictions` parameter `Application restrictions` is not set to `None`.\n\nOr,\n\n1. Ensure `Application restrictions` is set to `HTTP referrers` and the referrer is not set to wild-cards `(* or *.TLD or *.TLD/*) allowing access to any/wide HTTP referrer(s)`\n\nOr,\n\n1. Ensure `Application restrictions` is set to `IP addresses` and referrer is not set to `any host (0.0.0.0 or 0.0.0.0/0 or ::0)`\n\n**From Google Cloud Command Line**\n\n1. Run the following from within the project you wish to audit \n```\ngcloud services api-keys list --filter=\"-restrictions:*\" --format=\"tablebox(displayName:label='Key With No Restrictions')\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  ***Leaving Keys in Place***  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.  3. In the `Key restrictions` section, set the application restrictions to any of `HTTP referrers, IP addresses, Android apps, iOS apps`.  4. Click `Save`.  1. Repeat steps 2,3,4 for every unrestricted API key. **Note:** Do not set `HTTP referrers` to wild-cards (* or *.TLD or *.TLD/*) allowing access to any/wide HTTP referrer(s) Do not set `IP addresses` and referrer to `any host (0.0.0.0 or 0.0.0.0/0 or ::0)`  ***Removing Keys***  Another option is to remove the keys entirely.  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. In the section `API Keys`, select the checkbox next to each key you wish to remove  3. Select `Delete` and confirm.",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  1. In the section `API Keys`, Click the `API Key Name`. The API Key properties display on a new page.  1. For every API Key, ensure the section `Key restrictions` parameter `Application restrictions` is not set to `None`.  Or,  1. Ensure `Application restrictions` is set to `HTTP referrers` and the referrer is not set to wild-cards `(* or *.TLD or *.TLD/*) allowing access to any/wide HTTP referrer(s)`  Or,  1. Ensure `Application restrictions` is set to `IP addresses` and referrer is not set to `any host (0.0.0.0 or 0.0.0.0/0 or ::0)`  **From Google Cloud Command Line**  1. Run the following from within the project you wish to audit  ``` gcloud services api-keys list --filter=\"-restrictions:*\" --format=\"tablebox(displayName:label='Key With No Restrictions') ```",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/docs/authentication/api-keys:https://cloud.google.com/sdk/gcloud/reference/services/api-keys/list:https://cloud.google.com/sdk/gcloud/reference/alpha/services/api-keys/update"
         }
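The same unrestricted-key check can be done by inspecting the JSON output directly instead of relying on a filter expression. A minimal sketch in Python, assuming the listed key objects expose a `restrictions` field that is absent or empty when no application restriction is configured (an assumption about the list output):

```python
import json
import subprocess

out = subprocess.run(
    ["gcloud", "services", "api-keys", "list", "--format=json"],
    capture_output=True, text=True, check=True)

for key in json.loads(out.stdout):
    # "restrictions" is assumed to be absent or empty for unrestricted keys.
    if not key.get("restrictions"):
        print(f"FAIL: unrestricted API key: {key.get('displayName', key.get('name'))}")
```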
@@ -281,10 +281,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that the principle of 'Separation of Duties' is enforced while assigning service-account related roles to users.",
-          "RationaleStatement": "The built-in/predefined IAM role `Service Account admin` allows the user/identity to create, delete, and manage service account(s).\nThe built-in/predefined IAM role `Service Account User` allows the user/identity (with adequate privileges on Compute and App Engine) to assign service account(s) to Apps/Compute Instances.\n\nSeparation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action. In Cloud IAM - service accounts, this could be an action such as using a service account to access resources that user should not normally have access to.\n\nSeparation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice.\n\nNo user should have `Service Account Admin` and `Service Account User` roles assigned at the same time.",
+          "RationaleStatement": "The built-in/predefined IAM role `Service Account admin` allows the user/identity to create, delete, and manage service account(s). The built-in/predefined IAM role `Service Account User` allows the user/identity (with adequate privileges on Compute and App Engine) to assign service account(s) to Apps/Compute Instances.  Separation of duties is the concept of ensuring that one individual does not have all necessary permissions to be able to complete a malicious action. In Cloud IAM - service accounts, this could be an action such as using a service account to access resources that user should not normally have access to.  Separation of duties is a business control typically used in larger organizations, meant to help avoid security or privacy incidents and errors. It is considered best practice.  No user should have `Service Account Admin` and `Service Account User` roles assigned at the same time.",
           "ImpactStatement": "The removed role should be assigned to a different user based on business needs.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `IAM & Admin/IAM` using `https://console.cloud.google.com/iam-admin/iam`.\n\n2. For any member having both `Service Account Admin` and `Service account User` roles granted/assigned, click the `Delete Bin` icon to remove either role from the member.\nRemoval of a role should be done based on the business requirements.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `IAM & Admin/IAM` using `https://console.cloud.google.com/iam-admin/iam`.\n\n2. Ensure no member has the roles `Service Account Admin` and `Service account User` assigned together.\n\n**From Google Cloud CLI**\n\n1. List all users and role assignments:\n\n```\ngcloud projects get-iam-policy Project_ID --format json | \\\n jq -r '\n (\"Service_Account_Admin_and_User\" | (., map(length*\"-\"))), \n (\n \n .bindings | \n select(.role == \"roles/iam.serviceAccountAdmin\" or .role == \"roles/iam.serviceAccountUser\").members\n  | \n group_by(.) | \n map({User: ., Count: length}) | \n . | \n select(.Count == 2).User | \n unique\n )\n  | \n . | \n @tsv'\n```\n\n2. All common users listed under `Service_Account_Admin_and_User` are assigned both the `roles/iam.serviceAccountAdmin` and `roles/iam.serviceAccountUser` roles.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `IAM & Admin/IAM` using `https://console.cloud.google.com/iam-admin/iam`.  2. For any member having both `Service Account Admin` and `Service account User` roles granted/assigned, click the `Delete Bin` icon to remove either role from the member. Removal of a role should be done based on the business requirements.",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `IAM & Admin/IAM` using `https://console.cloud.google.com/iam-admin/iam`.  2. Ensure no member has the roles `Service Account Admin` and `Service account User` assigned together.  **From Google Cloud CLI**  1. List all users and role assignments:  ``` gcloud projects get-iam-policy Project_ID --format json | \\  jq -r '  (\"Service_Account_Admin_and_User\" | (., map(length*\"-\"))),   (    .bindings |   select(.role == \"roles/iam.serviceAccountAdmin\" or .role == \"roles/iam.serviceAccountUser\").members   |   group_by(.) |   map({User: ., Count: length}) |   . |   select(.Count == 2).User |   unique  )   |   . |   @tsv' ```  2. All common users listed under `Service_Account_Admin_and_User` are assigned both the `roles/iam.serviceAccountAdmin` and `roles/iam.serviceAccountUser` roles.",
           "AdditionalInformation": "Users granted with Owner (roles/owner) and Editor (roles/editor) have privileges equivalent to `Service Account Admin` and `Service Account User`. To avoid the misuse, Owner and Editor roles should be granted to very limited users and Use of these primitive privileges should be minimal. These requirements are addressed in separate recommendations.",
           "References": "https://cloud.google.com/iam/docs/service-accounts:https://cloud.google.com/iam/docs/understanding-roles:https://cloud.google.com/iam/docs/granting-roles-to-service-accounts"
         }
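The jq pipeline above is dense; the underlying check is simply that no member appears in bindings for both roles. A minimal sketch in Python over the same IAM policy, with a placeholder project ID and the predefined role IDs assumed below:

```python
import json
import subprocess
from collections import defaultdict

PROJECT_ID = "my-project"  # hypothetical project ID
ROLE_PAIR = {"roles/iam.serviceAccountAdmin", "roles/iam.serviceAccountUser"}

out = subprocess.run(
    ["gcloud", "projects", "get-iam-policy", PROJECT_ID, "--format=json"],
    capture_output=True, text=True, check=True)

roles_by_member = defaultdict(set)
for binding in json.loads(out.stdout).get("bindings", []):
    for member in binding.get("members", []):
        roles_by_member[member].add(binding.get("role"))

for member, roles in sorted(roles_by_member.items()):
    if ROLE_PAIR <= roles:
        print(f"FAIL: {member} holds both Service Account Admin and Service Account User")
```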
@@ -304,9 +304,9 @@
           "Description": "A service account is a special Google account that belongs to an application or a VM, instead of to an individual end-user. The application uses the service account to call the service's Google API so that users aren't directly involved. It's recommended not to use admin access for ServiceAccount.",
           "RationaleStatement": "Service accounts represent service-level security of the Resources (application or a VM) which can be determined by the roles assigned to it. Enrolling ServiceAccount with Admin rights gives full access to an assigned application or a VM. A ServiceAccount Access holder can perform critical actions like delete, update change settings, etc. without user intervention. For this reason, it's recommended that service accounts not have Admin rights.",
           "ImpactStatement": "Removing `*Admin` or `*admin` or `Editor` or `Owner` role assignments from service accounts may break functionality that uses impacted service accounts. Required role(s) should be assigned to impacted service accounts in order to restore broken functionalities.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `IAM & admin/IAM` using `https://console.cloud.google.com/iam-admin/iam`\n2. Go to the `Members`\n3. Identify `User-Managed user created` service account with roles containing `*Admin` or `*admin` or role matching `Editor` or role matching `Owner`\n4. Click the `Delete bin` icon to remove the role from the member (service account in this case)\n\n**From Google Cloud CLI**\n\n```\ngcloud projects get-iam-policy PROJECT_ID --format json > iam.json\n```\n\n1. Using a text editor, Remove `Role` which contains `roles/*Admin` or `roles/*admin` or matched `roles/editor` or matches 'roles/owner`. Add a role to the bindings array that defines the group members and the role for those members. \n\nFor example, to grant the role roles/appengine.appViewer to the `ServiceAccount` which is roles/editor, you would change the example shown below as follows:\n\n {\n \"bindings\": \n {\n \"members\": \n \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",\n ,\n \"role\": \"roles/appengine.appViewer\"\n },\n {\n \"members\": \n \"user:email1@gmail.com\"\n ,\n \"role\": \"roles/owner\"\n },\n {\n \"members\": \n \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",\n \"serviceAccount:123456789012-compute@developer.gserviceaccount.com\"\n ,\n \"role\": \"roles/editor\"\n }\n ,\n \"etag\": \"BwUjMhCsNvY=\"\n }\n2. Update the project's IAM policy:\n\n```\ngcloud projects set-iam-policy PROJECT_ID iam.json\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `IAM & admin/IAM` using `https://console.cloud.google.com/iam-admin/iam`\n2. Go to the `Members`\n3. Ensure that there are no `User-Managed user created service account(s)` with roles containing `*Admin` or `*admin` or role matching `Editor` or role matching `Owner`\n\n**From Google Cloud CLI**\n\n1. Get the policy that you want to modify, and write it to a JSON file:\n\n```\ngcloud projects get-iam-policy PROJECT_ID --format json > iam.json\n```\n\n2. The contents of the JSON file will look similar to the following. Note that `role` of members group associated with each `serviceaccount` does not contain `*Admin` or `*admin` or does not match `roles/editor` or does not match `roles/owner`.\n\nThis recommendation is only applicable to `User-Managed user-created` service accounts. These accounts have the nomenclature: `SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com`. Note that some Google-managed, Google-created service accounts have the same naming format, and should be excluded (e.g., `appsdev-apps-dev-script-auth@system.gserviceaccount.com` which needs the Owner role).\n\n**Sample Json output:**\n\n {\n \"bindings\": \n {\n \"members\": \n \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",\n ,\n \"role\": \"roles/appengine.appAdmin\"\n },\n {\n \"members\": \n \"user:email1@gmail.com\"\n ,\n \"role\": \"roles/owner\"\n },\n {\n \"members\": \n \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",\n \"serviceAccount:123456789012-compute@developer.gserviceaccount.com\"\n ,\n \"role\": \"roles/editor\"\n }\n ,\n \"etag\": \"BwUjMhCsNvY=\",\n \"version\": 1\n }",
-          "AdditionalInformation": "Default (user-managed but not user-created) service accounts have the `Editor (roles/editor)` role assigned to them to support GCP services they offer. \nSuch Service accounts are: `PROJECT_NUMBER-compute@developer.gserviceaccount.com`, `PROJECT_ID@appspot.gserviceaccount.com`.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `IAM & admin/IAM` using `https://console.cloud.google.com/iam-admin/iam` 2. Go to the `Members` 3. Identify `User-Managed user created` service account with roles containing `*Admin` or `*admin` or role matching `Editor` or role matching `Owner` 4. Click the `Delete bin` icon to remove the role from the member (service account in this case)  **From Google Cloud CLI**  ``` gcloud projects get-iam-policy PROJECT_ID --format json > iam.json ```  1. Using a text editor, Remove `Role` which contains `roles/*Admin` or `roles/*admin` or matched `roles/editor` or matches 'roles/owner`. Add a role to the bindings array that defines the group members and the role for those members.   For example, to grant the role roles/appengine.appViewer to the `ServiceAccount` which is roles/editor, you would change the example shown below as follows:   {  \"bindings\":   {  \"members\":   \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",  ,  \"role\": \"roles/appengine.appViewer\"  },  {  \"members\":   \"user:email1@gmail.com\"  ,  \"role\": \"roles/owner\"  },  {  \"members\":   \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",  \"serviceAccount:123456789012-compute@developer.gserviceaccount.com\"  ,  \"role\": \"roles/editor\"  }  ,  \"etag\": \"BwUjMhCsNvY=\"  } 2. Update the project's IAM policy:  ``` gcloud projects set-iam-policy PROJECT_ID iam.json ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `IAM & admin/IAM` using `https://console.cloud.google.com/iam-admin/iam` 2. Go to the `Members` 3. Ensure that there are no `User-Managed user created service account(s)` with roles containing `*Admin` or `*admin` or role matching `Editor` or role matching `Owner`  **From Google Cloud CLI**  1. Get the policy that you want to modify, and write it to a JSON file:  ``` gcloud projects get-iam-policy PROJECT_ID --format json > iam.json ```  2. The contents of the JSON file will look similar to the following. Note that `role` of members group associated with each `serviceaccount` does not contain `*Admin` or `*admin` or does not match `roles/editor` or does not match `roles/owner`.  This recommendation is only applicable to `User-Managed user-created` service accounts. These accounts have the nomenclature: `SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com`. Note that some Google-managed, Google-created service accounts have the same naming format, and should be excluded (e.g., `appsdev-apps-dev-script-auth@system.gserviceaccount.com` which needs the Owner role).  **Sample Json output:**   {  \"bindings\":   {  \"members\":   \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",  ,  \"role\": \"roles/appengine.appAdmin\"  },  {  \"members\":   \"user:email1@gmail.com\"  ,  \"role\": \"roles/owner\"  },  {  \"members\":   \"serviceAccount:our-project-123@appspot.gserviceaccount.com\",  \"serviceAccount:123456789012-compute@developer.gserviceaccount.com\"  ,  \"role\": \"roles/editor\"  }  ,  \"etag\": \"BwUjMhCsNvY=\",  \"version\": 1  }",
+          "AdditionalInformation": "Default (user-managed but not user-created) service accounts have the `Editor (roles/editor)` role assigned to them to support GCP services they offer.  Such Service accounts are: `PROJECT_NUMBER-compute@developer.gserviceaccount.com`, `PROJECT_ID@appspot.gserviceaccount.com`.",
           "References": "https://cloud.google.com/sdk/gcloud/reference/iam/service-accounts/:https://cloud.google.com/iam/docs/understanding-roles:https://cloud.google.com/iam/docs/understanding-service-accounts"
         }
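Because the sample bindings above are illustrative fragments rather than valid JSON, a hedged CLI sketch of the same audit may help. It lists service-account members bound to admin-style, Editor, or Owner roles; as the audit text notes, Google-managed default service accounts that legitimately appear here still have to be excluded by hand. `PROJECT_ID` is a placeholder:

```
# Sketch: list service-account members holding *Admin/*admin, Editor or Owner roles.
gcloud projects get-iam-policy "PROJECT_ID" --format=json | jq -r '
  .bindings[]
  | select((.role | test("(?i)admin")) or .role == "roles/editor" or .role == "roles/owner")
  | .role as $role
  | .members[]
  | select(startswith("serviceAccount:"))
  | [$role, .] | @tsv'
```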
       ]
@@ -323,10 +323,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "User managed service accounts should not have user-managed keys.",
-          "RationaleStatement": "Anyone who has access to the keys will be able to access resources through the service account. \nGCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys cannot be downloaded. Google will keep the keys and automatically rotate them on an approximately weekly basis.\nUser-managed keys are created, downloadable, and managed by users. They expire 10 years from creation.\n\nFor user-managed keys, the user has to take ownership of key management activities which include:\n- Key storage\n- Key distribution\n- Key revocation\n- Key rotation\n- Protecting the keys from unauthorized users\n- Key recovery\n\nEven with key owner precautions, keys can be easily leaked by common development malpractices like checking keys into the source code or leaving them in the Downloads directory, or accidentally leaving them on support blogs/channels.\n\nIt is recommended to prevent user-managed service account keys.",
+          "RationaleStatement": "Anyone who has access to the keys will be able to access resources through the service account.  GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys cannot be downloaded. Google will keep the keys and automatically rotate them on an approximately weekly basis. User-managed keys are created, downloadable, and managed by users. They expire 10 years from creation.  For user-managed keys, the user has to take ownership of key management activities which include: - Key storage - Key distribution - Key revocation - Key rotation - Protecting the keys from unauthorized users - Key recovery  Even with key owner precautions, keys can be easily leaked by common development malpractices like checking keys into the source code or leaving them in the Downloads directory, or accidentally leaving them on support blogs/channels.  It is recommended to prevent user-managed service account keys.",
           "ImpactStatement": "Deleting user-managed Service Account Keys may break communication with the applications using the corresponding keys.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the IAM page in the GCP Console using `https://console.cloud.google.com/iam-admin/iam`\n\n2. In the left navigation pane, click `Service accounts`. All service accounts and their corresponding keys are listed.\n\n3. Click the service account.\n\n4. Click the `edit` and delete the keys.\n\n**From Google Cloud CLI**\n\nTo delete a user managed Service Account Key,\n\n```\ngcloud iam service-accounts keys delete --iam-account= \n```\n\n**Prevention:**\nYou can disable service account key creation through the `Disable service account key creation` Organization policy by visiting https://console.cloud.google.com/iam-admin/orgpolicies/iam-disableServiceAccountKeyCreation(https://console.cloud.google.com/iam-admin/orgpolicies/iam-disableServiceAccountKeyCreation). Learn more at: https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts(https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts)\n\nIn addition, if you do not need to have service accounts in your project, you can also prevent the creation of service accounts through the `Disable service account creation` Organization policy: https://console.cloud.google.com/iam-admin/orgpolicies/iam-disableServiceAccountCreation(https://console.cloud.google.com/iam-admin/orgpolicies/iam-disableServiceAccountCreation).",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the IAM page in the GCP Console using `https://console.cloud.google.com/iam-admin/iam`\n\n2. In the left navigation pane, click `Service accounts`. All service accounts and their corresponding keys are listed.\n\n3. Click the service accounts and check if keys exist.\n\n**From Google Cloud CLI**\n\nList All the service accounts:\n\n```\ngcloud iam service-accounts list\n```\nIdentify user-managed service accounts as such account `EMAIL` ends with `iam.gserviceaccount.com`\n\nFor each user-managed service account, list the keys managed by the user:\n```\ngcloud iam service-accounts keys list --iam-account= --managed-by=user\n```\nNo keys should be listed.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the IAM page in the GCP Console using `https://console.cloud.google.com/iam-admin/iam`  2. In the left navigation pane, click `Service accounts`. All service accounts and their corresponding keys are listed.  3. Click the service account.  4. Click the `edit` and delete the keys.  **From Google Cloud CLI**  To delete a user managed Service Account Key,  ``` gcloud iam service-accounts keys delete --iam-account=  ```  **Prevention:** You can disable service account key creation through the `Disable service account key creation` Organization policy by visiting https://console.cloud.google.com/iam-admin/orgpolicies/iam-disableServiceAccountKeyCreation(https://console.cloud.google.com/iam-admin/orgpolicies/iam-disableServiceAccountKeyCreation). Learn more at: https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts(https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts)  In addition, if you do not need to have service accounts in your project, you can also prevent the creation of service accounts through the `Disable service account creation` Organization policy: https://console.cloud.google.com/iam-admin/orgpolicies/iam-disableServiceAccountCreation(https://console.cloud.google.com/iam-admin/orgpolicies/iam-disableServiceAccountCreation).",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the IAM page in the GCP Console using `https://console.cloud.google.com/iam-admin/iam`  2. In the left navigation pane, click `Service accounts`. All service accounts and their corresponding keys are listed.  3. Click the service accounts and check if keys exist.  **From Google Cloud CLI**  List All the service accounts:  ``` gcloud iam service-accounts list ``` Identify user-managed service accounts as such account `EMAIL` ends with `iam.gserviceaccount.com`  For each user-managed service account, list the keys managed by the user: ``` gcloud iam service-accounts keys list --iam-account= --managed-by=user ``` No keys should be listed.",
           "AdditionalInformation": "A user-managed key cannot be created on GCP-Managed Service Accounts.",
           "References": "https://cloud.google.com/iam/docs/understanding-service-accounts#managing_service_account_keys:https://cloud.google.com/resource-manager/docs/organization-policy/restricting-service-accounts"
         }
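As a convenience, the per-account key check above can be scripted. This is a sketch only, assuming the target project is already set in the gcloud configuration; it is not part of the benchmark text:

```
# Sketch: report user-managed keys for every service account in the project.
for sa in $(gcloud iam service-accounts list --format="value(email)"); do
  echo "== ${sa}"
  gcloud iam service-accounts keys list --iam-account="${sa}" --managed-by=user
done
```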
@@ -344,10 +344,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "Service Account keys consist of a key ID (Private_key_Id) and Private key, which are used to sign programmatic requests users make to Google cloud services accessible to that particular service account. It is recommended that all Service Account keys are regularly rotated.",
-          "RationaleStatement": "Rotating Service Account keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Service Account keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen.\n\nEach service account is associated with a key pair managed by Google Cloud Platform (GCP). It is used for service-to-service authentication within GCP. Google rotates the keys daily.\n\nGCP provides the option to create one or more user-managed (also called external key pairs) key pairs for use from outside GCP (for example, for use with Application Default Credentials). When a new key pair is created, the user is required to download the private key (which is not retained by Google). With external keys, users are responsible for keeping the private key secure and other management operations such as key rotation. External keys can be managed by the IAM API, gcloud command-line tool, or the Service Accounts page in the Google Cloud Platform Console. GCP facilitates up to 10 external service account keys per service account to facilitate key rotation.",
+          "RationaleStatement": "Rotating Service Account keys will reduce the window of opportunity for an access key that is associated with a compromised or terminated account to be used. Service Account keys should be rotated to ensure that data cannot be accessed with an old key that might have been lost, cracked, or stolen.  Each service account is associated with a key pair managed by Google Cloud Platform (GCP). It is used for service-to-service authentication within GCP. Google rotates the keys daily.  GCP provides the option to create one or more user-managed (also called external key pairs) key pairs for use from outside GCP (for example, for use with Application Default Credentials). When a new key pair is created, the user is required to download the private key (which is not retained by Google). With external keys, users are responsible for keeping the private key secure and other management operations such as key rotation. External keys can be managed by the IAM API, gcloud command-line tool, or the Service Accounts page in the Google Cloud Platform Console. GCP facilitates up to 10 external service account keys per service account to facilitate key rotation.",
           "ImpactStatement": "Rotating service account keys will break communication for dependent applications. Dependent applications need to be configured manually with the new key `ID` displayed in the `Service account keys` section and the `private key` downloaded by the user.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Delete any external (user-managed) Service Account Key older than 90 days:**\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. In the Section `Service Account Keys`, for every external (user-managed) service account key where `creation date` is greater than or equal to the past 90 days, click `Delete Bin Icon` to `Delete Service Account key`\n\n**Create a new external (user-managed) Service Account Key for a Service Account:**\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. Click `Create Credentials` and Select `Service Account Key`.\n\n3. Choose the service account in the drop-down list for which an External (user-managed) Service Account key needs to be created.\n\n4. Select the desired key type format among `JSON` or `P12`.\n\n5. Click `Create`. It will download the `private key`. Keep it safe. \n\n6. Click `Close` if prompted. \n\n7. The site will redirect to the `APIs & Services\\Credentials` page. Make a note of the new `ID` displayed in the `Service account keys` section.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`\n\n2. In the section `Service Account Keys`, for every External (user-managed) service account key listed ensure the `creation date` is within the past 90 days.\n\n**From Google Cloud CLI**\n\n1. List all Service accounts from a project.\n\n```\ngcloud iam service-accounts list\n```\n\n2. For every service account list service account keys.\n\n```\ngcloud iam service-accounts keys list --iam-account Service_Account_Email_Id --format=json\n```\n\n3. Ensure every service account key for a service account has a `\"validAfterTime\"` value within the past 90 days.",
+          "RemediationProcedure": "**From Google Cloud Console**  **Delete any external (user-managed) Service Account Key older than 90 days:**  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. In the Section `Service Account Keys`, for every external (user-managed) service account key where `creation date` is greater than or equal to the past 90 days, click `Delete Bin Icon` to `Delete Service Account key`  **Create a new external (user-managed) Service Account Key for a Service Account:**  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. Click `Create Credentials` and Select `Service Account Key`.  3. Choose the service account in the drop-down list for which an External (user-managed) Service Account key needs to be created.  4. Select the desired key type format among `JSON` or `P12`.  5. Click `Create`. It will download the `private key`. Keep it safe.   6. Click `Close` if prompted.   7. The site will redirect to the `APIs & Services\\Credentials` page. Make a note of the new `ID` displayed in the `Service account keys` section.",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `APIs & Services\\Credentials` using `https://console.cloud.google.com/apis/credentials`  2. In the section `Service Account Keys`, for every External (user-managed) service account key listed ensure the `creation date` is within the past 90 days.  **From Google Cloud CLI**  1. List all Service accounts from a project.  ``` gcloud iam service-accounts list ```  2. For every service account list service account keys.  ``` gcloud iam service-accounts keys list --iam-account Service_Account_Email_Id --format=json ```  3. Ensure every service account key for a service account has a `\"validAfterTime\"` value within the past 90 days.",
           "AdditionalInformation": "For user-managed Service Account key(s), key management is entirely the user's responsibility.",
           "References": "https://cloud.google.com/iam/docs/understanding-service-accounts#managing_service_account_keys:https://cloud.google.com/sdk/gcloud/reference/iam/service-accounts/keys/list:https://cloud.google.com/iam/docs/service-accounts"
         }
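To make the 90-day check concrete, a rough script sketch follows. It assumes GNU `date` and relies on the `validAfterTime` field described in the audit text; the 90-day threshold mirrors the recommendation:

```
# Sketch: flag user-managed service account keys created more than 90 days ago.
CUTOFF=$(date -d '-90 days' +%s)
for sa in $(gcloud iam service-accounts list --format="value(email)"); do
  gcloud iam service-accounts keys list --iam-account="${sa}" --managed-by=user \
    --format="csv[no-heading](name,validAfterTime)" \
  | while IFS=, read -r key created; do
      if [ "$(date -d "${created}" +%s)" -lt "${CUTOFF}" ]; then
        echo "${sa} ${key} created ${created}"
      fi
    done
done
```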
@@ -365,8 +365,8 @@
           "Description": "Google Cloud Functions allow you to host serverless code that is executed when an event is triggered, without the requiring the management a host operating system. These functions can also store environment variables to be used by the code that may contain authentication or other information that needs to remain confidential.",
           "RationaleStatement": "It is recommended to use the Secret Manager, because environment variables are stored unencrypted, and accessible for all users who have access to the code.",
           "ImpactStatement": "There should be no impact on the Cloud Function. There are minor costs after 10,000 requests a month to the Secret Manager API as well for a high use of other functions. Modifying the Cloud Function to use the Secret Manager may prevent it running to completion.",
-          "RemediationProcedure": "Enable Secret Manager API for your Project\n\n**From Google Cloud Console**\n1. Within the project you wish to enable, select the Navigation hamburger menu in the top left. Hover over 'APIs & Services' to under the heading 'Serverless', then select 'Enabled APIs & Services' in the menu that opens up.\n2. Click the button '+ Enable APIS and Services'\n3. In the Search bar, search for 'Secret Manager API' and select it.\n4. Click the blue box that says 'Enable'.\n\n**From Google Cloud CLI**\n1. Within the project you wish to enable the API in, run the following command.\n```\ngcloud services enable Secret Manager API \n```\n\nReviewing Environment Variables That Should Be Migrated to Secret Manager\n\n**From Google Cloud Console**\n1. Log in to the Google Cloud Web Portal (https://console.cloud.google.com/)\n1. Go to Cloud Functions\n1. Click on a function name from the list\n1. Click on Edit and review the Runtime environment for variables that should be secrets. Leave this list open for the next step.\n\n**From Google Cloud CLI**\n1. To view a list of your cloud functions run\n```\ngcloud functions list\n```\n2. For each cloud function run the following command.\n```\ngcloud functions describe \n```\n3. Review the settings of the buildEnvironmentVariables and environmentVariables. Keep this information for the next step.\n\nMigrating Environment Variables to Secrets within the Secret Manager\n\n**From Google Cloud Console**\n1. Go to the Secret Manager page in the Cloud Console.\n1. On the Secret Manager page, click Create Secret.\n1. On the Create secret page, under Name, enter the name of the Environment Variable you are replacing. This will then be the Secret Variable you will reference in your code.\n1. You will also need to add a version. This is the actual value of the variable that will be referenced from the code. To add a secret version when creating the initial secret, in the Secret value field, enter the value from the Environment Variable you are replacing.\n1. Leave the Regions section unchanged.\n1. Click the Create secret button.\n1. Repeat for all Environment Variables\n\n**From Google Cloud CLI**\n1. Run the following command with the Environment Variable name you are replacing in the ``. It is most secure to point this command to a file with the Environment Variable value located in it, as if you entered it via command line it would show up in your shell’s command history.\n```\ngcloud secrets create  --data-file=\"/path/to/file.txt\"\n```\n\nGranting your Runtime's Service Account Access to Secrets\n\n**From Google Cloud Console**\n1. Within the project containing your runtime login with account that has the 'roles/secretmanager.secretAccessor' permission. \n2. Select the Navigation hamburger menu in the top left. Hover over 'Security' to under the then select 'Secret Manager' in the menu that opens up.\n3. Click the name of a secret listed in this screen.\n4. If it is not already open, click Show Info Panel in this screen to open the panel.\n5.In the info panel, click Add principal.\n6.In the New principals field, enter the service account your function uses for its identity. (If you need help locating or updating your runtime's service account, please see the 'docs/securing/function-identity#runtime_service_account' reference.)\n7. 
In the Select a role dropdown, choose Secret Manager and then Secret Manager Secret Accessor.\n\n**From Google Cloud CLI**\nAs of the time of writing, using Google CLI to list Runtime variables is only in beta. Because this is likely to change we are not including it here.\n\nModifying the Code to use the Secrets in Secret Manager\n\n**From Google Cloud Console**\nThis depends heavily on which language your runtime is in. For the sake of the brevity of this recommendation, please see the '/docs/creating-and-accessing-secrets#access' reference for language specific instructions.\n\n**From Google Cloud CLI**\nThis depends heavily on which language your runtime is in. For the sake of the brevity of this recommendation, please see the' /docs/creating-and-accessing-secrets#access' reference for language specific instructions.\n\nDeleting the Insecure Environment Variables\n\n**Be certain to do this step last.** Removing variables from code actively referencing them will prevent it from completing successfully.\n\n**From Google Cloud Console**\n1. Select the Navigation hamburger menu in the top left. Hover over 'Security' then select 'Secret Manager' in the menu that opens up.\n1. Click the name of a function. Click Edit.\n1. Click Runtime, build and connections settings to expand the advanced configuration options.\n1. Click 'Security’. Hover over the secret you want to remove, then click 'Delete'.\n1. Click Next. Click Deploy. The latest version of the runtime will now reference the secrets in Secret Manager.\n\n**From Google Cloud CLI**\n```\ngcloud functions deploy --remove-env-vars \n```\nIf you need to find the env vars to remove, they are from the step where ‘gcloud functions describe ``’ was run.",
-          "AuditProcedure": "Determine if Confidential Information is Stored in your Functions in Cleartext\n\n**From Google Cloud Console**\n1. Within the project you wish to audit, select the Navigation hamburger menu in the top left. Scroll down to under the heading 'Serverless', then select 'Cloud Functions'\n1. Click on a function name from the list\n1. Open the Variables tab and you will see both buildEnvironmentVariables and environmentVariables\n1. Review the variables whether they are secrets\n1. Repeat step 3-5 until all functions are reviewed\n\n**From Google Cloud CLI**\n1. To view a list of your cloud functions run\n```\ngcloud functions list\n```\n2. For each cloud function in the list run the following command.\n```\ngcloud functions describe \n```\n3. Review the settings of the buildEnvironmentVariables and environmentVariables. Determine if this is data that should not be publicly accessible.\n\nDetermine if Secret Manager API is 'Enabled' for your Project\n\n**From Google Cloud Console**\n1. Within the project you wish to audit, select the Navigation hamburger menu in the top left. Hover over 'APIs & Services' to under the heading 'Serverless', then select 'Enabled APIs & Services' in the menu that opens up.\n1. Click the button '+ Enable APIS and Services'\n1. In the Search bar, search for 'Secret Manager API' and select it.\n1. If it is enabled, the blue box that normally says 'Enable' will instead say 'Manage'.\n\n**From Google Cloud CLI**\n1. Within the project you wish to audit, run the following command.\n```\ngcloud services list\n```\n2. If 'Secret Manager API' is in the list, it is enabled.",
+          "RemediationProcedure": "Enable Secret Manager API for your Project  **From Google Cloud Console** 1. Within the project you wish to enable, select the Navigation hamburger menu in the top left. Hover over 'APIs & Services' to under the heading 'Serverless', then select 'Enabled APIs & Services' in the menu that opens up. 2. Click the button '+ Enable APIS and Services' 3. In the Search bar, search for 'Secret Manager API' and select it. 4. Click the blue box that says 'Enable'.  **From Google Cloud CLI** 1. Within the project you wish to enable the API in, run the following command. ``` gcloud services enable Secret Manager API  ```  Reviewing Environment Variables That Should Be Migrated to Secret Manager  **From Google Cloud Console** 1. Log in to the Google Cloud Web Portal (https://console.cloud.google.com/) 1. Go to Cloud Functions 1. Click on a function name from the list 1. Click on Edit and review the Runtime environment for variables that should be secrets. Leave this list open for the next step.  **From Google Cloud CLI** 1. To view a list of your cloud functions run ``` gcloud functions list ``` 2. For each cloud function run the following command. ``` gcloud functions describe  ``` 3. Review the settings of the buildEnvironmentVariables and environmentVariables. Keep this information for the next step.  Migrating Environment Variables to Secrets within the Secret Manager  **From Google Cloud Console** 1. Go to the Secret Manager page in the Cloud Console. 1. On the Secret Manager page, click Create Secret. 1. On the Create secret page, under Name, enter the name of the Environment Variable you are replacing. This will then be the Secret Variable you will reference in your code. 1. You will also need to add a version. This is the actual value of the variable that will be referenced from the code. To add a secret version when creating the initial secret, in the Secret value field, enter the value from the Environment Variable you are replacing. 1. Leave the Regions section unchanged. 1. Click the Create secret button. 1. Repeat for all Environment Variables  **From Google Cloud CLI** 1. Run the following command with the Environment Variable name you are replacing in the ``. It is most secure to point this command to a file with the Environment Variable value located in it, as if you entered it via command line it would show up in your shell’s command history. ``` gcloud secrets create  --data-file=\"/path/to/file.txt\" ```  Granting your Runtime's Service Account Access to Secrets  **From Google Cloud Console** 1. Within the project containing your runtime login with account that has the 'roles/secretmanager.secretAccessor' permission.  2. Select the Navigation hamburger menu in the top left. Hover over 'Security' to under the then select 'Secret Manager' in the menu that opens up. 3. Click the name of a secret listed in this screen. 4. If it is not already open, click Show Info Panel in this screen to open the panel. 5.In the info panel, click Add principal. 6.In the New principals field, enter the service account your function uses for its identity. (If you need help locating or updating your runtime's service account, please see the 'docs/securing/function-identity#runtime_service_account' reference.) 7. In the Select a role dropdown, choose Secret Manager and then Secret Manager Secret Accessor.  **From Google Cloud CLI** As of the time of writing, using Google CLI to list Runtime variables is only in beta. 
Because this is likely to change we are not including it here.  Modifying the Code to use the Secrets in Secret Manager  **From Google Cloud Console** This depends heavily on which language your runtime is in. For the sake of the brevity of this recommendation, please see the '/docs/creating-and-accessing-secrets#access' reference for language specific instructions.  **From Google Cloud CLI** This depends heavily on which language your runtime is in. For the sake of the brevity of this recommendation, please see the' /docs/creating-and-accessing-secrets#access' reference for language specific instructions.  Deleting the Insecure Environment Variables  **Be certain to do this step last.** Removing variables from code actively referencing them will prevent it from completing successfully.  **From Google Cloud Console** 1. Select the Navigation hamburger menu in the top left. Hover over 'Security' then select 'Secret Manager' in the menu that opens up. 1. Click the name of a function. Click Edit. 1. Click Runtime, build and connections settings to expand the advanced configuration options. 1. Click 'Security’. Hover over the secret you want to remove, then click 'Delete'. 1. Click Next. Click Deploy. The latest version of the runtime will now reference the secrets in Secret Manager.  **From Google Cloud CLI** ``` gcloud functions deploy --remove-env-vars  ``` If you need to find the env vars to remove, they are from the step where ‘gcloud functions describe ``’ was run.",
+          "AuditProcedure": "Determine if Confidential Information is Stored in your Functions in Cleartext  **From Google Cloud Console** 1. Within the project you wish to audit, select the Navigation hamburger menu in the top left. Scroll down to under the heading 'Serverless', then select 'Cloud Functions' 1. Click on a function name from the list 1. Open the Variables tab and you will see both buildEnvironmentVariables and environmentVariables 1. Review the variables whether they are secrets 1. Repeat step 3-5 until all functions are reviewed  **From Google Cloud CLI** 1. To view a list of your cloud functions run ``` gcloud functions list ``` 2. For each cloud function in the list run the following command. ``` gcloud functions describe  ``` 3. Review the settings of the buildEnvironmentVariables and environmentVariables. Determine if this is data that should not be publicly accessible.  Determine if Secret Manager API is 'Enabled' for your Project  **From Google Cloud Console** 1. Within the project you wish to audit, select the Navigation hamburger menu in the top left. Hover over 'APIs & Services' to under the heading 'Serverless', then select 'Enabled APIs & Services' in the menu that opens up. 1. Click the button '+ Enable APIS and Services' 1. In the Search bar, search for 'Secret Manager API' and select it. 1. If it is enabled, the blue box that normally says 'Enable' will instead say 'Manage'.  **From Google Cloud CLI** 1. Within the project you wish to audit, run the following command. ``` gcloud services list ``` 2. If 'Secret Manager API' is in the list, it is enabled.",
           "AdditionalInformation": "There are slight additional costs to using the Secret Manager API. Review the documentation to determine your organizations' needs.",
           "References": "https://cloud.google.com/functions/docs/configuring/env-var#managing_secrets:https://cloud.google.com/secret-manager/docs/overview"
         }
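For the audit half of this recommendation, the console and CLI steps above can be condensed into a small script. This is a sketch only; gen2 functions may additionally require a `--region` flag on `describe`, and `jq` is assumed to be installed:

```
# Sketch: check Secret Manager API state and dump env vars for each function.
gcloud services list --enabled --filter="name:secretmanager.googleapis.com"
for fn in $(gcloud functions list --format="value(name)"); do
  echo "== ${fn}"
  gcloud functions describe "${fn}" --format=json \
    | jq '{buildEnvironmentVariables, environmentVariables}'
done
```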
@@ -386,8 +386,8 @@
           "Description": "GCP Access Approval enables you to require your organizations' explicit approval whenever Google support try to access your projects. You can then select users within your organization who can approve these requests through giving them a security role in IAM. All access requests display which Google Employee requested them in an email or Pub/Sub message that you can choose to Approve. This adds an additional control and logging of who in your organization approved/denied these requests.",
           "RationaleStatement": "Controlling access to your information is one of the foundations of information security. Google Employees do have access to your organizations' projects for support reasons. With Access Approval, organizations can then be certain that their information is accessed by only approved Google Personnel.",
           "ImpactStatement": "To use Access Approval your organization will need have enabled Access Transparency and have at one of the following support level: Enhanced or Premium. There will be subscription costs associated with these support levels, as well as increased storage costs for storing the logs. You will also not be able to turn the Access Transparency which Access Approval depends on, off yourself. To do so you will need to submit a service request to Google Cloud Support. There will also be additional overhead in managing user permissions. There may also be a potential delay in support times as Google Personnel will have to wait for their access to be approved.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. From the Google Cloud Home, within the project you wish to enable, click on the Navigation hamburger menu in the top left. Hover over the `Security` Menu. Select `Access Approval` in the middle of the column that opens. \n\n2. The status will be displayed here. On this screen, there is an option to click `Enroll`. If it is greyed out and you see an error bar at the top of the screen that says `Access Transparency is not enabled` please view the corresponding reference within this section to enable it.\n\n3. In the second screen click `Enroll`.\n\n**Grant an IAM Group or User the role with permissions to Add Users to be Access Approval message Recipients**\n\n1. From the Google Cloud Home, within the project you wish to enable, click on the Navigation hamburger menu in the top left. Hover over the `IAM and Admin`. Select `IAM` in the middle of the column that opens. \n\n2. Click the blue button the says `+ ADD` at the top of the screen.\n\n3. In the `principals` field, select a user or group by typing in their associated email address.\n\n4. Click on the role field to expand it. In the filter field enter `Access Approval Approver` and select it.\n\n5. Click `save`.\n\n**Add a Group or User as an Approver for Access Approval Requests**\n\n1. As a user with the `Access Approval Approver` permission, within the project where you wish to add an email address to which request will be sent, click on the Navigation hamburger menu in the top left. Hover over the `Security` Menu. Select `Access Approval` in the middle of the column that opens. \n\n2. Click `Manage Settings`\n\n3. Under `Set up approval notifications`, enter the email address associated with a Google Cloud User or Group you wish to send Access Approval requests to. All future access approvals will be sent as emails to this address.\n\n**From Google Cloud CLI**\n\n1. To update all services in an entire project, run the following command from an account that has permissions as an 'Approver for Access Approval Requests'\n\n```\ngcloud access-approval settings update --project= --enrolled_services=all --notification_emails='@'\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Determine if Access Transparency is Enabled as it is a Dependency**\n\n1. From the Google Cloud Home inside the project you wish to audit, click on the Navigation hamburger menu in the top left. Hover over the `IAM & Admin` Menu. Select `settings` in the middle of the column that opens.\n\n2. The status should be \"Enabled' under the heading `Access Transparency`\n\n**Determine if Access Approval is Enabled**\n\n1. From the Google Cloud Home, within the project you wish to check, click on the Navigation hamburger menu in the top left. Hover over the `Security` Menu. Select `Access Approval` in the middle of the column that opens. \n\n2. The status will be displayed here. If you see a screen saying you need to enroll in Access Approval, it is not enabled.\n\n**From Google Cloud CLI**\n\n**Determine if Access Approval is Enabled**\n1. From within the project you wish to audit, run the following command.\n```\ngcloud access-approval settings get\n```\n2. The status will be displayed in the output.\n\nIF Access Approval is not enabled you should get this output:\n```\nAPI accessapproval.googleapis.com not enabled on project -----. Would you like to enable and retry (this will take a few minutes)? (y/N)?\n```\nAfter entering `Y` if you get the following output, it means that `Access Transparency` is not enabled:\n```\nERROR: (gcloud.access-approval.settings.get) FAILED_PRECONDITION: Precondition check failed.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. From the Google Cloud Home, within the project you wish to enable, click on the Navigation hamburger menu in the top left. Hover over the `Security` Menu. Select `Access Approval` in the middle of the column that opens.   2. The status will be displayed here. On this screen, there is an option to click `Enroll`. If it is greyed out and you see an error bar at the top of the screen that says `Access Transparency is not enabled` please view the corresponding reference within this section to enable it.  3. In the second screen click `Enroll`.  **Grant an IAM Group or User the role with permissions to Add Users to be Access Approval message Recipients**  1. From the Google Cloud Home, within the project you wish to enable, click on the Navigation hamburger menu in the top left. Hover over the `IAM and Admin`. Select `IAM` in the middle of the column that opens.   2. Click the blue button the says `+ ADD` at the top of the screen.  3. In the `principals` field, select a user or group by typing in their associated email address.  4. Click on the role field to expand it. In the filter field enter `Access Approval Approver` and select it.  5. Click `save`.  **Add a Group or User as an Approver for Access Approval Requests**  1. As a user with the `Access Approval Approver` permission, within the project where you wish to add an email address to which request will be sent, click on the Navigation hamburger menu in the top left. Hover over the `Security` Menu. Select `Access Approval` in the middle of the column that opens.   2. Click `Manage Settings`  3. Under `Set up approval notifications`, enter the email address associated with a Google Cloud User or Group you wish to send Access Approval requests to. All future access approvals will be sent as emails to this address.  **From Google Cloud CLI**  1. To update all services in an entire project, run the following command from an account that has permissions as an 'Approver for Access Approval Requests'  ``` gcloud access-approval settings update --project= --enrolled_services=all --notification_emails='@' ```",
+          "AuditProcedure": "**From Google Cloud Console**  **Determine if Access Transparency is Enabled as it is a Dependency**  1. From the Google Cloud Home inside the project you wish to audit, click on the Navigation hamburger menu in the top left. Hover over the `IAM & Admin` Menu. Select `settings` in the middle of the column that opens.  2. The status should be \"Enabled' under the heading `Access Transparency`  **Determine if Access Approval is Enabled**  1. From the Google Cloud Home, within the project you wish to check, click on the Navigation hamburger menu in the top left. Hover over the `Security` Menu. Select `Access Approval` in the middle of the column that opens.   2. The status will be displayed here. If you see a screen saying you need to enroll in Access Approval, it is not enabled.  **From Google Cloud CLI**  **Determine if Access Approval is Enabled** 1. From within the project you wish to audit, run the following command. ``` gcloud access-approval settings get ``` 2. The status will be displayed in the output.  IF Access Approval is not enabled you should get this output: ``` API accessapproval.googleapis.com not enabled on project -----. Would you like to enable and retry (this will take a few minutes)? (y/N)? ``` After entering `Y` if you get the following output, it means that `Access Transparency` is not enabled: ``` ERROR: (gcloud.access-approval.settings.get) FAILED_PRECONDITION: Precondition check failed. ```",
           "AdditionalInformation": "The recipients of Access Requests will also need to be logged into a Google Cloud account associated with an email address in this list. To approve requests they can click approve within the email. Or they can view requests at the the Access Approval page within the Security submenu.",
           "References": "https://cloud.google.com/cloud-provider-access-management/access-approval/docs:https://cloud.google.com/cloud-provider-access-management/access-approval/docs/overview:https://cloud.google.com/cloud-provider-access-management/access-approval/docs/quickstart-custom-key:https://cloud.google.com/cloud-provider-access-management/access-approval/docs/supported-services:https://cloud.google.com/cloud-provider-access-management/access-approval/docs/view-historical-requests"
         }
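Because Access Approval has to be verified per project, a small loop over the CLI command quoted above can speed up the audit. A sketch, assuming the caller can list the relevant projects:

```
# Sketch: report Access Approval settings for every project you can list.
for p in $(gcloud projects list --format="value(projectId)"); do
  echo "== ${p}"
  gcloud access-approval settings get --project="${p}" || echo "not enrolled or API disabled"
done
```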
@@ -405,18 +405,18 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "GCP Cloud Asset Inventory is services that provides a historical view of GCP resources and IAM policies through a time-series database. The information recorded includes metadata on Google Cloud resources, metadata on policies set on Google Cloud projects or resources, and runtime information gathered within a Google Cloud resource.",
-          "RationaleStatement": "The GCP resources and IAM policies captured by GCP Cloud Asset Inventory enables security analysis, resource change tracking, and compliance auditing.\n\nIt is recommended GCP Cloud Asset Inventory be enabled for all GCP projects.",
+          "RationaleStatement": "The GCP resources and IAM policies captured by GCP Cloud Asset Inventory enables security analysis, resource change tracking, and compliance auditing.  It is recommended GCP Cloud Asset Inventory be enabled for all GCP projects.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Google Cloud Console**\n\nEnable the Cloud Asset API:\n\n1. Go to `API & Services/Library` by visiting https://console.cloud.google.com/apis/library(https://console.cloud.google.com/apis/library)\n2. Search for `Cloud Asset API` and select the result for _Cloud Asset API_\n3. Click the `ENABLE` button.\n\n**From Google Cloud CLI**\n\nEnable the Cloud Asset API:\n\n1. Enable the Cloud Asset API through the services interface:\n```\ngcloud services enable cloudasset.googleapis.com\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\nEnsure that the Cloud Asset API is enabled:\n\n1. Go to `API & Services/Library` by visiting https://console.cloud.google.com/apis/library(https://console.cloud.google.com/apis/library)\n2. Search for `Cloud Asset API` and select the result for _Cloud Asset API_\n3. Ensure that `API Enabled` is displayed.\n\n**From Google Cloud CLI**\n\nEnsure that the Cloud Asset API is enabled:\n\n1. Query enabled services:\n```\ngcloud services list --enabled --filter=name:cloudasset.googleapis.com\n```\nIf the API is listed, then it is enabled. If the response is `Listed 0 items` the API is not enabled.",
-          "AdditionalInformation": "Additional info\n- Cloud Asset Inventory only keeps a five-week history of Google Cloud asset metadata. If a longer history is desired, automation to export the history to Cloud Storage or BigQuery should be evaluated.",
+          "RemediationProcedure": "**From Google Cloud Console**  Enable the Cloud Asset API:  1. Go to `API & Services/Library` by visiting https://console.cloud.google.com/apis/library(https://console.cloud.google.com/apis/library) 2. Search for `Cloud Asset API` and select the result for _Cloud Asset API_ 3. Click the `ENABLE` button.  **From Google Cloud CLI**  Enable the Cloud Asset API:  1. Enable the Cloud Asset API through the services interface: ``` gcloud services enable cloudasset.googleapis.com ```",
+          "AuditProcedure": "**From Google Cloud Console**  Ensure that the Cloud Asset API is enabled:  1. Go to `API & Services/Library` by visiting https://console.cloud.google.com/apis/library(https://console.cloud.google.com/apis/library) 2. Search for `Cloud Asset API` and select the result for _Cloud Asset API_ 3. Ensure that `API Enabled` is displayed.  **From Google Cloud CLI**  Ensure that the Cloud Asset API is enabled:  1. Query enabled services: ``` gcloud services list --enabled --filter=name:cloudasset.googleapis.com ``` If the API is listed, then it is enabled. If the response is `Listed 0 items` the API is not enabled.",
+          "AdditionalInformation": "Additional info - Cloud Asset Inventory only keeps a five-week history of Google Cloud asset metadata. If a longer history is desired, automation to export the history to Cloud Storage or BigQuery should be evaluated.",
           "References": "https://cloud.google.com/asset-inventory/docs"
         }
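The CLI audit above can likewise be run across every accessible project. A minimal sketch reusing the same `--filter` expression:

```
# Sketch: report whether the Cloud Asset API is enabled in each project.
for p in $(gcloud projects list --format="value(projectId)"); do
  if [ -n "$(gcloud services list --enabled --project="${p}" \
        --filter="name:cloudasset.googleapis.com" --format="value(name)")" ]; then
    echo "${p}: cloudasset.googleapis.com enabled"
  else
    echo "${p}: cloudasset.googleapis.com NOT enabled"
  fi
done
```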
       ]
     },
     {
       "Id": "2.4",
-      "Description": "In order to prevent unnecessary project ownership assignments to users/service-accounts and further misuses of projects and resources, all `roles/Owner` assignments should be monitored.\n\nMembers (users/Service-Accounts) with a role assignment to primitive role `roles/Owner` are project owners.\n\nThe project owner has all the privileges on the project the role belongs to. These are summarized below:\n- All viewer permissions on all GCP Services within the project\n- Permissions for actions that modify the state of all GCP services within the project\n- Manage roles and permissions for a project and all resources within the project\n- Set up billing for a project\n\nGranting the owner role to a member (user/Service-Account) will allow that member to modify the Identity and Access Management (IAM) policy. Therefore, grant the owner role only if the member has a legitimate purpose to manage the IAM policy. This is because the project IAM policy contains sensitive access control data. Having a minimal set of users allowed to manage IAM policy will simplify any auditing that may be necessary.",
+      "Description": "In order to prevent unnecessary project ownership assignments to users/service-accounts and further misuses of projects and resources, all `roles/Owner` assignments should be monitored.  Members (users/Service-Accounts) with a role assignment to primitive role `roles/Owner` are project owners.  The project owner has all the privileges on the project the role belongs to. These are summarized below: - All viewer permissions on all GCP Services within the project - Permissions for actions that modify the state of all GCP services within the project - Manage roles and permissions for a project and all resources within the project - Set up billing for a project  Granting the owner role to a member (user/Service-Account) will allow that member to modify the Identity and Access Management (IAM) policy. Therefore, grant the owner role only if the member has a legitimate purpose to manage the IAM policy. This is because the project IAM policy contains sensitive access control data. Having a minimal set of users allowed to manage IAM policy will simplify any auditing that may be necessary.",
       "Checks": [
         "logging_log_metric_filter_and_alert_for_project_ownership_changes_enabled"
       ],
@@ -425,12 +425,12 @@
           "Section": "2. Logging and Monitoring",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "In order to prevent unnecessary project ownership assignments to users/service-accounts and further misuses of projects and resources, all `roles/Owner` assignments should be monitored.\n\nMembers (users/Service-Accounts) with a role assignment to primitive role `roles/Owner` are project owners.\n\nThe project owner has all the privileges on the project the role belongs to. These are summarized below:\n- All viewer permissions on all GCP Services within the project\n- Permissions for actions that modify the state of all GCP services within the project\n- Manage roles and permissions for a project and all resources within the project\n- Set up billing for a project\n\nGranting the owner role to a member (user/Service-Account) will allow that member to modify the Identity and Access Management (IAM) policy. Therefore, grant the owner role only if the member has a legitimate purpose to manage the IAM policy. This is because the project IAM policy contains sensitive access control data. Having a minimal set of users allowed to manage IAM policy will simplify any auditing that may be necessary.",
-          "RationaleStatement": "Project ownership has the highest level of privileges on a project. To avoid misuse of project resources, the project ownership assignment/change actions mentioned above should be monitored and alerted to concerned recipients.\n- Sending project ownership invites\n- Acceptance/Rejection of project ownership invite by user\n- Adding `role\\Owner` to a user/service-account\n- Removing a user/Service account from `role\\Owner`",
+          "Description": "In order to prevent unnecessary project ownership assignments to users/service-accounts and further misuses of projects and resources, all `roles/Owner` assignments should be monitored.  Members (users/Service-Accounts) with a role assignment to primitive role `roles/Owner` are project owners.  The project owner has all the privileges on the project the role belongs to. These are summarized below: - All viewer permissions on all GCP Services within the project - Permissions for actions that modify the state of all GCP services within the project - Manage roles and permissions for a project and all resources within the project - Set up billing for a project  Granting the owner role to a member (user/Service-Account) will allow that member to modify the Identity and Access Management (IAM) policy. Therefore, grant the owner role only if the member has a legitimate purpose to manage the IAM policy. This is because the project IAM policy contains sensitive access control data. Having a minimal set of users allowed to manage IAM policy will simplify any auditing that may be necessary.",
+          "RationaleStatement": "Project ownership has the highest level of privileges on a project. To avoid misuse of project resources, the project ownership assignment/change actions mentioned above should be monitored and alerted to concerned recipients. - Sending project ownership invites - Acceptance/Rejection of project ownership invite by user - Adding `role\\Owner` to a user/service-account - Removing a user/Service account from `role\\Owner`",
           "ImpactStatement": "Enabling of logging may result in your project being charged for the additional logs usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Create the prescribed log metric:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".\n\n2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.\n\n3. Clear any text and add: \n\n```\n(protoPayload.serviceName=\"cloudresourcemanager.googleapis.com\") \nAND (ProjectOwnership OR projectOwnerInvitee) \nOR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"REMOVE\" \nAND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\") \nOR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"ADD\" \nAND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\")\n```\n\n4. Click `Submit Filter`. The logs display based on the filter text entered by the user.\n\n5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and the `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the advanced logs query.\n\n6. Click `Create Metric`. \n\n**Create the display prescribed Alert Policy:** \n\n1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. Click the 3-dot icon in the rightmost column for the desired metric and select `Create alert from Metric`. A new page opens.\n\n3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value will ensure that a notification is triggered for every owner change in the project:\n```\nSet `Aggregator` to `Count`\n\nSet `Configuration`:\n\n- Condition: above\n\n- Threshold: 0\n\n- For: most recent value\n```\n\n4. Configure the desired notifications channels in the section `Notifications`.\n\n5. Name the policy and click `Save`.\n\n**From Google Cloud CLI**\n\nCreate a prescribed Log Metric:\n- Use the command: gcloud beta logging metrics create \n- Reference for Command Usage: https://cloud.google.com/sdk/gcloud/reference/beta/logging/metrics/create\n\nCreate prescribed Alert Policy \n- Use the command: gcloud alpha monitoring policies create\n- Reference for Command Usage: https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Ensure that the prescribed log metric is present:**\n\n1. Go to `Logging/Log-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with filter text:\n\n```\n(protoPayload.serviceName=\"cloudresourcemanager.googleapis.com\") \nAND (ProjectOwnership OR projectOwnerInvitee) \nOR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"REMOVE\" \nAND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\") \nOR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"ADD\" \nAND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\")\n```\n\n**Ensure that the prescribed Alerting Policy is present:**\n\n3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).\n\n4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of zero(0) for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for your organization.\n\n5. Ensure that the appropriate notifications channels have been set up.\n\n**From Google Cloud CLI**\n\n**Ensure that the prescribed log metric is present:**\n\n1. List the log metrics:\n```\ngcloud logging metrics list --format json\n```\n2. Ensure that the output contains at least one metric with filter set to: \n```\n(protoPayload.serviceName=\"cloudresourcemanager.googleapis.com\") \nAND (ProjectOwnership OR projectOwnerInvitee) \nOR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"REMOVE\" \nAND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\") \nOR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"ADD\" \nAND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\")\n```\n\n3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.\n\n**Ensure that the prescribed alerting policy is present:**\n\n4. List the alerting policies:\n```\ngcloud alpha monitoring policies list --format json\n```\n5. Ensure that the output contains an least one alert policy where:\n- `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"`\n- AND `enabled` is set to `true`",
-          "AdditionalInformation": "1. Project ownership assignments for a user cannot be done using the gcloud utility as assigning project ownership to a user requires sending, and the user accepting, an invitation. \n\n2. Project Ownership assignment to a service account does not send any invites. SetIAMPolicy to `role/owner`is directly performed on service accounts.",
+          "RemediationProcedure": "**From Google Cloud Console**  **Create the prescribed log metric:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".  2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.  3. Clear any text and add:   ``` (protoPayload.serviceName=\"cloudresourcemanager.googleapis.com\")  AND (ProjectOwnership OR projectOwnerInvitee)  OR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"REMOVE\"  AND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\")  OR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"ADD\"  AND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\") ```  4. Click `Submit Filter`. The logs display based on the filter text entered by the user.  5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and the `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the advanced logs query.  6. Click `Create Metric`.   **Create the display prescribed Alert Policy:**   1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. Click the 3-dot icon in the rightmost column for the desired metric and select `Create alert from Metric`. A new page opens.  3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value will ensure that a notification is triggered for every owner change in the project: ``` Set `Aggregator` to `Count`  Set `Configuration`:  - Condition: above  - Threshold: 0  - For: most recent value ```  4. Configure the desired notifications channels in the section `Notifications`.  5. Name the policy and click `Save`.  **From Google Cloud CLI**  Create a prescribed Log Metric: - Use the command: gcloud beta logging metrics create  - Reference for Command Usage: https://cloud.google.com/sdk/gcloud/reference/beta/logging/metrics/create  Create prescribed Alert Policy  - Use the command: gcloud alpha monitoring policies create - Reference for Command Usage: https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create",
+          "AuditProcedure": "**From Google Cloud Console**  **Ensure that the prescribed log metric is present:**  1. Go to `Logging/Log-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with filter text:  ``` (protoPayload.serviceName=\"cloudresourcemanager.googleapis.com\")  AND (ProjectOwnership OR projectOwnerInvitee)  OR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"REMOVE\"  AND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\")  OR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"ADD\"  AND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\") ```  **Ensure that the prescribed Alerting Policy is present:**  3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).  4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of zero(0) for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for your organization.  5. Ensure that the appropriate notifications channels have been set up.  **From Google Cloud CLI**  **Ensure that the prescribed log metric is present:**  1. List the log metrics: ``` gcloud logging metrics list --format json ``` 2. Ensure that the output contains at least one metric with filter set to:  ``` (protoPayload.serviceName=\"cloudresourcemanager.googleapis.com\")  AND (ProjectOwnership OR projectOwnerInvitee)  OR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"REMOVE\"  AND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\")  OR (protoPayload.serviceData.policyDelta.bindingDeltas.action=\"ADD\"  AND protoPayload.serviceData.policyDelta.bindingDeltas.role=\"roles/owner\") ```  3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.  **Ensure that the prescribed alerting policy is present:**  4. List the alerting policies: ``` gcloud alpha monitoring policies list --format json ``` 5. Ensure that the output contains an least one alert policy where: - `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"` - AND `enabled` is set to `true`",
+          "AdditionalInformation": "1. Project ownership assignments for a user cannot be done using the gcloud utility as assigning project ownership to a user requires sending, and the user accepting, an invitation.   2. Project Ownership assignment to a service account does not send any invites. SetIAMPolicy to `role/owner`is directly performed on service accounts.",
           "References": "https://cloud.google.com/logging/docs/logs-based-metrics/:https://cloud.google.com/monitoring/custom-metrics/:https://cloud.google.com/monitoring/alerts/:https://cloud.google.com/logging/docs/reference/tools/gcloud-logging"
         }
       ]
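As a worked illustration of the CLI path described above, a minimal sketch of creating the log-based metric and its alert policy might look like the following. The metric name `project-ownership-changes` and the file `policy.json` are hypothetical placeholders; check the exact flags against the gcloud reference pages linked in the procedure.

```
# Create a log-based metric with the prescribed ownership-change filter
# (the metric name is a placeholder).
gcloud logging metrics create project-ownership-changes \
  --description="Count of project ownership changes" \
  --log-filter='(protoPayload.serviceName="cloudresourcemanager.googleapis.com") AND (ProjectOwnership OR projectOwnerInvitee) OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE" AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner") OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD" AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")'

# Create an alert policy from a definition file that references the new metric
# (policy.json is a hypothetical file holding the threshold and notification settings).
gcloud alpha monitoring policies create --policy-from-file=policy.json
```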
@@ -449,8 +449,8 @@
           "Description": "Logging enabled on a HTTPS Load Balancer will show all network traffic and its destination.",
           "RationaleStatement": "Logging will allow you to view HTTPS network traffic to your web applications.",
           "ImpactStatement": "On high use systems with a high percentage sample rate, the logging file may grow to high capacity in a short amount of time. Ensure that the sample rate is set appropriately so that storage costs are not exorbitant.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. From Google Cloud home open the Navigation Menu in the top left.\n\n1. Under the `Networking` heading select `Network services`.\n\n1. Select the HTTPS load-balancer you wish to audit.\n\n1. Select `Edit` then `Backend Configuration`. \n\n1. Select `Edit` on the corresponding backend service.\n\n1. Click `Enable Logging`.\n\n1. Set `Sample Rate` to a desired value. This is a percentage as a decimal point. 1.0 is 100%.\n\n**From Google Cloud CLI**\n\n1. Run the following command\n\n```\ngcloud compute backend-services update  --region=REGION --enable-logging --logging-sample-rate=\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. From Google Cloud home open the Navigation Menu in the top left.\n\n1. Under the `Networking` heading select `Network services`.\n\n1. Select the HTTPS load-balancer you wish to audit.\n\n1. Select `Edit` then `Backend Configuration`. \n\n1. Select `Edit` on the corresponding backend service.\n\n1. Ensure that `Enable Logging` is selected. Also ensure that `Sample Rate` is set to an appropriate level for your needs.\n\n**From Google Cloud CLI**\n\n1. Run the following command\n\n```\ngcloud compute backend-services describe \n```\n\n1. Ensure that ```enable-logging``` is enabled and ```sample rate``` is set to your desired level.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. From Google Cloud home open the Navigation Menu in the top left.  1. Under the `Networking` heading select `Network services`.  1. Select the HTTPS load-balancer you wish to audit.  1. Select `Edit` then `Backend Configuration`.   1. Select `Edit` on the corresponding backend service.  1. Click `Enable Logging`.  1. Set `Sample Rate` to a desired value. This is a percentage as a decimal point. 1.0 is 100%.  **From Google Cloud CLI**  1. Run the following command  ``` gcloud compute backend-services update  --region=REGION --enable-logging --logging-sample-rate= ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. From Google Cloud home open the Navigation Menu in the top left.  1. Under the `Networking` heading select `Network services`.  1. Select the HTTPS load-balancer you wish to audit.  1. Select `Edit` then `Backend Configuration`.   1. Select `Edit` on the corresponding backend service.  1. Ensure that `Enable Logging` is selected. Also ensure that `Sample Rate` is set to an appropriate level for your needs.  **From Google Cloud CLI**  1. Run the following command  ``` gcloud compute backend-services describe  ```  1. Ensure that ```enable-logging``` is enabled and ```sample rate``` is set to your desired level.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/load-balancing/:https://cloud.google.com/load-balancing/docs/https/https-logging-monitoring#gcloud:-global-mode:https://cloud.google.com/sdk/gcloud/reference/compute/backend-services/"
         }
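For reference, a minimal CLI sketch of the remediation and audit for backend-service logging; the backend service name `my-backend-service`, the region, and the 1.0 sample rate are hypothetical examples only.

```
# Enable logging on a (hypothetical) regional backend service at a 100% sample rate.
gcloud compute backend-services update my-backend-service \
  --region=us-central1 --enable-logging --logging-sample-rate=1.0

# Verify that logging is enabled and the sample rate is as expected.
gcloud compute backend-services describe my-backend-service \
  --region=us-central1 --format="yaml(logConfig)"
```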
@@ -468,11 +468,11 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that Cloud Audit Logging is configured to track all admin activities and read, write access to user data.",
-          "RationaleStatement": "Cloud Audit Logging maintains two audit logs for each project, folder, and organization: Admin Activity and Data Access.\n\n1. Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. Admin Activity audit logs are enabled for all services and cannot be configured.\n\n2. Data Access audit logs record API calls that create, modify, or read user-provided data. These are disabled by default and should be enabled.\n\n There are three kinds of Data Access audit log information:\n\n - Admin read: Records operations that read metadata or configuration information. Admin Activity audit logs record writes of metadata and configuration information that cannot be disabled.\n - Data read: Records operations that read user-provided data.\n - Data write: Records operations that write user-provided data.\n\nIt is recommended to have an effective default audit config configured in such a way that:\n\n1. logtype is set to DATA_READ (to log user activity tracking) and DATA_WRITES (to log changes/tampering to user data).\n\n2. audit config is enabled for all the services supported by the Data Access audit logs feature.\n\n3. Logs should be captured for all users, i.e., there are no exempted users in any of the audit config sections. This will ensure overriding the audit config will not contradict the requirement.",
-          "ImpactStatement": "There is no charge for Admin Activity audit logs.\nEnabling the Data Access audit logs might result in your project being charged for the additional logs usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n1. Go to `Audit Logs` by visiting https://console.cloud.google.com/iam-admin/audit(https://console.cloud.google.com/iam-admin/audit).\n2. Follow the steps at https://cloud.google.com/logging/docs/audit/configure-data-access(https://cloud.google.com/logging/docs/audit/configure-data-access) to enable audit logs for all Google Cloud services. Ensure that no exemptions are allowed.\n\n**From Google Cloud CLI**\n\n1. To read the project's IAM policy and store it in a file run a command:\n\n```\ngcloud projects get-iam-policy PROJECT_ID > /tmp/project_policy.yaml\n```\n\nAlternatively, the policy can be set at the organization or folder level. If setting the policy at the organization level, it is not necessary to also set it for each folder or project.\n\n```\ngcloud organizations get-iam-policy ORGANIZATION_ID > /tmp/org_policy.yaml\ngcloud resource-manager folders get-iam-policy FOLDER_ID > /tmp/folder_policy.yaml\n```\n\n2. Edit policy in /tmp/policy.yaml, adding or changing only the audit logs configuration to:\n**Note: Admin Activity Logs are enabled by default, and cannot be disabled. So they are not listed in these configuration changes.**\n```\nauditConfigs:\n- auditLogConfigs:\n - logType: DATA_WRITE\n - logType: DATA_READ\n service: allServices\n```\n\n**Note:** `exemptedMembers:` is not set as audit logging should be enabled for all the users\n\n3. To write new IAM policy run command:\n\n```\ngcloud organizations set-iam-policy ORGANIZATION_ID /tmp/org_policy.yaml\ngcloud resource-manager folders set-iam-policy FOLDER_ID /tmp/folder_policy.yaml\ngcloud projects set-iam-policy PROJECT_ID /tmp/project_policy.yaml\n```\n\nIf the preceding command reports a conflict with another change, then repeat these steps, starting with the first step.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `Audit Logs` by visiting https://console.cloud.google.com/iam-admin/audit(https://console.cloud.google.com/iam-admin/audit).\n2. Ensure that Admin Read, Data Write, and Data Read are enabled for all Google Cloud services and that no exemptions are allowed.\n\n**From Google Cloud CLI**\n\n1. List the Identity and Access Management (IAM) policies for the project, folder, or organization: \n```\ngcloud organizations get-iam-policy ORGANIZATION_ID\ngcloud resource-manager folders get-iam-policy FOLDER_ID\ngcloud projects get-iam-policy PROJECT_ID\n```\n2. Policy should have a default auditConfigs section which has the logtype set to DATA_WRITES and DATA_READ for all services. Note that projects inherit settings from folders, which in turn inherit settings from the organization. When called, projects get-iam-policy, the result shows only the policies set in the project, not the policies inherited from the parent folder or organization. Nevertheless, if the parent folder has Cloud Audit Logging enabled, the project does as well. \n\nSample output for default audit configs may look like this:\n\n```\n auditConfigs:\n - auditLogConfigs:\n - logType: ADMIN_READ\n - logType: DATA_WRITE\n - logType: DATA_READ\n service: allServices\n```\n\n3. Any of the auditConfigs sections should not have parameter \"exemptedMembers:\" set, which will ensure that Logging is enabled for all users and no user is exempted.",
-          "AdditionalInformation": "'- Log type `DATA_READ` is equally important to that of `DATA_WRITE` to track detailed user activities.\n- BigQuery Data Access logs are handled differently from other data access logs. BigQuery logs are enabled by default and cannot be disabled. They do not count against logs allotment and cannot result in extra logs charges.",
+          "RationaleStatement": "Cloud Audit Logging maintains two audit logs for each project, folder, and organization: Admin Activity and Data Access.  1. Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. Admin Activity audit logs are enabled for all services and cannot be configured.  2. Data Access audit logs record API calls that create, modify, or read user-provided data. These are disabled by default and should be enabled.   There are three kinds of Data Access audit log information:   - Admin read: Records operations that read metadata or configuration information. Admin Activity audit logs record writes of metadata and configuration information that cannot be disabled.  - Data read: Records operations that read user-provided data.  - Data write: Records operations that write user-provided data.  It is recommended to have an effective default audit config configured in such a way that:  1. logtype is set to DATA_READ (to log user activity tracking) and DATA_WRITES (to log changes/tampering to user data).  2. audit config is enabled for all the services supported by the Data Access audit logs feature.  3. Logs should be captured for all users, i.e., there are no exempted users in any of the audit config sections. This will ensure overriding the audit config will not contradict the requirement.",
+          "ImpactStatement": "There is no charge for Admin Activity audit logs. Enabling the Data Access audit logs might result in your project being charged for the additional logs usage.",
+          "RemediationProcedure": "**From Google Cloud Console** 1. Go to `Audit Logs` by visiting https://console.cloud.google.com/iam-admin/audit(https://console.cloud.google.com/iam-admin/audit). 2. Follow the steps at https://cloud.google.com/logging/docs/audit/configure-data-access(https://cloud.google.com/logging/docs/audit/configure-data-access) to enable audit logs for all Google Cloud services. Ensure that no exemptions are allowed.  **From Google Cloud CLI**  1. To read the project's IAM policy and store it in a file run a command:  ``` gcloud projects get-iam-policy PROJECT_ID > /tmp/project_policy.yaml ```  Alternatively, the policy can be set at the organization or folder level. If setting the policy at the organization level, it is not necessary to also set it for each folder or project.  ``` gcloud organizations get-iam-policy ORGANIZATION_ID > /tmp/org_policy.yaml gcloud resource-manager folders get-iam-policy FOLDER_ID > /tmp/folder_policy.yaml ```  2. Edit policy in /tmp/policy.yaml, adding or changing only the audit logs configuration to: **Note: Admin Activity Logs are enabled by default, and cannot be disabled. So they are not listed in these configuration changes.** ``` auditConfigs: - auditLogConfigs:  - logType: DATA_WRITE  - logType: DATA_READ  service: allServices ```  **Note:** `exemptedMembers:` is not set as audit logging should be enabled for all the users  3. To write new IAM policy run command:  ``` gcloud organizations set-iam-policy ORGANIZATION_ID /tmp/org_policy.yaml gcloud resource-manager folders set-iam-policy FOLDER_ID /tmp/folder_policy.yaml gcloud projects set-iam-policy PROJECT_ID /tmp/project_policy.yaml ```  If the preceding command reports a conflict with another change, then repeat these steps, starting with the first step.",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `Audit Logs` by visiting https://console.cloud.google.com/iam-admin/audit(https://console.cloud.google.com/iam-admin/audit). 2. Ensure that Admin Read, Data Write, and Data Read are enabled for all Google Cloud services and that no exemptions are allowed.  **From Google Cloud CLI**  1. List the Identity and Access Management (IAM) policies for the project, folder, or organization:  ``` gcloud organizations get-iam-policy ORGANIZATION_ID gcloud resource-manager folders get-iam-policy FOLDER_ID gcloud projects get-iam-policy PROJECT_ID ``` 2. Policy should have a default auditConfigs section which has the logtype set to DATA_WRITES and DATA_READ for all services. Note that projects inherit settings from folders, which in turn inherit settings from the organization. When called, projects get-iam-policy, the result shows only the policies set in the project, not the policies inherited from the parent folder or organization. Nevertheless, if the parent folder has Cloud Audit Logging enabled, the project does as well.   Sample output for default audit configs may look like this:  ```  auditConfigs:  - auditLogConfigs:  - logType: ADMIN_READ  - logType: DATA_WRITE  - logType: DATA_READ  service: allServices ```  3. Any of the auditConfigs sections should not have parameter \"exemptedMembers:\" set, which will ensure that Logging is enabled for all users and no user is exempted.",
+          "AdditionalInformation": "'- Log type `DATA_READ` is equally important to that of `DATA_WRITE` to track detailed user activities. - BigQuery Data Access logs are handled differently from other data access logs. BigQuery logs are enabled by default and cannot be disabled. They do not count against logs allotment and cannot result in extra logs charges.",
           "References": "https://cloud.google.com/logging/docs/audit/:https://cloud.google.com/logging/docs/audit/configure-data-access"
         }
       ]
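As a sketch of the CLI audit step above, the effective audit configuration can be inspected directly; `PROJECT_ID` is a placeholder, and the expected output mirrors the sample shown in the procedure.

```
# Show only the auditConfigs block of the project's IAM policy (PROJECT_ID is a placeholder).
gcloud projects get-iam-policy PROJECT_ID --format="yaml(auditConfigs)"

# A compliant default entry, with no exemptedMembers, would look like:
# auditConfigs:
# - auditLogConfigs:
#   - logType: DATA_READ
#   - logType: DATA_WRITE
#   service: allServices
```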
@@ -489,11 +489,11 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "Cloud DNS logging records the queries from the name servers within your VPC to Stackdriver. Logged queries can come from Compute Engine VMs, GKE containers, or other GCP resources provisioned within the VPC.",
-          "RationaleStatement": "Security monitoring and forensics cannot depend solely on IP addresses from VPC flow logs, especially when considering the dynamic IP usage of cloud resources, HTTP virtual host routing, and other technology that can obscure the DNS name used by a client from the IP address. Monitoring of Cloud DNS logs provides visibility to DNS names requested by the clients within the VPC. These logs can be monitored for anomalous domain names, evaluated against threat intelligence, and \n\nNote: For full capture of DNS, firewall must block egress UDP/53 (DNS) and TCP/443 (DNS over HTTPS) to prevent client from using external DNS name server for resolution.",
+          "RationaleStatement": "Security monitoring and forensics cannot depend solely on IP addresses from VPC flow logs, especially when considering the dynamic IP usage of cloud resources, HTTP virtual host routing, and other technology that can obscure the DNS name used by a client from the IP address. Monitoring of Cloud DNS logs provides visibility to DNS names requested by the clients within the VPC. These logs can be monitored for anomalous domain names, evaluated against threat intelligence, and   Note: For full capture of DNS, firewall must block egress UDP/53 (DNS) and TCP/443 (DNS over HTTPS) to prevent client from using external DNS name server for resolution.",
           "ImpactStatement": "Enabling of Cloud DNS logging might result in your project being charged for the additional logs usage.",
-          "RemediationProcedure": "**From Google Cloud CLI**\n\n**Add New DNS Policy With Logging Enabled**\n\nFor each VPC network that needs a DNS policy with logging enabled:\n```\ngcloud dns policies create enable-dns-logging --enable-logging --description=\"Enable DNS Logging\" --networks=VPC_NETWORK_NAME\n```\nThe VPC_NETWORK_NAME can be one or more networks in comma-separated list\n\n**Enable Logging for Existing DNS Policy**\n\nFor each VPC network that has an existing DNS policy that needs logging enabled:\n```\ngcloud dns policies update POLICY_NAME --enable-logging --networks=VPC_NETWORK_NAME\n```\nThe VPC_NETWORK_NAME can be one or more networks in comma-separated list",
-          "AuditProcedure": "**From Google Cloud CLI**\n\n1. List all VPCs networks in a project:\n```\ngcloud compute networks list --format=\"tablebox,title='All VPC Networks'(name:label='VPC Network Name')\"\n```\n2. List all DNS policies, logging enablement, and associated VPC networks:\n```\ngcloud dns policies list --flatten=\"networks\" --format=\"tablebox,title='All DNS Policies By VPC Network'(name:label='Policy Name',enableLogging:label='Logging Enabled':align=center,networks.networkUrl.basename():label='VPC Network Name')\"\n```\nEach VPC Network should be associated with a DNS policy with logging enabled.",
-          "AdditionalInformation": "Additional Info\n- Only queries that reach a name server are logged. Cloud DNS resolvers cache responses, queries answered from caches, or direct queries to an external DNS resolver outside the VPC are not logged.",
+          "RemediationProcedure": "**From Google Cloud CLI**  **Add New DNS Policy With Logging Enabled**  For each VPC network that needs a DNS policy with logging enabled: ``` gcloud dns policies create enable-dns-logging --enable-logging --description=\"Enable DNS Logging\" --networks=VPC_NETWORK_NAME ``` The VPC_NETWORK_NAME can be one or more networks in comma-separated list  **Enable Logging for Existing DNS Policy**  For each VPC network that has an existing DNS policy that needs logging enabled: ``` gcloud dns policies update POLICY_NAME --enable-logging --networks=VPC_NETWORK_NAME ``` The VPC_NETWORK_NAME can be one or more networks in comma-separated list",
+          "AuditProcedure": "**From Google Cloud CLI**  1. List all VPCs networks in a project: ``` gcloud compute networks list --format=\"tablebox,title='All VPC Networks'(name:label='VPC Network Name')\" ``` 2. List all DNS policies, logging enablement, and associated VPC networks: ``` gcloud dns policies list --flatten=\"networks\" --format=\"tablebox,title='All DNS Policies By VPC Network'(name:label='Policy Name',enableLogging:label='Logging Enabled':align=center,networks.networkUrl.basename():label='VPC Network Name')\" ``` Each VPC Network should be associated with a DNS policy with logging enabled.",
+          "AdditionalInformation": "Additional Info - Only queries that reach a name server are logged. Cloud DNS resolvers cache responses, queries answered from caches, or direct queries to an external DNS resolver outside the VPC are not logged.",
           "References": "https://cloud.google.com/dns/docs/monitoring"
         }
       ]
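A compact way to check this from the CLI, shown here as an illustrative sketch rather than the benchmark's prescribed command, is to list each DNS policy with its logging flag and attached networks:

```
# List each DNS policy, whether query logging is enabled, and the networks it covers.
gcloud dns policies list --flatten="networks" \
  --format="table(name, enableLogging, networks.networkUrl.basename())"
```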
@@ -510,10 +510,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "Enabling retention policies on log buckets will protect logs stored in cloud storage buckets from being overwritten or accidentally deleted. It is recommended to set up retention policies and configure Bucket Lock on all storage buckets that are used as log sinks.",
-          "RationaleStatement": "Logs can be exported by creating one or more sinks that include a log filter and a destination. As Cloud Logging receives new log entries, they are compared against each sink. If a log entry matches a sink's filter, then a copy of the log entry is written to the destination.\n\nSinks can be configured to export logs in storage buckets. It is recommended to configure a data retention policy for these cloud storage buckets and to lock the data retention policy; thus permanently preventing the policy from being reduced or removed. This way, if the system is ever compromised by an attacker or a malicious insider who wants to cover their tracks, the activity logs are definitely preserved for forensics and security investigations.",
+          "RationaleStatement": "Logs can be exported by creating one or more sinks that include a log filter and a destination. As Cloud Logging receives new log entries, they are compared against each sink. If a log entry matches a sink's filter, then a copy of the log entry is written to the destination.  Sinks can be configured to export logs in storage buckets. It is recommended to configure a data retention policy for these cloud storage buckets and to lock the data retention policy; thus permanently preventing the policy from being reduced or removed. This way, if the system is ever compromised by an attacker or a malicious insider who wants to cover their tracks, the activity logs are definitely preserved for forensics and security investigations.",
           "ImpactStatement": "Locking a bucket is an irreversible action. Once you lock a bucket, you cannot remove the retention policy from the bucket or decrease the retention period for the policy. You will then have to wait for the retention period for all items within the bucket before you can delete them, and then the bucket.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. If sinks are **not** configured, first follow the instructions in the recommendation: `Ensure that sinks are configured for all Log entries`.\n\n2. For each storage bucket configured as a sink, go to the Cloud Storage browser at `https://console.cloud.google.com/storage/browser/`.\n\n3. Select the Bucket Lock tab near the top of the page.\n\n4. In the Retention policy entry, click the Add Duration link. The `Set a retention policy` dialog box appears.\n\n5. Enter the desired length of time for the retention period and click `Save policy`.\n\n6. Set the `Lock status` for this retention policy to `Locked`.\n\n**From Google Cloud CLI**\n\n1. To list all sinks destined to storage buckets:\n```\ngcloud logging sinks list --folder=FOLDER_ID | --organization=ORGANIZATION_ID | --project=PROJECT_ID\n```\n2. For each storage bucket listed above, set a retention policy and lock it:\n```\ngsutil retention set TIME_DURATION gs://BUCKET_NAME\ngsutil retention lock gs://BUCKET_NAME\n```\n\nFor more information, visit https://cloud.google.com/storage/docs/using-bucket-lock#set-policy(https://cloud.google.com/storage/docs/using-bucket-lock#set-policy).",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Open the Cloud Storage browser in the Google Cloud Console by visiting https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser).\n\n2. In the Column display options menu, make sure `Retention policy` is checked.\n\n3. In the list of buckets, the retention period of each bucket is found in the `Retention policy` column. If the retention policy is locked, an image of a lock appears directly to the left of the retention period.\n\n**From Google Cloud CLI**\n\n1. To list all sinks destined to storage buckets:\n```\ngcloud logging sinks list --folder=FOLDER_ID | --organization=ORGANIZATION_ID | --project=PROJECT_ID\n```\n2. For every storage bucket listed above, verify that retention policies and Bucket Lock are enabled:\n```\ngsutil retention get gs://BUCKET_NAME\n```\n\nFor more information, see https://cloud.google.com/storage/docs/using-bucket-lock#view-policy(https://cloud.google.com/storage/docs/using-bucket-lock#view-policy).",
+          "RemediationProcedure": "**From Google Cloud Console**  1. If sinks are **not** configured, first follow the instructions in the recommendation: `Ensure that sinks are configured for all Log entries`.  2. For each storage bucket configured as a sink, go to the Cloud Storage browser at `https://console.cloud.google.com/storage/browser/`.  3. Select the Bucket Lock tab near the top of the page.  4. In the Retention policy entry, click the Add Duration link. The `Set a retention policy` dialog box appears.  5. Enter the desired length of time for the retention period and click `Save policy`.  6. Set the `Lock status` for this retention policy to `Locked`.  **From Google Cloud CLI**  1. To list all sinks destined to storage buckets: ``` gcloud logging sinks list --folder=FOLDER_ID | --organization=ORGANIZATION_ID | --project=PROJECT_ID ``` 2. For each storage bucket listed above, set a retention policy and lock it: ``` gsutil retention set TIME_DURATION gs://BUCKET_NAME gsutil retention lock gs://BUCKET_NAME ```  For more information, visit https://cloud.google.com/storage/docs/using-bucket-lock#set-policy(https://cloud.google.com/storage/docs/using-bucket-lock#set-policy).",
+          "AuditProcedure": "**From Google Cloud Console**  1. Open the Cloud Storage browser in the Google Cloud Console by visiting https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser).  2. In the Column display options menu, make sure `Retention policy` is checked.  3. In the list of buckets, the retention period of each bucket is found in the `Retention policy` column. If the retention policy is locked, an image of a lock appears directly to the left of the retention period.  **From Google Cloud CLI**  1. To list all sinks destined to storage buckets: ``` gcloud logging sinks list --folder=FOLDER_ID | --organization=ORGANIZATION_ID | --project=PROJECT_ID ``` 2. For every storage bucket listed above, verify that retention policies and Bucket Lock are enabled: ``` gsutil retention get gs://BUCKET_NAME ```  For more information, see https://cloud.google.com/storage/docs/using-bucket-lock#view-policy(https://cloud.google.com/storage/docs/using-bucket-lock#view-policy).",
           "AdditionalInformation": "Caution: Locking a retention policy is an irreversible action. Once locked, you must delete the entire bucket in order to \"remove\" the bucket's retention policy. However, before you can delete the bucket, you must be able to delete all the objects in the bucket, which itself is only possible if all the objects have reached the retention period set by the retention policy.",
           "References": "https://cloud.google.com/storage/docs/bucket-lock:https://cloud.google.com/storage/docs/using-bucket-lock:https://cloud.google.com/storage/docs/bucket-lock"
         }
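A minimal sketch tying the two CLI steps together: set and lock a retention policy on a (hypothetical) log-sink bucket, then confirm the lock. The 90-day period and bucket name are examples only; choose a duration that fits your retention requirements, keeping in mind that locking is irreversible.

```
# Set a 90-day retention policy on the sink bucket and lock it (irreversible; prompts for confirmation).
gsutil retention set 90d gs://example-log-sink-bucket
gsutil retention lock gs://example-log-sink-bucket

# Confirm the policy and its locked status.
gsutil retention get gs://example-log-sink-bucket
```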
@@ -531,18 +531,18 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended to create a sink that will export copies of all the log entries. This can help aggregate logs from multiple projects and export them to a Security Information and Event Management (SIEM).",
-          "RationaleStatement": "Log entries are held in Cloud Logging. To aggregate logs, export them to a SIEM. To keep them longer, it is recommended to set up a log sink. Exporting involves writing a filter that selects the log entries to export, and choosing a destination in Cloud Storage, BigQuery, or Cloud Pub/Sub. The filter and destination are held in an object called a sink. To ensure all log entries are exported to sinks, ensure that there is no filter configured for a sink.\nSinks can be created in projects, organizations, folders, and billing accounts.",
+          "RationaleStatement": "Log entries are held in Cloud Logging. To aggregate logs, export them to a SIEM. To keep them longer, it is recommended to set up a log sink. Exporting involves writing a filter that selects the log entries to export, and choosing a destination in Cloud Storage, BigQuery, or Cloud Pub/Sub. The filter and destination are held in an object called a sink. To ensure all log entries are exported to sinks, ensure that there is no filter configured for a sink. Sinks can be created in projects, organizations, folders, and billing accounts.",
           "ImpactStatement": "There are no costs or limitations in Cloud Logging for exporting logs, but the export destinations charge for storing or transmitting the log data.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `Logs Router` by visiting https://console.cloud.google.com/logs/router(https://console.cloud.google.com/logs/router).\n\n2. Click on the arrow symbol with `CREATE SINK` text.\n\n3. Fill out the fields for `Sink details`.\n\n4. Choose Cloud Logging bucket in the Select sink destination drop down menu.\n\n5. Choose a log bucket in the next drop down menu.\n\n6. If an inclusion filter is not provided for this sink, all ingested logs will be routed to the destination provided above. This may result in higher than expected resource usage.\n\n7. Click `Create Sink`.\n\nFor more information, see https://cloud.google.com/logging/docs/export/configure_export_v2#dest-create(https://cloud.google.com/logging/docs/export/configure_export_v2#dest-create).\n\n**From Google Cloud CLI**\n\nTo create a sink to export all log entries in a Google Cloud Storage bucket: \n\n```\ngcloud logging sinks create  storage.googleapis.com/DESTINATION_BUCKET_NAME\n```\n\nSinks can be created for a folder or organization, which will include all projects.\n\n```\ngcloud logging sinks create  storage.googleapis.com/DESTINATION_BUCKET_NAME --include-children --folder=FOLDER_ID | --organization=ORGANIZATION_ID\n```\n\n**Note:** \n\n1. A sink created by the command-line above will export logs in storage buckets. However, sinks can be configured to export logs into BigQuery, or Cloud Pub/Sub, or `Custom Destination`.\n\n2. While creating a sink, the sink option `--log-filter` is not used to ensure the sink exports all log entries.\n\n3. A sink can be created at a folder or organization level that collects the logs of all the projects underneath bypassing the option `--include-children` in the gcloud command.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `Logs Router` by visiting https://console.cloud.google.com/logs/router(https://console.cloud.google.com/logs/router).\n\n2. For every sink, click the 3-dot button for Menu options and select `View sink details`.\n\n3. Ensure there is at least one sink with an `empty` Inclusion filter.\n\n4. Additionally, ensure that the resource configured as `Destination` exists.\n\n**From Google Cloud CLI**\n\n1. Ensure that a sink with an `empty filter` exists. List the sinks for the project, folder or organization. If sinks are configured at a folder or organization level, they do not need to be configured for each project:\n```\ngcloud logging sinks list --folder=FOLDER_ID | --organization=ORGANIZATION_ID | --project=PROJECT_ID\n```\n\nThe output should list at least one sink with an `empty filter`.\n\n2. Additionally, ensure that the resource configured as `Destination` exists.\n\nSee https://cloud.google.com/sdk/gcloud/reference/beta/logging/sinks/list(https://cloud.google.com/sdk/gcloud/reference/beta/logging/sinks/list) for more information.",
-          "AdditionalInformation": "For Command-Line Audit and Remediation, the sink destination of type `Cloud Storage Bucket` is considered. However, the destination could be configured to\n`Cloud Storage Bucket` or `BigQuery` or `Cloud Pub\\Sub` or `Custom Destination`. Command Line Interface commands would change accordingly.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `Logs Router` by visiting https://console.cloud.google.com/logs/router(https://console.cloud.google.com/logs/router).  2. Click on the arrow symbol with `CREATE SINK` text.  3. Fill out the fields for `Sink details`.  4. Choose Cloud Logging bucket in the Select sink destination drop down menu.  5. Choose a log bucket in the next drop down menu.  6. If an inclusion filter is not provided for this sink, all ingested logs will be routed to the destination provided above. This may result in higher than expected resource usage.  7. Click `Create Sink`.  For more information, see https://cloud.google.com/logging/docs/export/configure_export_v2#dest-create(https://cloud.google.com/logging/docs/export/configure_export_v2#dest-create).  **From Google Cloud CLI**  To create a sink to export all log entries in a Google Cloud Storage bucket:   ``` gcloud logging sinks create  storage.googleapis.com/DESTINATION_BUCKET_NAME ```  Sinks can be created for a folder or organization, which will include all projects.  ``` gcloud logging sinks create  storage.googleapis.com/DESTINATION_BUCKET_NAME --include-children --folder=FOLDER_ID | --organization=ORGANIZATION_ID ```  **Note:**   1. A sink created by the command-line above will export logs in storage buckets. However, sinks can be configured to export logs into BigQuery, or Cloud Pub/Sub, or `Custom Destination`.  2. While creating a sink, the sink option `--log-filter` is not used to ensure the sink exports all log entries.  3. A sink can be created at a folder or organization level that collects the logs of all the projects underneath bypassing the option `--include-children` in the gcloud command.",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `Logs Router` by visiting https://console.cloud.google.com/logs/router(https://console.cloud.google.com/logs/router).  2. For every sink, click the 3-dot button for Menu options and select `View sink details`.  3. Ensure there is at least one sink with an `empty` Inclusion filter.  4. Additionally, ensure that the resource configured as `Destination` exists.  **From Google Cloud CLI**  1. Ensure that a sink with an `empty filter` exists. List the sinks for the project, folder or organization. If sinks are configured at a folder or organization level, they do not need to be configured for each project: ``` gcloud logging sinks list --folder=FOLDER_ID | --organization=ORGANIZATION_ID | --project=PROJECT_ID ```  The output should list at least one sink with an `empty filter`.  2. Additionally, ensure that the resource configured as `Destination` exists.  See https://cloud.google.com/sdk/gcloud/reference/beta/logging/sinks/list(https://cloud.google.com/sdk/gcloud/reference/beta/logging/sinks/list) for more information.",
+          "AdditionalInformation": "For Command-Line Audit and Remediation, the sink destination of type `Cloud Storage Bucket` is considered. However, the destination could be configured to `Cloud Storage Bucket` or `BigQuery` or `Cloud Pub\\Sub` or `Custom Destination`. Command Line Interface commands would change accordingly.",
           "References": "https://cloud.google.com/logging/docs/reference/tools/gcloud-logging:https://cloud.google.com/logging/quotas:https://cloud.google.com/logging/docs/routing/overview:https://cloud.google.com/logging/docs/export/using_exported_logs:https://cloud.google.com/logging/docs/export/configure_export_v2:https://cloud.google.com/logging/docs/export/aggregated_exports:https://cloud.google.com/sdk/gcloud/reference/beta/logging/sinks/list"
         }
       ]
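As an illustrative check (assuming `jq` is available; this is not part of the benchmark's prescribed audit), sinks without an inclusion filter can be listed directly; `PROJECT_ID` is a placeholder.

```
# Print the names of sinks that have no filter, i.e. sinks that export all log entries.
gcloud logging sinks list --project=PROJECT_ID --format=json \
  | jq -r '.[] | select((.filter // "") == "") | .name'
```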
     },
     {
       "Id": "2.5",
-      "Description": "Google Cloud Platform (GCP) services write audit log entries to the Admin Activity and Data Access logs to help answer the questions of, \"who did what, where, and when?\" within GCP projects.\n\nCloud audit logging records information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by GCP services. Cloud audit logging provides a history of GCP API calls for an account, including API calls made via the console, SDKs, command-line tools, and other GCP services.",
+      "Description": "Google Cloud Platform (GCP) services write audit log entries to the Admin Activity and Data Access logs to help answer the questions of, \"who did what, where, and when?\" within GCP projects.  Cloud audit logging records information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by GCP services. Cloud audit logging provides a history of GCP API calls for an account, including API calls made via the console, SDKs, command-line tools, and other GCP services.",
       "Checks": [
         "logging_log_metric_filter_and_alert_for_audit_configuration_changes_enabled"
       ],
@@ -551,11 +551,11 @@
           "Section": "2. Logging and Monitoring",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "Google Cloud Platform (GCP) services write audit log entries to the Admin Activity and Data Access logs to help answer the questions of, \"who did what, where, and when?\" within GCP projects.\n\nCloud audit logging records information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by GCP services. Cloud audit logging provides a history of GCP API calls for an account, including API calls made via the console, SDKs, command-line tools, and other GCP services.",
-          "RationaleStatement": "Admin activity and data access logs produced by cloud audit logging enable security analysis, resource change tracking, and compliance auditing.\n\nConfiguring the metric filter and alerts for audit configuration changes ensures the recommended state of audit configuration is maintained so that all activities in the project are audit-able at any point in time.",
+          "Description": "Google Cloud Platform (GCP) services write audit log entries to the Admin Activity and Data Access logs to help answer the questions of, \"who did what, where, and when?\" within GCP projects.  Cloud audit logging records information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by GCP services. Cloud audit logging provides a history of GCP API calls for an account, including API calls made via the console, SDKs, command-line tools, and other GCP services.",
+          "RationaleStatement": "Admin activity and data access logs produced by cloud audit logging enable security analysis, resource change tracking, and compliance auditing.  Configuring the metric filter and alerts for audit configuration changes ensures the recommended state of audit configuration is maintained so that all activities in the project are audit-able at any point in time.",
           "ImpactStatement": "Enabling of logging may result in your project being charged for the additional logs usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Create the prescribed log metric:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".\n\n2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.\n\n3. Clear any text and add: \n```\nprotoPayload.methodName=\"SetIamPolicy\" AND\nprotoPayload.serviceData.policyDelta.auditConfigDeltas:*\n```\n4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.\n\n5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This will ensure that the log metric counts the number of log entries matching the user's advanced logs query.\n\n6. Click `Create Metric`. \n\n**Create a prescribed Alert Policy:** \n\n1. Identify the new metric the user just created, under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page opens.\n\n3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the organization. For example, a threshold of zero(0) for the most recent value will ensure that a notification is triggered for every owner change in the project:\n```\nSet `Aggregator` to `Count`\n\nSet `Configuration`:\n\n- Condition: above\n\n- Threshold: 0\n\n- For: most recent value\n```\n4. Configure the desired notifications channels in the section `Notifications`.\n\n5. Name the policy and click `Save`.\n\n**From Google Cloud CLI**\n\nCreate a prescribed Log Metric:\n- Use the command: gcloud beta logging metrics create \n- Reference for command usage: https://cloud.google.com/sdk/gcloud/reference/beta/logging/metrics/create\n(https://cloud.google.com/sdk/gcloud/reference/beta/logging/metrics/create)\nCreate prescribed Alert Policy \n- Use the command: gcloud alpha monitoring policies create\n- Reference for command usage: https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create(https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create)",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Ensure the prescribed log metric is present:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with the filter text:\n```\nprotoPayload.methodName=\"SetIamPolicy\" AND\nprotoPayload.serviceData.policyDelta.auditConfigDeltas:*\n```\n**Ensure that the prescribed alerting policy is present:**\n\n3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).\n\n4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of 0 for greater than zero(0) seconds`, means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.\n\n5. Ensure that appropriate notifications channels have been set up.\n\n**From Google Cloud CLI**\n\n**Ensure that the prescribed log metric is present:**\n\n1. List the log metrics:\n```\ngcloud beta logging metrics list --format json\n```\n2. Ensure that the output contains at least one metric with the filter set to: \n```\nprotoPayload.methodName=\"SetIamPolicy\" AND\nprotoPayload.serviceData.policyDelta.auditConfigDeltas:*\n```\n3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.\n\n**Ensure that the prescribed alerting policy is present:**\n\n4. List the alerting policies:\n```\ngcloud alpha monitoring policies list --format json\n```\n5. Ensure that the output contains at least one alert policy where:\n- `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"`\n- AND `enabled` is set to `true`",
+          "RemediationProcedure": "**From Google Cloud Console**  **Create the prescribed log metric:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".  2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.  3. Clear any text and add:  ``` protoPayload.methodName=\"SetIamPolicy\" AND protoPayload.serviceData.policyDelta.auditConfigDeltas:* ``` 4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.  5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This will ensure that the log metric counts the number of log entries matching the user's advanced logs query.  6. Click `Create Metric`.   **Create a prescribed Alert Policy:**   1. Identify the new metric the user just created, under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page opens.  3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the organization. For example, a threshold of zero(0) for the most recent value will ensure that a notification is triggered for every owner change in the project: ``` Set `Aggregator` to `Count`  Set `Configuration`:  - Condition: above  - Threshold: 0  - For: most recent value ``` 4. Configure the desired notifications channels in the section `Notifications`.  5. Name the policy and click `Save`.  **From Google Cloud CLI**  Create a prescribed Log Metric: - Use the command: gcloud beta logging metrics create  - Reference for command usage: https://cloud.google.com/sdk/gcloud/reference/beta/logging/metrics/create (https://cloud.google.com/sdk/gcloud/reference/beta/logging/metrics/create) Create prescribed Alert Policy  - Use the command: gcloud alpha monitoring policies create - Reference for command usage: https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create(https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create)",
+          "AuditProcedure": "**From Google Cloud Console**  **Ensure the prescribed log metric is present:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with the filter text: ``` protoPayload.methodName=\"SetIamPolicy\" AND protoPayload.serviceData.policyDelta.auditConfigDeltas:* ``` **Ensure that the prescribed alerting policy is present:**  3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).  4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of 0 for greater than zero(0) seconds`, means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.  5. Ensure that appropriate notifications channels have been set up.  **From Google Cloud CLI**  **Ensure that the prescribed log metric is present:**  1. List the log metrics: ``` gcloud beta logging metrics list --format json ``` 2. Ensure that the output contains at least one metric with the filter set to:  ``` protoPayload.methodName=\"SetIamPolicy\" AND protoPayload.serviceData.policyDelta.auditConfigDeltas:* ``` 3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.  **Ensure that the prescribed alerting policy is present:**  4. List the alerting policies: ``` gcloud alpha monitoring policies list --format json ``` 5. Ensure that the output contains at least one alert policy where: - `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"` - AND `enabled` is set to `true`",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/logging/docs/logs-based-metrics/:https://cloud.google.com/monitoring/custom-metrics/:https://cloud.google.com/monitoring/alerts/:https://cloud.google.com/logging/docs/reference/tools/gcloud-logging:https://cloud.google.com/logging/docs/audit/configure-data-access#getiampolicy-setiampolicy"
         }
@@ -575,8 +575,8 @@
           "Description": "It is recommended that a metric filter and alarm be established for Cloud Storage Bucket IAM changes.",
           "RationaleStatement": "Monitoring changes to cloud storage bucket permissions may reduce the time needed to detect and correct permissions on sensitive cloud storage buckets and objects inside the bucket.",
           "ImpactStatement": "Enabling of logging may result in your project being charged for the additional logs usage. These charges could be significant depending on the size of the organization.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Create the prescribed log metric:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".\n\n2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.\n\n3. Clear any text and add: \n```\nresource.type=\"gcs_bucket\" \nAND protoPayload.methodName=\"storage.setIamPermissions\"\n```\n4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.\n\n5. In the `Metric Editor` menu on right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the user's advanced logs query.\n\n6. Click `Create Metric`. \n\n**Create the prescribed Alert Policy:** \n\n1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page appears.\n\n3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value will ensure that a notification is triggered for every owner change in the project:\n```\nSet `Aggregator` to `Count`\n\nSet `Configuration`:\n\n- Condition: above\n\n- Threshold: 0\n\n- For: most recent value\n```\n\n4. Configure the desired notifications channels in the section `Notifications`.\n\n5. Name the policy and click `Save`.\n\n**From Google Cloud CLI**\n\nCreate the prescribed Log Metric:\n- Use the command: gcloud beta logging metrics create \n\nCreate the prescribed alert policy: \n- Use the command: gcloud alpha monitoring policies create",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Ensure the prescribed log metric is present:**\n\n1. For each project that contains cloud storage buckets, go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. In the `User-defined Metrics` section, ensure at least one metric `` is present with the filter text:\n\n```\nresource.type=\"gcs_bucket\"\nAND protoPayload.methodName=\"storage.setIamPermissions\"\n```\n\n**Ensure that the prescribed alerting policy is present:**\n\n3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).\n\n4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of 0 for greater than 0 seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.\n\n5. Ensure that the appropriate notifications channels have been set up.\n\n**From Google Cloud CLI**\n\n**Ensure that the prescribed log metric is present:**\n\n1. List the log metrics:\n```\ngcloud logging metrics list --format json\n```\n2. Ensure that the output contains at least one metric with the filter set to: \n```\nresource.type=gcs_bucket \nAND protoPayload.methodName=\"storage.setIamPermissions\"\n```\n\n3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.\n\n**Ensure the prescribed alerting policy is present:**\n\n4. List the alerting policies:\n```\ngcloud alpha monitoring policies list --format json\n```\n5. Ensure that the output contains an least one alert policy where:\n- `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"`\n- AND `enabled` is set to `true`",
+          "RemediationProcedure": "**From Google Cloud Console**  **Create the prescribed log metric:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".  2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.  3. Clear any text and add:  ``` resource.type=\"gcs_bucket\"  AND protoPayload.methodName=\"storage.setIamPermissions\" ``` 4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.  5. In the `Metric Editor` menu on right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the user's advanced logs query.  6. Click `Create Metric`.   **Create the prescribed Alert Policy:**   1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page appears.  3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value will ensure that a notification is triggered for every owner change in the project: ``` Set `Aggregator` to `Count`  Set `Configuration`:  - Condition: above  - Threshold: 0  - For: most recent value ```  4. Configure the desired notifications channels in the section `Notifications`.  5. Name the policy and click `Save`.  **From Google Cloud CLI**  Create the prescribed Log Metric: - Use the command: gcloud beta logging metrics create   Create the prescribed alert policy:  - Use the command: gcloud alpha monitoring policies create",
+          "AuditProcedure": "**From Google Cloud Console**  **Ensure the prescribed log metric is present:**  1. For each project that contains cloud storage buckets, go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. In the `User-defined Metrics` section, ensure at least one metric `` is present with the filter text:  ``` resource.type=\"gcs_bucket\" AND protoPayload.methodName=\"storage.setIamPermissions\" ```  **Ensure that the prescribed alerting policy is present:**  3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).  4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of 0 for greater than 0 seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.  5. Ensure that the appropriate notifications channels have been set up.  **From Google Cloud CLI**  **Ensure that the prescribed log metric is present:**  1. List the log metrics: ``` gcloud logging metrics list --format json ``` 2. Ensure that the output contains at least one metric with the filter set to:  ``` resource.type=gcs_bucket  AND protoPayload.methodName=\"storage.setIamPermissions\" ```  3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.  **Ensure the prescribed alerting policy is present:**  4. List the alerting policies: ``` gcloud alpha monitoring policies list --format json ``` 5. Ensure that the output contains an least one alert policy where: - `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"` - AND `enabled` is set to `true`",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/logging/docs/logs-based-metrics/:https://cloud.google.com/monitoring/custom-metrics/:https://cloud.google.com/monitoring/alerts/:https://cloud.google.com/logging/docs/reference/tools/gcloud-logging:https://cloud.google.com/storage/docs/overview:https://cloud.google.com/storage/docs/access-control/iam-roles"
         }
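The CLI remediation in the entry above names `gcloud logging metrics create` without showing its arguments. A minimal sketch, assuming an illustrative metric name of `gcs-iam-changes`:

```
# Illustrative sketch; the metric name and description are assumptions, the filter is the prescribed one.
gcloud logging metrics create gcs-iam-changes \
  --description="Counts Cloud Storage bucket IAM permission changes" \
  --log-filter='resource.type="gcs_bucket" AND protoPayload.methodName="storage.setIamPermissions"'
```

The metric then surfaces as `logging.googleapis.com/user/gcs-iam-changes`, which is the value the alert policy's condition filter should reference.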
@@ -594,10 +594,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that a metric filter and alarm be established for changes to Identity and Access Management (IAM) role creation, deletion and updating activities.",
-          "RationaleStatement": "Google Cloud IAM provides predefined roles that give granular access to specific Google Cloud Platform resources and prevent unwanted access to other resources. However, to cater to organization-specific needs, Cloud IAM also provides the ability to create custom roles. Project owners and administrators with the Organization Role Administrator role or the IAM Role Administrator role can create custom roles. \nMonitoring role creation, deletion and updating activities will help in identifying any over-privileged role at early stages.",
+          "RationaleStatement": "Google Cloud IAM provides predefined roles that give granular access to specific Google Cloud Platform resources and prevent unwanted access to other resources. However, to cater to organization-specific needs, Cloud IAM also provides the ability to create custom roles. Project owners and administrators with the Organization Role Administrator role or the IAM Role Administrator role can create custom roles.  Monitoring role creation, deletion and updating activities will help in identifying any over-privileged role at early stages.",
           "ImpactStatement": "Enabling of logging may result in your project being charged for the additional logs usage.",
-          "RemediationProcedure": "**From Console:**\n\n**Create the prescribed log metric:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".\n\n1. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.\n\n1. Clear any text and add: \n\n```\nresource.type=\"iam_role\" \nAND (protoPayload.methodName = \"google.iam.admin.v1.CreateRole\" \nOR protoPayload.methodName=\"google.iam.admin.v1.DeleteRole\" \nOR protoPayload.methodName=\"google.iam.admin.v1.UpdateRole\")\n```\n\n1. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.\n\n1. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the advanced logs query.\n\n1. Click `Create Metric`. \n\n**Create a prescribed Alert Policy:** \n\n1. Identify the new metric that was just created under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. Click the 3-dot icon in the rightmost column for the metric and select `Create alert from Metric`. A new page displays.\n\n3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value ensures that a notification is triggered for every owner change in the project:\n```\nSet `Aggregator` to `Count`\n\nSet `Configuration`:\n\n- Condition: above\n\n- Threshold: 0\n\n- For: most recent value\n```\n\n1. Configure the desired notification channels in the section `Notifications`.\n\n1. Name the policy and click `Save`.\n\n**From Google Cloud CLI**\n\nCreate the prescribed Log Metric:\n- Use the command: gcloud logging metrics create \n\nCreate the prescribed Alert Policy: \n- Use the command: gcloud alpha monitoring policies create ",
-          "AuditProcedure": "**From Console:**\n\n**Ensure that the prescribed log metric is present:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with filter text:\n\n```\nresource.type=\"iam_role\" \nAND (protoPayload.methodName=\"google.iam.admin.v1.CreateRole\" \nOR protoPayload.methodName=\"google.iam.admin.v1.DeleteRole\" \nOR protoPayload.methodName=\"google.iam.admin.v1.UpdateRole\")\n```\n\n**Ensure that the prescribed alerting policy is present:**\n\n3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).\n\n4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of zero(0) for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.\n\n5. Ensure that the appropriate notifications channels have been set up.\n\n**From Google Cloud CLI**\n\nEnsure that the prescribed log metric is present:\n\n1. List the log metrics:\n\n```\ngcloud logging metrics list --format json\n```\n2. Ensure that the output contains at least one metric with the filter set to:\n\n```\nresource.type=\"iam_role\"\nAND (protoPayload.methodName = \"google.iam.admin.v1.CreateRole\" OR\nprotoPayload.methodName=\"google.iam.admin.v1.DeleteRole\" OR\nprotoPayload.methodName=\"google.iam.admin.v1.UpdateRole\")\n```\n\n3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.\n\n**Ensure that the prescribed alerting policy is present:**\n\n4. List the alerting policies:\n```\ngcloud alpha monitoring policies list --format json\n```\n5. Ensure that the output contains an least one alert policy where:\n- `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"`\n- AND `enabled` is set to `true`.",
+          "RemediationProcedure": "**From Console:**  **Create the prescribed log metric:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".  1. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.  1. Clear any text and add:   ``` resource.type=\"iam_role\"  AND (protoPayload.methodName = \"google.iam.admin.v1.CreateRole\"  OR protoPayload.methodName=\"google.iam.admin.v1.DeleteRole\"  OR protoPayload.methodName=\"google.iam.admin.v1.UpdateRole\") ```  1. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.  1. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the advanced logs query.  1. Click `Create Metric`.   **Create a prescribed Alert Policy:**   1. Identify the new metric that was just created under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. Click the 3-dot icon in the rightmost column for the metric and select `Create alert from Metric`. A new page displays.  3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value ensures that a notification is triggered for every owner change in the project: ``` Set `Aggregator` to `Count`  Set `Configuration`:  - Condition: above  - Threshold: 0  - For: most recent value ```  1. Configure the desired notification channels in the section `Notifications`.  1. Name the policy and click `Save`.  **From Google Cloud CLI**  Create the prescribed Log Metric: - Use the command: gcloud logging metrics create   Create the prescribed Alert Policy:  - Use the command: gcloud alpha monitoring policies create ",
+          "AuditProcedure": "**From Console:**  **Ensure that the prescribed log metric is present:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with filter text:  ``` resource.type=\"iam_role\"  AND (protoPayload.methodName=\"google.iam.admin.v1.CreateRole\"  OR protoPayload.methodName=\"google.iam.admin.v1.DeleteRole\"  OR protoPayload.methodName=\"google.iam.admin.v1.UpdateRole\") ```  **Ensure that the prescribed alerting policy is present:**  3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).  4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of zero(0) for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.  5. Ensure that the appropriate notifications channels have been set up.  **From Google Cloud CLI**  Ensure that the prescribed log metric is present:  1. List the log metrics:  ``` gcloud logging metrics list --format json ``` 2. Ensure that the output contains at least one metric with the filter set to:  ``` resource.type=\"iam_role\" AND (protoPayload.methodName = \"google.iam.admin.v1.CreateRole\" OR protoPayload.methodName=\"google.iam.admin.v1.DeleteRole\" OR protoPayload.methodName=\"google.iam.admin.v1.UpdateRole\") ```  3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.  **Ensure that the prescribed alerting policy is present:**  4. List the alerting policies: ``` gcloud alpha monitoring policies list --format json ``` 5. Ensure that the output contains an least one alert policy where: - `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"` - AND `enabled` is set to `true`.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/logging/docs/logs-based-metrics/:https://cloud.google.com/monitoring/custom-metrics/:https://cloud.google.com/monitoring/alerts/:https://cloud.google.com/logging/docs/reference/tools/gcloud-logging:https://cloud.google.com/iam/docs/understanding-custom-roles"
         }
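As with the previous entry, the metric-creation half of the CLI remediation above can be sketched as a single command. The metric name `iam-custom-role-changes` is illustrative only:

```
# Illustrative sketch; only the --log-filter value comes from the benchmark text.
gcloud logging metrics create iam-custom-role-changes \
  --description="Counts custom role create, delete, and update events" \
  --log-filter='resource.type="iam_role" AND (protoPayload.methodName="google.iam.admin.v1.CreateRole" OR protoPayload.methodName="google.iam.admin.v1.DeleteRole" OR protoPayload.methodName="google.iam.admin.v1.UpdateRole")'
```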
@@ -615,10 +615,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that a metric filter and alarm be established for SQL instance configuration changes.",
-          "RationaleStatement": "Monitoring changes to SQL instance configuration changes may reduce the time needed to detect and correct misconfigurations done on the SQL server. \n\nBelow are a few of the configurable options which may the impact security posture of an SQL instance:\n\n- Enable auto backups and high availability: Misconfiguration may adversely impact business continuity, disaster recovery, and high availability \n\n- Authorize networks: Misconfiguration may increase exposure to untrusted networks",
+          "RationaleStatement": "Monitoring changes to SQL instance configuration changes may reduce the time needed to detect and correct misconfigurations done on the SQL server.   Below are a few of the configurable options which may the impact security posture of an SQL instance:  - Enable auto backups and high availability: Misconfiguration may adversely impact business continuity, disaster recovery, and high availability   - Authorize networks: Misconfiguration may increase exposure to untrusted networks",
           "ImpactStatement": "Enabling of logging may result in your project being charged for the additional logs usage. These charges could be significant depending on the size of the organization.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Create the prescribed Log Metric:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".\n\n2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.\n\n3. Clear any text and add: \n\n```\nprotoPayload.methodName=\"cloudsql.instances.update\"\n```\n\n4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.\n\n5. In the `Metric Editor` menu on right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the user's advanced logs query.\n\n6. Click `Create Metric`. \n\n**Create the prescribed alert policy:** \n\n1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page appears.\n\n3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value will ensure that a notification is triggered for every owner change in the user's project:\n```\nSet `Aggregator` to `Count`\n\nSet `Configuration`:\n\n- Condition: above\n\n- Threshold: 0\n\n- For: most recent value\n```\n\n4. Configure the desired notification channels in the section `Notifications`.\n\n5. Name the policy and click `Save`.\n\n**From Google Cloud CLI**\n\nCreate the prescribed log metric:\n- Use the command: gcloud logging metrics create \n\nCreate the prescribed alert policy: \n- Use the command: gcloud alpha monitoring policies create\n- Reference for command usage: https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create(https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create)",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Ensure the prescribed log metric is present:**\n\n1. For each project that contains Cloud SQL instances, go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with the filter text:\n\n```\nprotoPayload.methodName=\"cloudsql.instances.update\"\n```\n\n**Ensure that the prescribed alerting policy is present:**\n\n3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).\n\n4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of zero(0) for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.\n\n5. Ensure that the appropriate notifications channels have been set up.\n\n**From Google Cloud CLI**\n\n**Ensure that the prescribed log metric is present:**\n\n1. List the log metrics:\n```\ngcloud logging metrics list --format json\n```\n2. Ensure that the output contains at least one metric with the filter set to \n```\nprotoPayload.methodName=\"cloudsql.instances.update\"\n```\n\n3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.\n\n**Ensure that the prescribed alerting policy is present:**\n\n4. List the alerting policies:\n```\ngcloud alpha monitoring policies list --format json\n```\n5. Ensure that the output contains at least one alert policy where:\n- `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"`\n- AND `enabled` is set to `true`",
+          "RemediationProcedure": "**From Google Cloud Console**  **Create the prescribed Log Metric:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".  2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.  3. Clear any text and add:   ``` protoPayload.methodName=\"cloudsql.instances.update\" ```  4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.  5. In the `Metric Editor` menu on right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the user's advanced logs query.  6. Click `Create Metric`.   **Create the prescribed alert policy:**   1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page appears.  3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value will ensure that a notification is triggered for every owner change in the user's project: ``` Set `Aggregator` to `Count`  Set `Configuration`:  - Condition: above  - Threshold: 0  - For: most recent value ```  4. Configure the desired notification channels in the section `Notifications`.  5. Name the policy and click `Save`.  **From Google Cloud CLI**  Create the prescribed log metric: - Use the command: gcloud logging metrics create   Create the prescribed alert policy:  - Use the command: gcloud alpha monitoring policies create - Reference for command usage: https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create(https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create)",
+          "AuditProcedure": "**From Google Cloud Console**  **Ensure the prescribed log metric is present:**  1. For each project that contains Cloud SQL instances, go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with the filter text:  ``` protoPayload.methodName=\"cloudsql.instances.update\" ```  **Ensure that the prescribed alerting policy is present:**  3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).  4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of zero(0) for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.  5. Ensure that the appropriate notifications channels have been set up.  **From Google Cloud CLI**  **Ensure that the prescribed log metric is present:**  1. List the log metrics: ``` gcloud logging metrics list --format json ``` 2. Ensure that the output contains at least one metric with the filter set to  ``` protoPayload.methodName=\"cloudsql.instances.update\" ```  3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.  **Ensure that the prescribed alerting policy is present:**  4. List the alerting policies: ``` gcloud alpha monitoring policies list --format json ``` 5. Ensure that the output contains at least one alert policy where: - `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"` - AND `enabled` is set to `true`",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/logging/docs/logs-based-metrics/:https://cloud.google.com/monitoring/custom-metrics/:https://cloud.google.com/monitoring/alerts/:https://cloud.google.com/logging/docs/reference/tools/gcloud-logging:https://cloud.google.com/storage/docs/overview:https://cloud.google.com/sql/docs/:https://cloud.google.com/sql/docs/mysql/:https://cloud.google.com/sql/docs/postgres/"
         }
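The alerting half of the remediation above points at `gcloud alpha monitoring policies create`. One way to drive it is with `--policy-from-file`; in the sketch below the display names and the metric name `sql-instance-config-changes` are assumptions, and the field names follow the Cloud Monitoring AlertPolicy JSON representation.

```
# Illustrative sketch only; replace names with real values before use.
cat > sql-config-change-policy.json <<'EOF'
{
  "displayName": "Cloud SQL instance configuration changes",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Any matching log entry",
      "conditionThreshold": {
        "filter": "metric.type=\"logging.googleapis.com/user/sql-instance-config-changes\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0,
        "duration": "0s",
        "aggregations": [
          { "alignmentPeriod": "300s", "perSeriesAligner": "ALIGN_COUNT" }
        ]
      }
    }
  ]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=sql-config-change-policy.json
```

Notification channels are attached separately (a `notificationChannels` list in the same file), matching step 4 of the console procedure.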
@@ -636,10 +636,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network changes.",
-          "RationaleStatement": "It is possible to have more than one VPC within a project. In addition, it is also possible to create a peer connection between two VPCs enabling network traffic to route between VPCs. \n\nMonitoring changes to a VPC will help ensure VPC traffic flow is not getting impacted.",
+          "RationaleStatement": "It is possible to have more than one VPC within a project. In addition, it is also possible to create a peer connection between two VPCs enabling network traffic to route between VPCs.   Monitoring changes to a VPC will help ensure VPC traffic flow is not getting impacted.",
           "ImpactStatement": "Enabling of logging may result in your project being charged for the additional logs usage. These charges could be significant depending on the size of the organization.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Create the prescribed log metric:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".\n\n2. Click the down arrow symbol on `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.\n\n3. Clear any text and add: \n\n```\nresource.type=\"gce_network\" \nAND (protoPayload.methodName:\"compute.networks.insert\" \nOR protoPayload.methodName:\"compute.networks.patch\" \nOR protoPayload.methodName:\"compute.networks.delete\" \nOR protoPayload.methodName:\"compute.networks.removePeering\" \nOR protoPayload.methodName:\"compute.networks.addPeering\")\n```\n\n4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.\n\n5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the user's advanced logs query.\n\n6. Click `Create Metric`. \n\n**Create the prescribed alert policy:** \n\n1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page appears.\n\n3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of 0 for the most recent value will ensure that a notification is triggered for every owner change in the project:\n```\nSet `Aggregator` to `Count`\n\nSet `Configuration`:\n\n- Condition: above\n\n- Threshold: 0\n\n- For: most recent value\n```\n\n4. Configure the desired notification channels in the section `Notifications`.\n\n5. Name the policy and click `Save`.\n\n**From Google Cloud CLI**\n\nCreate the prescribed Log Metric:\n- Use the command: gcloud logging metrics create \n\nCreate the prescribed alert policy: \n- Use the command: gcloud alpha monitoring policies create",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Ensure the prescribed log metric is present:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. In the `User-defined Metrics` section, ensure at least one metric `` is present with filter text:\n\n```\nresource.type=\"gce_network\" \nAND (protoPayload.methodName:\"compute.networks.insert\" \nOR protoPayload.methodName:\"compute.networks.patch\" \nOR protoPayload.methodName:\"compute.networks.delete\" \nOR protoPayload.methodName:\"compute.networks.removePeering\" \nOR protoPayload.methodName:\"compute.networks.addPeering\")\n```\n\n**Ensure the prescribed alerting policy is present:**\n\n3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).\n\n4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of 0 for greater than 0 seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.\n\n5. Ensure that appropriate notification channels have been set up.\n\n**From Google Cloud CLI**\n\n**Ensure the log metric is present:**\n\n1. List the log metrics:\n```\ngcloud logging metrics list --format json\n```\n2. Ensure that the output contains at least one metric with filter set to: \n```\nresource.type=\"gce_network\" \nAND protoPayload.methodName=\"beta.compute.networks.insert\" \nOR protoPayload.methodName=\"beta.compute.networks.patch\" \nOR protoPayload.methodName=\"v1.compute.networks.delete\" \nOR protoPayload.methodName=\"v1.compute.networks.removePeering\" \nOR protoPayload.methodName=\"v1.compute.networks.addPeering\"\n```\n\n3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.\n\n**Ensure the prescribed alerting policy is present:**\n\n4. List the alerting policies:\n```\ngcloud alpha monitoring policies list --format json\n```\n5. Ensure that the output contains at least one alert policy where:\n- `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"`\n- AND `enabled` is set to `true`",
+          "RemediationProcedure": "**From Google Cloud Console**  **Create the prescribed log metric:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".  2. Click the down arrow symbol on `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.  3. Clear any text and add:   ``` resource.type=\"gce_network\"  AND (protoPayload.methodName:\"compute.networks.insert\"  OR protoPayload.methodName:\"compute.networks.patch\"  OR protoPayload.methodName:\"compute.networks.delete\"  OR protoPayload.methodName:\"compute.networks.removePeering\"  OR protoPayload.methodName:\"compute.networks.addPeering\") ```  4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.  5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the user's advanced logs query.  6. Click `Create Metric`.   **Create the prescribed alert policy:**   1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page appears.  3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of 0 for the most recent value will ensure that a notification is triggered for every owner change in the project: ``` Set `Aggregator` to `Count`  Set `Configuration`:  - Condition: above  - Threshold: 0  - For: most recent value ```  4. Configure the desired notification channels in the section `Notifications`.  5. Name the policy and click `Save`.  **From Google Cloud CLI**  Create the prescribed Log Metric: - Use the command: gcloud logging metrics create   Create the prescribed alert policy:  - Use the command: gcloud alpha monitoring policies create",
+          "AuditProcedure": "**From Google Cloud Console**  **Ensure the prescribed log metric is present:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. In the `User-defined Metrics` section, ensure at least one metric `` is present with filter text:  ``` resource.type=\"gce_network\"  AND (protoPayload.methodName:\"compute.networks.insert\"  OR protoPayload.methodName:\"compute.networks.patch\"  OR protoPayload.methodName:\"compute.networks.delete\"  OR protoPayload.methodName:\"compute.networks.removePeering\"  OR protoPayload.methodName:\"compute.networks.addPeering\") ```  **Ensure the prescribed alerting policy is present:**  3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).  4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of 0 for greater than 0 seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.  5. Ensure that appropriate notification channels have been set up.  **From Google Cloud CLI**  **Ensure the log metric is present:**  1. List the log metrics: ``` gcloud logging metrics list --format json ``` 2. Ensure that the output contains at least one metric with filter set to:  ``` resource.type=\"gce_network\"  AND protoPayload.methodName=\"beta.compute.networks.insert\"  OR protoPayload.methodName=\"beta.compute.networks.patch\"  OR protoPayload.methodName=\"v1.compute.networks.delete\"  OR protoPayload.methodName=\"v1.compute.networks.removePeering\"  OR protoPayload.methodName=\"v1.compute.networks.addPeering\" ```  3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.  **Ensure the prescribed alerting policy is present:**  4. List the alerting policies: ``` gcloud alpha monitoring policies list --format json ``` 5. Ensure that the output contains at least one alert policy where: - `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"` - AND `enabled` is set to `true`",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/logging/docs/logs-based-metrics/:https://cloud.google.com/monitoring/custom-metrics/:https://cloud.google.com/monitoring/alerts/:https://cloud.google.com/logging/docs/reference/tools/gcloud-logging:https://cloud.google.com/vpc/docs/overview"
         }
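A quick CLI spot-check for the entry above, assuming the metric was created under the illustrative name `vpc-network-changes`:

```
# Illustrative verification sketch; "vpc-network-changes" is an assumed metric name.
gcloud logging metrics describe vpc-network-changes --format="value(filter)"

# Scan the alert policies for a condition that references the user metric
gcloud alpha monitoring policies list --format json \
  | grep -n "logging.googleapis.com/user/vpc-network-changes"
```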
@@ -657,8 +657,8 @@
           "Description": "GCP Access Transparency provides audit logs for all actions that Google personnel take in your Google Cloud resources.",
           "RationaleStatement": "Controlling access to your information is one of the foundations of information security. Given that Google Employees do have access to your organizations' projects for support reasons, you should have logging in place to view who, when, and why your information is being accessed.",
           "ImpactStatement": "To use Access Transparency your organization will need to have at one of the following support level: Premium, Enterprise, Platinum, or Gold. There will be subscription costs associated with support, as well as increased storage costs for storing the logs. You will also not be able to turn Access Transparency off yourself, and you will need to submit a service request to Google Cloud Support.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Add privileges to enable Access Transparency**\n\n1. From the Google Cloud Home, within the project you wish to check, click on the Navigation hamburger menu in the top left. Hover over the 'IAM and Admin'. Select `IAM` in the top of the column that opens. \n\n2. Click the blue button the says `+add` at the top of the screen.\n\n3. In the `principals` field, select a user or group by typing in their associated email address.\n\n4. Click on the `role` field to expand it. In the filter field enter `Access Transparency Admin` and select it.\n\n5. Click `save`.\n\n**Verify that the Google Cloud project is associated with a billing account**\n\n1. From the Google Cloud Home, click on the Navigation hamburger menu in the top left. Select `Billing`.\n\n2. If you see `This project is not associated with a billing account` you will need to enter billing information or switch to a project with a billing account.\n\n**Enable Access Transparency**\n\n1. From the Google Cloud Home, click on the Navigation hamburger menu in the top left. Hover over the IAM & Admin Menu. Select `settings` in the middle of the column that opens.\n\n2. Click the blue button labeled Enable `Access Transparency for Organization`",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Determine if Access Transparency is Enabled**\n\n1. From the Google Cloud Home, click on the Navigation hamburger menu in the top left. Hover over the IAM & Admin Menu. Select `settings` in the middle of the column that opens.\n\n2. The status will be under the heading `Access Transparency`. Status should be `Enabled`",
+          "RemediationProcedure": "**From Google Cloud Console**  **Add privileges to enable Access Transparency**  1. From the Google Cloud Home, within the project you wish to check, click on the Navigation hamburger menu in the top left. Hover over the 'IAM and Admin'. Select `IAM` in the top of the column that opens.   2. Click the blue button the says `+add` at the top of the screen.  3. In the `principals` field, select a user or group by typing in their associated email address.  4. Click on the `role` field to expand it. In the filter field enter `Access Transparency Admin` and select it.  5. Click `save`.  **Verify that the Google Cloud project is associated with a billing account**  1. From the Google Cloud Home, click on the Navigation hamburger menu in the top left. Select `Billing`.  2. If you see `This project is not associated with a billing account` you will need to enter billing information or switch to a project with a billing account.  **Enable Access Transparency**  1. From the Google Cloud Home, click on the Navigation hamburger menu in the top left. Hover over the IAM & Admin Menu. Select `settings` in the middle of the column that opens.  2. Click the blue button labeled Enable `Access Transparency for Organization`",
+          "AuditProcedure": "**From Google Cloud Console**  **Determine if Access Transparency is Enabled**  1. From the Google Cloud Home, click on the Navigation hamburger menu in the top left. Hover over the IAM & Admin Menu. Select `settings` in the middle of the column that opens.  2. The status will be under the heading `Access Transparency`. Status should be `Enabled`",
           "AdditionalInformation": "To enable Access Transparency for your Google Cloud organization, your Google Cloud organization must have one of the following customer support levels: Premium, Enterprise, Platinum, or Gold.",
           "References": "https://cloud.google.com/cloud-provider-access-management/access-transparency/docs/overview:https://cloud.google.com/cloud-provider-access-management/access-transparency/docs/enable:https://cloud.google.com/cloud-provider-access-management/access-transparency/docs/reading-logs:https://cloud.google.com/cloud-provider-access-management/access-transparency/docs/reading-logs#justification_reason_codes:https://cloud.google.com/cloud-provider-access-management/access-transparency/docs/supported-services"
         }
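The role grant in steps 1-5 of the remediation above can also be performed from the CLI. A minimal sketch, assuming the Access Transparency Admin role ID is `roles/axt.admin` and using placeholder project and user values:

```
# Illustrative sketch; PROJECT_ID and USER_EMAIL are placeholders, and the role ID is an assumption.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/axt.admin"

# Optional: confirm the project is linked to a billing account (second part of the procedure)
gcloud billing projects describe PROJECT_ID --format="value(billingEnabled)"
```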
@@ -678,8 +678,8 @@
           "Description": "It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) Network Firewall rule changes.",
           "RationaleStatement": "Monitoring for Create or Update Firewall rule events gives insight to network access changes and may reduce the time it takes to detect suspicious activity.",
           "ImpactStatement": "Enabling of logging may result in your project being charged for the additional logs usage. These charges could be significant depending on the size of the organization.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Create the prescribed log metric:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".\n\n2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.\n\n3. Clear any text and add: \n\n```\nresource.type=\"gce_firewall_rule\" \nAND (protoPayload.methodName:\"compute.firewalls.patch\" \nOR protoPayload.methodName:\"compute.firewalls.insert\"\nOR protoPayload.methodName:\"compute.firewalls.delete\")\n```\n\n4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.\n\n5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the advanced logs query.\n\n6. Click `Create Metric`. \n\n**Create the prescribed Alert Policy:** \n\n1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page displays.\n\n3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value ensures that a notification is triggered for every owner change in the project:\n```\nSet `Aggregator` to `Count`\n\nSet `Configuration`:\n\n- Condition: above\n\n- Threshold: 0\n\n- For: most recent value\n```\n\n4. Configure the desired notifications channels in the section `Notifications`.\n\n5. Name the policy and click `Save`.\n\n**From Google Cloud CLI**\n\nCreate the prescribed Log Metric\n- Use the command: gcloud logging metrics create \n\nCreate the prescribed alert policy: \n- Use the command: gcloud alpha monitoring policies create",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Ensure that the prescribed log metric is present:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. In the `User-defined Metrics` section, ensure at least one metric `` is present with this filter text:\n\n```\nresource.type=\"gce_firewall_rule\" \nAND (protoPayload.methodName:\"compute.firewalls.patch\" \nOR protoPayload.methodName:\"compute.firewalls.insert\"\nOR protoPayload.methodName:\"compute.firewalls.delete\")\n```\n\n**Ensure that the prescribed alerting policy is present:**\n\n3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).\n\n4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of zero(0) for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.\n\n5. Ensure that appropriate notification channels have been set up.\n\n**From Google Cloud CLI**\n\n**Ensure that the prescribed log metric is present:**\n\n1. List the log metrics:\n```\ngcloud logging metrics list --format json\n```\n2. Ensure that the output contains at least one metric with the filter set to: \n\n```\nresource.type=\"gce_firewall_rule\" \nAND (protoPayload.methodName:\"compute.firewalls.patch\" \nOR protoPayload.methodName:\"compute.firewalls.insert\"\nOR protoPayload.methodName:\"compute.firewalls.delete\")\n```\n\n3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.\n\n**Ensure that the prescribed alerting policy is present:**\n\n4. List the alerting policies:\n```\ngcloud alpha monitoring policies list --format json\n```\n5. Ensure that the output contains an least one alert policy where:\n- `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"`\n- AND `enabled` is set to `true`",
+          "RemediationProcedure": "**From Google Cloud Console**  **Create the prescribed log metric:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".  2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`.  3. Clear any text and add:   ``` resource.type=\"gce_firewall_rule\"  AND (protoPayload.methodName:\"compute.firewalls.patch\"  OR protoPayload.methodName:\"compute.firewalls.insert\" OR protoPayload.methodName:\"compute.firewalls.delete\") ```  4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.  5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the advanced logs query.  6. Click `Create Metric`.   **Create the prescribed Alert Policy:**   1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page displays.  3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value ensures that a notification is triggered for every owner change in the project: ``` Set `Aggregator` to `Count`  Set `Configuration`:  - Condition: above  - Threshold: 0  - For: most recent value ```  4. Configure the desired notifications channels in the section `Notifications`.  5. Name the policy and click `Save`.  **From Google Cloud CLI**  Create the prescribed Log Metric - Use the command: gcloud logging metrics create   Create the prescribed alert policy:  - Use the command: gcloud alpha monitoring policies create",
+          "AuditProcedure": "**From Google Cloud Console**  **Ensure that the prescribed log metric is present:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. In the `User-defined Metrics` section, ensure at least one metric `` is present with this filter text:  ``` resource.type=\"gce_firewall_rule\"  AND (protoPayload.methodName:\"compute.firewalls.patch\"  OR protoPayload.methodName:\"compute.firewalls.insert\" OR protoPayload.methodName:\"compute.firewalls.delete\") ```  **Ensure that the prescribed alerting policy is present:**  3. Go to `Alerting` by visiting https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).  4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of zero(0) for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alerting thresholds make sense for the user's organization.  5. Ensure that appropriate notification channels have been set up.  **From Google Cloud CLI**  **Ensure that the prescribed log metric is present:**  1. List the log metrics: ``` gcloud logging metrics list --format json ``` 2. Ensure that the output contains at least one metric with the filter set to:   ``` resource.type=\"gce_firewall_rule\"  AND (protoPayload.methodName:\"compute.firewalls.patch\"  OR protoPayload.methodName:\"compute.firewalls.insert\" OR protoPayload.methodName:\"compute.firewalls.delete\") ```  3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.  **Ensure that the prescribed alerting policy is present:**  4. List the alerting policies: ``` gcloud alpha monitoring policies list --format json ``` 5. Ensure that the output contains an least one alert policy where: - `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"` - AND `enabled` is set to `true`",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/logging/docs/logs-based-metrics/:https://cloud.google.com/monitoring/custom-metrics/:https://cloud.google.com/monitoring/alerts/:https://cloud.google.com/logging/docs/reference/tools/gcloud-logging:https://cloud.google.com/vpc/docs/firewalls"
         }
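The metric-creation half of the CLI remediation above, sketched with an illustrative metric name of `firewall-rule-changes`:

```
# Illustrative sketch; only the --log-filter value comes from the benchmark text.
gcloud logging metrics create firewall-rule-changes \
  --description="Counts VPC firewall rule insert, patch, and delete events" \
  --log-filter='resource.type="gce_firewall_rule" AND (protoPayload.methodName:"compute.firewalls.patch" OR protoPayload.methodName:"compute.firewalls.insert" OR protoPayload.methodName:"compute.firewalls.delete")'
```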
@@ -697,10 +697,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that a metric filter and alarm be established for Virtual Private Cloud (VPC) network route changes.",
-          "RationaleStatement": "Google Cloud Platform (GCP) routes define the paths network traffic takes from a VM instance to another destination. The other destination can be inside the organization VPC network (such as another VM) or outside of it. Every route consists of a destination and a next hop. Traffic whose destination IP is within the destination range is sent to the next hop for delivery. \n\nMonitoring changes to route tables will help ensure that all VPC traffic flows through an expected path.",
+          "RationaleStatement": "Google Cloud Platform (GCP) routes define the paths network traffic takes from a VM instance to another destination. The other destination can be inside the organization VPC network (such as another VM) or outside of it. Every route consists of a destination and a next hop. Traffic whose destination IP is within the destination range is sent to the next hop for delivery.   Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path.",
           "ImpactStatement": "Enabling of logging may result in your project being charged for the additional logs usage. These charges could be significant depending on the size of the organization.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Create the prescribed Log Metric:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".\n\n2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`\n\n3. Clear any text and add: \n\n```\nresource.type=\"gce_route\" \nAND (protoPayload.methodName:\"compute.routes.delete\" \nOR protoPayload.methodName:\"compute.routes.insert\")\n```\n\n4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.\n\n5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the user's advanced logs query.\n\n6. Click `Create Metric`. \n\n**Create the prescribed alert policy:** \n\n1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page displays.\n\n3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value ensures that a notification is triggered for every owner change in the project:\n```\nSet `Aggregator` to `Count`\n\nSet `Configuration`:\n\n- Condition: above\n\n- Threshold: 0\n\n- For: most recent value\n```\n\n4. Configure the desired notification channels in the section `Notifications`.\n\n5. Name the policy and click `Save`.\n\n**From Google Cloud CLI**\n\nCreate the prescribed Log Metric:\n- Use the command: gcloud logging metrics create \n\nCreate the prescribed the alert policy: \n- Use the command: gcloud alpha monitoring policies create",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Ensure that the prescribed Log metric is present:**\n\n1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).\n\n2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with the filter text:\n\n```\nresource.type=\"gce_route\" \nAND (protoPayload.methodName:\"compute.routes.delete\" \nOR protoPayload.methodName:\"compute.routes.insert\")\n```\n\n**Ensure the prescribed alerting policy is present:**\n\n3. Go to `Alerting` by visiting: https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).\n\n4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of 0 for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alert thresholds make sense for the user's organization.\n\n5. Ensure that the appropriate notification channels have been set up.\n\n**From Google Cloud CLI**\n\n**Ensure the prescribed log metric is present:**\n\n1. List the log metrics:\n```\ngcloud logging metrics list --format json\n```\n2. Ensure that the output contains at least one metric with the filter set to: \n\n```\nresource.type=\"gce_route\" \nAND (protoPayload.methodName:\"compute.routes.delete\" \nOR protoPayload.methodName:\"compute.routes.insert\")\n```\n\n3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.\n\n**Ensure that the prescribed alerting policy is present:**\n\n4. List the alerting policies:\n```\ngcloud alpha monitoring policies list --format json\n```\n5. Ensure that the output contains an least one alert policy where:\n- `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"`\n- AND `enabled` is set to `true`",
+          "RemediationProcedure": "**From Google Cloud Console**  **Create the prescribed Log Metric:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics) and click \"CREATE METRIC\".  2. Click the down arrow symbol on the `Filter Bar` at the rightmost corner and select `Convert to Advanced Filter`  3. Clear any text and add:   ``` resource.type=\"gce_route\"  AND (protoPayload.methodName:\"compute.routes.delete\"  OR protoPayload.methodName:\"compute.routes.insert\") ```  4. Click `Submit Filter`. Display logs appear based on the filter text entered by the user.  5. In the `Metric Editor` menu on the right, fill out the name field. Set `Units` to `1` (default) and `Type` to `Counter`. This ensures that the log metric counts the number of log entries matching the user's advanced logs query.  6. Click `Create Metric`.   **Create the prescribed alert policy:**   1. Identify the newly created metric under the section `User-defined Metrics` at https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. Click the 3-dot icon in the rightmost column for the new metric and select `Create alert from Metric`. A new page displays.  3. Fill out the alert policy configuration and click `Save`. Choose the alerting threshold and configuration that makes sense for the user's organization. For example, a threshold of zero(0) for the most recent value ensures that a notification is triggered for every owner change in the project: ``` Set `Aggregator` to `Count`  Set `Configuration`:  - Condition: above  - Threshold: 0  - For: most recent value ```  4. Configure the desired notification channels in the section `Notifications`.  5. Name the policy and click `Save`.  **From Google Cloud CLI**  Create the prescribed Log Metric: - Use the command: gcloud logging metrics create   Create the prescribed the alert policy:  - Use the command: gcloud alpha monitoring policies create",
+          "AuditProcedure": "**From Google Cloud Console**  **Ensure that the prescribed Log metric is present:**  1. Go to `Logging/Logs-based Metrics` by visiting https://console.cloud.google.com/logs/metrics(https://console.cloud.google.com/logs/metrics).  2. In the `User-defined Metrics` section, ensure that at least one metric `` is present with the filter text:  ``` resource.type=\"gce_route\"  AND (protoPayload.methodName:\"compute.routes.delete\"  OR protoPayload.methodName:\"compute.routes.insert\") ```  **Ensure the prescribed alerting policy is present:**  3. Go to `Alerting` by visiting: https://console.cloud.google.com/monitoring/alerting(https://console.cloud.google.com/monitoring/alerting).  4. Under the `Policies` section, ensure that at least one alert policy exists for the log metric above. Clicking on the policy should show that it is configured with a condition. For example, `Violates when: Any logging.googleapis.com/user/ stream` `is above a threshold of 0 for greater than zero(0) seconds` means that the alert will trigger for any new owner change. Verify that the chosen alert thresholds make sense for the user's organization.  5. Ensure that the appropriate notification channels have been set up.  **From Google Cloud CLI**  **Ensure the prescribed log metric is present:**  1. List the log metrics: ``` gcloud logging metrics list --format json ``` 2. Ensure that the output contains at least one metric with the filter set to:   ``` resource.type=\"gce_route\"  AND (protoPayload.methodName:\"compute.routes.delete\"  OR protoPayload.methodName:\"compute.routes.insert\") ```  3. Note the value of the property `metricDescriptor.type` for the identified metric, in the format `logging.googleapis.com/user/`.  **Ensure that the prescribed alerting policy is present:**  4. List the alerting policies: ``` gcloud alpha monitoring policies list --format json ``` 5. Ensure that the output contains an least one alert policy where: - `conditions.conditionThreshold.filter` is set to `metric.type=\\\"logging.googleapis.com/user/\\\"` - AND `enabled` is set to `true`",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/logging/docs/logs-based-metrics/:https://cloud.google.com/monitoring/custom-metrics/:https://cloud.google.com/monitoring/alerts/:https://cloud.google.com/logging/docs/reference/tools/gcloud-logging:https://cloud.google.com/storage/docs/access-control/iam:https://cloud.google.com/sdk/gcloud/reference/beta/logging/metrics/create:https://cloud.google.com/sdk/gcloud/reference/alpha/monitoring/policies/create"
         }
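The CLI portion of the remediation above only names the two gcloud commands. The following is a minimal sketch of how they might be combined; the metric name `vpc-route-changes` and the `policy.json` file are illustrative placeholders, not values prescribed by the benchmark.

```
#!/usr/bin/env bash
# Sketch only: METRIC_NAME and policy.json are illustrative placeholders.
set -euo pipefail

METRIC_NAME="vpc-route-changes"

# Create the log-based metric with the filter prescribed above.
gcloud logging metrics create "${METRIC_NAME}" \
  --description="Counts VPC route insertions and deletions" \
  --log-filter='resource.type="gce_route" AND (protoPayload.methodName:"compute.routes.delete" OR protoPayload.methodName:"compute.routes.insert")'

# Create the alert policy from a policy file; its condition filter should
# reference metric.type="logging.googleapis.com/user/vpc-route-changes".
gcloud alpha monitoring policies create --policy-from-file=policy.json
```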
@@ -720,8 +720,8 @@
           "Description": "In order to prevent use of legacy networks, a project should not have a legacy network configured. As of now, Legacy Networks are gradually being phased out, and you can no longer create projects with them. This recommendation is to check older projects to ensure that they are not using Legacy Networks.",
           "RationaleStatement": "Legacy networks have a single network IPv4 prefix range and a single gateway IP address for the whole network. The network is global in scope and spans all cloud regions. Subnetworks cannot be created in a legacy network and are unable to switch from legacy to auto or custom subnet networks. Legacy networks can have an impact for high network traffic projects and are subject to a single point of contention or failure.",
           "ImpactStatement": "None.",
-          "RemediationProcedure": "**From Google Cloud CLI**\n\nFor each Google Cloud Platform project,\n\n1. Follow the documentation and create a non-legacy network suitable for the organization's requirements.\n\n2. Follow the documentation and delete the networks in the `legacy` mode.",
-          "AuditProcedure": "**From Google Cloud CLI**\n\nFor each Google Cloud Platform project,\n\n1. Set the project name in the Google Cloud Shell:\n```\n\ngcloud config set project  \n```\n2. List the networks configured in that project:\n```\n\ngcloud compute networks list \n```\nNone of the listed networks should be in the `legacy` mode.",
+          "RemediationProcedure": "**From Google Cloud CLI**  For each Google Cloud Platform project,  1. Follow the documentation and create a non-legacy network suitable for the organization's requirements.  2. Follow the documentation and delete the networks in the `legacy` mode.",
+          "AuditProcedure": "**From Google Cloud CLI**  For each Google Cloud Platform project,  1. Set the project name in the Google Cloud Shell: ```  gcloud config set project   ``` 2. List the networks configured in that project: ```  gcloud compute networks list  ``` None of the listed networks should be in the `legacy` mode.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/vpc/docs/using-legacy#creating_a_legacy_network:https://cloud.google.com/vpc/docs/using-legacy#deleting_a_legacy_network"
         }
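For the legacy-network audit above, a scripted variant might look like the sketch below. It assumes legacy networks can be recognized by the presence of the deprecated `IPv4Range` field on the network resource; `my-project` is a placeholder project ID, and the field name should be verified against current API documentation before relying on it.

```
#!/usr/bin/env bash
# Sketch: assumes legacy networks still expose the deprecated IPv4Range field.
set -euo pipefail

PROJECT_ID="my-project"   # placeholder

gcloud config set project "${PROJECT_ID}"

# Flag any network that looks like a legacy (non-subnet) network.
gcloud compute networks list --format=json \
  | jq -r '.[] | select(has("IPv4Range")) | "LEGACY network found: \(.name)"'
```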
@@ -741,8 +741,8 @@
           "Description": "Cloud Domain Name System (DNS) is a fast, reliable and cost-effective domain name system that powers millions of domains on the internet. Domain Name System Security Extensions (DNSSEC) in Cloud DNS enables domain owners to take easy steps to protect their domains against DNS hijacking and man-in-the-middle and other attacks.",
           "RationaleStatement": "Domain Name System Security Extensions (DNSSEC) adds security to the DNS protocol by enabling DNS responses to be validated. Having a trustworthy DNS that translates a domain name like www.example.com into its associated IP address is an increasingly important building block of today’s web-based applications. Attackers can hijack this process of domain/IP lookup and redirect users to a malicious site through DNS hijacking and man-in-the-middle attacks. DNSSEC helps mitigate the risk of such attacks by cryptographically signing DNS records. As a result, it prevents attackers from issuing fake DNS responses that may misdirect browsers to nefarious websites.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `Cloud DNS` by visiting https://console.cloud.google.com/net-services/dns/zones(https://console.cloud.google.com/net-services/dns/zones).\n2. For each zone of `Type` `Public`, set `DNSSEC` to `On`.\n\n**From Google Cloud CLI**\n\nUse the below command to enable `DNSSEC` for Cloud DNS Zone Name.\n```\ngcloud dns managed-zones update ZONE_NAME --dnssec-state on\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `Cloud DNS` by visiting https://console.cloud.google.com/net-services/dns/zones(https://console.cloud.google.com/net-services/dns/zones).\n2. For each zone of `Type` `Public`, ensure that `DNSSEC` is set to `On`.\n\n**From Google Cloud CLI**\n\n1. List all the Managed Zones in a project:\n```\ngcloud dns managed-zones list\n```\n\n2. For each zone of `VISIBILITY` `public`, get its metadata: \n\n```\ngcloud dns managed-zones describe ZONE_NAME\n```\n\n3. Ensure that `dnssecConfig.state` property is `on`.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `Cloud DNS` by visiting https://console.cloud.google.com/net-services/dns/zones(https://console.cloud.google.com/net-services/dns/zones). 2. For each zone of `Type` `Public`, set `DNSSEC` to `On`.  **From Google Cloud CLI**  Use the below command to enable `DNSSEC` for Cloud DNS Zone Name. ``` gcloud dns managed-zones update ZONE_NAME --dnssec-state on ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `Cloud DNS` by visiting https://console.cloud.google.com/net-services/dns/zones(https://console.cloud.google.com/net-services/dns/zones). 2. For each zone of `Type` `Public`, ensure that `DNSSEC` is set to `On`.  **From Google Cloud CLI**  1. List all the Managed Zones in a project: ``` gcloud dns managed-zones list ```  2. For each zone of `VISIBILITY` `public`, get its metadata:   ``` gcloud dns managed-zones describe ZONE_NAME ```  3. Ensure that `dnssecConfig.state` property is `on`.",
           "AdditionalInformation": "",
           "References": "https://cloudplatform.googleblog.com/2017/11/DNSSEC-now-available-in-Cloud-DNS.html:https://cloud.google.com/dns/dnssec-config#enabling:https://cloud.google.com/dns/dnssec"
         }
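A scripted form of the DNSSEC audit above might look like this sketch, which iterates over all public zones in the current project; nothing beyond the documented `gcloud dns managed-zones` commands is assumed.

```
#!/usr/bin/env bash
# Sketch: report every public managed zone whose DNSSEC state is not "on".
set -euo pipefail

for zone in $(gcloud dns managed-zones list \
                --filter="visibility=public" --format="value(name)"); do
  state=$(gcloud dns managed-zones describe "${zone}" \
            --format="value(dnssecConfig.state)")
  if [ "${state}" != "on" ]; then
    echo "DNSSEC is not enabled for zone: ${zone}"
  fi
done
```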
@@ -750,7 +750,7 @@
     },
     {
       "Id": "3.7",
-      "Description": "GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.\n\nFirewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided.",
+      "Description": "GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.  Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided.",
       "Checks": [
         "compute_firewall_rdp_access_from_the_internet_allowed"
       ],
@@ -759,11 +759,11 @@
           "Section": "3. Networking",
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
-          "Description": "GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.\n\nFirewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided.",
-          "RationaleStatement": "GCP `Firewall Rules` within a `VPC Network`. These rules apply to outgoing (egress) traffic from instances and incoming (ingress) traffic to instances in the network. Egress and ingress traffic flows are controlled even if the traffic stays within the network (for example, instance-to-instance communication).\nFor an instance to have outgoing Internet access, the network must have a valid Internet gateway route or custom route whose destination IP is specified. This route simply defines the path to the Internet, to avoid the most general `(0.0.0.0/0)` destination `IP Range` specified from the Internet through `RDP` with the default `Port 3389`. Generic access from the Internet to a specific IP Range should be restricted.",
+          "Description": "GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.  Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the Internet to a VPC or VM instance using `RDP` on `Port 3389` can be avoided.",
+          "RationaleStatement": "GCP `Firewall Rules` within a `VPC Network`. These rules apply to outgoing (egress) traffic from instances and incoming (ingress) traffic to instances in the network. Egress and ingress traffic flows are controlled even if the traffic stays within the network (for example, instance-to-instance communication). For an instance to have outgoing Internet access, the network must have a valid Internet gateway route or custom route whose destination IP is specified. This route simply defines the path to the Internet, to avoid the most general `(0.0.0.0/0)` destination `IP Range` specified from the Internet through `RDP` with the default `Port 3389`. Generic access from the Internet to a specific IP Range should be restricted.",
           "ImpactStatement": "All Remote Desktop Protocol (RDP) connections from outside of the network to the concerned VPC(s) will be blocked. There could be a business need where secure shell access is required from outside of the network to access resources associated with the VPC. In that case, specific source IP(s) should be mentioned in firewall rules to white-list access to RDP port for the concerned VPC(s).",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `VPC Network`.\n2. Go to the `Firewall Rules`.\n3. Click the `Firewall Rule` to be modified.\n4. Click `Edit`.\n5. Modify `Source IP ranges` to specific `IP`.\n6. Click `Save`.\n\n**From Google Cloud CLI**\n\n1.Update RDP Firewall rule with new `SOURCE_RANGE` from the below command:\n\n gcloud compute firewall-rules update FirewallName --allow=PROTOCOL:PORT-PORT,... --source-ranges=CIDR_RANGE,...",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `VPC network`.\n2. Go to the `Firewall Rules`.\n3. Ensure `Port` is not equal to `3389` and `Action` is not `Allow`.\n4. Ensure `IP Ranges` is not equal to `0.0.0.0/0` under `Source filters`.\n\n**From Google Cloud CLI**\n\n gcloud compute firewall-rules list --format=table'(name,direction,sourceRanges,allowed.ports)'\n\nEnsure that there is no rule matching the below criteria:\n- `SOURCE_RANGES` is `0.0.0.0/0`\n- AND `DIRECTION` is `INGRESS`\n- AND IPProtocol is `TCP` or `ALL`\n- AND `PORTS` is set to `3389` or `range containing 3389` or `Null (not set)`\n\nNote: \n- When ALL TCP ports are allowed in a rule, PORT does not have any value set (`NULL`)\n- When ALL Protocols are allowed in a rule, PORT does not have any value set (`NULL`)",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `VPC Network`. 2. Go to the `Firewall Rules`. 3. Click the `Firewall Rule` to be modified. 4. Click `Edit`. 5. Modify `Source IP ranges` to specific `IP`. 6. Click `Save`.  **From Google Cloud CLI**  1.Update RDP Firewall rule with new `SOURCE_RANGE` from the below command:   gcloud compute firewall-rules update FirewallName --allow=PROTOCOL:PORT-PORT,... --source-ranges=CIDR_RANGE,...",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `VPC network`. 2. Go to the `Firewall Rules`. 3. Ensure `Port` is not equal to `3389` and `Action` is not `Allow`. 4. Ensure `IP Ranges` is not equal to `0.0.0.0/0` under `Source filters`.  **From Google Cloud CLI**   gcloud compute firewall-rules list --format=table'(name,direction,sourceRanges,allowed.ports)'  Ensure that there is no rule matching the below criteria: - `SOURCE_RANGES` is `0.0.0.0/0` - AND `DIRECTION` is `INGRESS` - AND IPProtocol is `TCP` or `ALL` - AND `PORTS` is set to `3389` or `range containing 3389` or `Null (not set)`  Note:  - When ALL TCP ports are allowed in a rule, PORT does not have any value set (`NULL`) - When ALL Protocols are allowed in a rule, PORT does not have any value set (`NULL`)",
           "AdditionalInformation": "Currently, GCP VPC only supports IPV4; however, Google is already working on adding IPV6 support for VPC. In that case along with source IP range `0.0.0.0`, the rule should be checked for IPv6 equivalent `::0` as well.",
           "References": "https://cloud.google.com/vpc/docs/firewalls#blockedtraffic:https://cloud.google.com/blog/products/identity-security/cloud-iap-enables-context-aware-access-to-vms-via-ssh-and-rdp-without-bastion-hosts"
         }
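The CLI audit above can be partially scripted. The sketch below only surfaces candidate rules (ingress, open to `0.0.0.0/0`, allowing TCP or all protocols); the port-range check for 3389, including null and ranged ports, is deliberately left to manual review.

```
#!/usr/bin/env bash
# Sketch: list ingress rules open to 0.0.0.0/0 that allow TCP or all protocols,
# so their port ranges can be reviewed manually for 3389 exposure.
set -euo pipefail

gcloud compute firewall-rules list --format=json \
  | jq -r '.[]
      | select(.direction == "INGRESS")
      | select((.sourceRanges // []) | index("0.0.0.0/0"))
      | select([.allowed[]?.IPProtocol] | any(. == "tcp" or . == "all"))
      | "Review rule \(.name): allowed=\(.allowed)"'
```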
@@ -771,7 +771,7 @@
     },
     {
       "Id": "3.4",
-      "Description": "NOTE: Currently, the SHA1 algorithm has been removed from general use by Google, and, if being used, needs to be whitelisted on a project basis by Google and will also, therefore, require a Google Cloud support contract.\n\nDNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.",
+      "Description": "NOTE: Currently, the SHA1 algorithm has been removed from general use by Google, and, if being used, needs to be whitelisted on a project basis by Google and will also, therefore, require a Google Cloud support contract.  DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.",
       "Checks": [
         "dns_rsasha1_in_use_to_key_sign_in_dnssec"
       ],
@@ -780,19 +780,19 @@
           "Section": "3. Networking",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "NOTE: Currently, the SHA1 algorithm has been removed from general use by Google, and, if being used, needs to be whitelisted on a project basis by Google and will also, therefore, require a Google Cloud support contract.\n\nDNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.",
-          "RationaleStatement": "Domain Name System Security Extensions (DNSSEC) algorithm numbers in this registry may be used in CERT RRs. Zonesigning (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms.\n\nThe algorithm used for key signing should be a recommended one and it should be strong. When enabling DNSSEC for a managed zone, or creating a managed zone with DNSSEC, the user can select the DNSSEC signing algorithms and the denial-of-existence type. Changing the DNSSEC settings is only effective for a managed zone if DNSSEC is not already enabled. If there is a need to change the settings for a managed zone where it has been enabled, turn DNSSEC off and then re-enable it with different settings.",
+          "Description": "NOTE: Currently, the SHA1 algorithm has been removed from general use by Google, and, if being used, needs to be whitelisted on a project basis by Google and will also, therefore, require a Google Cloud support contract.  DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.",
+          "RationaleStatement": "Domain Name System Security Extensions (DNSSEC) algorithm numbers in this registry may be used in CERT RRs. Zonesigning (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms.  The algorithm used for key signing should be a recommended one and it should be strong. When enabling DNSSEC for a managed zone, or creating a managed zone with DNSSEC, the user can select the DNSSEC signing algorithms and the denial-of-existence type. Changing the DNSSEC settings is only effective for a managed zone if DNSSEC is not already enabled. If there is a need to change the settings for a managed zone where it has been enabled, turn DNSSEC off and then re-enable it with different settings.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Google Cloud CLI**\n\n1. If it is necessary to change the settings for a managed zone where it has been enabled, NSSEC must be turned off and re-enabled with different settings. To turn off DNSSEC, run the following command:\n\n```\ngcloud dns managed-zones update ZONE_NAME --dnssec-state off\n```\n\n2. To update key-signing for a reported managed DNS Zone, run the following command:\n\n```\ngcloud dns managed-zones update ZONE_NAME --dnssec-state on --ksk-algorithm KSK_ALGORITHM --ksk-key-length KSK_KEY_LENGTH --zsk-algorithm ZSK_ALGORITHM --zsk-key-length ZSK_KEY_LENGTH --denial-of-existence DENIAL_OF_EXISTENCE\n```\n\nSupported algorithm options and key lengths are as follows.\n\n Algorithm KSK Length ZSK Length\n --------- ---------- ----------\n RSASHA1 1024,2048 1024,2048\n RSASHA256 1024,2048 1024,2048\n RSASHA512 1024,2048 1024,2048\n ECDSAP256SHA256 256 256\n ECDSAP384SHA384 384 384",
-          "AuditProcedure": "**From Google Cloud CLI**\n\nEnsure the property algorithm for keyType keySigning is not using `RSASHA1`.\n\n gcloud dns managed-zones describe ZONENAME --format=\"json(dnsName,dnssecConfig.state,dnssecConfig.defaultKeySpecs)\"",
-          "AdditionalInformation": "1. RSASHA1 key-signing support may be required for compatibility reasons.\n2. Remediation CLI works well with gcloud-cli version 221.0.0 and later.",
+          "RemediationProcedure": "**From Google Cloud CLI**  1. If it is necessary to change the settings for a managed zone where it has been enabled, NSSEC must be turned off and re-enabled with different settings. To turn off DNSSEC, run the following command:  ``` gcloud dns managed-zones update ZONE_NAME --dnssec-state off ```  2. To update key-signing for a reported managed DNS Zone, run the following command:  ``` gcloud dns managed-zones update ZONE_NAME --dnssec-state on --ksk-algorithm KSK_ALGORITHM --ksk-key-length KSK_KEY_LENGTH --zsk-algorithm ZSK_ALGORITHM --zsk-key-length ZSK_KEY_LENGTH --denial-of-existence DENIAL_OF_EXISTENCE ```  Supported algorithm options and key lengths are as follows.   Algorithm KSK Length ZSK Length  --------- ---------- ----------  RSASHA1 1024,2048 1024,2048  RSASHA256 1024,2048 1024,2048  RSASHA512 1024,2048 1024,2048  ECDSAP256SHA256 256 256  ECDSAP384SHA384 384 384",
+          "AuditProcedure": "**From Google Cloud CLI**  Ensure the property algorithm for keyType keySigning is not using `RSASHA1`.   gcloud dns managed-zones describe ZONENAME --format=\"json(dnsName,dnssecConfig.state,dnssecConfig.defaultKeySpecs)\"",
+          "AdditionalInformation": "1. RSASHA1 key-signing support may be required for compatibility reasons. 2. Remediation CLI works well with gcloud-cli version 221.0.0 and later.",
           "References": "https://cloud.google.com/dns/dnssec-advanced#advanced_signing_options"
         }
       ]
     },
     {
       "Id": "3.5",
-      "Description": "NOTE: Currently, the SHA1 algorithm has been removed from general use by Google, and, if being used, needs to be whitelisted on a project basis by Google and will also, therefore, require a Google Cloud support contract.\n\nDNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.",
+      "Description": "NOTE: Currently, the SHA1 algorithm has been removed from general use by Google, and, if being used, needs to be whitelisted on a project basis by Google and will also, therefore, require a Google Cloud support contract.  DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.",
       "Checks": [
         "dns_rsasha1_in_use_to_zone_sign_in_dnssec"
       ],
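The RSASHA1 checks for key signing (3.4 above) and zone signing (3.5, whose details follow) can be audited together with a small loop such as the sketch below. It assumes the API reports algorithms in lower case (e.g. `rsasha1`) and key types as `keySigning`/`zoneSigning`; verify the exact value casing before relying on it.

```
#!/usr/bin/env bash
# Sketch covering 3.4 and 3.5: flag any DNSSEC key spec still using RSASHA1.
set -euo pipefail

for zone in $(gcloud dns managed-zones list --format="value(name)"); do
  gcloud dns managed-zones describe "${zone}" --format=json \
    | jq -r --arg zone "${zone}" '
        select(.dnssecConfig.state? == "on")
        | .dnssecConfig.defaultKeySpecs[]?
        | select(.algorithm == "rsasha1")
        | "\($zone): \(.keyType) key uses RSASHA1"'
done
```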
@@ -801,19 +801,19 @@
           "Section": "3. Networking",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "NOTE: Currently, the SHA1 algorithm has been removed from general use by Google, and, if being used, needs to be whitelisted on a project basis by Google and will also, therefore, require a Google Cloud support contract.\n\nDNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.",
-          "RationaleStatement": "DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms.\n\nThe algorithm used for key signing should be a recommended one and it should be strong. When enabling DNSSEC for a managed zone, or creating a managed zone with DNSSEC, the DNSSEC signing algorithms and the denial-of-existence type can be selected. Changing the DNSSEC settings is only effective for a managed zone if DNSSEC is not already enabled. If the need exists to change the settings for a managed zone where it has been enabled, turn DNSSEC off and then re-enable it with different settings.",
+          "Description": "NOTE: Currently, the SHA1 algorithm has been removed from general use by Google, and, if being used, needs to be whitelisted on a project basis by Google and will also, therefore, require a Google Cloud support contract.  DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms. The algorithm used for key signing should be a recommended one and it should be strong.",
+          "RationaleStatement": "DNSSEC algorithm numbers in this registry may be used in CERT RRs. Zone signing (DNSSEC) and transaction security mechanisms (SIG(0) and TSIG) make use of particular subsets of these algorithms.  The algorithm used for key signing should be a recommended one and it should be strong. When enabling DNSSEC for a managed zone, or creating a managed zone with DNSSEC, the DNSSEC signing algorithms and the denial-of-existence type can be selected. Changing the DNSSEC settings is only effective for a managed zone if DNSSEC is not already enabled. If the need exists to change the settings for a managed zone where it has been enabled, turn DNSSEC off and then re-enable it with different settings.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Google Cloud CLI**\n\n1. If the need exists to change the settings for a managed zone where it has been enabled, DNSSEC must be turned off and then re-enabled with different settings. To turn off DNSSEC, run following command:\n```\ngcloud dns managed-zones update ZONE_NAME --dnssec-state off\n```\n\n2. To update zone-signing for a reported managed DNS Zone, run the following command:\n```\ngcloud dns managed-zones update ZONE_NAME --dnssec-state on --ksk-algorithm KSK_ALGORITHM --ksk-key-length KSK_KEY_LENGTH --zsk-algorithm ZSK_ALGORITHM --zsk-key-length ZSK_KEY_LENGTH --denial-of-existence DENIAL_OF_EXISTENCE\n```\n\nSupported algorithm options and key lengths are as follows.\n\n Algorithm KSK Length ZSK Length\n --------- ---------- ----------\n RSASHA1 1024,2048 1024,2048\n RSASHA256 1024,2048 1024,2048\n RSASHA512 1024,2048 1024,2048\n ECDSAP256SHA256 256 384\n ECDSAP384SHA384 384 384",
-          "AuditProcedure": "**From Google Cloud CLI**\n\nEnsure the property algorithm for keyType zone signing is not using RSASHA1.\n\n```\ngcloud dns managed-zones describe --format=\"json(dnsName,dnssecConfig.state,dnssecConfig.defaultKeySpecs)\"\n```",
-          "AdditionalInformation": "1. RSASHA1 zone-signing support may be required for compatibility reasons.\n2. The remediation CLI works well with gcloud-cli version 221.0.0 and later.",
+          "RemediationProcedure": "**From Google Cloud CLI**  1. If the need exists to change the settings for a managed zone where it has been enabled, DNSSEC must be turned off and then re-enabled with different settings. To turn off DNSSEC, run following command: ``` gcloud dns managed-zones update ZONE_NAME --dnssec-state off ```  2. To update zone-signing for a reported managed DNS Zone, run the following command: ``` gcloud dns managed-zones update ZONE_NAME --dnssec-state on --ksk-algorithm KSK_ALGORITHM --ksk-key-length KSK_KEY_LENGTH --zsk-algorithm ZSK_ALGORITHM --zsk-key-length ZSK_KEY_LENGTH --denial-of-existence DENIAL_OF_EXISTENCE ```  Supported algorithm options and key lengths are as follows.   Algorithm KSK Length ZSK Length  --------- ---------- ----------  RSASHA1 1024,2048 1024,2048  RSASHA256 1024,2048 1024,2048  RSASHA512 1024,2048 1024,2048  ECDSAP256SHA256 256 384  ECDSAP384SHA384 384 384",
+          "AuditProcedure": "**From Google Cloud CLI**  Ensure the property algorithm for keyType zone signing is not using RSASHA1.  ``` gcloud dns managed-zones describe --format=\"json(dnsName,dnssecConfig.state,dnssecConfig.defaultKeySpecs)\" ```",
+          "AdditionalInformation": "1. RSASHA1 zone-signing support may be required for compatibility reasons. 2. The remediation CLI works well with gcloud-cli version 221.0.0 and later.",
           "References": "https://cloud.google.com/dns/dnssec-advanced#advanced_signing_options"
         }
       ]
     },
     {
       "Id": "3.6",
-      "Description": "GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow the user to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.\n\nFirewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, only an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the internet to VPC or VM instance using `SSH` on `Port 22` can be avoided.",
+      "Description": "GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow the user to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.  Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, only an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the internet to VPC or VM instance using `SSH` on `Port 22` can be avoided.",
       "Checks": [
         "compute_firewall_ssh_access_from_the_internet_allowed"
       ],
@@ -822,11 +822,11 @@
           "Section": "3. Networking",
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
-          "Description": "GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow the user to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.\n\nFirewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, only an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the internet to VPC or VM instance using `SSH` on `Port 22` can be avoided.",
-          "RationaleStatement": "GCP `Firewall Rules` within a `VPC Network` apply to outgoing (egress) traffic from instances and incoming (ingress) traffic to instances in the network. Egress and ingress traffic flows are controlled even if the traffic stays within the network (for example, instance-to-instance communication).\nFor an instance to have outgoing Internet access, the network must have a valid Internet gateway route or custom route whose destination IP is specified. This route simply defines the path to the Internet, to avoid the most general `(0.0.0.0/0)` destination `IP Range` specified from the Internet through `SSH` with the default `Port 22`. Generic access from the Internet to a specific IP Range needs to be restricted.",
+          "Description": "GCP `Firewall Rules` are specific to a `VPC Network`. Each rule either `allows` or `denies` traffic when its conditions are met. Its conditions allow the user to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances.  Firewall rules are defined at the VPC network level and are specific to the network in which they are defined. The rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, only an `IPv4` address or `IPv4 block in CIDR` notation can be used. Generic `(0.0.0.0/0)` incoming traffic from the internet to VPC or VM instance using `SSH` on `Port 22` can be avoided.",
+          "RationaleStatement": "GCP `Firewall Rules` within a `VPC Network` apply to outgoing (egress) traffic from instances and incoming (ingress) traffic to instances in the network. Egress and ingress traffic flows are controlled even if the traffic stays within the network (for example, instance-to-instance communication). For an instance to have outgoing Internet access, the network must have a valid Internet gateway route or custom route whose destination IP is specified. This route simply defines the path to the Internet, to avoid the most general `(0.0.0.0/0)` destination `IP Range` specified from the Internet through `SSH` with the default `Port 22`. Generic access from the Internet to a specific IP Range needs to be restricted.",
           "ImpactStatement": "All Secure Shell (SSH) connections from outside of the network to the concerned VPC(s) will be blocked. There could be a business need where SSH access is required from outside of the network to access resources associated with the VPC. In that case, specific source IP(s) should be mentioned in firewall rules to white-list access to SSH port for the concerned VPC(s).",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `VPC Network`.\n2. Go to the `Firewall Rules`.\n3. Click the `Firewall Rule` you want to modify.\n4. Click `Edit`.\n5. Modify `Source IP ranges` to specific `IP`.\n6. Click `Save`.\n\n**From Google Cloud CLI**\n\n1.Update the Firewall rule with the new `SOURCE_RANGE` from the below command:\n\n gcloud compute firewall-rules update FirewallName --allow=PROTOCOL:PORT-PORT,... --source-ranges=CIDR_RANGE,...",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `VPC network`.\n2. Go to the `Firewall Rules`.\n3. Ensure that `Port` is not equal to `22` and `Action` is not set to `Allow`.\n4. Ensure `IP Ranges` is not equal to `0.0.0.0/0` under `Source filters`.\n\n**From Google Cloud CLI**\n\n gcloud compute firewall-rules list --format=table'(name,direction,sourceRanges,allowed)'\n\nEnsure that there is no rule matching the below criteria:\n- `SOURCE_RANGES` is `0.0.0.0/0`\n- AND `DIRECTION` is `INGRESS`\n- AND IPProtocol is `tcp` or `ALL`\n- AND `PORTS` is set to `22` or `range containing 22` or `Null (not set)`\n\nNote: \n- When ALL TCP ports are allowed in a rule, PORT does not have any value set (`NULL`)\n- When ALL Protocols are allowed in a rule, PORT does not have any value set (`NULL`)",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `VPC Network`. 2. Go to the `Firewall Rules`. 3. Click the `Firewall Rule` you want to modify. 4. Click `Edit`. 5. Modify `Source IP ranges` to specific `IP`. 6. Click `Save`.  **From Google Cloud CLI**  1.Update the Firewall rule with the new `SOURCE_RANGE` from the below command:   gcloud compute firewall-rules update FirewallName --allow=PROTOCOL:PORT-PORT,... --source-ranges=CIDR_RANGE,...",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `VPC network`. 2. Go to the `Firewall Rules`. 3. Ensure that `Port` is not equal to `22` and `Action` is not set to `Allow`. 4. Ensure `IP Ranges` is not equal to `0.0.0.0/0` under `Source filters`.  **From Google Cloud CLI**   gcloud compute firewall-rules list --format=table'(name,direction,sourceRanges,allowed)'  Ensure that there is no rule matching the below criteria: - `SOURCE_RANGES` is `0.0.0.0/0` - AND `DIRECTION` is `INGRESS` - AND IPProtocol is `tcp` or `ALL` - AND `PORTS` is set to `22` or `range containing 22` or `Null (not set)`  Note:  - When ALL TCP ports are allowed in a rule, PORT does not have any value set (`NULL`) - When ALL Protocols are allowed in a rule, PORT does not have any value set (`NULL`)",
           "AdditionalInformation": "Currently, GCP VPC only supports IPV4; however, Google is already working on adding IPV6 support for VPC. In that case along with source IP range `0.0.0.0`, the rule should be checked for IPv6 equivalent `::0` as well.",
           "References": "https://cloud.google.com/vpc/docs/firewalls#blockedtraffic:https://cloud.google.com/blog/products/identity-security/cloud-iap-enables-context-aware-access-to-vms-via-ssh-and-rdp-without-bastion-hosts"
         }
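A scripted form of the SSH remediation above might look like the following sketch; the rule name `allow-ssh` and the CIDR `203.0.113.0/24` are placeholders for the organization's own rule and trusted source range.

```
#!/usr/bin/env bash
# Sketch: the rule name and CIDR below are placeholders.
set -euo pipefail

RULE_NAME="allow-ssh"            # placeholder
TRUSTED_RANGE="203.0.113.0/24"   # placeholder trusted range

# Replace the open 0.0.0.0/0 source with a specific trusted range,
# keeping the rule scoped to TCP port 22.
gcloud compute firewall-rules update "${RULE_NAME}" \
  --allow=tcp:22 \
  --source-ranges="${TRUSTED_RANGE}"
```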
@@ -844,10 +844,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "To prevent use of `default` network, a project should not have a `default` network.",
-          "RationaleStatement": "The `default` network has a preconfigured network configuration and automatically generates the following insecure firewall rules: \n\n- default-allow-internal: Allows ingress connections for all protocols and ports among instances in the network.\n- default-allow-ssh: Allows ingress connections on TCP port 22(SSH) from any source to any instance in the network.\n- default-allow-rdp: Allows ingress connections on TCP port 3389(RDP) from any source to any instance in the network.\n- default-allow-icmp: Allows ingress ICMP traffic from any source to any instance in the network.\n\nThese automatically created firewall rules do not get audit logged and cannot be configured to enable firewall rule logging. \n\nFurthermore, the default network is an auto mode network, which means that its subnets use the same predefined range of IP addresses, and as a result, it's not possible to use Cloud VPN or VPC Network Peering with the default network. \n\nBased on organization security and networking requirements, the organization should create a new network and delete the `default` network.",
+          "RationaleStatement": "The `default` network has a preconfigured network configuration and automatically generates the following insecure firewall rules:   - default-allow-internal: Allows ingress connections for all protocols and ports among instances in the network. - default-allow-ssh: Allows ingress connections on TCP port 22(SSH) from any source to any instance in the network. - default-allow-rdp: Allows ingress connections on TCP port 3389(RDP) from any source to any instance in the network. - default-allow-icmp: Allows ingress ICMP traffic from any source to any instance in the network.  These automatically created firewall rules do not get audit logged and cannot be configured to enable firewall rule logging.   Furthermore, the default network is an auto mode network, which means that its subnets use the same predefined range of IP addresses, and as a result, it's not possible to use Cloud VPN or VPC Network Peering with the default network.   Based on organization security and networking requirements, the organization should create a new network and delete the `default` network.",
           "ImpactStatement": "When an organization deletes the default network, it may need to migrate or service onto a new network.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the `VPC networks` page by visiting: https://console.cloud.google.com/networking/networks/list(https://console.cloud.google.com/networking/networks/list).\n\n2. Click the network named `default`.\n\n2. On the network detail page, click `EDIT`.\n\n3. Click `DELETE VPC NETWORK`.\n\n4. If needed, create a new network to replace the default network.\n\n**From Google Cloud CLI**\n\nFor each Google Cloud Platform project,\n\n1. Delete the default network:\n```\ngcloud compute networks delete default\n```\n\n2. If needed, create a new network to replace it:\n```\ngcloud compute networks create NETWORK_NAME\n```\n\n**Prevention:**\n\nThe user can prevent the default network and its insecure default firewall rules from being created by setting up an Organization Policy to `Skip default network creation` at https://console.cloud.google.com/iam-admin/orgpolicies/compute-skipDefaultNetworkCreation(https://console.cloud.google.com/iam-admin/orgpolicies/compute-skipDefaultNetworkCreation).",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the `VPC networks` page by visiting: https://console.cloud.google.com/networking/networks/list(https://console.cloud.google.com/networking/networks/list).\n\n2. Ensure that a network with the name `default` is not present.\n\n**From Google Cloud CLI**\n\n1. Set the project name in the Google Cloud Shell:\n```\n\ngcloud config set project PROJECT_ID \n```\n2. List the networks configured in that project:\n```\ngcloud compute networks list \n```\nIt should not list `default` as one of the available networks in that project.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the `VPC networks` page by visiting: https://console.cloud.google.com/networking/networks/list(https://console.cloud.google.com/networking/networks/list).  2. Click the network named `default`.  2. On the network detail page, click `EDIT`.  3. Click `DELETE VPC NETWORK`.  4. If needed, create a new network to replace the default network.  **From Google Cloud CLI**  For each Google Cloud Platform project,  1. Delete the default network: ``` gcloud compute networks delete default ```  2. If needed, create a new network to replace it: ``` gcloud compute networks create NETWORK_NAME ```  **Prevention:**  The user can prevent the default network and its insecure default firewall rules from being created by setting up an Organization Policy to `Skip default network creation` at https://console.cloud.google.com/iam-admin/orgpolicies/compute-skipDefaultNetworkCreation(https://console.cloud.google.com/iam-admin/orgpolicies/compute-skipDefaultNetworkCreation).",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the `VPC networks` page by visiting: https://console.cloud.google.com/networking/networks/list(https://console.cloud.google.com/networking/networks/list).  2. Ensure that a network with the name `default` is not present.  **From Google Cloud CLI**  1. Set the project name in the Google Cloud Shell: ```  gcloud config set project PROJECT_ID  ``` 2. List the networks configured in that project: ``` gcloud compute networks list  ``` It should not list `default` as one of the available networks in that project.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/compute/docs/networking#firewall_rules:https://cloud.google.com/compute/docs/reference/latest/networks/insert:https://cloud.google.com/compute/docs/reference/latest/networks/delete:https://cloud.google.com/vpc/docs/firewall-rules-logging:https://cloud.google.com/vpc/docs/vpc#default-network:https://cloud.google.com/sdk/gcloud/reference/compute/networks/delete"
         }
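The default-network audit and remediation above could be wrapped in a small guard such as this sketch; `my-project` and `ORG_ID` are placeholders, and the destructive delete is left commented out so nothing is removed without explicit confirmation.

```
#!/usr/bin/env bash
# Sketch: PROJECT_ID and ORG_ID are placeholders.
set -euo pipefail

PROJECT_ID="my-project"

if gcloud compute networks list --project "${PROJECT_ID}" \
     --format="value(name)" | grep -qx "default"; then
  echo "Project ${PROJECT_ID} still has a 'default' network."
  # Uncomment only after confirming nothing depends on it:
  # gcloud compute networks delete default --project "${PROJECT_ID}"
fi

# Org-level prevention (requires appropriate permissions):
# gcloud resource-manager org-policies enable-enforce \
#   compute.skipDefaultNetworkCreation --organization ORG_ID
```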
@@ -865,10 +865,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "Flow Logs is a feature that enables users to capture information about the IP traffic going to and from network interfaces in the organization's VPC Subnets. Once a flow log is created, the user can view and retrieve its data in Stackdriver Logging. It is recommended that Flow Logs be enabled for every business-critical VPC subnet.",
-          "RationaleStatement": "VPC networks and subnetworks not reserved for internal HTTP(S) load balancing provide logically isolated and secure network partitions where GCP resources can be launched. When Flow Logs are enabled for a subnet, VMs within that subnet start reporting on all Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flows.\nEach VM samples the TCP and UDP flows it sees, inbound and outbound, whether the flow is to or from another VM, a host in the on-premises datacenter, a Google service, or a host on the Internet. If two GCP VMs are communicating, and both are in subnets that have VPC Flow Logs enabled, both VMs report the flows.\n\nFlow Logs supports the following use cases:\n\n- Network monitoring\n- Understanding network usage and optimizing network traffic expenses\n- Network forensics\n- Real-time security analysis\n\nFlow Logs provide visibility into network traffic for each VM inside the subnet and can be used to detect anomalous traffic or provide insight during security workflows.\n\nThe Flow Logs must be configured such that all network traffic is logged, the interval of logging is granular to provide detailed information on the connections, no logs are filtered, and metadata to facilitate investigations are included.\n\n**Note**: Subnets reserved for use by internal HTTP(S) load balancers do not support VPC flow logs.",
+          "RationaleStatement": "VPC networks and subnetworks not reserved for internal HTTP(S) load balancing provide logically isolated and secure network partitions where GCP resources can be launched. When Flow Logs are enabled for a subnet, VMs within that subnet start reporting on all Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flows. Each VM samples the TCP and UDP flows it sees, inbound and outbound, whether the flow is to or from another VM, a host in the on-premises datacenter, a Google service, or a host on the Internet. If two GCP VMs are communicating, and both are in subnets that have VPC Flow Logs enabled, both VMs report the flows.  Flow Logs supports the following use cases:  - Network monitoring - Understanding network usage and optimizing network traffic expenses - Network forensics - Real-time security analysis  Flow Logs provide visibility into network traffic for each VM inside the subnet and can be used to detect anomalous traffic or provide insight during security workflows.  The Flow Logs must be configured such that all network traffic is logged, the interval of logging is granular to provide detailed information on the connections, no logs are filtered, and metadata to facilitate investigations are included.  **Note**: Subnets reserved for use by internal HTTP(S) load balancers do not support VPC flow logs.",
           "ImpactStatement": "Standard pricing for Stackdriver Logging, BigQuery, or Cloud Pub/Sub applies. VPC Flow Logs generation will be charged starting in GA as described in reference: https://cloud.google.com/vpc/",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the VPC network GCP Console visiting `https://console.cloud.google.com/networking/networks/list` \n\n2. Click the name of a subnet, The `Subnet details` page displays.\n\n3. Click the `EDIT` button.\n\n4. Set `Flow Logs` to `On`.\n\n5. Expand the `Configure Logs` section.\n\n6. Set `Aggregation Interval` to `5 SEC`.\n\n7. Check the box beside `Include metadata`.\n\n8. Set `Sample rate` to `100`.\n\n9. Click Save.\n\n**Note**: It is not possible to configure a Log filter from the console.\n\n**From Google Cloud CLI**\n\nTo enable VPC Flow Logs for a network subnet, run the following command:\n```\ngcloud compute networks subnets update SUBNET_NAME --region REGION --enable-flow-logs --logging-aggregation-interval=interval-5-sec --logging-flow-sampling=1 --logging-metadata=include-all\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the VPC network GCP Console visiting `https://console.cloud.google.com/networking/networks/list` \n\n2. From the list of network subnets, make sure for each subnet:\n- `Flow Logs` is set to `On`\n- `Aggregation Interval` is set to `5 sec`\n- `Include metadata` checkbox is checked\n- `Sample rate` is set to `100%`\n\n**Note**: It is not possible to determine if a Log filter has been defined from the console.\n\n**From Google Cloud CLI**\n\n```\ngcloud compute networks subnets list --format json | \\\n jq -r '(\"Subnet\",\"Purpose\",\"Flow_Logs\",\"Aggregation_Interval\",\"Flow_Sampling\",\"Metadata\",\"Logs_Filtered\" | (., map(length*\"-\"))), \n (. | \n \n .name, \n .purpose,\n (if has(\"enableFlowLogs\") and .enableFlowLogs == true then \"Enabled\" else \"Disabled\" end),\n (if has(\"logConfig\") then .logConfig.aggregationInterval else \"N/A\" end),\n (if has(\"logConfig\") then .logConfig.flowSampling else \"N/A\" end),\n (if has(\"logConfig\") then .logConfig.metadata else \"N/A\" end),\n (if has(\"logConfig\") then (.logConfig | has(\"filterExpr\")) else \"N/A\" end)\n \n ) | \n @tsv' | \\\n column -t\n\n```\n\nThe output of the above command will list:\n- each subnet\n- the subnet's purpose\n- a `Enabled` or `Disabled` value if `Flow Logs` are enabled\n- the value for `Aggregation Interval` or `N/A` if disabled, the value for `Flow Sampling` or `N/A` if disabled\n- the value for `Metadata` or `N/A` if disabled\n- 'true' or 'false' if a Logging Filter is configured or 'N/A' if disabled.\n\nIf the subnet's purpose is `PRIVATE` then `Flow Logs` should be `Enabled`.\n\nIf `Flow Logs` is enabled then:\n- `Aggregation_Interval` should be `INTERVAL_5_SEC`\n- `Flow_Sampling` should be 1\n- `Metadata` should be `INCLUDE_ALL_METADATA`\n- `Logs_Filtered` should be `false`.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the VPC network GCP Console visiting `https://console.cloud.google.com/networking/networks/list`   2. Click the name of a subnet, The `Subnet details` page displays.  3. Click the `EDIT` button.  4. Set `Flow Logs` to `On`.  5. Expand the `Configure Logs` section.  6. Set `Aggregation Interval` to `5 SEC`.  7. Check the box beside `Include metadata`.  8. Set `Sample rate` to `100`.  9. Click Save.  **Note**: It is not possible to configure a Log filter from the console.  **From Google Cloud CLI**  To enable VPC Flow Logs for a network subnet, run the following command: ``` gcloud compute networks subnets update SUBNET_NAME --region REGION --enable-flow-logs --logging-aggregation-interval=interval-5-sec --logging-flow-sampling=1 --logging-metadata=include-all ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the VPC network GCP Console visiting `https://console.cloud.google.com/networking/networks/list`   2. From the list of network subnets, make sure for each subnet: - `Flow Logs` is set to `On` - `Aggregation Interval` is set to `5 sec` - `Include metadata` checkbox is checked - `Sample rate` is set to `100%`  **Note**: It is not possible to determine if a Log filter has been defined from the console.  **From Google Cloud CLI**  ``` gcloud compute networks subnets list --format json | \\  jq -r '(\"Subnet\",\"Purpose\",\"Flow_Logs\",\"Aggregation_Interval\",\"Flow_Sampling\",\"Metadata\",\"Logs_Filtered\" | (., map(length*\"-\"))),   (. |     .name,   .purpose,  (if has(\"enableFlowLogs\") and .enableFlowLogs == true then \"Enabled\" else \"Disabled\" end),  (if has(\"logConfig\") then .logConfig.aggregationInterval else \"N/A\" end),  (if has(\"logConfig\") then .logConfig.flowSampling else \"N/A\" end),  (if has(\"logConfig\") then .logConfig.metadata else \"N/A\" end),  (if has(\"logConfig\") then (.logConfig | has(\"filterExpr\")) else \"N/A\" end)    ) |   @tsv' | \\  column -t  ```  The output of the above command will list: - each subnet - the subnet's purpose - a `Enabled` or `Disabled` value if `Flow Logs` are enabled - the value for `Aggregation Interval` or `N/A` if disabled, the value for `Flow Sampling` or `N/A` if disabled - the value for `Metadata` or `N/A` if disabled - 'true' or 'false' if a Logging Filter is configured or 'N/A' if disabled.  If the subnet's purpose is `PRIVATE` then `Flow Logs` should be `Enabled`.  If `Flow Logs` is enabled then: - `Aggregation_Interval` should be `INTERVAL_5_SEC` - `Flow_Sampling` should be 1 - `Metadata` should be `INCLUDE_ALL_METADATA` - `Logs_Filtered` should be `false`.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/vpc/docs/using-flow-logs#enabling_vpc_flow_logging:https://cloud.google.com/vpc/"
         }
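As a lighter-weight alternative to the jq pipeline above, the audit could lean on gcloud's own table formatting, as in the sketch below; it assumes the listed fields (`purpose`, `enableFlowLogs`, `logConfig.*`) are exposed by `gcloud compute networks subnets list`, so cross-check against the current API before relying on it.

```
#!/usr/bin/env bash
# Sketch: a compact view of the flow-log settings for every subnet.
set -euo pipefail

gcloud compute networks subnets list \
  --format="table(name,region.basename(),purpose,enableFlowLogs,logConfig.aggregationInterval,logConfig.flowSampling,logConfig.metadata)"
```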
@@ -876,18 +876,18 @@
     },
     {
       "Id": "3.9",
-      "Description": "Secure Sockets Layer (SSL) policies determine what port Transport Layer Security (TLS) features clients are permitted to use when connecting to load balancers. To prevent usage of insecure features, SSL policies should use (a) at least TLS 1.2 with the MODERN profile; or (b) the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or (3) a CUSTOM profile that does not support any of the following features: \n```\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n```",
+      "Description": "Secure Sockets Layer (SSL) policies determine what port Transport Layer Security (TLS) features clients are permitted to use when connecting to load balancers. To prevent usage of insecure features, SSL policies should use (a) at least TLS 1.2 with the MODERN profile; or (b) the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or (3) a CUSTOM profile that does not support any of the following features:  ``` TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA ```",
       "Checks": [],
       "Attributes": [
         {
           "Section": "3. Networking",
           "Profile": "Level 1",
           "AssessmentStatus": "Manual",
-          "Description": "Secure Sockets Layer (SSL) policies determine what port Transport Layer Security (TLS) features clients are permitted to use when connecting to load balancers. To prevent usage of insecure features, SSL policies should use (a) at least TLS 1.2 with the MODERN profile; or (b) the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or (3) a CUSTOM profile that does not support any of the following features: \n```\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n```",
+          "Description": "Secure Sockets Layer (SSL) policies determine what port Transport Layer Security (TLS) features clients are permitted to use when connecting to load balancers. To prevent usage of insecure features, SSL policies should use (a) at least TLS 1.2 with the MODERN profile; or (b) the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or (3) a CUSTOM profile that does not support any of the following features:  ``` TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA ```",
           "RationaleStatement": "Load balancers are used to efficiently distribute traffic across multiple servers. Both SSL proxy and HTTPS load balancers are external load balancers, meaning they distribute traffic from the Internet to a GCP network. GCP customers can configure load balancer SSL policies with a minimum TLS version (1.0, 1.1, or 1.2) that clients can use to establish a connection, along with a profile (Compatible, Modern, Restricted, or Custom) that specifies permissible cipher suites. To comply with users using outdated protocols, GCP load balancers can be configured to permit insecure cipher suites. In fact, the GCP default SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the widest range of insecure cipher suites. As a result, it is easy for customers to configure a load balancer without even knowing that they are permitting outdated cipher suites.",
           "ImpactStatement": "Creating more secure SSL policies can prevent clients using older TLS versions from establishing a connection.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\nIf the TargetSSLProxy or TargetHttpsProxy does not have an SSL policy configured, create a new SSL policy. Otherwise, modify the existing insecure policy. \n\n1. Navigate to the `SSL Policies` page by visiting: https://console.cloud.google.com/net-security/sslpolicies(https://console.cloud.google.com/net-security/sslpolicies)\n2. Click on the name of the insecure policy to go to its `SSL policy details` page.\n3. Click `EDIT`.\n4. Set `Minimum TLS version` to `TLS 1.2`.\n5. Set `Profile` to `Modern` or `Restricted`. \n6. Alternatively, if teh user selects the profile `Custom`, make sure that the following features are disabled: \n```\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n```\n\n**From Google Cloud CLI**\n\n1. For each insecure SSL policy, update it to use secure cyphers:\n```\ngcloud compute ssl-policies update NAME --profile COMPATIBLE|MODERN|RESTRICTED|CUSTOM --min-tls-version 1.2 --custom-features FEATURES\n```\n\n2. If the target proxy has a GCP default SSL policy, use the following command corresponding to the proxy type to update it.\n\n```\ngcloud compute target-ssl-proxies update TARGET_SSL_PROXY_NAME --ssl-policy SSL_POLICY_NAME\ngcloud compute target-https-proxies update TARGET_HTTPS_POLICY_NAME --ssl-policy SSL_POLICY_NAME\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. See all load balancers by visiting https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list(https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list).\n2. For each load balancer for `SSL (Proxy)` or `HTTPS`, click on its name to go the `Load balancer details` page.\n3. Ensure that each target proxy entry in the `Frontend` table has an `SSL Policy` configured. \n4. Click on each SSL policy to go to its `SSL policy details` page.\n5. Ensure that the SSL policy satisfies one of the following conditions: \n- has a `Min TLS` set to `TLS 1.2` and `Profile` set to `Modern` profile, or\n- has `Profile` set to `Restricted`. Note that a Restricted profile effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version, or\n- has `Profile` set to `Custom` and the following features are all disabled:\n```\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n```\n\n**From Google Cloud CLI**\n\n1. List all TargetHttpsProxies and TargetSslProxies.\n```\ngcloud compute target-https-proxies list\ngcloud compute target-ssl-proxies list\n```\n\n2. For each target proxy, list its properties:\n```\ngcloud compute target-https-proxies describe TARGET_HTTPS_PROXY_NAME\ngcloud compute target-ssl-proxies describe TARGET_SSL_PROXY_NAME\n```\n\n3. Ensure that the `sslPolicy` field is present and identifies the name of the SSL policy: \n```\nsslPolicy: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/sslPolicies/SSL_POLICY_NAME\n```\nIf the `sslPolicy` field is missing from the configuration, it means that the GCP default policy is used, which is insecure.\n\n4. Describe the SSL policy:\n```\ngcloud compute ssl-policies describe SSL_POLICY_NAME\n```\n5. Ensure that the policy satisfies one of the following conditions:\n- has `Profile` set to `Modern` and `minTlsVersion` set to `TLS_1_2`, or\n- has `Profile` set to `Restricted`, or\n- has `Profile` set to `Custom` and  `enabledFeatures` does not contain any of the following values:\n```\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  If the TargetSSLProxy or TargetHttpsProxy does not have an SSL policy configured, create a new SSL policy. Otherwise, modify the existing insecure policy.   1. Navigate to the `SSL Policies` page by visiting: https://console.cloud.google.com/net-security/sslpolicies(https://console.cloud.google.com/net-security/sslpolicies) 2. Click on the name of the insecure policy to go to its `SSL policy details` page. 3. Click `EDIT`. 4. Set `Minimum TLS version` to `TLS 1.2`. 5. Set `Profile` to `Modern` or `Restricted`.  6. Alternatively, if teh user selects the profile `Custom`, make sure that the following features are disabled:  ``` TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA ```  **From Google Cloud CLI**  1. For each insecure SSL policy, update it to use secure cyphers: ``` gcloud compute ssl-policies update NAME --profile COMPATIBLE|MODERN|RESTRICTED|CUSTOM --min-tls-version 1.2 --custom-features FEATURES ```  2. If the target proxy has a GCP default SSL policy, use the following command corresponding to the proxy type to update it.  ``` gcloud compute target-ssl-proxies update TARGET_SSL_PROXY_NAME --ssl-policy SSL_POLICY_NAME gcloud compute target-https-proxies update TARGET_HTTPS_POLICY_NAME --ssl-policy SSL_POLICY_NAME ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. See all load balancers by visiting https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list(https://console.cloud.google.com/net-services/loadbalancing/loadBalancers/list). 2. For each load balancer for `SSL (Proxy)` or `HTTPS`, click on its name to go the `Load balancer details` page. 3. Ensure that each target proxy entry in the `Frontend` table has an `SSL Policy` configured.  4. Click on each SSL policy to go to its `SSL policy details` page. 5. Ensure that the SSL policy satisfies one of the following conditions:  - has a `Min TLS` set to `TLS 1.2` and `Profile` set to `Modern` profile, or - has `Profile` set to `Restricted`. Note that a Restricted profile effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version, or - has `Profile` set to `Custom` and the following features are all disabled: ``` TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA ```  **From Google Cloud CLI**  1. List all TargetHttpsProxies and TargetSslProxies. ``` gcloud compute target-https-proxies list gcloud compute target-ssl-proxies list ```  2. For each target proxy, list its properties: ``` gcloud compute target-https-proxies describe TARGET_HTTPS_PROXY_NAME gcloud compute target-ssl-proxies describe TARGET_SSL_PROXY_NAME ```  3. Ensure that the `sslPolicy` field is present and identifies the name of the SSL policy:  ``` sslPolicy: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/sslPolicies/SSL_POLICY_NAME ``` If the `sslPolicy` field is missing from the configuration, it means that the GCP default policy is used, which is insecure.  4. Describe the SSL policy: ``` gcloud compute ssl-policies describe SSL_POLICY_NAME ``` 5. Ensure that the policy satisfies one of the following conditions: - has `Profile` set to `Modern` and `minTlsVersion` set to `TLS_1_2`, or - has `Profile` set to `Restricted`, or - has `Profile` set to `Custom` and  `enabledFeatures` does not contain any of the following values: ``` TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA ```",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/load-balancing/docs/use-ssl-policies:https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-52r2.pdf"
         }
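The compliance conditions for recommendation 3.9 can also be expressed programmatically. The sketch below is a hedged illustration, not the benchmark's or Prowler's implementation: it feeds the documented `gcloud compute ssl-policies describe` output through the three acceptable configurations. The helper names and the command-line entry point are assumptions.

```python
import json
import subprocess
import sys

# Cipher suites that a CUSTOM profile must not enable, per the description above.
BAD_FEATURES = {
    "TLS_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "TLS_RSA_WITH_AES_256_CBC_SHA",
    "TLS_RSA_WITH_3DES_EDE_CBC_SHA",
}

def describe_ssl_policy(name):
    # Wraps the describe command quoted in the audit procedure (assumes gcloud is authenticated).
    out = subprocess.run(
        ["gcloud", "compute", "ssl-policies", "describe", name, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def policy_is_compliant(policy):
    # Applies the three acceptable configurations listed in the audit procedure.
    profile = policy.get("profile")
    if profile == "RESTRICTED":
        return True
    if profile == "MODERN" and policy.get("minTlsVersion") == "TLS_1_2":
        return True
    if profile == "CUSTOM":
        return not (BAD_FEATURES & set(policy.get("enabledFeatures", [])))
    return False

if __name__ == "__main__":
    policy_name = sys.argv[1]  # hypothetical: pass the SSL policy name on the command line
    print("compliant" if policy_is_compliant(describe_ssl_policy(policy_name)) else "non-compliant")
```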
@@ -905,8 +905,8 @@
           "Description": "IAP authenticates the user requests to your apps via a Google single sign in. You can then manage these users with permissions to control access. It is recommended to use both IAP permissions and firewalls to restrict this access to your apps with sensitive information.",
           "RationaleStatement": "IAP ensure that access to VMs is controlled by authenticating incoming requests. Access to your apps and the VMs should be restricted by firewall rules that allow only the proxy IAP IP addresses contained in the 35.235.240.0/20 subnet. Otherwise, unauthenticated requests can be made to your apps. To ensure that load balancing works correctly health checks should also be allowed.",
           "ImpactStatement": "If firewall rules are not configured correctly, legitimate business services could be negatively impacted. It is recommended to make these changes during a time of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n1. Go to the Cloud Console VPC network > Firewall rules(https://console.cloud.google.com/networking/firewalls/list?_ga=2.72166934.480049361.1580860862-1336643914.1580248695).\n2. Select the checkbox next to the following rules:\n - default-allow-http\n - default-allow-https\n - default-allow-internal\n3. Click `Delete`.\n4. Click `Create firewall rule` and set the following values:\n - Name: allow-iap-traffic\n - Targets: All instances in the network\n - Source IP ranges (press Enter after you paste each value in the box, copy each full CIDR IP address):\n - IAP Proxy Addresses `35.235.240.0/20`\n - Google Health Check `130.211.0.0/22`\n - Google Health Check `35.191.0.0/16`\n - Protocols and ports:\n - Specified protocols and ports required for access and management of your app. For example most health check connection protocols would be covered by;\n - tcp:80 (Default HTTP Health Check port)\n - tcp:443 (Default HTTPS Health Check port)\n**Note: if you have custom ports used by your load balancers, you will need to list them here**\n5. When you're finished updating values, click `Create`.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. For each of your apps that have IAP enabled go to the Cloud Console VPC network > Firewall rules.\n2. Verify that the only rules correspond to the following values:\n - Targets: All instances in the network\n - Source IP ranges:\n - IAP Proxy Addresses `35.235.240.0/20`\n - Google Health Check `130.211.0.0/22`\n - Google Health Check `35.191.0.0/16`\n - Protocols and ports:\n - Specified protocols and ports required for access and management of your app. For example most health check connection protocols would be covered by;\n - tcp:80 (Default HTTP Health Check port)\n - tcp:443 (Default HTTPS Health Check port)\n\n**Note: if you have custom ports used by your load balancers, you will need to list them here**",
+          "RemediationProcedure": "**From Google Cloud Console** 1. Go to the Cloud Console VPC network > Firewall rules(https://console.cloud.google.com/networking/firewalls/list?_ga=2.72166934.480049361.1580860862-1336643914.1580248695). 2. Select the checkbox next to the following rules:  - default-allow-http  - default-allow-https  - default-allow-internal 3. Click `Delete`. 4. Click `Create firewall rule` and set the following values:  - Name: allow-iap-traffic  - Targets: All instances in the network  - Source IP ranges (press Enter after you paste each value in the box, copy each full CIDR IP address):  - IAP Proxy Addresses `35.235.240.0/20`  - Google Health Check `130.211.0.0/22`  - Google Health Check `35.191.0.0/16`  - Protocols and ports:  - Specified protocols and ports required for access and management of your app. For example most health check connection protocols would be covered by;  - tcp:80 (Default HTTP Health Check port)  - tcp:443 (Default HTTPS Health Check port) **Note: if you have custom ports used by your load balancers, you will need to list them here** 5. When you're finished updating values, click `Create`.",
+          "AuditProcedure": "**From Google Cloud Console**  1. For each of your apps that have IAP enabled go to the Cloud Console VPC network > Firewall rules. 2. Verify that the only rules correspond to the following values:  - Targets: All instances in the network  - Source IP ranges:  - IAP Proxy Addresses `35.235.240.0/20`  - Google Health Check `130.211.0.0/22`  - Google Health Check `35.191.0.0/16`  - Protocols and ports:  - Specified protocols and ports required for access and management of your app. For example most health check connection protocols would be covered by;  - tcp:80 (Default HTTP Health Check port)  - tcp:443 (Default HTTPS Health Check port)  **Note: if you have custom ports used by your load balancers, you will need to list them here**",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/iap/docs/concepts-overview:https://cloud.google.com/iap/docs/load-balancer-howto:https://cloud.google.com/load-balancing/docs/health-checks:https://cloud.google.com/blog/products/identity-security/cloud-iap-enables-context-aware-access-to-vms-via-ssh-and-rdp-without-bastion-hosts"
         }
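A scripted variant of the IAP audit might compare firewall source ranges against the three CIDRs listed above. The sketch below is only an illustration: the audit procedure itself is console-based, so the use of `gcloud compute firewall-rules list` and the helper names here are assumptions, and an authenticated `gcloud` is required.

```python
import json
import subprocess

# Source ranges named in the audit procedure: IAP proxies and Google health checks.
ALLOWED_SOURCES = {"35.235.240.0/20", "130.211.0.0/22", "35.191.0.0/16"}

def list_firewall_rules():
    # Assumption for illustration: pull the rules via gcloud rather than the console.
    out = subprocess.run(
        ["gcloud", "compute", "firewall-rules", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def rules_with_unexpected_sources(rules):
    # Flags ingress allow rules whose source ranges include anything beyond the allowed set.
    flagged = []
    for rule in rules:
        if rule.get("direction") != "INGRESS" or "allowed" not in rule:
            continue
        extra = set(rule.get("sourceRanges", [])) - ALLOWED_SOURCES
        if extra:
            flagged.append((rule.get("name"), sorted(extra)))
    return flagged

if __name__ == "__main__":
    for name, sources in rules_with_unexpected_sources(list_firewall_rules()):
        print(f"Rule {name} allows traffic from {sources}")
```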
@@ -914,7 +914,7 @@
     },
     {
       "Id": "4.5",
-      "Description": "Interacting with a serial port is often referred to as the serial console, which is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support.\n\nIf you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore interactive serial console support should be disabled.",
+      "Description": "Interacting with a serial port is often referred to as the serial console, which is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support.  If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore interactive serial console support should be disabled.",
       "Checks": [
         "compute_instance_serial_ports_in_use"
       ],
@@ -923,11 +923,11 @@
           "Section": "4. Virtual Machines",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "Interacting with a serial port is often referred to as the serial console, which is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support.\n\nIf you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore interactive serial console support should be disabled.",
-          "RationaleStatement": "A virtual machine instance has four virtual serial ports. Interacting with a serial port is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support. The instance's operating system, BIOS, and other system-level entities often write output to the serial ports, and can accept input such as commands or answers to prompts. Typically, these system-level entities use the first serial port (port 1) and serial port 1 is often referred to as the serial console.\n\nThe interactive serial console does not support IP-based access restrictions such as IP whitelists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. This allows anybody to connect to that instance if they know the correct SSH key, username, project ID, zone, and instance name.\n\nTherefore interactive serial console support should be disabled.",
+          "Description": "Interacting with a serial port is often referred to as the serial console, which is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support.  If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. Therefore interactive serial console support should be disabled.",
+          "RationaleStatement": "A virtual machine instance has four virtual serial ports. Interacting with a serial port is similar to using a terminal window, in that input and output is entirely in text mode and there is no graphical interface or mouse support. The instance's operating system, BIOS, and other system-level entities often write output to the serial ports, and can accept input such as commands or answers to prompts. Typically, these system-level entities use the first serial port (port 1) and serial port 1 is often referred to as the serial console.  The interactive serial console does not support IP-based access restrictions such as IP whitelists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. This allows anybody to connect to that instance if they know the correct SSH key, username, project ID, zone, and instance name.  Therefore interactive serial console support should be disabled.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Google Cloud CLI**\n\n1. Login to Google Cloud console\n2. Go to Computer Engine\n3. Go to VM instances\n4. Click on the Specific VM\n5. Click `EDIT`\n6. Unselect `Enable connecting to serial ports` below `Remote access` block.\n7. Click `Save`\n\n**From Google Cloud Console**\n\nUse the below command to disable \n```\ngcloud compute instances add-metadata  --zone= --metadata=serial-port-enable=false\n```\n\nor\n\n```\ngcloud compute instances add-metadata  --zone= --metadata=serial-port-enable=0\n```\n\n**Prevention:**\n\nYou can prevent VMs from having serial port access enable by `Disable VM serial port access` organization policy: \nhttps://console.cloud.google.com/iam-admin/orgpolicies/compute-disableSerialPortAccess(https://console.cloud.google.com/iam-admin/orgpolicies/compute-disableSerialPortAccess).",
-          "AuditProcedure": "**From Google Cloud CLI**\n\n1. Login to Google Cloud console\n2. Go to Computer Engine\n3. Go to VM instances\n4. Click on the Specific VM\n5. Ensure `Enable connecting to serial ports` below `Remote access` block is unselected.\n\n**From Google Cloud Console**\n\nEnsure the below command's output shows `null`:\n\n```\ngcloud compute instances describe  --zone= --format=\"json(metadata.items.key,metadata.items.value)\"\n``` \n\nor `key` and `value` properties from below command's json response are equal to `serial-port-enable` and `0` or `false` respectively.\n\n```\n {\n \"metadata\": {\n \"items\": \n {\n \"key\": \"serial-port-enable\",\n \"value\": \"0\"\n }\n \n }\n }\n```",
+          "RemediationProcedure": "**From Google Cloud CLI**  1. Login to Google Cloud console 2. Go to Computer Engine 3. Go to VM instances 4. Click on the Specific VM 5. Click `EDIT` 6. Unselect `Enable connecting to serial ports` below `Remote access` block. 7. Click `Save`  **From Google Cloud Console**  Use the below command to disable  ``` gcloud compute instances add-metadata  --zone= --metadata=serial-port-enable=false ```  or  ``` gcloud compute instances add-metadata  --zone= --metadata=serial-port-enable=0 ```  **Prevention:**  You can prevent VMs from having serial port access enable by `Disable VM serial port access` organization policy:  https://console.cloud.google.com/iam-admin/orgpolicies/compute-disableSerialPortAccess(https://console.cloud.google.com/iam-admin/orgpolicies/compute-disableSerialPortAccess).",
+          "AuditProcedure": "**From Google Cloud CLI**  1. Login to Google Cloud console 2. Go to Computer Engine 3. Go to VM instances 4. Click on the Specific VM 5. Ensure `Enable connecting to serial ports` below `Remote access` block is unselected.  **From Google Cloud Console**  Ensure the below command's output shows `null`:  ``` gcloud compute instances describe  --zone= --format=\"json(metadata.items.key,metadata.items.value)\" ```   or `key` and `value` properties from below command's json response are equal to `serial-port-enable` and `0` or `false` respectively.  ```  {  \"metadata\": {  \"items\":   {  \"key\": \"serial-port-enable\",  \"value\": \"0\"  }    }  } ```",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/compute/docs/instances/interacting-with-serial-console"
         }
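To check the serial console setting from a script, the sketch below reuses the `gcloud compute instances describe` command and `--format` filter quoted in the audit procedure and applies the same `serial-port-enable` test. It is a minimal illustration; the positional arguments and function names are assumptions, and an authenticated `gcloud` is assumed.

```python
import json
import subprocess
import sys

def instance_metadata_items(name, zone):
    # Uses the describe command and --format filter quoted in the audit procedure.
    out = subprocess.run(
        ["gcloud", "compute", "instances", "describe", name, "--zone", zone,
         "--format", "json(metadata.items.key,metadata.items.value)"],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(out.stdout)
    return (data.get("metadata") or {}).get("items") or []

def serial_console_disabled(items):
    # Compliant when serial-port-enable is absent, or explicitly "0" / "false".
    for item in items:
        if item.get("key") == "serial-port-enable":
            return str(item.get("value")).lower() in ("0", "false")
    return True

if __name__ == "__main__":
    instance, zone = sys.argv[1], sys.argv[2]  # hypothetical: instance name and zone
    ok = serial_console_disabled(instance_metadata_items(instance, zone))
    print("serial console disabled" if ok else "serial console ENABLED")
```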
@@ -945,10 +945,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended to use Instance specific SSH key(s) instead of using common/shared project-wide SSH key(s) to access Instances.",
-          "RationaleStatement": "Project-wide SSH keys are stored in Compute/Project-meta-data. Project wide SSH keys can be used to login into all the instances within project. Using project-wide SSH keys eases the SSH key management but if compromised, poses the security risk which can impact all the instances within project.\nIt is recommended to use Instance specific SSH keys which can limit the attack surface if the SSH keys are compromised.",
+          "RationaleStatement": "Project-wide SSH keys are stored in Compute/Project-meta-data. Project wide SSH keys can be used to login into all the instances within project. Using project-wide SSH keys eases the SSH key management but if compromised, poses the security risk which can impact all the instances within project. It is recommended to use Instance specific SSH keys which can limit the attack surface if the SSH keys are compromised.",
           "ImpactStatement": "Users already having Project-wide ssh key pairs and using third party SSH clients will lose access to the impacted Instances. For Project users using gcloud or GCP Console based SSH option, no manual key creation and distribution is required and will be handled by GCE (Google Compute Engine) itself. To access Instance using third party SSH clients Instance specific SSH key pairs need to be created and distributed to the required users.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). It will list all the instances in your project.\n\n2. Click on the name of the Impacted instance\n\n3. Click `Edit` in the toolbar\n\n4. Under SSH Keys, go to the `Block project-wide SSH keys` checkbox\n\n5. To block users with project-wide SSH keys from connecting to this instance, select `Block project-wide SSH keys`\n\n6. Click `Save` at the bottom of the page\n\n7. Repeat steps for every impacted Instance\n\n**From Google Cloud CLI**\n\nTo block project-wide public SSH keys, set the metadata value to `TRUE`:\n\n```\ngcloud compute instances add-metadata  --metadata block-project-ssh-keys=TRUE\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). It will list all the instances in your project.\n\n2. For every instance, click on the name of the instance.\n\n3. Under `SSH Keys`, ensure `Block project-wide SSH keys` is selected.\n\n**From Google Cloud CLI**\n\n1. List the instances in your project and get details on each instance:\n```\ngcloud compute instances list --format=json\n```\n2. Ensure `key: block-project-ssh-keys` is set to `value: 'true'`.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). It will list all the instances in your project.  2. Click on the name of the Impacted instance  3. Click `Edit` in the toolbar  4. Under SSH Keys, go to the `Block project-wide SSH keys` checkbox  5. To block users with project-wide SSH keys from connecting to this instance, select `Block project-wide SSH keys`  6. Click `Save` at the bottom of the page  7. Repeat steps for every impacted Instance  **From Google Cloud CLI**  To block project-wide public SSH keys, set the metadata value to `TRUE`:  ``` gcloud compute instances add-metadata  --metadata block-project-ssh-keys=TRUE ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). It will list all the instances in your project.  2. For every instance, click on the name of the instance.  3. Under `SSH Keys`, ensure `Block project-wide SSH keys` is selected.  **From Google Cloud CLI**  1. List the instances in your project and get details on each instance: ``` gcloud compute instances list --format=json ``` 2. Ensure `key: block-project-ssh-keys` is set to `value: 'true'`.",
           "AdditionalInformation": "If OS Login is enabled, SSH keys in instance metadata are ignored, and therefore blocking project-wide SSH keys is not necessary.",
           "References": "https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys:https://cloud.google.com/sdk/gcloud/reference/topic/formats"
         }
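The CLI audit step for project-wide SSH keys can be automated along these lines. This is a rough sketch rather than Prowler's check: it reads the documented `gcloud compute instances list --format=json` output from stdin and flags instances whose metadata does not set `block-project-ssh-keys` to `true`; the helper name is made up.

```python
import json
import sys

def instances_allowing_project_keys(instances):
    # Flags instances whose metadata lacks block-project-ssh-keys set to "true",
    # mirroring the CLI audit step above.
    flagged = []
    for inst in instances:
        items = (inst.get("metadata") or {}).get("items") or []
        blocked = any(
            item.get("key") == "block-project-ssh-keys"
            and str(item.get("value")).lower() == "true"
            for item in items
        )
        if not blocked:
            flagged.append(inst.get("name"))
    return flagged

if __name__ == "__main__":
    # Pipe in the documented listing: gcloud compute instances list --format=json | python check_ssh_keys.py
    for name in instances_allowing_project_keys(json.load(sys.stdin)):
        print(f"{name}: project-wide SSH keys are not blocked")
```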
@@ -966,10 +966,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "To defend against advanced threats and ensure that the boot loader and firmware on your VMs are signed and untampered, it is recommended that Compute instances are launched with Shielded VM enabled.",
-          "RationaleStatement": "Shielded VMs are virtual machines (VMs) on Google Cloud Platform hardened by a set of security controls that help defend against rootkits and bootkits. \n\nShielded VM offers verifiable integrity of your Compute Engine VM instances, so you can be confident your instances haven't been compromised by boot- or kernel-level malware or rootkits. Shielded VM's verifiable integrity is achieved through the use of Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring.\n\nShielded VM instances run firmware which is signed and verified using Google's Certificate Authority, ensuring that the instance's firmware is unmodified and establishing the root of trust for Secure Boot.\n\nIntegrity monitoring helps you understand and make decisions about the state of your VM instances and the Shielded VM vTPM enables Measured Boot by performing the measurements needed to create a known good boot baseline, called the integrity policy baseline. The integrity policy baseline is used for comparison with measurements from subsequent VM boots to determine if anything has changed.\n\nSecure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.",
+          "RationaleStatement": "Shielded VMs are virtual machines (VMs) on Google Cloud Platform hardened by a set of security controls that help defend against rootkits and bootkits.   Shielded VM offers verifiable integrity of your Compute Engine VM instances, so you can be confident your instances haven't been compromised by boot- or kernel-level malware or rootkits. Shielded VM's verifiable integrity is achieved through the use of Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring.  Shielded VM instances run firmware which is signed and verified using Google's Certificate Authority, ensuring that the instance's firmware is unmodified and establishing the root of trust for Secure Boot.  Integrity monitoring helps you understand and make decisions about the state of your VM instances and the Shielded VM vTPM enables Measured Boot by performing the measurements needed to create a known good boot baseline, called the integrity policy baseline. The integrity policy baseline is used for comparison with measurements from subsequent VM boots to determine if anything has changed.  Secure Boot helps ensure that the system only runs authentic software by verifying the digital signature of all boot components, and halting the boot process if signature verification fails.",
           "ImpactStatement": "",
-          "RemediationProcedure": "To be able turn on `Shielded VM` on an instance, your instance must use an image with Shielded VM support. \n\n**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n\n2. Click on the instance name to see its `VM instance details` page.\n\n3. Click `STOP` to stop the instance.\n\n4. When the instance has stopped, click `EDIT`.\n\n5. In the Shielded VM section, select `Turn on vTPM` and `Turn on Integrity Monitoring`.\n\n6. Optionally, if you do not use any custom or unsigned drivers on the instance, also select `Turn on Secure Boot`.\n\n7. Click the `Save` button to modify the instance and then click `START` to restart it.\n\n**From Google Cloud CLI**\n\nYou can only enable Shielded VM options on instances that have Shielded VM support. For a list of Shielded VM public images, run the gcloud compute images list command with the following flags:\n\n```\ngcloud compute images list --project gce-uefi-images --no-standard-images\n```\n\n1. Stop the instance:\n```\ngcloud compute instances stop \n```\n2. Update the instance:\n\n```\ngcloud compute instances update  --shielded-vtpm --shielded-vm-integrity-monitoring\n```\n3. Optionally, if you do not use any custom or unsigned drivers on the instance, also turn on secure boot.\n\n```\ngcloud compute instances update  --shielded-vm-secure-boot\n```\n\n4. Restart the instance:\n\n```\ngcloud compute instances start \n```\n\n**Prevention:**\n\nYou can ensure that all new VMs will be created with Shielded VM enabled by setting up an Organization Policy to for `Shielded VM` at https://console.cloud.google.com/iam-admin/orgpolicies/compute-requireShieldedVm(https://console.cloud.google.com/iam-admin/orgpolicies/compute-requireShieldedVm). Learn more at: \nhttps://cloud.google.com/security/shielded-cloud/shielded-vm#organization-policy-constraint(https://cloud.google.com/security/shielded-cloud/shielded-vm#organization-policy-constraint).",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n\n2. Click on the instance name to see its `VM instance details` page.\n\n3. Under the section `Shielded VM`, ensure that `vTPM` and `Integrity Monitoring` are `on`.\n\n**From Google Cloud CLI**\n\n1. For each instance in your project, get its metadata:\n```\ngcloud compute instances list --format=json | jq -r '. | \"vTPM: \\(..shieldedInstanceConfig.enableVtpm) IntegrityMonitoring: \\(..shieldedInstanceConfig.enableIntegrityMonitoring) Name: \\(..name)\"'\n```\n\n2. Ensure that there is a `shieldedInstanceConfig` configuration and that configuration has the `enableIntegrityMonitoring` and `enableVtpm` set to `true`. If the VM is not a Shield VM image, you will not see a shieldedInstanceConfig` in the output.",
+          "RemediationProcedure": "To be able turn on `Shielded VM` on an instance, your instance must use an image with Shielded VM support.   **From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. Click on the instance name to see its `VM instance details` page.  3. Click `STOP` to stop the instance.  4. When the instance has stopped, click `EDIT`.  5. In the Shielded VM section, select `Turn on vTPM` and `Turn on Integrity Monitoring`.  6. Optionally, if you do not use any custom or unsigned drivers on the instance, also select `Turn on Secure Boot`.  7. Click the `Save` button to modify the instance and then click `START` to restart it.  **From Google Cloud CLI**  You can only enable Shielded VM options on instances that have Shielded VM support. For a list of Shielded VM public images, run the gcloud compute images list command with the following flags:  ``` gcloud compute images list --project gce-uefi-images --no-standard-images ```  1. Stop the instance: ``` gcloud compute instances stop  ``` 2. Update the instance:  ``` gcloud compute instances update  --shielded-vtpm --shielded-vm-integrity-monitoring ``` 3. Optionally, if you do not use any custom or unsigned drivers on the instance, also turn on secure boot.  ``` gcloud compute instances update  --shielded-vm-secure-boot ```  4. Restart the instance:  ``` gcloud compute instances start  ```  **Prevention:**  You can ensure that all new VMs will be created with Shielded VM enabled by setting up an Organization Policy to for `Shielded VM` at https://console.cloud.google.com/iam-admin/orgpolicies/compute-requireShieldedVm(https://console.cloud.google.com/iam-admin/orgpolicies/compute-requireShieldedVm). Learn more at:  https://cloud.google.com/security/shielded-cloud/shielded-vm#organization-policy-constraint(https://cloud.google.com/security/shielded-cloud/shielded-vm#organization-policy-constraint).",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. Click on the instance name to see its `VM instance details` page.  3. Under the section `Shielded VM`, ensure that `vTPM` and `Integrity Monitoring` are `on`.  **From Google Cloud CLI**  1. For each instance in your project, get its metadata: ``` gcloud compute instances list --format=json | jq -r '. | \"vTPM: \\(..shieldedInstanceConfig.enableVtpm) IntegrityMonitoring: \\(..shieldedInstanceConfig.enableIntegrityMonitoring) Name: \\(..name)\"' ```  2. Ensure that there is a `shieldedInstanceConfig` configuration and that configuration has the `enableIntegrityMonitoring` and `enableVtpm` set to `true`. If the VM is not a Shield VM image, you will not see a shieldedInstanceConfig` in the output.",
           "AdditionalInformation": "If you do use custom or unsigned drivers on the instance, enabling Secure Boot will cause the machine to no longer boot. Turn on Secure Boot only on instances that have been verified to not have any custom drivers installed.",
           "References": "https://cloud.google.com/compute/docs/instances/modifying-shielded-vm:https://cloud.google.com/shielded-vm:https://cloud.google.com/security/shielded-cloud/shielded-vm#organization-policy-constraint"
         }
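The Shielded VM audit can likewise be scripted against the instance listing. The sketch below is an illustrative assumption, not an official check: it consumes the documented `gcloud compute instances list --format=json` output from stdin and flags instances whose `shieldedInstanceConfig` does not enable both vTPM and integrity monitoring.

```python
import json
import sys

def non_shielded_instances(instances):
    # Flags instances whose shieldedInstanceConfig does not enable both vTPM
    # and integrity monitoring, per the CLI audit step above.
    flagged = []
    for inst in instances:
        cfg = inst.get("shieldedInstanceConfig") or {}
        if not (cfg.get("enableVtpm") and cfg.get("enableIntegrityMonitoring")):
            flagged.append(inst.get("name"))
    return flagged

if __name__ == "__main__":
    # Pipe in: gcloud compute instances list --format=json
    for name in non_shielded_instances(json.load(sys.stdin)):
        print(f"{name}: Shielded VM (vTPM + integrity monitoring) is not fully enabled")
```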
@@ -989,9 +989,9 @@
           "Description": "Enabling OS login binds SSH certificates to IAM users and facilitates effective SSH certificate management.",
           "RationaleStatement": "Enabling osLogin ensures that SSH keys used to connect to instances are mapped with IAM users. Revoking access to IAM user will revoke all the SSH keys associated with that particular user. It facilitates centralized and automated SSH key pair management which is useful in handling cases like response to compromised SSH key pairs and/or revocation of external/third-party/Vendor users.",
           "ImpactStatement": "Enabling OS Login on project disables metadata-based SSH key configurations on all instances from a project. Disabling OS Login restores SSH keys that you have configured in project or instance meta-data.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the VM compute metadata page by visiting: https://console.cloud.google.com/compute/metadata(https://console.cloud.google.com/compute/metadata).\n\n2. Click `Edit`.\n\n3. Add a metadata entry where the key is `enable-oslogin` and the value is `TRUE`.\n\n4. Click `Save` to apply the changes.\n\n5. For every instance that overrides the project setting, go to the `VM Instances` page at https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n\n6. Click the name of the instance on which you want to remove the metadata value.\n7. At the top of the instance details page, click `Edit` to edit the instance settings.\n8. Under `Custom metadata`, remove any entry with key `enable-oslogin` and the value is `FALSE`\n9. At the bottom of the instance details page, click `Save` to apply your changes to the instance.\n\n**From Google Cloud CLI**\n\n1. Configure oslogin on the project:\n```\ngcloud compute project-info add-metadata --metadata enable-oslogin=TRUE\n```\n2. Remove instance metadata that overrides the project setting.\n```\ngcloud compute instances remove-metadata  --keys=enable-oslogin\n```\n\nOptionally, you can enable two factor authentication for OS login. For more information, see: https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication(https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication).",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the VM compute metadata page by visiting https://console.cloud.google.com/compute/metadata(https://console.cloud.google.com/compute/metadata).\n\n2. Ensure that key `enable-oslogin` is present with value set to `TRUE`. \n\n3. Because instances can override project settings, ensure that no instance has custom metadata with key `enable-oslogin` and value `FALSE`.\n\n**From Google Cloud CLI**\n\n1. List the instances in your project and get details on each instance:\n```\ngcloud compute instances list --format=json\n```\n2. Verify that the section `commonInstanceMetadata` has a key `enable-oslogin` set to value `TRUE`.\n**Exception:**\nVMs created by GKE should be excluded. These VMs have names that start with `gke-` and are labeled `goog-gke-node`",
-          "AdditionalInformation": "1. In order to use osLogin, instance using Custom Images must have the latest version of the Linux Guest Environment installed. The following image families do not yet support OS Login:\n\n```\nProject cos-cloud (Container-Optimized OS) image family cos-stable.\n\nAll project coreos-cloud (CoreOS) image families\n\nProject suse-cloud (SLES) image family sles-11\n\nAll Windows Server and SQL Server image families\n```\n\n2. Project enable-oslogin can be over-ridden by setting enable-oslogin parameter to an instance metadata individually.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the VM compute metadata page by visiting: https://console.cloud.google.com/compute/metadata(https://console.cloud.google.com/compute/metadata).  2. Click `Edit`.  3. Add a metadata entry where the key is `enable-oslogin` and the value is `TRUE`.  4. Click `Save` to apply the changes.  5. For every instance that overrides the project setting, go to the `VM Instances` page at https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  6. Click the name of the instance on which you want to remove the metadata value. 7. At the top of the instance details page, click `Edit` to edit the instance settings. 8. Under `Custom metadata`, remove any entry with key `enable-oslogin` and the value is `FALSE` 9. At the bottom of the instance details page, click `Save` to apply your changes to the instance.  **From Google Cloud CLI**  1. Configure oslogin on the project: ``` gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE ``` 2. Remove instance metadata that overrides the project setting. ``` gcloud compute instances remove-metadata  --keys=enable-oslogin ```  Optionally, you can enable two factor authentication for OS login. For more information, see: https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication(https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication).",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the VM compute metadata page by visiting https://console.cloud.google.com/compute/metadata(https://console.cloud.google.com/compute/metadata).  2. Ensure that key `enable-oslogin` is present with value set to `TRUE`.   3. Because instances can override project settings, ensure that no instance has custom metadata with key `enable-oslogin` and value `FALSE`.  **From Google Cloud CLI**  1. List the instances in your project and get details on each instance: ``` gcloud compute instances list --format=json ``` 2. Verify that the section `commonInstanceMetadata` has a key `enable-oslogin` set to value `TRUE`. **Exception:** VMs created by GKE should be excluded. These VMs have names that start with `gke-` and are labeled `goog-gke-node`",
+          "AdditionalInformation": "1. In order to use osLogin, instance using Custom Images must have the latest version of the Linux Guest Environment installed. The following image families do not yet support OS Login:  ``` Project cos-cloud (Container-Optimized OS) image family cos-stable.  All project coreos-cloud (CoreOS) image families  Project suse-cloud (SLES) image family sles-11  All Windows Server and SQL Server image families ```  2. Project enable-oslogin can be over-ridden by setting enable-oslogin parameter to an instance metadata individually.",
           "References": "https://cloud.google.com/compute/docs/instances/managing-instance-access:https://cloud.google.com/compute/docs/instances/managing-instance-access#enable_oslogin:https://cloud.google.com/sdk/gcloud/reference/compute/instances/remove-metadata:https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication"
         }
       ]
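The project-level half of the OS Login audit can be scripted as below. Note the hedges: `commonInstanceMetadata` is read here via `gcloud compute project-info describe`, which is an assumption made for illustration, and the sketch does not cover per-instance overrides or the GKE exception described above.

```python
import json
import subprocess

def project_metadata_items():
    # Assumption for illustration: read the project's common instance metadata
    # with gcloud compute project-info describe (requires an authenticated gcloud).
    out = subprocess.run(
        ["gcloud", "compute", "project-info", "describe", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(out.stdout)
    return (data.get("commonInstanceMetadata") or {}).get("items") or []

def oslogin_enabled(items):
    # Compliant when enable-oslogin is present and set to TRUE.
    return any(
        item.get("key") == "enable-oslogin" and str(item.get("value")).upper() == "TRUE"
        for item in items
    )

if __name__ == "__main__":
    if oslogin_enabled(project_metadata_items()):
        print("OS Login is enabled at the project level")
    else:
        print("OS Login is NOT enabled at the project level")
```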
@@ -1010,16 +1010,16 @@
           "Description": "Compute instances should not be configured to have external IP addresses.",
           "RationaleStatement": "To reduce your attack surface, Compute instances should not have public IP addresses. Instead, instances should be configured behind load balancers, to minimize the instance's exposure to the internet.",
           "ImpactStatement": "Removing the external IP address from your Compute instance may cause some applications to stop working.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n\n2. Click on the instance name to go the the `Instance detail page`.\n\n3. Click `Edit`.\n\n4. For each Network interface, ensure that `External IP` is set to `None`.\n\n5. Click `Done` and then click `Save`.\n\n**From Google Cloud CLI**\n\n1. Describe the instance properties:\n```\ngcloud compute instances describe  --zone=\n```\n\n2. Identify the access config name that contains the external IP address. This access config appears in the following format:\n\n```\nnetworkInterfaces:\n- accessConfigs:\n - kind: compute#accessConfig\n name: External NAT\n natIP: 130.211.181.55\n type: ONE_TO_ONE_NAT\n```\n\n3. Delete the access config. \n```\ngcloud compute instances delete-access-config  --zone= --access-config-name \n```\n\nIn the above example, the `ACCESS_CONFIG_NAME` is `External NAT`. The name of your access config might be different.\n\n**Prevention:**\nYou can configure the `Define allowed external IPs for VM instances` Organization Policy to prevent VMs from being configured with public IP addresses. Learn more at: https://console.cloud.google.com/orgpolicies/compute-vmExternalIpAccess(https://console.cloud.google.com/orgpolicies/compute-vmExternalIpAccess)",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n\n2. For every VM, ensure that there is no `External IP` configured.\n\n**From Google Cloud CLI**\n\n```\ngcloud compute instances list --format=json\n```\n\n1. The output should not contain an `accessConfigs` section under `networkInterfaces`. Note that the `natIP` value is present only for instances that are running or for instances that are stopped but have a static IP address. For instances that are stopped and are configured to have an ephemeral public IP address, the `natIP` field will not be present. Example output:\n\n```\nnetworkInterfaces:\n- accessConfigs:\n - kind: compute#accessConfig\n name: External NAT\n networkTier: STANDARD\n type: ONE_TO_ONE_NAT\n```\n\n**Exception:**\nInstances created by GKE should be excluded because some of them have external IP addresses and cannot be changed by editing the instance settings. Instances created by GKE should be excluded. These instances have names that start with \"gke-\" and are labeled \"goog-gke-node\".",
-          "AdditionalInformation": "You can connect to Linux VMs that do not have public IP addresses by using Identity-Aware Proxy for TCP forwarding. Learn more at https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances(https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances)\n\nFor Windows VMs, see https://cloud.google.com/compute/docs/instances/connecting-to-instance(https://cloud.google.com/compute/docs/instances/connecting-to-instance).",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. Click on the instance name to go the the `Instance detail page`.  3. Click `Edit`.  4. For each Network interface, ensure that `External IP` is set to `None`.  5. Click `Done` and then click `Save`.  **From Google Cloud CLI**  1. Describe the instance properties: ``` gcloud compute instances describe  --zone= ```  2. Identify the access config name that contains the external IP address. This access config appears in the following format:  ``` networkInterfaces: - accessConfigs:  - kind: compute#accessConfig  name: External NAT  natIP: 130.211.181.55  type: ONE_TO_ONE_NAT ```  3. Delete the access config.  ``` gcloud compute instances delete-access-config  --zone= --access-config-name  ```  In the above example, the `ACCESS_CONFIG_NAME` is `External NAT`. The name of your access config might be different.  **Prevention:** You can configure the `Define allowed external IPs for VM instances` Organization Policy to prevent VMs from being configured with public IP addresses. Learn more at: https://console.cloud.google.com/orgpolicies/compute-vmExternalIpAccess(https://console.cloud.google.com/orgpolicies/compute-vmExternalIpAccess)",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. For every VM, ensure that there is no `External IP` configured.  **From Google Cloud CLI**  ``` gcloud compute instances list --format=json ```  1. The output should not contain an `accessConfigs` section under `networkInterfaces`. Note that the `natIP` value is present only for instances that are running or for instances that are stopped but have a static IP address. For instances that are stopped and are configured to have an ephemeral public IP address, the `natIP` field will not be present. Example output:  ``` networkInterfaces: - accessConfigs:  - kind: compute#accessConfig  name: External NAT  networkTier: STANDARD  type: ONE_TO_ONE_NAT ```  **Exception:** Instances created by GKE should be excluded because some of them have external IP addresses and cannot be changed by editing the instance settings. Instances created by GKE should be excluded. These instances have names that start with \"gke-\" and are labeled \"goog-gke-node\".",
+          "AdditionalInformation": "You can connect to Linux VMs that do not have public IP addresses by using Identity-Aware Proxy for TCP forwarding. Learn more at https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances(https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances)  For Windows VMs, see https://cloud.google.com/compute/docs/instances/connecting-to-instance(https://cloud.google.com/compute/docs/instances/connecting-to-instance).",
           "References": "https://cloud.google.com/load-balancing/docs/backend-service#backends_and_external_ip_addresses:https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances:https://cloud.google.com/compute/docs/instances/connecting-to-instance:https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#unassign_ip:https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints"
         }
       ]
     },
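As an illustration of the external-IP audit, the sketch below consumes the documented `gcloud compute instances list --format=json` output from stdin and flags any non-GKE instance that still carries an `accessConfigs` entry. It is a minimal, assumed helper rather than the benchmark's or Prowler's implementation.

```python
import json
import sys

def instances_with_external_ips(instances):
    # Flags instances that expose an accessConfigs entry on any network interface,
    # skipping GKE nodes per the exception in the audit procedure.
    flagged = []
    for inst in instances:
        if inst.get("name", "").startswith("gke-"):
            continue
        for nic in inst.get("networkInterfaces", []):
            if nic.get("accessConfigs"):
                flagged.append(inst.get("name"))
                break
    return flagged

if __name__ == "__main__":
    # Pipe in: gcloud compute instances list --format=json
    for name in instances_with_external_ips(json.load(sys.stdin)):
        print(f"{name}: external IP address configured")
```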
     {
       "Id": "4.11",
-      "Description": "Google Cloud encrypts data at-rest and in-transit, but customer data must be decrypted for processing. Confidential Computing is a breakthrough technology which encrypts data in-use—while it is being processed. Confidential Computing environments keep data encrypted in memory and elsewhere outside the central processing unit (CPU). \n\nConfidential VMs leverage the Secure Encrypted Virtualization (SEV) feature of AMD EPYC™ CPUs. Customer data will stay encrypted while it is used, indexed, queried, or trained on. Encryption keys are generated in hardware, per VM, and not exportable. Thanks to built-in hardware optimizations of both performance and security, there is no significant performance penalty to Confidential Computing workloads.",
+      "Description": "Google Cloud encrypts data at-rest and in-transit, but customer data must be decrypted for processing. Confidential Computing is a breakthrough technology which encrypts data in-use—while it is being processed. Confidential Computing environments keep data encrypted in memory and elsewhere outside the central processing unit (CPU).   Confidential VMs leverage the Secure Encrypted Virtualization (SEV) feature of AMD EPYC™ CPUs. Customer data will stay encrypted while it is used, indexed, queried, or trained on. Encryption keys are generated in hardware, per VM, and not exportable. Thanks to built-in hardware optimizations of both performance and security, there is no significant performance penalty to Confidential Computing workloads.",
       "Checks": [
         "compute_instance_confidential_computing_enabled"
       ],
@@ -1028,11 +1028,11 @@
           "Section": "4. Virtual Machines",
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
-          "Description": "Google Cloud encrypts data at-rest and in-transit, but customer data must be decrypted for processing. Confidential Computing is a breakthrough technology which encrypts data in-use—while it is being processed. Confidential Computing environments keep data encrypted in memory and elsewhere outside the central processing unit (CPU). \n\nConfidential VMs leverage the Secure Encrypted Virtualization (SEV) feature of AMD EPYC™ CPUs. Customer data will stay encrypted while it is used, indexed, queried, or trained on. Encryption keys are generated in hardware, per VM, and not exportable. Thanks to built-in hardware optimizations of both performance and security, there is no significant performance penalty to Confidential Computing workloads.",
+          "Description": "Google Cloud encrypts data at-rest and in-transit, but customer data must be decrypted for processing. Confidential Computing is a breakthrough technology which encrypts data in-use—while it is being processed. Confidential Computing environments keep data encrypted in memory and elsewhere outside the central processing unit (CPU).   Confidential VMs leverage the Secure Encrypted Virtualization (SEV) feature of AMD EPYC™ CPUs. Customer data will stay encrypted while it is used, indexed, queried, or trained on. Encryption keys are generated in hardware, per VM, and not exportable. Thanks to built-in hardware optimizations of both performance and security, there is no significant performance penalty to Confidential Computing workloads.",
           "RationaleStatement": "Confidential Computing enables customers' sensitive code and other data encrypted in memory during processing. Google does not have access to the encryption keys. Confidential VM can help alleviate concerns about risk related to either dependency on Google infrastructure or Google insiders' access to customer data in the clear.",
-          "ImpactStatement": "- Confidential Computing for Compute instances does not support live migration. Unlike regular Compute instances, Confidential VMs experience disruptions during maintenance events like a software or hardware update.\n- Additional charges may be incurred when enabling this security feature. See https://cloud.google.com/compute/confidential-vm/pricing(https://cloud.google.com/compute/confidential-vm/pricing) for more info.",
-          "RemediationProcedure": "Confidential Computing can only be enabled when an instance is created. You must delete the current instance and create a new one.\n\n**From Google Cloud Console**\n\n1. Go to the VM instances page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n\n2. Click `CREATE INSTANCE`.\n\n3. Fill out the desired configuration for your instance.\n\n4. Under the `Confidential VM service` section, check the option `Enable the Confidential Computing service on this VM instance`.\n\n5. Click `Create`.\n\n**From Google Cloud CLI**\n\nCreate a new instance with Confidential Compute enabled. \n\n```\ngcloud compute instances create  --zone  --confidential-compute --maintenance-policy=TERMINATE \n```",
-          "AuditProcedure": "Note: Confidential Computing is currently only supported on N2D machines. To learn more about types of N2D machines, visit https://cloud.google.com/compute/docs/machine-types#n2d_machine_types(https://cloud.google.com/compute/docs/machine-types#n2d_machine_types)\n\n**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n\n2. Click on the instance name to see its VM instance details page.\n\n3. Ensure that `Confidential VM service` is `Enabled`.\n\n**From Google Cloud CLI**\n\n1. List the instances in your project and get details on each instance:\n\n```\ngcloud compute instances list --format=json\n```\n2. Ensure that `enableConfidentialCompute` is set to `true` for all instances with machine type starting with \"n2d-\".\n\n```\nconfidentialInstanceConfig:\n enableConfidentialCompute: true\n```",
+          "ImpactStatement": "- Confidential Computing for Compute instances does not support live migration. Unlike regular Compute instances, Confidential VMs experience disruptions during maintenance events like a software or hardware update. - Additional charges may be incurred when enabling this security feature. See https://cloud.google.com/compute/confidential-vm/pricing(https://cloud.google.com/compute/confidential-vm/pricing) for more info.",
+          "RemediationProcedure": "Confidential Computing can only be enabled when an instance is created. You must delete the current instance and create a new one.  **From Google Cloud Console**  1. Go to the VM instances page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. Click `CREATE INSTANCE`.  3. Fill out the desired configuration for your instance.  4. Under the `Confidential VM service` section, check the option `Enable the Confidential Computing service on this VM instance`.  5. Click `Create`.  **From Google Cloud CLI**  Create a new instance with Confidential Compute enabled.   ``` gcloud compute instances create  --zone  --confidential-compute --maintenance-policy=TERMINATE  ```",
+          "AuditProcedure": "Note: Confidential Computing is currently only supported on N2D machines. To learn more about types of N2D machines, visit https://cloud.google.com/compute/docs/machine-types#n2d_machine_types(https://cloud.google.com/compute/docs/machine-types#n2d_machine_types)  **From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. Click on the instance name to see its VM instance details page.  3. Ensure that `Confidential VM service` is `Enabled`.  **From Google Cloud CLI**  1. List the instances in your project and get details on each instance:  ``` gcloud compute instances list --format=json ``` 2. Ensure that `enableConfidentialCompute` is set to `true` for all instances with machine type starting with \"n2d-\".  ``` confidentialInstanceConfig:  enableConfidentialCompute: true ```",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/compute/confidential-vm/docs/creating-cvm-instance:https://cloud.google.com/compute/confidential-vm/docs/about-cvm:https://cloud.google.com/confidential-computing:https://cloud.google.com/blog/products/identity-security/introducing-google-cloud-confidential-computing-with-confidential-vms"
         }
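A minimal Python sketch of the CLI audit above (list instances and inspect `confidentialInstanceConfig`), assuming the `google-api-python-client` package, application default credentials, and a placeholder project ID; this illustrates the API fields involved and is not the implementation of the `compute_instance_confidential_computing_enabled` check:

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder, not taken from the benchmark text

compute = discovery.build("compute", "v1")

# Walk all zones at once; each entry may carry instances or only a warning.
result = compute.instances().aggregatedList(project=PROJECT_ID).execute()
for scope, data in result.get("items", {}).items():
    for instance in data.get("instances", []):
        machine_type = instance["machineType"].rsplit("/", 1)[-1]
        cc = instance.get("confidentialInstanceConfig", {})
        enabled = cc.get("enableConfidentialCompute", False)
        # Confidential VMs require N2D machine types, per the audit note above.
        if machine_type.startswith("n2d-") and not enabled:
            print(f"{instance['name']}: Confidential Computing disabled")
```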
@@ -1050,10 +1050,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended to configure your instance to not use the default Compute Engine service account because it has the Editor role on the project.",
-          "RationaleStatement": "The default Compute Engine service account has the Editor role on the project, which allows read and write access to most Google Cloud Services. To defend against privilege escalations if your VM is compromised and prevent an attacker from gaining access to all of your project, it is recommended to not use the default Compute Engine service account. Instead, you should create a new service account and assigning only the permissions needed by your instance.\n\nThe default Compute Engine service account is named `PROJECT_NUMBER-compute@developer.gserviceaccount.com`.",
+          "RationaleStatement": "The default Compute Engine service account has the Editor role on the project, which allows read and write access to most Google Cloud Services. To defend against privilege escalations if your VM is compromised and prevent an attacker from gaining access to all of your project, it is recommended to not use the default Compute Engine service account. Instead, you should create a new service account and assigning only the permissions needed by your instance.  The default Compute Engine service account is named `PROJECT_NUMBER-compute@developer.gserviceaccount.com`.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n2. Click on the instance name to go to its `VM instance details` page.\n3. Click `STOP` and then click `EDIT`.\n4. Under the section `API and identity management`, select a service account other than the default Compute Engine service account. You may first need to create a new service account.\n5. Click `Save` and then click `START`.\n\n**From Google Cloud CLI**\n\n1. Stop the instance:\n```\ngcloud compute instances stop \n```\n2. Update the instance:\n```\ngcloud compute instances set-service-account  --service-account= \n```\n3. Restart the instance:\n```\ngcloud compute instances start \n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n2. Click on each instance name to go to its `VM instance details` page.\n3. Under the section `API and identity management`, ensure that the default Compute Engine service account is not used. This account is named `PROJECT_NUMBER-compute@developer.gserviceaccount.com`.\n\n**From Google Cloud CLI**\n\n1. List the instances in your project and get details on each instance:\n```\ngcloud compute instances list --format=json | jq -r '. | \"SA: \\(..serviceAccounts.email) Name: \\(..name)\"'\n```\n2. Ensure that the service account section has an email that does not match the pattern `PROJECT_NUMBER-compute@developer.gserviceaccount.com`.\n\n**Exception:**\nVMs created by GKE should be excluded. These VMs have names that start with `gke-` and are labeled `goog-gke-node`.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). 2. Click on the instance name to go to its `VM instance details` page. 3. Click `STOP` and then click `EDIT`. 4. Under the section `API and identity management`, select a service account other than the default Compute Engine service account. You may first need to create a new service account. 5. Click `Save` and then click `START`.  **From Google Cloud CLI**  1. Stop the instance: ``` gcloud compute instances stop  ``` 2. Update the instance: ``` gcloud compute instances set-service-account  --service-account=  ``` 3. Restart the instance: ``` gcloud compute instances start  ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). 2. Click on each instance name to go to its `VM instance details` page. 3. Under the section `API and identity management`, ensure that the default Compute Engine service account is not used. This account is named `PROJECT_NUMBER-compute@developer.gserviceaccount.com`.  **From Google Cloud CLI**  1. List the instances in your project and get details on each instance: ``` gcloud compute instances list --format=json | jq -r '. | \"SA: \\(..serviceAccounts.email) Name: \\(..name)\"' ``` 2. Ensure that the service account section has an email that does not match the pattern `PROJECT_NUMBER-compute@developer.gserviceaccount.com`.  **Exception:** VMs created by GKE should be excluded. These VMs have names that start with `gke-` and are labeled `goog-gke-node`.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/compute/docs/access/service-accounts:https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances:https://cloud.google.com/sdk/gcloud/reference/compute/instances/set-service-account"
         }
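A short Python sketch of the same audit, assuming the `google-api-python-client` package, application default credentials, and placeholder project ID and project number (the default service account email embeds the project number, which the benchmark text does not supply):

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"        # placeholder
PROJECT_NUMBER = "123456789012"  # placeholder; the default SA embeds the project number
DEFAULT_SA = f"{PROJECT_NUMBER}-compute@developer.gserviceaccount.com"

compute = discovery.build("compute", "v1")
result = compute.instances().aggregatedList(project=PROJECT_ID).execute()
for scope, data in result.get("items", {}).items():
    for instance in data.get("instances", []):
        # Per the exception above, GKE-created nodes (names starting with "gke-") are skipped.
        if instance["name"].startswith("gke-"):
            continue
        for sa in instance.get("serviceAccounts", []):
            if sa.get("email") == DEFAULT_SA:
                print(f"{instance['name']} uses the default Compute Engine service account")
```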
@@ -1071,18 +1071,18 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "To support principle of least privileges and prevent potential privilege escalation it is recommended that instances are not assigned to default service account `Compute Engine default service account` with Scope `Allow full access to all Cloud APIs`.",
-          "RationaleStatement": "Along with ability to optionally create, manage and use user managed custom service accounts, Google Compute Engine provides default service account `Compute Engine default service account` for an instances to access necessary cloud services.\n`Project Editor` role is assigned to `Compute Engine default service account` hence, This service account has almost all capabilities over all cloud services except billing.\nHowever, when `Compute Engine default service account` assigned to an instance it can operate in 3 scopes.\n\n```\n1. Allow default access: Allows only minimum access required to run an Instance (Least Privileges)\n\n2. Allow full access to all Cloud APIs: Allow full access to all the cloud APIs/Services (Too much access)\n\n3. Set access for each API: Allows Instance administrator to choose only those APIs that are needed to perform specific business functionality expected by instance\n```\n\nWhen an instance is configured with `Compute Engine default service account` with Scope `Allow full access to all Cloud APIs`, based on IAM roles assigned to the user(s) accessing Instance, it may allow user to perform cloud operations/API calls that user is not supposed to perform leading to successful privilege escalation.",
+          "RationaleStatement": "Along with ability to optionally create, manage and use user managed custom service accounts, Google Compute Engine provides default service account `Compute Engine default service account` for an instances to access necessary cloud services. `Project Editor` role is assigned to `Compute Engine default service account` hence, This service account has almost all capabilities over all cloud services except billing. However, when `Compute Engine default service account` assigned to an instance it can operate in 3 scopes.  ``` 1. Allow default access: Allows only minimum access required to run an Instance (Least Privileges)  2. Allow full access to all Cloud APIs: Allow full access to all the cloud APIs/Services (Too much access)  3. Set access for each API: Allows Instance administrator to choose only those APIs that are needed to perform specific business functionality expected by instance ```  When an instance is configured with `Compute Engine default service account` with Scope `Allow full access to all Cloud APIs`, based on IAM roles assigned to the user(s) accessing Instance, it may allow user to perform cloud operations/API calls that user is not supposed to perform leading to successful privilege escalation.",
           "ImpactStatement": "In order to change service account or scope for an instance, it needs to be stopped.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n\n2. Click on the impacted VM instance.\n\n3. If the instance is not stopped, click the `Stop` button. Wait for the instance to be stopped.\n\n4. Next, click the `Edit` button.\n\n5. Scroll down to the `Service Account` section.\n\n6. Select a different service account or ensure that `Allow full access to all Cloud APIs` is not selected.\n\n7. Click the `Save` button to save your changes and then click `START`.\n\n**From Google Cloud CLI**\n\n1. Stop the instance:\n```\ngcloud compute instances stop \n```\n2. Update the instance:\n```\ngcloud compute instances set-service-account  --service-account= --scopes SCOPE1, SCOPE2...\n```\n3. Restart the instance:\n```\ngcloud compute instances start \n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n2. Click on each instance name to go to its `VM instance details` page.\n3. Under the `API and identity management`, ensure that `Cloud API access scopes` is not set to `Allow full access to all Cloud APIs`.\n\n**From Google Cloud CLI**\n\n1. List the instances in your project and get details on each instance:\n```\ngcloud compute instances list --format=json | jq -r '. | \"SA Scopes: \\(..serviceAccounts.scopes) Name: \\(..name) Email: \\(..serviceAccounts.email)\"'\n```\n2. Ensure that the service account section has an email that does not match the pattern `PROJECT_NUMBER-compute@developer.gserviceaccount.com`.\n\n**Exception:**\nVMs created by GKE should be excluded. These VMs have names that start with `gke-` and are labeled `goog-gke-node",
-          "AdditionalInformation": "'- User IAM roles will override service account scope but configuring minimal scope ensures defense in depth\n\n- Non-default service accounts do not offer selection of access scopes like default service account. IAM roles with non-default service accounts should be used to control VM access.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. Click on the impacted VM instance.  3. If the instance is not stopped, click the `Stop` button. Wait for the instance to be stopped.  4. Next, click the `Edit` button.  5. Scroll down to the `Service Account` section.  6. Select a different service account or ensure that `Allow full access to all Cloud APIs` is not selected.  7. Click the `Save` button to save your changes and then click `START`.  **From Google Cloud CLI**  1. Stop the instance: ``` gcloud compute instances stop  ``` 2. Update the instance: ``` gcloud compute instances set-service-account  --service-account= --scopes SCOPE1, SCOPE2... ``` 3. Restart the instance: ``` gcloud compute instances start  ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). 2. Click on each instance name to go to its `VM instance details` page. 3. Under the `API and identity management`, ensure that `Cloud API access scopes` is not set to `Allow full access to all Cloud APIs`.  **From Google Cloud CLI**  1. List the instances in your project and get details on each instance: ``` gcloud compute instances list --format=json | jq -r '. | \"SA Scopes: \\(..serviceAccounts.scopes) Name: \\(..name) Email: \\(..serviceAccounts.email)\"' ``` 2. Ensure that the service account section has an email that does not match the pattern `PROJECT_NUMBER-compute@developer.gserviceaccount.com`.  **Exception:** VMs created by GKE should be excluded. These VMs have names that start with `gke-` and are labeled `goog-gke-node",
+          "AdditionalInformation": "'- User IAM roles will override service account scope but configuring minimal scope ensures defense in depth  - Non-default service accounts do not offer selection of access scopes like default service account. IAM roles with non-default service accounts should be used to control VM access.",
           "References": "https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances:https://cloud.google.com/compute/docs/access/service-accounts"
         }
       ]
     },
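For the access-scope control above, a minimal Python sketch, again assuming `google-api-python-client`, application default credentials, and a placeholder project ID; in the API, the console option `Allow full access to all Cloud APIs` surfaces as the `cloud-platform` OAuth scope on the instance's service account:

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder
FULL_ACCESS_SCOPE = "https://www.googleapis.com/auth/cloud-platform"

compute = discovery.build("compute", "v1")
result = compute.instances().aggregatedList(project=PROJECT_ID).execute()
for scope, data in result.get("items", {}).items():
    for instance in data.get("instances", []):
        # GKE nodes are excluded, as in the audit exception above.
        if instance["name"].startswith("gke-"):
            continue
        for sa in instance.get("serviceAccounts", []):
            is_default = sa.get("email", "").endswith("-compute@developer.gserviceaccount.com")
            if is_default and FULL_ACCESS_SCOPE in sa.get("scopes", []):
                print(f"{instance['name']}: default service account with full API access")
```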
     {
       "Id": "4.6",
-      "Description": "Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets.\n\nForwarding of data packets should be disabled to prevent data loss or information disclosure.",
+      "Description": "Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets.  Forwarding of data packets should be disabled to prevent data loss or information disclosure.",
       "Checks": [
         "compute_instance_ip_forwarding_is_enabled"
       ],
@@ -1091,11 +1091,11 @@
           "Section": "4. Virtual Machines",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets.\n\nForwarding of data packets should be disabled to prevent data loss or information disclosure.",
-          "RationaleStatement": "Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets.\nTo enable this source and destination IP check, disable the `canIpForward` field, which allows an instance to send and receive packets with non-matching destination or source IPs.",
+          "Description": "Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets.  Forwarding of data packets should be disabled to prevent data loss or information disclosure.",
+          "RationaleStatement": "Compute Engine instance cannot forward a packet unless the source IP address of the packet matches the IP address of the instance. Similarly, GCP won't deliver a packet whose destination IP address is different than the IP address of the instance receiving the packet. However, both capabilities are required if you want to use instances to help route packets. To enable this source and destination IP check, disable the `canIpForward` field, which allows an instance to send and receive packets with non-matching destination or source IPs.",
           "ImpactStatement": "Deleting instance(s) acting as routers/packet forwarders may break the network connectivity.",
-          "RemediationProcedure": "You only edit the `canIpForward` setting at instance creation time. Therefore, you need to delete the instance and create a new one where `canIpForward` is set to `false`.\n\n**From Google Cloud Console**\n\n1. Go to the `VM Instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). \n2. Select the `VM Instance` you want to remediate.\n3. Click the `Delete` button.\n4. On the 'VM Instances' page, click `CREATE INSTANCE'.\n5. Create a new instance with the desired configuration. By default, the instance is configured to not allow IP forwarding.\n\n**From Google Cloud CLI**\n\n1. Delete the instance:\n```\ngcloud compute instances delete INSTANCE_NAME\n```\n\n2. Create a new instance to replace it, with `IP forwarding` set to `Off`\n```\ngcloud compute instances create\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the `VM Instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). \n2. For every instance, click on its name to go to the `VM instance details` page.\n3. Under the `Network interfaces` section, ensure that `IP forwarding` is set to `Off` for every network interface.\n\n**From Google Cloud CLI**\n\n1. List all instances:\n```\ngcloud compute instances list --format='table(name,canIpForward)'\n```\n2. Ensure that `CAN_IP_FORWARD` column in the output of above command does not contain `True` for any VM instance.\n\n**Exception:**\nInstances created by GKE should be excluded because they need to have IP forwarding enabled and cannot be changed. Instances created by GKE have names that start with \"gke-\".",
+          "RemediationProcedure": "You only edit the `canIpForward` setting at instance creation time. Therefore, you need to delete the instance and create a new one where `canIpForward` is set to `false`.  **From Google Cloud Console**  1. Go to the `VM Instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. Select the `VM Instance` you want to remediate. 3. Click the `Delete` button. 4. On the 'VM Instances' page, click `CREATE INSTANCE'. 5. Create a new instance with the desired configuration. By default, the instance is configured to not allow IP forwarding.  **From Google Cloud CLI**  1. Delete the instance: ``` gcloud compute instances delete INSTANCE_NAME ```  2. Create a new instance to replace it, with `IP forwarding` set to `Off` ``` gcloud compute instances create ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the `VM Instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).  2. For every instance, click on its name to go to the `VM instance details` page. 3. Under the `Network interfaces` section, ensure that `IP forwarding` is set to `Off` for every network interface.  **From Google Cloud CLI**  1. List all instances: ``` gcloud compute instances list --format='table(name,canIpForward)' ``` 2. Ensure that `CAN_IP_FORWARD` column in the output of above command does not contain `True` for any VM instance.  **Exception:** Instances created by GKE should be excluded because they need to have IP forwarding enabled and cannot be changed. Instances created by GKE have names that start with \"gke-\".",
           "AdditionalInformation": "You can only set the `canIpForward` field at instance creation time. After an instance is created, the field becomes read-only.",
           "References": "https://cloud.google.com/vpc/docs/using-routes#canipforward"
         }
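The IP-forwarding audit above reduces to a single boolean per instance; a minimal Python sketch, assuming `google-api-python-client`, application default credentials, and a placeholder project ID:

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder

compute = discovery.build("compute", "v1")
result = compute.instances().aggregatedList(project=PROJECT_ID).execute()
for scope, data in result.get("items", {}).items():
    for instance in data.get("instances", []):
        # GKE nodes require IP forwarding and are excluded, per the exception above.
        if instance["name"].startswith("gke-"):
            continue
        if instance.get("canIpForward", False):
            print(f"{instance['name']}: canIpForward is enabled")
```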
@@ -1113,8 +1113,8 @@
           "Description": "In order to maintain the highest level of security all connections to an application should be secure by default.",
           "RationaleStatement": "Insecure HTTP connections maybe subject to eavesdropping which can expose sensitive data.",
           "ImpactStatement": "All connections to appengine will automatically be redirected to the HTTPS endpoint ensuring that all connections are secured by TLS.",
-          "RemediationProcedure": "Add a line to the app.yaml file controlling the application which enforces secure connections. For example\n\n```\nhandlers:\n- url: /.*\n **secure: always**\n redirect_http_response_code: 301\n script: auto\n```\n\nhttps://cloud.google.com/appengine/docs/standard/python3/config/appref",
-          "AuditProcedure": "Verify that the app.yaml file controlling the application contains a line which enforces secure connections. For example\n\n```\nhandlers:\n- url: /.*\n secure: always\n redirect_http_response_code: 301\n script: auto\n```\n\nhttps://cloud.google.com/appengine/docs/standard/python3/config/appref(https://cloud.google.com/appengine/docs/standard/python3/config/appref)",
+          "RemediationProcedure": "Add a line to the app.yaml file controlling the application which enforces secure connections. For example  ``` handlers: - url: /.*  **secure: always**  redirect_http_response_code: 301  script: auto ```  https://cloud.google.com/appengine/docs/standard/python3/config/appref",
+          "AuditProcedure": "Verify that the app.yaml file controlling the application contains a line which enforces secure connections. For example  ``` handlers: - url: /.*  secure: always  redirect_http_response_code: 301  script: auto ```  https://cloud.google.com/appengine/docs/standard/python3/config/appref(https://cloud.google.com/appengine/docs/standard/python3/config/appref)",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/appengine/docs/standard/python3/config/appref:https://cloud.google.com/appengine/docs/flexible/nodejs/configuring-your-app-with-app-yaml"
         }
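For the App Engine control above, the verification is a static check of `app.yaml`; a small Python sketch, assuming PyYAML is installed and that `app.yaml` sits in the working directory (the path is an assumption, not given by the benchmark):

```python
import yaml  # PyYAML

with open("app.yaml") as f:  # adjust the path to your service's app.yaml
    config = yaml.safe_load(f)

for handler in config.get("handlers", []):
    # "secure" defaults to "optional" when absent, so anything other than
    # "always" means HTTP connections are not forced to HTTPS.
    if handler.get("secure") != "always":
        print(f"Handler for {handler.get('url')} does not enforce secure: always")
```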
@@ -1132,30 +1132,30 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "Customer-Supplied Encryption Keys (CSEK) are a feature in Google Cloud Storage and Google Compute Engine. If you supply your own encryption keys, Google uses your key to protect the Google-generated keys used to encrypt and decrypt your data. By default, Google Compute Engine encrypts all data at rest. Compute Engine handles and manages this encryption for you without any additional actions on your part. However, if you wanted to control and manage this encryption yourself, you can provide your own encryption keys.",
-          "RationaleStatement": "By default, Google Compute Engine encrypts all data at rest. Compute Engine handles and manages this encryption for you without any additional actions on your part. However, if you wanted to control and manage this encryption yourself, you can provide your own encryption keys.\n\nIf you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. Only users who can provide the correct key can use resources protected by a customer-supplied encryption key.\n\nGoogle does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key.\n\nAt least business critical VMs should have VM disks encrypted with CSEK.",
+          "RationaleStatement": "By default, Google Compute Engine encrypts all data at rest. Compute Engine handles and manages this encryption for you without any additional actions on your part. However, if you wanted to control and manage this encryption yourself, you can provide your own encryption keys.  If you provide your own encryption keys, Compute Engine uses your key to protect the Google-generated keys used to encrypt and decrypt your data. Only users who can provide the correct key can use resources protected by a customer-supplied encryption key.  Google does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key.  At least business critical VMs should have VM disks encrypted with CSEK.",
           "ImpactStatement": "If you lose your encryption key, you will not be able to recover the data.",
-          "RemediationProcedure": "Currently there is no way to update the encryption of an existing disk. Therefore you should create a new disk with `Encryption` set to `Customer supplied`.\n\n**From Google Cloud Console**\n\n1. Go to Compute Engine `Disks` by visiting: https://console.cloud.google.com/compute/disks(https://console.cloud.google.com/compute/disks).\n2. Click `CREATE DISK`.\n3. Set `Encryption type` to `Customer supplied`,\n4. Provide the `Key` in the box.\n5. Select `Wrapped key`.\n6. Click `Create`.\n\n**From Google Cloud CLI**\n\nIn the gcloud compute tool, encrypt a disk using the --csek-key-file flag during instance creation. If you are using an RSA-wrapped key, use the gcloud beta component:\n\n```\ngcloud compute instances create  --csek-key-file \n```\n\nTo encrypt a standalone persistent disk:\n```\ngcloud compute disks create  --csek-key-file \n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to Compute Engine `Disks` by visiting: https://console.cloud.google.com/compute/disks(https://console.cloud.google.com/compute/disks).\n2. Click on the disk for your critical VMs to see its configuration details.\n4. Ensure that `Encryption type` is set to `Customer supplied`.\n\n**From Google Cloud CLI**\n\nEnsure `diskEncryptionKey` property in the below command's response is not null, and contains key `sha256` with corresponding value\n\n```\ngcloud compute disks describe  --zone  --format=\"json(diskEncryptionKey,name)\"\n```",
-          "AdditionalInformation": "`Note 1:` When you delete a persistent disk, Google discards the cipher keys, rendering the data irretrievable. This process is irreversible.\n\n`Note 2:` It is up to you to generate and manage your key. You must provide a key that is a 256-bit string encoded in RFC 4648 standard base64 to Compute Engine.\n\n`Note 3:` An example key file looks like this.\n\n \n {\n \"uri\": \"https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/disks/example-disk\",\n \"key\": \"acXTX3rxrKAFTF0tYVLvydU1riRZTvUNC4g5I11NY-c=\",\n \"key-type\": \"raw\"\n },\n {\n \"uri\": \"https://www.googleapis.com/compute/v1/projects/myproject/global/snapshots/my-private-snapshot\",\n \"key\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\"\n \"key-type\": \"rsa-encrypted\"\n }\n ",
+          "RemediationProcedure": "Currently there is no way to update the encryption of an existing disk. Therefore you should create a new disk with `Encryption` set to `Customer supplied`.  **From Google Cloud Console**  1. Go to Compute Engine `Disks` by visiting: https://console.cloud.google.com/compute/disks(https://console.cloud.google.com/compute/disks). 2. Click `CREATE DISK`. 3. Set `Encryption type` to `Customer supplied`, 4. Provide the `Key` in the box. 5. Select `Wrapped key`. 6. Click `Create`.  **From Google Cloud CLI**  In the gcloud compute tool, encrypt a disk using the --csek-key-file flag during instance creation. If you are using an RSA-wrapped key, use the gcloud beta component:  ``` gcloud compute instances create  --csek-key-file  ```  To encrypt a standalone persistent disk: ``` gcloud compute disks create  --csek-key-file  ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to Compute Engine `Disks` by visiting: https://console.cloud.google.com/compute/disks(https://console.cloud.google.com/compute/disks). 2. Click on the disk for your critical VMs to see its configuration details. 4. Ensure that `Encryption type` is set to `Customer supplied`.  **From Google Cloud CLI**  Ensure `diskEncryptionKey` property in the below command's response is not null, and contains key `sha256` with corresponding value  ``` gcloud compute disks describe  --zone  --format=\"json(diskEncryptionKey,name)\" ```",
+          "AdditionalInformation": "`Note 1:` When you delete a persistent disk, Google discards the cipher keys, rendering the data irretrievable. This process is irreversible.  `Note 2:` It is up to you to generate and manage your key. You must provide a key that is a 256-bit string encoded in RFC 4648 standard base64 to Compute Engine.  `Note 3:` An example key file looks like this.     {  \"uri\": \"https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/disks/example-disk\",  \"key\": \"acXTX3rxrKAFTF0tYVLvydU1riRZTvUNC4g5I11NY-c=\",  \"key-type\": \"raw\"  },  {  \"uri\": \"https://www.googleapis.com/compute/v1/projects/myproject/global/snapshots/my-private-snapshot\",  \"key\": \"ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==\"  \"key-type\": \"rsa-encrypted\"  }  ",
           "References": "https://cloud.google.com/compute/docs/disks/customer-supplied-encryption#encrypt_a_new_persistent_disk_with_your_own_keys:https://cloud.google.com/compute/docs/reference/rest/v1/disks/get:https://cloud.google.com/compute/docs/disks/customer-supplied-encryption#key_file"
         }
       ]
     },
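A minimal Python sketch of the CSEK audit above, assuming `google-api-python-client`, application default credentials, and a placeholder project ID; CSEK-protected disks expose a `diskEncryptionKey.sha256` fingerprint in the API, which is what the `gcloud compute disks describe` command above prints:

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder

compute = discovery.build("compute", "v1")
result = compute.disks().aggregatedList(project=PROJECT_ID).execute()
for scope, data in result.get("items", {}).items():
    for disk in data.get("disks", []):
        key = disk.get("diskEncryptionKey", {})
        if "sha256" not in key:
            print(f"{disk['name']}: not encrypted with a customer-supplied key")
```

In practice you would restrict this to the disks of business-critical VMs, as the rationale above notes.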
     {
       "Id": "4.12",
-      "Description": "Google Cloud Virtual Machines have the ability via an OS Config agent API to periodically (about every 10 minutes) report OS inventory data. A patch compliance API periodically reads this data, and cross references metadata to determine if the latest updates are installed.\n\nThis is not the only Patch Management solution available to your organization and you should weigh your needs before committing to using this method.",
+      "Description": "Google Cloud Virtual Machines have the ability via an OS Config agent API to periodically (about every 10 minutes) report OS inventory data. A patch compliance API periodically reads this data, and cross references metadata to determine if the latest updates are installed.  This is not the only Patch Management solution available to your organization and you should weigh your needs before committing to using this method.",
       "Checks": [],
       "Attributes": [
         {
           "Section": "4. Virtual Machines",
           "Profile": "Level 2",
           "AssessmentStatus": "Manual",
-          "Description": "Google Cloud Virtual Machines have the ability via an OS Config agent API to periodically (about every 10 minutes) report OS inventory data. A patch compliance API periodically reads this data, and cross references metadata to determine if the latest updates are installed.\n\nThis is not the only Patch Management solution available to your organization and you should weigh your needs before committing to using this method.",
+          "Description": "Google Cloud Virtual Machines have the ability via an OS Config agent API to periodically (about every 10 minutes) report OS inventory data. A patch compliance API periodically reads this data, and cross references metadata to determine if the latest updates are installed.  This is not the only Patch Management solution available to your organization and you should weigh your needs before committing to using this method.",
           "RationaleStatement": "Keeping virtual machine operating systems up to date is a security best practice. Using this service will simplify this process.",
           "ImpactStatement": "Most Operating Systems require a restart or changing critical resources to apply the updates. Using the Google Cloud VM manager for its OS Patch management will incur additional costs for each VM managed by it. Please view the VM manager pricing reference for further information.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n**Enabling OS Patch Management on a Project by Project Basis**\n\n**Install OS Config API for the Project**\n\n1. Navigate into a project. In the expanded portal menu located at the top left of the screen hover over \"APIs & Services\". Then in the menu right of that select \"API Libraries\"\n2. Search for \"VM Manager (OS Config API) or scroll down in the left hand column and select the filter labeled \"Compute\" where it is the last listed. Open this API.\n3. Click the blue 'Enable' button.\n\n**Add MetaData Tags for OSConfig Parsing**\n\n1. From the main Google Cloud console, open the portal menu in the top left. Mouse over Computer Engine to expand the menu next to it.\n2. Under the \"Settings\" heading, select \"Metadata\".\n3. In this view there will be a list of the project wide metadata tags for VMs. Click edit and 'add item' in the key column type 'enable-osconfig' and in the value column set it to 'true'.\n\nFrom Command Line\n\n1. For project wide tagging, run the following command\n\n```\ngcloud compute project-info add-metadata \\\n --project \\\n --metadata=enable-osconfig=TRUE\n```\nPlease see the reference /compute/docs/troubleshooting/vm-manager/verify-setup#metadata-enabled at the bottom for more options like instance specific tagging.\n\nNote: Adding a new tag via commandline may overwrite existing tags. You will need to do this at a time of low usage for the least impact.\n\n**Install and Start the Local OSConfig for Data Parsing**\n\nThere is no way to centrally manage or start the Local OSConfig agent. Please view the reference of manage-os#agent-install to view specific operating system commands. \n\n**Setup a project wide Service Account**\n\nPlease view Recommendation 4.1 to view how to setup a service account. Rerun the audit procedure to test if it has taken effect.\n\n**Enable NAT or Configure Private Google Access to allow Access to Public Update Hosting**\n\nFor the sake of brevity, please see the attached resources to enable NAT or Private Google Access. Rerun the audit procedure to test if it has taken effect.\n\nFrom Command Line:\n\n**Install OS Config API for the Project**\n\n1. In each project you wish to audit run ```gcloud services enable osconfig.googleapis.com```\n\n**Install and Start the Local OSConfig for Data Parsing**\n\nPlease view the reference of manage-os#agent-install to view specific operating system commands.\n\n**Setup a project wide Service Account**\n\nPlease view Recommendation 4.1 to view how to setup a service account. Rerun the audit procedure to test if it has taken effect.\n\n**Enable NAT or Configure Private Google Access to allow Access to Public Update Hosting**\n\nFor the sake of brevity, please see the attached resources to enable NAT or Private Google Access. Rerun the audit procedure to test if it has taken effect.\n\nDetermine if Instances can connect to public update hosting\n\nLinux \n\nDebian Based Operating Systems\n\n```\nsudo apt update\n```\nThe output should have a numbered list of lines with Hit: URL of updates.\n\nRedhat Based Operating Systems\n```\nyum check-update\n```\nThe output should show a list of packages that have updates available.\n\nWindows\n\n```\nping http://windowsupdate.microsoft.com/\n```\nThe ping should successfully be delivered and received.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n**Determine if OS Config API is Enabled for the Project**\n\n1. Navigate into a project. In the expanded navigation menu located at the top left of the screen hover over `APIs & Services`. Then in the menu right of that select `API Libraries`\n2. Search for \"VM Manager (OS Config API) or scroll down in the left hand column and select the filter labeled \"Compute\" where it is the last listed. Open this API.\n3. Verify the blue button at the top is enabled.\n\n**Determine if VM Instances have correct metadata tags for OSConfig parsing**\n\n1. From the main Google Cloud console, open the hamburger menu in the top left. Mouse over Computer Engine to expand the menu next to it.\n1. Under the \"Settings\" heading, select \"Metadata\".\n1. In this view there will be a list of the project wide metadata tags for VMs. Determine if the tag \"enable-osconfig\" is set to \"true\".\n\n**Determine if the Operating System of VM Instances have the local OS-Config Agent running**\n\nThere is no way to determine this from the Google Cloud console. The only way is to run operating specific commands locally inside the operating system via remote connection. For the sake of brevity of this recommendation please view the docs/troubleshooting/vm-manager/verify-setup reference at the bottom of the page. If you initialized your VM instance with a Google Supplied OS Image with a build date of later than v20200114 it will have the service installed. You should still determine its status for proper operation.\n\n**Verify the service account you have setup for the project in Recommendation 4.1 is running**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n2. Click on each instance name to go to its `VM instance details` page.\n3. Under the section `Service Account`, take note of the service account\n4. Run the commands locally for your operating system that are located at the docs/troubleshooting/vm-manager/verify-setup#service-account-enabled reference located at the bottom of this page. They should return the name of your service account.\n\n**Determine if Instances can connect to public update hosting**\n\nEach type of operating system has its own update process. You will need to determine on each operating system that it can reach the update servers via its network connection. The VM Manager doesn't host the updates, it will only allow you to centrally issue a command to each VM to update.\n\n**Determine if OS Config API is Enabled for the Project**\n\n1. In each project you wish to enable run the following command\n\n ```gcloud services list```\n\n2. If osconfig.googleapis.com is in the left hand column it is enabled for this project.\n\n**Determine if VM Manager is Enabled for the Project**\n\n1. Within the project run the following command:\n```\ngcloud compute instances os-inventory describe VM-NAME \\\n --zone=ZONE\n```\nThe output will look like\n```\nINSTANCE_ID INSTANCE_NAME OS OSCONFIG_AGENT_VERSION UPDATE_TIME\n29255009728795105 centos7 CentOS Linux 7 (Core) 20210217.00-g1.el7 2021-04-12T22:19:36.559Z\n5138980234596718741 rhel-8 Red Hat Enterprise Linux 8.3 (Ootpa) 20210316.00-g1.el8 2021-09-16T17:19:24Z\n7127836223366142250 windows Microsoft Windows Server 2019 Datacenter 20210316.00.0+win@1 2021-09-16T17:13:18Z\n```\n\n**Determine if VM Instances have correct metadata tags for OSConfig parsing**\n\n1. 
Select the project you want to view tagging in.\n\n**From Google Cloud Console**\n\n1. From the main Google Cloud console, open the hamburger menu in the top left. Mouse over Computer Engine to expand the menu next to it.\n2. Under the \"Settings\" heading, select \"Metadata\".\n3. In this view there will be a list of the project wide metadata tags for Vms. Verify a tag of ‘enable-osconfig’ is in this list and it is set to ‘true’.\n\n**From Command Line**\n\nRun the following command to view instance data\n```\ngcloud compute instances list --format=\"table(name,status,tags.list())\"\n```\nOn each instance it should have a tag of ‘enable-osconfig’ set to ‘true’\n\n**Determine if the Operating System of VM Instances have the local OS-Config Agent running**\n\nThere is no way to determine this from the Google Cloud CLI. The best way is to run the the commands inside the operating system located at 'Check OS-Config agent is installed and running' at the /docs/troubleshooting/vm-manager/verify-setup reference at the bottom of the page. If you initialized your VM instance with a Google Supplied OS Image with a build date of later than v20200114 it will have the service installed. You should still determine its status.\n\n**Verify the service account you have setup for the project in Recommendation 4.1 is running**\n\n1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances).\n2. Click on each instance name to go to its `VM instance details` page.\n3. Under the section `Service Account`, take note of the service account\n4. View the compute/docs/troubleshooting/vm-manager/verify-setup#service-account-enabled resource at the bottom of the page for operating system specific commands to run locally.\n\n**Determine if Instances can connect to public update hosting**\n\nLinux \nDebian Based Operating Systems\n```\nsudo apt update\n```\nThe output should have a numbered list of lines with Hit: URL of updates.\n\nRedhat Based Operating Systems\n```\nyum check-update\n```\nThe output should show a list of packages that have updates available.\n\nWindows\n\n```\nping http://windowsupdate.microsoft.com/\n```\nThe ping should successfully be delivered and received.",
-          "AdditionalInformation": "This is not your only solution to handle updates. This is a Google Cloud specific recommendation to leverage a resource to solve the need for comprehensive update procedures and policy. If you have a solution already in place you do not need to make the switch.\n\nThere are also further resources that would be out of the scope of this recommendation. If you need to allow your VMs to access public hosted updates, please see the reference to setup NAT or Private Google Access.",
+          "RemediationProcedure": "**From Google Cloud Console**  **Enabling OS Patch Management on a Project by Project Basis**  **Install OS Config API for the Project**  1. Navigate into a project. In the expanded portal menu located at the top left of the screen hover over \"APIs & Services\". Then in the menu right of that select \"API Libraries\" 2. Search for \"VM Manager (OS Config API) or scroll down in the left hand column and select the filter labeled \"Compute\" where it is the last listed. Open this API. 3. Click the blue 'Enable' button.  **Add MetaData Tags for OSConfig Parsing**  1. From the main Google Cloud console, open the portal menu in the top left. Mouse over Computer Engine to expand the menu next to it. 2. Under the \"Settings\" heading, select \"Metadata\". 3. In this view there will be a list of the project wide metadata tags for VMs. Click edit and 'add item' in the key column type 'enable-osconfig' and in the value column set it to 'true'.  From Command Line  1. For project wide tagging, run the following command  ``` gcloud compute project-info add-metadata \\  --project \\  --metadata=enable-osconfig=TRUE ``` Please see the reference /compute/docs/troubleshooting/vm-manager/verify-setup#metadata-enabled at the bottom for more options like instance specific tagging.  Note: Adding a new tag via commandline may overwrite existing tags. You will need to do this at a time of low usage for the least impact.  **Install and Start the Local OSConfig for Data Parsing**  There is no way to centrally manage or start the Local OSConfig agent. Please view the reference of manage-os#agent-install to view specific operating system commands.   **Setup a project wide Service Account**  Please view Recommendation 4.1 to view how to setup a service account. Rerun the audit procedure to test if it has taken effect.  **Enable NAT or Configure Private Google Access to allow Access to Public Update Hosting**  For the sake of brevity, please see the attached resources to enable NAT or Private Google Access. Rerun the audit procedure to test if it has taken effect.  From Command Line:  **Install OS Config API for the Project**  1. In each project you wish to audit run ```gcloud services enable osconfig.googleapis.com```  **Install and Start the Local OSConfig for Data Parsing**  Please view the reference of manage-os#agent-install to view specific operating system commands.  **Setup a project wide Service Account**  Please view Recommendation 4.1 to view how to setup a service account. Rerun the audit procedure to test if it has taken effect.  **Enable NAT or Configure Private Google Access to allow Access to Public Update Hosting**  For the sake of brevity, please see the attached resources to enable NAT or Private Google Access. Rerun the audit procedure to test if it has taken effect.  Determine if Instances can connect to public update hosting  Linux   Debian Based Operating Systems  ``` sudo apt update ``` The output should have a numbered list of lines with Hit: URL of updates.  Redhat Based Operating Systems ``` yum check-update ``` The output should show a list of packages that have updates available.  Windows  ``` ping http://windowsupdate.microsoft.com/ ``` The ping should successfully be delivered and received.",
+          "AuditProcedure": "**From Google Cloud Console**  **Determine if OS Config API is Enabled for the Project**  1. Navigate into a project. In the expanded navigation menu located at the top left of the screen hover over `APIs & Services`. Then in the menu right of that select `API Libraries` 2. Search for \"VM Manager (OS Config API) or scroll down in the left hand column and select the filter labeled \"Compute\" where it is the last listed. Open this API. 3. Verify the blue button at the top is enabled.  **Determine if VM Instances have correct metadata tags for OSConfig parsing**  1. From the main Google Cloud console, open the hamburger menu in the top left. Mouse over Computer Engine to expand the menu next to it. 1. Under the \"Settings\" heading, select \"Metadata\". 1. In this view there will be a list of the project wide metadata tags for VMs. Determine if the tag \"enable-osconfig\" is set to \"true\".  **Determine if the Operating System of VM Instances have the local OS-Config Agent running**  There is no way to determine this from the Google Cloud console. The only way is to run operating specific commands locally inside the operating system via remote connection. For the sake of brevity of this recommendation please view the docs/troubleshooting/vm-manager/verify-setup reference at the bottom of the page. If you initialized your VM instance with a Google Supplied OS Image with a build date of later than v20200114 it will have the service installed. You should still determine its status for proper operation.  **Verify the service account you have setup for the project in Recommendation 4.1 is running**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). 2. Click on each instance name to go to its `VM instance details` page. 3. Under the section `Service Account`, take note of the service account 4. Run the commands locally for your operating system that are located at the docs/troubleshooting/vm-manager/verify-setup#service-account-enabled reference located at the bottom of this page. They should return the name of your service account.  **Determine if Instances can connect to public update hosting**  Each type of operating system has its own update process. You will need to determine on each operating system that it can reach the update servers via its network connection. The VM Manager doesn't host the updates, it will only allow you to centrally issue a command to each VM to update.  **Determine if OS Config API is Enabled for the Project**  1. In each project you wish to enable run the following command   ```gcloud services list```  2. If osconfig.googleapis.com is in the left hand column it is enabled for this project.  **Determine if VM Manager is Enabled for the Project**  1. Within the project run the following command: ``` gcloud compute instances os-inventory describe VM-NAME \\  --zone=ZONE ``` The output will look like ``` INSTANCE_ID INSTANCE_NAME OS OSCONFIG_AGENT_VERSION UPDATE_TIME 29255009728795105 centos7 CentOS Linux 7 (Core) 20210217.00-g1.el7 2021-04-12T22:19:36.559Z 5138980234596718741 rhel-8 Red Hat Enterprise Linux 8.3 (Ootpa) 20210316.00-g1.el8 2021-09-16T17:19:24Z 7127836223366142250 windows Microsoft Windows Server 2019 Datacenter 20210316.00.0+win@1 2021-09-16T17:13:18Z ```  **Determine if VM Instances have correct metadata tags for OSConfig parsing**  1. Select the project you want to view tagging in.  **From Google Cloud Console**  1. 
From the main Google Cloud console, open the hamburger menu in the top left. Mouse over Computer Engine to expand the menu next to it. 2. Under the \"Settings\" heading, select \"Metadata\". 3. In this view there will be a list of the project wide metadata tags for Vms. Verify a tag of ‘enable-osconfig’ is in this list and it is set to ‘true’.  **From Command Line**  Run the following command to view instance data ``` gcloud compute instances list --format=\"table(name,status,tags.list())\" ``` On each instance it should have a tag of ‘enable-osconfig’ set to ‘true’  **Determine if the Operating System of VM Instances have the local OS-Config Agent running**  There is no way to determine this from the Google Cloud CLI. The best way is to run the the commands inside the operating system located at 'Check OS-Config agent is installed and running' at the /docs/troubleshooting/vm-manager/verify-setup reference at the bottom of the page. If you initialized your VM instance with a Google Supplied OS Image with a build date of later than v20200114 it will have the service installed. You should still determine its status.  **Verify the service account you have setup for the project in Recommendation 4.1 is running**  1. Go to the `VM instances` page by visiting: https://console.cloud.google.com/compute/instances(https://console.cloud.google.com/compute/instances). 2. Click on each instance name to go to its `VM instance details` page. 3. Under the section `Service Account`, take note of the service account 4. View the compute/docs/troubleshooting/vm-manager/verify-setup#service-account-enabled resource at the bottom of the page for operating system specific commands to run locally.  **Determine if Instances can connect to public update hosting**  Linux  Debian Based Operating Systems ``` sudo apt update ``` The output should have a numbered list of lines with Hit: URL of updates.  Redhat Based Operating Systems ``` yum check-update ``` The output should show a list of packages that have updates available.  Windows  ``` ping http://windowsupdate.microsoft.com/ ``` The ping should successfully be delivered and received.",
+          "AdditionalInformation": "This is not your only solution to handle updates. This is a Google Cloud specific recommendation to leverage a resource to solve the need for comprehensive update procedures and policy. If you have a solution already in place you do not need to make the switch.  There are also further resources that would be out of the scope of this recommendation. If you need to allow your VMs to access public hosted updates, please see the reference to setup NAT or Private Google Access.",
           "References": "https://cloud.google.com/compute/docs/manage-os:https://cloud.google.com/compute/docs/os-patch-management:https://cloud.google.com/compute/docs/vm-manager:https://cloud.google.com/compute/docs/images/os-details#vm-manager:https://cloud.google.com/compute/docs/vm-manager#pricing:https://cloud.google.com/compute/docs/troubleshooting/vm-manager/verify-setup:https://cloud.google.com/compute/docs/instances/view-os-details#view-data-tools:https://cloud.google.com/compute/docs/os-patch-management/create-patch-job:https://cloud.google.com/nat/docs/set-up-network-address-translation:https://cloud.google.com/vpc/docs/configure-private-google-access:https://workbench.cisecurity.org/sections/811638/recommendations/1334335:https://cloud.google.com/compute/docs/manage-os#agent-install:https://cloud.google.com/compute/docs/troubleshooting/vm-manager/verify-setup#service-account-enabled:https://cloud.google.com/compute/docs/os-patch-management#use-dashboard:https://cloud.google.com/compute/docs/troubleshooting/vm-manager/verify-setup#metadata-enabled"
         }
       ]
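Two of the project-level checks described above (OS Config API enabled, `enable-osconfig` metadata set) can be scripted; a hedged Python sketch, assuming `google-api-python-client`, application default credentials, a placeholder project ID, and that the Service Usage API is available in the project:

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder

# 1. Is the OS Config API enabled? (Service Usage API)
serviceusage = discovery.build("serviceusage", "v1")
svc = serviceusage.services().get(
    name=f"projects/{PROJECT_ID}/services/osconfig.googleapis.com"
).execute()
print("osconfig.googleapis.com:", svc.get("state"))  # expect "ENABLED"

# 2. Is the enable-osconfig key set in project-wide instance metadata?
compute = discovery.build("compute", "v1")
project = compute.projects().get(project=PROJECT_ID).execute()
items = project.get("commonInstanceMetadata", {}).get("items", [])
metadata = {item["key"]: item.get("value") for item in items}
print("enable-osconfig:", metadata.get("enable-osconfig"))  # expect "TRUE"
```

The remaining steps (local OS Config agent status, reachability of update servers) still require the per-OS commands listed above.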
@@ -1174,8 +1174,8 @@
           "Description": "It is recommended that IAM policy on Cloud Storage bucket does not allows anonymous or public access.",
           "RationaleStatement": "Allowing anonymous or public access grants permissions to anyone to access bucket content. Such access might not be desired if you are storing any sensitive data. Hence, ensure that anonymous or public access to a bucket is not allowed.",
           "ImpactStatement": "No storage buckets would be publicly accessible. You would have to explicitly administer bucket access.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `Storage browser` by visiting https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser).\n2. Click on the bucket name to go to its `Bucket details` page.\n3. Click on the `Permissions` tab. \n4. Click `Delete` button in front of `allUsers` and `allAuthenticatedUsers` to remove that particular role assignment.\n\n**From Google Cloud CLI**\n\nRemove `allUsers` and `allAuthenticatedUsers` access.\n```\ngsutil iam ch -d allUsers gs://BUCKET_NAME\ngsutil iam ch -d allAuthenticatedUsers gs://BUCKET_NAME\n```\n\n**Prevention:**\n\nYou can prevent Storage buckets from becoming publicly accessible by setting up the `Domain restricted sharing` organization policy at: https://console.cloud.google.com/iam-admin/orgpolicies/iam-allowedPolicyMemberDomains (https://console.cloud.google.com/iam-admin/orgpolicies/iam-allowedPolicyMemberDomains).",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `Storage browser` by visiting https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser).\n2. Click on each bucket name to go to its `Bucket details` page.\n3. Click on the `Permissions` tab.\n4. Ensure that `allUsers` and `allAuthenticatedUsers` are not in the `Members` list.\n\n**From Google Cloud CLI**\n\n1. List all buckets in a project\n\n```\ngsutil ls\n```\n\n2. Check the IAM Policy for each bucket:\n\n```\ngsutil iam get gs://BUCKET_NAME\n```\n\nNo role should contain `allUsers` and/or `allAuthenticatedUsers` as a member.\n\n**Using Rest API**\n\n1. List all buckets in a project\n\n```\nGet https://www.googleapis.com/storage/v1/b?project=\n```\n\n2. Check the IAM Policy for each bucket\n\n```\nGET https://www.googleapis.com/storage/v1/b//iam\n```\n\nNo role should contain `allUsers` and/or `allAuthenticatedUsers` as a member.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `Storage browser` by visiting https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser). 2. Click on the bucket name to go to its `Bucket details` page. 3. Click on the `Permissions` tab.  4. Click `Delete` button in front of `allUsers` and `allAuthenticatedUsers` to remove that particular role assignment.  **From Google Cloud CLI**  Remove `allUsers` and `allAuthenticatedUsers` access. ``` gsutil iam ch -d allUsers gs://BUCKET_NAME gsutil iam ch -d allAuthenticatedUsers gs://BUCKET_NAME ```  **Prevention:**  You can prevent Storage buckets from becoming publicly accessible by setting up the `Domain restricted sharing` organization policy at: https://console.cloud.google.com/iam-admin/orgpolicies/iam-allowedPolicyMemberDomains (https://console.cloud.google.com/iam-admin/orgpolicies/iam-allowedPolicyMemberDomains).",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `Storage browser` by visiting https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser). 2. Click on each bucket name to go to its `Bucket details` page. 3. Click on the `Permissions` tab. 4. Ensure that `allUsers` and `allAuthenticatedUsers` are not in the `Members` list.  **From Google Cloud CLI**  1. List all buckets in a project  ``` gsutil ls ```  2. Check the IAM Policy for each bucket:  ``` gsutil iam get gs://BUCKET_NAME ```  No role should contain `allUsers` and/or `allAuthenticatedUsers` as a member.  **Using Rest API**  1. List all buckets in a project  ``` Get https://www.googleapis.com/storage/v1/b?project= ```  2. Check the IAM Policy for each bucket  ``` GET https://www.googleapis.com/storage/v1/b//iam ```  No role should contain `allUsers` and/or `allAuthenticatedUsers` as a member.",
           "AdditionalInformation": "To implement Access restrictions on buckets, configuring Bucket IAM is preferred way than configuring Bucket ACL. On GCP console, \"Edit Permissions\" for bucket exposes IAM configurations only. Bucket ACLs are configured automatically as per need in order to implement/support User enforced Bucket IAM policy. In-case administrator changes bucket ACL using command-line(gsutils)/API bucket IAM also gets updated automatically.",
           "References": "https://cloud.google.com/storage/docs/access-control/iam-reference:https://cloud.google.com/storage/docs/access-control/making-data-public:https://cloud.google.com/storage/docs/gsutil/commands/iam"
         }
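The audit above checks buckets one at a time; a small shell sketch can loop the same `gsutil` commands over every bucket in the active project (illustrative only, not part of the benchmark JSON):

```
# Flag any bucket whose IAM policy grants access to allUsers or allAuthenticatedUsers.
for bucket in $(gsutil ls); do
  if gsutil iam get "$bucket" | grep -Eq 'allUsers|allAuthenticatedUsers'; then
    echo "PUBLIC ACCESS FOUND: $bucket"
  fi
done
```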
@@ -1193,10 +1193,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that uniform bucket-level access is enabled on Cloud Storage buckets.",
-          "RationaleStatement": "It is recommended to use uniform bucket-level access to unify and simplify how you grant access to your Cloud Storage resources. \n\nCloud Storage offers two systems for granting users permission to access your buckets and objects: Cloud Identity and Access Management (Cloud IAM) and Access Control Lists (ACLs). These systems act in parallel - in order for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission. Cloud IAM is used throughout Google Cloud and allows you to grant a variety of permissions at the bucket and project levels. ACLs are used only by Cloud Storage and have limited permission options, but they allow you to grant permissions on a per-object basis.\n\nIn order to support a uniform permissioning system, Cloud Storage has uniform bucket-level access. Using this feature disables ACLs for all Cloud Storage resources: access to Cloud Storage resources then is granted exclusively through Cloud IAM. Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible either.",
-          "ImpactStatement": "If you enable uniform bucket-level access, you revoke access from users who gain their access solely through object ACLs.\n\nCertain Google Cloud services, such as Stackdriver, Cloud Audit Logs, and Datastore, cannot export to Cloud Storage buckets that have uniform bucket-level access enabled.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Open the Cloud Storage browser in the Google Cloud Console by visiting: https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser)\n\n2. In the list of buckets, click on the name of the desired bucket.\n\n3. Select the `Permissions` tab near the top of the page.\n\n4. In the text box that starts with `This bucket uses fine-grained access control...`, click `Edit`.\n\n5. In the pop-up menu that appears, select `Uniform`.\n\n6. Click `Save`.\n\n**From Google Cloud CLI**\n\nUse the on option in a uniformbucketlevelaccess set command:\n\n```\ngsutil uniformbucketlevelaccess set on gs://BUCKET_NAME/\n```\n\n**Prevention**\n\nYou can set up an Organization Policy to enforce that any new bucket has uniform bucket level access enabled. Learn more at:\nhttps://cloud.google.com/storage/docs/setting-org-policies#uniform-bucket(https://cloud.google.com/storage/docs/setting-org-policies#uniform-bucket)",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Open the Cloud Storage browser in the Google Cloud Console by visiting: https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser)\n\n2. For each bucket, make sure that `Access control` column has the value `Uniform`.\n\n**From Google Cloud CLI**\n\n1. List all buckets in a project\n```\ngsutil ls\n```\n2. For each bucket, verify that uniform bucket-level access is enabled.\n```\ngsutil uniformbucketlevelaccess get gs://BUCKET_NAME/\n```\nIf uniform bucket-level access is enabled, the response looks like:\n\n```\nUniform bucket-level access setting for gs://BUCKET_NAME/:\n Enabled: True\n LockedTime: LOCK_DATE\n```",
+          "RationaleStatement": "It is recommended to use uniform bucket-level access to unify and simplify how you grant access to your Cloud Storage resources.   Cloud Storage offers two systems for granting users permission to access your buckets and objects: Cloud Identity and Access Management (Cloud IAM) and Access Control Lists (ACLs). These systems act in parallel - in order for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission. Cloud IAM is used throughout Google Cloud and allows you to grant a variety of permissions at the bucket and project levels. ACLs are used only by Cloud Storage and have limited permission options, but they allow you to grant permissions on a per-object basis.  In order to support a uniform permissioning system, Cloud Storage has uniform bucket-level access. Using this feature disables ACLs for all Cloud Storage resources: access to Cloud Storage resources then is granted exclusively through Cloud IAM. Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible either.",
+          "ImpactStatement": "If you enable uniform bucket-level access, you revoke access from users who gain their access solely through object ACLs.  Certain Google Cloud services, such as Stackdriver, Cloud Audit Logs, and Datastore, cannot export to Cloud Storage buckets that have uniform bucket-level access enabled.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Open the Cloud Storage browser in the Google Cloud Console by visiting: https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser)  2. In the list of buckets, click on the name of the desired bucket.  3. Select the `Permissions` tab near the top of the page.  4. In the text box that starts with `This bucket uses fine-grained access control...`, click `Edit`.  5. In the pop-up menu that appears, select `Uniform`.  6. Click `Save`.  **From Google Cloud CLI**  Use the on option in a uniformbucketlevelaccess set command:  ``` gsutil uniformbucketlevelaccess set on gs://BUCKET_NAME/ ```  **Prevention**  You can set up an Organization Policy to enforce that any new bucket has uniform bucket level access enabled. Learn more at: https://cloud.google.com/storage/docs/setting-org-policies#uniform-bucket(https://cloud.google.com/storage/docs/setting-org-policies#uniform-bucket)",
+          "AuditProcedure": "**From Google Cloud Console**  1. Open the Cloud Storage browser in the Google Cloud Console by visiting: https://console.cloud.google.com/storage/browser(https://console.cloud.google.com/storage/browser)  2. For each bucket, make sure that `Access control` column has the value `Uniform`.  **From Google Cloud CLI**  1. List all buckets in a project ``` gsutil ls ``` 2. For each bucket, verify that uniform bucket-level access is enabled. ``` gsutil uniformbucketlevelaccess get gs://BUCKET_NAME/ ``` If uniform bucket-level access is enabled, the response looks like:  ``` Uniform bucket-level access setting for gs://BUCKET_NAME/:  Enabled: True  LockedTime: LOCK_DATE ```",
           "AdditionalInformation": "Uniform bucket-level access can no longer be disabled if it has been active on a bucket for 90 consecutive days.",
           "References": "https://cloud.google.com/storage/docs/uniform-bucket-level-access:https://cloud.google.com/storage/docs/using-uniform-bucket-level-access:https://cloud.google.com/storage/docs/setting-org-policies#uniform-bucket"
         }
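To avoid clicking through each bucket, the same `gsutil` subcommand used in the audit can be looped over every bucket; a minimal sketch (illustrative only, not part of the benchmark JSON):

```
# Report the uniform bucket-level access setting for every bucket in the project;
# each entry should show "Enabled: True".
for bucket in $(gsutil ls); do
  echo "== $bucket"
  gsutil uniformbucketlevelaccess get "$bucket"
done
```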
@@ -1216,8 +1216,8 @@
           "Description": "It is recommended to have all SQL database instances set to enable automated backups.",
           "RationaleStatement": "Backups provide a way to restore a Cloud SQL instance to recover lost data or recover from a problem with that instance. Automated backups need to be set for any instance that contains data that should be protected from loss or damage. This recommendation is applicable for SQL Server, PostgreSql, MySql generation 1 and MySql generation 2 instances.",
           "ImpactStatement": "Automated Backups will increase required size of storage and costs associated with it.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance where the backups need to be configured.\n3. Click `Edit`.\n4. In the `Backups` section, check `Enable automated backups', and choose a backup window.\n5. Click `Save`.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database instances using the following command:\n```\ngcloud sql instances list\n```\n\n2. Enable `Automated backups` for every Cloud SQL database instance using the below command:\n```\ngcloud sql instances patch  --backup-start-time \n```\nThe `backup-start-time` parameter is specified in 24-hour time, in the UTC±00 time zone, and specifies the start of a 4-hour backup window. Backups can start any time during the backup window.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Click the instance name to open its instance details page.\n3. Go to the `Backups` menu.\n4. Ensure that `Automated backups` is set to `Enabled` and `Backup time` is mentioned.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database instances using the following command:\n```\ngcloud sql instances list\n```\n\n2. Ensure that the below command returns `True` for every Cloud SQL database instance.\n```\ngcloud sql instances describe  --format=\"value('Enabled':settings.backupConfiguration.enabled)\"\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance where the backups need to be configured. 3. Click `Edit`. 4. In the `Backups` section, check `Enable automated backups', and choose a backup window. 5. Click `Save`.  **From Google Cloud CLI**  1. List all Cloud SQL database instances using the following command: ``` gcloud sql instances list ```  2. Enable `Automated backups` for every Cloud SQL database instance using the below command: ``` gcloud sql instances patch  --backup-start-time  ``` The `backup-start-time` parameter is specified in 24-hour time, in the UTC±00 time zone, and specifies the start of a 4-hour backup window. Backups can start any time during the backup window.",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Click the instance name to open its instance details page. 3. Go to the `Backups` menu. 4. Ensure that `Automated backups` is set to `Enabled` and `Backup time` is mentioned.  **From Google Cloud CLI**  1. List all Cloud SQL database instances using the following command: ``` gcloud sql instances list ```  2. Ensure that the below command returns `True` for every Cloud SQL database instance. ``` gcloud sql instances describe  --format=\"value('Enabled':settings.backupConfiguration.enabled)\" ```",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/sql/docs/mysql/backup-recovery/backups:https://cloud.google.com/sql/docs/postgres/backup-recovery/backing-up"
         }
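A scripted variant of the CLI audit above, looping the `describe` call over every instance (illustrative only, not part of the benchmark JSON):

```
# Print the automated-backup setting for every Cloud SQL instance; expect "True" on each line.
for instance in $(gcloud sql instances list --format="value(name)"); do
  enabled=$(gcloud sql instances describe "$instance" \
    --format="value(settings.backupConfiguration.enabled)")
  echo "$instance backupsEnabled=$enabled"
done
```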
@@ -1237,8 +1237,8 @@
           "Description": "It is recommended to configure Second Generation Sql instance to use private IPs instead of public IPs.",
           "RationaleStatement": "To lower the organization's attack surface, Cloud SQL databases should not have public IPs. Private IPs provide improved network security and lower latency for your application.",
           "ImpactStatement": "Removing the public IP address on SQL instances may break some applications that relied on it for database connectivity.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console: https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances)\n2. Click the instance name to open its Instance details page.\n3. Select the `Connections` tab.\n4. Deselect the `Public IP` checkbox.\n5. Click `Save` to update the instance.\n\n**From Google Cloud CLI**\n\n1. For every instance remove its public IP and assign a private IP instead:\n```\ngcloud sql instances patch  --network= --no-assign-ip\n```\n\n2. Confirm the changes using the following command::\n```\ngcloud sql instances describe \n```\n\n**Prevention:**\n\nTo prevent new SQL instances from getting configured with public IP addresses, set up a `Restrict Public IP access on Cloud SQL instances` Organization policy at: https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictPublicIp(https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictPublicIp).",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console: https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances)\n\n2. Ensure that every instance has a private IP address and no public IP address configured.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database instances using the following command:\n\n```\ngcloud sql instances list\n```\n\n2. For every instance of type `instanceType: CLOUD_SQL_INSTANCE` with `backendType: SECOND_GEN`, get detailed configuration. Ignore instances of type `READ_REPLICA_INSTANCE` because these instances inherit their settings from the primary instance. Also, note that first generation instances cannot be configured to have a private IP address.\n\n```\ngcloud sql instances describe \n```\n\n3. Ensure that the setting `ipAddresses` has an IP address configured of `type: PRIVATE` and has no IP address of `type: PRIMARY`. `PRIMARY` IP addresses are public addresses. An instance can have both a private and public address at the same time. Note also that you cannot use private IP with First Generation instances.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console: https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances) 2. Click the instance name to open its Instance details page. 3. Select the `Connections` tab. 4. Deselect the `Public IP` checkbox. 5. Click `Save` to update the instance.  **From Google Cloud CLI**  1. For every instance remove its public IP and assign a private IP instead: ``` gcloud sql instances patch  --network= --no-assign-ip ```  2. Confirm the changes using the following command:: ``` gcloud sql instances describe  ```  **Prevention:**  To prevent new SQL instances from getting configured with public IP addresses, set up a `Restrict Public IP access on Cloud SQL instances` Organization policy at: https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictPublicIp(https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictPublicIp).",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console: https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances)  2. Ensure that every instance has a private IP address and no public IP address configured.  **From Google Cloud CLI**  1. List all Cloud SQL database instances using the following command:  ``` gcloud sql instances list ```  2. For every instance of type `instanceType: CLOUD_SQL_INSTANCE` with `backendType: SECOND_GEN`, get detailed configuration. Ignore instances of type `READ_REPLICA_INSTANCE` because these instances inherit their settings from the primary instance. Also, note that first generation instances cannot be configured to have a private IP address.  ``` gcloud sql instances describe  ```  3. Ensure that the setting `ipAddresses` has an IP address configured of `type: PRIVATE` and has no IP address of `type: PRIMARY`. `PRIMARY` IP addresses are public addresses. An instance can have both a private and public address at the same time. Note also that you cannot use private IP with First Generation instances.",
           "AdditionalInformation": "Replicas inherit their private IP status from their primary instance. You cannot configure a private IP directly on a replica.",
           "References": "https://cloud.google.com/sql/docs/mysql/configure-private-ip:https://cloud.google.com/sql/docs/mysql/private-ip:https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints:https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictPublicIp"
         }
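The per-instance inspection in the audit above can be scripted; a minimal sketch (illustrative only, not part of the benchmark JSON):

```
# Dump the ipAddresses block for each Cloud SQL instance; a PRIMARY entry means
# the instance still has a public IP.
for instance in $(gcloud sql instances list --format="value(name)"); do
  echo "== $instance"
  gcloud sql instances describe "$instance" --format="json(ipAddresses)"
done
```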
@@ -1256,10 +1256,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "Database Server should accept connections only from trusted Network(s)/IP(s) and restrict access from public IP addresses.",
-          "RationaleStatement": "To minimize attack surface on a Database server instance, only trusted/known and required IP(s) should be white-listed to connect to it.\n\nAn authorized network should not have IPs/networks configured to `0.0.0.0/0` which will allow access to the instance from anywhere in the world. Note that authorized networks apply only to instances with public IPs.",
+          "RationaleStatement": "To minimize attack surface on a Database server instance, only trusted/known and required IP(s) should be white-listed to connect to it.  An authorized network should not have IPs/networks configured to `0.0.0.0/0` which will allow access to the instance from anywhere in the world. Note that authorized networks apply only to instances with public IPs.",
           "ImpactStatement": "The Cloud SQL database instance would not be available to public IP addresses.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n\n2. Click the instance name to open its `Instance details` page.\n3. Under the `Configuration` section click `Edit configurations`\n4. Under `Configuration options` expand the `Connectivity` section.\n5. Click the `delete` icon for the authorized network `0.0.0.0/0`.\n6. Click `Save` to update the instance.\n\n**From Google Cloud CLI**\n\nUpdate the authorized network list by dropping off any addresses.\n\n```\ngcloud sql instances patch  --authorized-networks=IP_ADDR1,IP_ADDR2...\n```\n\n**Prevention:**\n\nTo prevent new SQL instances from being configured to accept incoming connections from any IP addresses, set up a `Restrict Authorized Networks on Cloud SQL instances` Organization Policy at: https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictAuthorizedNetworks(https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictAuthorizedNetworks).",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Click the instance name to open its `Instance details` page.\n3. Under the `Configuration` section click `Edit configurations`\n4. Under `Configuration options` expand the `Connectivity` section.\n5. Ensure that no authorized network is configured to allow `0.0.0.0/0`.\n\n**From Google Cloud CLI**\n\n1. Get detailed configuration for every Cloud SQL database instance.\n\n```\ngcloud sql instances list --format=json\n```\n\nEnsure that the section `settings: ipConfiguration : authorizedNetworks` does not have any parameter `value` containing `0.0.0.0/0`.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).  2. Click the instance name to open its `Instance details` page. 3. Under the `Configuration` section click `Edit configurations` 4. Under `Configuration options` expand the `Connectivity` section. 5. Click the `delete` icon for the authorized network `0.0.0.0/0`. 6. Click `Save` to update the instance.  **From Google Cloud CLI**  Update the authorized network list by dropping off any addresses.  ``` gcloud sql instances patch  --authorized-networks=IP_ADDR1,IP_ADDR2... ```  **Prevention:**  To prevent new SQL instances from being configured to accept incoming connections from any IP addresses, set up a `Restrict Authorized Networks on Cloud SQL instances` Organization Policy at: https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictAuthorizedNetworks(https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictAuthorizedNetworks).",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Click the instance name to open its `Instance details` page. 3. Under the `Configuration` section click `Edit configurations` 4. Under `Configuration options` expand the `Connectivity` section. 5. Ensure that no authorized network is configured to allow `0.0.0.0/0`.  **From Google Cloud CLI**  1. Get detailed configuration for every Cloud SQL database instance.  ``` gcloud sql instances list --format=json ```  Ensure that the section `settings: ipConfiguration : authorizedNetworks` does not have any parameter `value` containing `0.0.0.0/0`.",
           "AdditionalInformation": "There is no IPv6 configuration found for Google cloud SQL server services.",
           "References": "https://cloud.google.com/sql/docs/mysql/configure-ip:https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictAuthorizedNetworks:https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints:https://cloud.google.com/sql/docs/mysql/connection-org-policy"
         }
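A one-liner variant of the CLI audit above (illustrative only, not part of the benchmark JSON):

```
# Warn if any instance's configuration contains the open range 0.0.0.0/0.
if gcloud sql instances list --format=json | grep -Fq '0.0.0.0/0'; then
  echo "WARNING: at least one Cloud SQL instance authorizes 0.0.0.0/0"
fi
```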
@@ -1277,10 +1277,10 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended to enforce all incoming connections to SQL database instance to use SSL.",
-          "RationaleStatement": "SQL database connections if successfully trapped (MITM); can reveal sensitive data like credentials, database queries, query outputs etc.\nFor security, it is recommended to always use SSL encryption when connecting to your instance.\nThis recommendation is applicable for Postgresql, MySql generation 1, MySql generation 2 and SQL Server 2017 instances.",
+          "RationaleStatement": "SQL database connections if successfully trapped (MITM); can reveal sensitive data like credentials, database queries, query outputs etc. For security, it is recommended to always use SSL encryption when connecting to your instance. This recommendation is applicable for Postgresql, MySql generation 1, MySql generation 2 and SQL Server 2017 instances.",
           "ImpactStatement": "After enforcing SSL connection, existing client will not be able to communicate with SQL server unless configured with appropriate client-certificates to communicate to SQL database instance.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n\n2. Click on an instance name to see its configuration overview.\n\n3. In the left-side panel, select `Connections`.\n\n3. In the `SSL connections` section, click `Allow only SSL connections`.\n\n4. Under `Configure SSL server certificates` click `Create new certificate`.\n\n5. Under `Configure SSL client certificates` click `Create a client certificate`. \n\n6. Follow the instructions shown to learn how to connect to your instance. \n\n**From Google Cloud CLI**\n\nTo enforce SSL encryption for an instance run the command:\n\n```\ngcloud sql instances patch  --require-ssl\n```\n\nNote:\n`RESTART` is required for type MySQL Generation 1 Instances (`backendType: FIRST_GEN`) to get this configuration in effect.",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n\n2. Click on an instance name to see its configuration overview.\n\n3. In the left-side panel, select `Connections`.\n\n3. In the `SSL connections` section, ensure that `Only secured connections are allowed to connect to this instance.`.\n\n**From Google Cloud CLI**\n\n1. Get the detailed configuration for every SQL database instance using the following command:\n\n```\ngcloud sql instances list --format=json\n```\n\nEnsure that section `settings: ipConfiguration` has the parameter `requireSsl` set to `true`.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).  2. Click on an instance name to see its configuration overview.  3. In the left-side panel, select `Connections`.  3. In the `SSL connections` section, click `Allow only SSL connections`.  4. Under `Configure SSL server certificates` click `Create new certificate`.  5. Under `Configure SSL client certificates` click `Create a client certificate`.   6. Follow the instructions shown to learn how to connect to your instance.   **From Google Cloud CLI**  To enforce SSL encryption for an instance run the command:  ``` gcloud sql instances patch  --require-ssl ```  Note: `RESTART` is required for type MySQL Generation 1 Instances (`backendType: FIRST_GEN`) to get this configuration in effect.",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).  2. Click on an instance name to see its configuration overview.  3. In the left-side panel, select `Connections`.  3. In the `SSL connections` section, ensure that `Only secured connections are allowed to connect to this instance.`.  **From Google Cloud CLI**  1. Get the detailed configuration for every SQL database instance using the following command:  ``` gcloud sql instances list --format=json ```  Ensure that section `settings: ipConfiguration` has the parameter `requireSsl` set to `true`.",
           "AdditionalInformation": "By default `Settings: ipConfiguration` has no `authorizedNetworks` set/configured. In that case even if by default `requireSsl` is not set, which is equivalent to `requireSsl:false` there is no risk as instance cannot be accessed outside of the network unless `authorizedNetworks` are configured. However, If default for `requireSsl` is not updated to `true` any `authorizedNetworks` created later on will not enforce SSL only connection.",
           "References": "https://cloud.google.com/sql/docs/postgres/configure-ssl-instance/"
         }
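A scripted variant of the CLI audit above (illustrative only, not part of the benchmark JSON):

```
# Print the requireSsl setting for every Cloud SQL instance; expect "True" once
# SSL-only connections are enforced.
for instance in $(gcloud sql instances list --format="value(name)"); do
  ssl=$(gcloud sql instances describe "$instance" \
    --format="value(settings.ipConfiguration.requireSsl)")
  echo "$instance requireSsl=$ssl"
done
```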
@@ -1288,18 +1288,18 @@
     },
     {
       "Id": "6.1.1",
-      "Description": "It is recommended to set a password for the administrative user (`root` by default) to prevent unauthorized access to the SQL database instances.\n\nThis recommendation is applicable only for MySQL Instances. PostgreSQL does not offer any setting for No Password from the cloud console.",
+      "Description": "It is recommended to set a password for the administrative user (`root` by default) to prevent unauthorized access to the SQL database instances.  This recommendation is applicable only for MySQL Instances. PostgreSQL does not offer any setting for No Password from the cloud console.",
       "Checks": [],
       "Attributes": [
         {
           "Section": "6.1. MySQL Database",
           "Profile": "Level 1",
           "AssessmentStatus": "Manual",
-          "Description": "It is recommended to set a password for the administrative user (`root` by default) to prevent unauthorized access to the SQL database instances.\n\nThis recommendation is applicable only for MySQL Instances. PostgreSQL does not offer any setting for No Password from the cloud console.",
+          "Description": "It is recommended to set a password for the administrative user (`root` by default) to prevent unauthorized access to the SQL database instances.  This recommendation is applicable only for MySQL Instances. PostgreSQL does not offer any setting for No Password from the cloud console.",
           "RationaleStatement": "At the time of MySQL Instance creation, not providing an administrative password allows anyone to connect to the SQL database instance with administrative privileges. The root password should be set to ensure only authorized users have these privileges.",
           "ImpactStatement": "Connection strings for administrative clients need to be reconfigured to use a password.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Platform Console using `https://console.cloud.google.com/sql/`\n\n2. Select the instance to open its Overview page.\n\n3. Select `Access Control > Users`.\n\n4. Click the `More actions icon` for the user to be updated.\n\n5. Select `Change password`, specify a `New password`, and click `OK`.\n\n**From Google Cloud CLI**\n\n1. Set a password to a MySql instance:\n\n```\ngcloud sql users set-password root --host= --instance= --prompt-for-password\n```\n\n2. A prompt will appear, requiring the user to enter a password:\n\n```\nInstance Password:\n```\n\n3. With a successful password configured, the following message should be seen:\n\n```\nUpdating Cloud SQL user...done.\n```",
-          "AuditProcedure": "**From Google Cloud CLI**\n\n1. List All SQL database instances of type MySQL:\n\n```\ngcloud sql instances list --filter='DATABASE_VERSION:MYSQL* --project  --format=\"(NAME,PRIMARY_ADDRESS)\"'\n```\n\n2. For every MySQL instance try to connect using the `PRIMARY_ADDRESS`, if available:\n\n```\nmysql -u root -h \n```\n\nThe command should return either an error message or a password prompt.\n\nSample Error message:\n\n```\nERROR 1045 (28000): Access denied for user 'root'@'' (using password: NO)\n```\n\nIf a command produces the `mysql>` prompt, the MySQL instance allows anyone to connect with administrative privileges without needing a password.\n\n**Note:** The `No Password` setting is exposed only at the time of MySQL instance creation. Once the instance is created, the Google Cloud Platform Console does not expose the set to confirm whether a password for an administrative user is set to a MySQL instance.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Platform Console using `https://console.cloud.google.com/sql/`  2. Select the instance to open its Overview page.  3. Select `Access Control > Users`.  4. Click the `More actions icon` for the user to be updated.  5. Select `Change password`, specify a `New password`, and click `OK`.  **From Google Cloud CLI**  1. Set a password to a MySql instance:  ``` gcloud sql users set-password root --host= --instance= --prompt-for-password ```  2. A prompt will appear, requiring the user to enter a password:  ``` Instance Password: ```  3. With a successful password configured, the following message should be seen:  ``` Updating Cloud SQL user...done. ```",
+          "AuditProcedure": "**From Google Cloud CLI**  1. List All SQL database instances of type MySQL:  ``` gcloud sql instances list --filter='DATABASE_VERSION:MYSQL* --project  --format=\"(NAME,PRIMARY_ADDRESS)\"' ```  2. For every MySQL instance try to connect using the `PRIMARY_ADDRESS`, if available:  ``` mysql -u root -h  ```  The command should return either an error message or a password prompt.  Sample Error message:  ``` ERROR 1045 (28000): Access denied for user 'root'@'' (using password: NO) ```  If a command produces the `mysql>` prompt, the MySQL instance allows anyone to connect with administrative privileges without needing a password.  **Note:** The `No Password` setting is exposed only at the time of MySQL instance creation. Once the instance is created, the Google Cloud Platform Console does not expose the set to confirm whether a password for an administrative user is set to a MySQL instance.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/sql/docs/mysql/create-manage-users:https://cloud.google.com/sql/docs/mysql/create-instance"
         }
@@ -1319,9 +1319,9 @@
           "Description": "It is recommended to set `skip_show_database` database flag for Cloud SQL Mysql instance to `on`",
           "RationaleStatement": "'skip_show_database' database flag prevents people from using the SHOW DATABASES statement if they do not have the SHOW DATABASES privilege. This can improve security if you have concerns about users being able to see databases belonging to other users. Its effect depends on the SHOW DATABASES privilege: If the variable value is ON, the SHOW DATABASES statement is permitted only to users who have the SHOW DATABASES privilege, and the statement displays all database names. If the value is OFF, SHOW DATABASES is permitted to all users, but displays the names of only those databases for which the user has the SHOW DATABASES or other privilege. This recommendation is applicable to Mysql database instances.",
           "ImpactStatement": "",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the Mysql instance for which you want to enable to database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `skip_show_database` from the drop-down menu, and set its value to `on`.\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database Instances\n```\ngcloud sql instances list\n```\n2. Configure the `skip_show_database` database flag for every Cloud SQL Mysql database instance using the below command.\n```\ngcloud sql instances patch INSTANCE_NAME --database-flags skip_show_database=on\n```\n\n```\nNote : \n\nThis command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `skip_show_database` that has been set is listed under the `Database flags` section.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database Instances\n```\ngcloud sql instances list\n```\n2. Ensure the below command returns `on` for every Cloud SQL Mysql database instance\n```\ngcloud sql instances describe INSTANCE_NAME --format=json | jq '.settings.databaseFlags | select(.name==\"skip_show_database\")|.value'\n```",
-          "AdditionalInformation": "```\n\"WARNING: This patch modifies database flag values, which may require \nyour instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/mysql/flags - to see if your \ninstance will be restarted when this patch is submitted.\n```\n\n```\nNote: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\"\n\n```\n\n```\nNote: Configuring the above flag restarts the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the Mysql instance for which you want to enable to database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `skip_show_database` from the drop-down menu, and set its value to `on`. 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. List all Cloud SQL database Instances ``` gcloud sql instances list ``` 2. Configure the `skip_show_database` database flag for every Cloud SQL Mysql database instance using the below command. ``` gcloud sql instances patch INSTANCE_NAME --database-flags skip_show_database=on ```  ``` Note :   This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `skip_show_database` that has been set is listed under the `Database flags` section.  **From Google Cloud CLI**  1. List all Cloud SQL database Instances ``` gcloud sql instances list ``` 2. Ensure the below command returns `on` for every Cloud SQL Mysql database instance ``` gcloud sql instances describe INSTANCE_NAME --format=json | jq '.settings.databaseFlags | select(.name==\"skip_show_database\")|.value' ```",
+          "AdditionalInformation": "``` \"WARNING: This patch modifies database flag values, which may require  your instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/mysql/flags - to see if your  instance will be restarted when this patch is submitted. ```  ``` Note: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\"  ```  ``` Note: Configuring the above flag restarts the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/mysql/flags:https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_skip_show_database"
         }
       ]
@@ -1338,18 +1338,18 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended to set the `local_infile` database flag for a Cloud SQL MySQL instance to `off`.",
-          "RationaleStatement": "The `local_infile` flag controls the server-side LOCAL capability for LOAD DATA statements. Depending on the `local_infile` setting, the server refuses or permits local data loading by clients that have LOCAL enabled on the client side.\n\nTo explicitly cause the server to refuse LOAD DATA LOCAL statements (regardless of how client programs and libraries are configured at build time or runtime), start mysqld with local_infile disabled. local_infile can also be set at runtime.\n\nDue to security issues associated with the `local_infile` flag, it is recommended to disable it. This recommendation is applicable to MySQL database instances.",
+          "RationaleStatement": "The `local_infile` flag controls the server-side LOCAL capability for LOAD DATA statements. Depending on the `local_infile` setting, the server refuses or permits local data loading by clients that have LOCAL enabled on the client side.  To explicitly cause the server to refuse LOAD DATA LOCAL statements (regardless of how client programs and libraries are configured at build time or runtime), start mysqld with local_infile disabled. local_infile can also be set at runtime.  Due to security issues associated with the `local_infile` flag, it is recommended to disable it. This recommendation is applicable to MySQL database instances.",
           "ImpactStatement": "Disabling `local_infile` makes the server refuse local data loading by clients that have LOCAL enabled on the client side.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the MySQL instance where the database flag needs to be enabled.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `local_infile` from the drop-down menu, and set its value to `off`.\n6. Click `Save`.\n7. Confirm the changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database instances using the following command:\n```\ngcloud sql instances list\n```\n2. Configure the `local_infile` database flag for every Cloud SQL Mysql database instance using the below command:\n```\ngcloud sql instances patch INSTANCE_NAME --database-flags local_infile=off\n```\n\n```\nNote : \n\nThis command will overwrite all database flags that were previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `local_infile` that has been set is listed under the `Database flags` section.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database instances:\n```\ngcloud sql instances list\n```\n2. Ensure the below command returns `off` for every Cloud SQL MySQL database instance.\n```\ngcloud sql instances describe INSTANCE_NAME --format=json | jq '.settings.databaseFlags | select(.name==\"local_infile\")|.value'\n```",
-          "AdditionalInformation": "```\n\"WARNING: This patch modifies database flag values, which may require \nthe instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/mysql/flags - to see if your instance will be restarted when this patch is submitted.\n```\n\n```\nNote: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\"\n\n```\n\n```\nNote: Configuring the above flag restarts the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the MySQL instance where the database flag needs to be enabled. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `local_infile` from the drop-down menu, and set its value to `off`. 6. Click `Save`. 7. Confirm the changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. List all Cloud SQL database instances using the following command: ``` gcloud sql instances list ``` 2. Configure the `local_infile` database flag for every Cloud SQL Mysql database instance using the below command: ``` gcloud sql instances patch INSTANCE_NAME --database-flags local_infile=off ```  ``` Note :   This command will overwrite all database flags that were previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `local_infile` that has been set is listed under the `Database flags` section.  **From Google Cloud CLI**  1. List all Cloud SQL database instances: ``` gcloud sql instances list ``` 2. Ensure the below command returns `off` for every Cloud SQL MySQL database instance. ``` gcloud sql instances describe INSTANCE_NAME --format=json | jq '.settings.databaseFlags | select(.name==\"local_infile\")|.value' ```",
+          "AdditionalInformation": "``` \"WARNING: This patch modifies database flag values, which may require  the instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/mysql/flags - to see if your instance will be restarted when this patch is submitted. ```  ``` Note: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\"  ```  ``` Note: Configuring the above flag restarts the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/mysql/flags:https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_local_infile:https://dev.mysql.com/doc/refman/5.7/en/load-data-local.html"
         }
       ]
     },
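The flag checks for 6.1.2 and 6.1.3 follow the same pattern; a combined sketch loops over the MySQL instances in the project (illustrative only, not part of the benchmark JSON; it requires `jq` and reuses the `DATABASE_VERSION:MYSQL*` filter shown in the audit text above):

```
# Print the skip_show_database and local_infile flags for every MySQL instance;
# expect "on" and "off" respectively (an empty result means the flag is unset).
for instance in $(gcloud sql instances list \
    --filter="DATABASE_VERSION:MYSQL*" --format="value(name)"); do
  flags=$(gcloud sql instances describe "$instance" --format=json \
    | jq -r '.settings.databaseFlags[]? | select(.name=="skip_show_database" or .name=="local_infile") | "\(.name)=\(.value)"')
  echo "== $instance"
  echo "${flags:-no flags set}"
done
```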
     {
       "Id": "6.2.1",
-      "Description": "The `log_error_verbosity` flag controls the verbosity/details of messages logged. Valid values are:\n- `TERSE`\n- `DEFAULT`\n- `VERBOSE`\n\n`TERSE` excludes the logging of `DETAIL`, `HINT`, `QUERY`, and `CONTEXT` error information.\n\n`VERBOSE` output includes the `SQLSTATE` error code, source code file name, function name, and line number that generated the error.\n\nEnsure an appropriate value is set to 'DEFAULT' or stricter.",
+      "Description": "The `log_error_verbosity` flag controls the verbosity/details of messages logged. Valid values are: - `TERSE` - `DEFAULT` - `VERBOSE`  `TERSE` excludes the logging of `DETAIL`, `HINT`, `QUERY`, and `CONTEXT` error information.  `VERBOSE` output includes the `SQLSTATE` error code, source code file name, function name, and line number that generated the error.  Ensure an appropriate value is set to 'DEFAULT' or stricter.",
       "Checks": [
         "cloudsql_instance_postgres_log_error_verbosity_flag"
       ],
@@ -1358,19 +1358,19 @@
           "Section": "6.2. PostgreSQL Database",
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
-          "Description": "The `log_error_verbosity` flag controls the verbosity/details of messages logged. Valid values are:\n- `TERSE`\n- `DEFAULT`\n- `VERBOSE`\n\n`TERSE` excludes the logging of `DETAIL`, `HINT`, `QUERY`, and `CONTEXT` error information.\n\n`VERBOSE` output includes the `SQLSTATE` error code, source code file name, function name, and line number that generated the error.\n\nEnsure an appropriate value is set to 'DEFAULT' or stricter.",
+          "Description": "The `log_error_verbosity` flag controls the verbosity/details of messages logged. Valid values are: - `TERSE` - `DEFAULT` - `VERBOSE`  `TERSE` excludes the logging of `DETAIL`, `HINT`, `QUERY`, and `CONTEXT` error information.  `VERBOSE` output includes the `SQLSTATE` error code, source code file name, function name, and line number that generated the error.  Ensure an appropriate value is set to 'DEFAULT' or stricter.",
           "RationaleStatement": "Auditing helps in troubleshooting operational problems and also permits forensic analysis. If `log_error_verbosity` is not set to the correct value, too many details or too few details may be logged. This flag should be configured with a value of 'DEFAULT' or stricter. This recommendation is applicable to PostgreSQL database instances.",
           "ImpactStatement": "Turning on logging will increase the required storage over time. Mismanaged logs may cause your storage costs to increase. Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances.\n2. Select the PostgreSQL instance for which you want to enable the database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_error_verbosity` from the drop-down menu and set appropriate value.\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the log_error_verbosity database flag for every Cloud SQL PosgreSQL database instance using the below command.\n```\ngcloud sql instances patch  --database-flags log_error_verbosity=\n```\n```\nNote: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Go to `Configuration` card\n4. Under `Database flags`, check the value of `log_error_verbosity` flag is set to 'DEFAULT' or stricter.\n\n**From Google Cloud CLI**\n\n1. Use the below command for every Cloud SQL PostgreSQL database instance to verify the value of `log_error_verbosity`\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"log_error_verbosity\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.\n```\n```\nNote: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n```\n```\nNote: Configuring the above flag does not require restarting the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances. 2. Select the PostgreSQL instance for which you want to enable the database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_error_verbosity` from the drop-down menu and set appropriate value. 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the log_error_verbosity database flag for every Cloud SQL PosgreSQL database instance using the below command. ``` gcloud sql instances patch  --database-flags log_error_verbosity= ``` ``` Note: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Go to `Configuration` card 4. Under `Database flags`, check the value of `log_error_verbosity` flag is set to 'DEFAULT' or stricter.  **From Google Cloud CLI**  1. Use the below command for every Cloud SQL PostgreSQL database instance to verify the value of `log_error_verbosity` ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"log_error_verbosity\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted. ``` ``` Note: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines. ``` ``` Note: Configuring the above flag does not require restarting the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/postgres/flags:https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT"
         }
       ]
     },
     {
       "Id": "6.2.6",
-      "Description": "The `log_min_error_statement` flag defines the minimum message severity level that are considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`.\nEach severity level includes the subsequent levels mentioned above. Ensure a value of `ERROR` or stricter is set.",
+      "Description": "The `log_min_error_statement` flag defines the minimum message severity level that are considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`. Each severity level includes the subsequent levels mentioned above. Ensure a value of `ERROR` or stricter is set.",
       "Checks": [
         "cloudsql_instance_postgres_log_min_error_statement_flag"
       ],
@@ -1379,19 +1379,19 @@
           "Section": "6.2. PostgreSQL Database",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "The `log_min_error_statement` flag defines the minimum message severity level that are considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`.\nEach severity level includes the subsequent levels mentioned above. Ensure a value of `ERROR` or stricter is set.",
-          "RationaleStatement": "Auditing helps in troubleshooting operational problems and also permits forensic analysis. If `log_min_error_statement` is not set to the correct value, messages may not be classified as error messages appropriately. Considering general log messages as error messages would make is difficult to find actual errors and considering only stricter severity levels as error messages may skip actual errors to log their SQL statements.\nThe `log_min_error_statement` flag should be set to `ERROR` or stricter. This recommendation is applicable to PostgreSQL database instances.",
+          "Description": "The `log_min_error_statement` flag defines the minimum message severity level that are considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`. Each severity level includes the subsequent levels mentioned above. Ensure a value of `ERROR` or stricter is set.",
+          "RationaleStatement": "Auditing helps in troubleshooting operational problems and also permits forensic analysis. If `log_min_error_statement` is not set to the correct value, messages may not be classified as error messages appropriately. Considering general log messages as error messages would make is difficult to find actual errors and considering only stricter severity levels as error messages may skip actual errors to log their SQL statements. The `log_min_error_statement` flag should be set to `ERROR` or stricter. This recommendation is applicable to PostgreSQL database instances.",
           "ImpactStatement": "Turning on logging will increase the required storage over time. Mismanaged logs may cause your storage costs to increase. Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the PostgreSQL instance for which you want to enable the database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_min_error_statement` from the drop-down menu and set appropriate value.\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `log_min_error_statement` database flag for every Cloud SQL PosgreSQL database instance using the below command.\n```\ngcloud sql instances patch  --database-flags log_min_error_statement=\n```\n```\nNote: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Go to `Configuration` card\n4. Under `Database flags`, check the value of `log_min_error_statement` flag is configured as to `ERROR` or stricter.\n\n**From Google Cloud CLI**\n\n1. Use the below command for every Cloud SQL PostgreSQL database instance to verify the value of `log_min_error_statement` is set to `ERROR` or stricter.\n```\ngcloud sql instances list --format=json | jq '..settings.databaseFlags | select(.name==\"log_min_error_statement\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.\n```\n```\nNote: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n```\n```\nNote: Configuring the above flag does not require restarting the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the PostgreSQL instance for which you want to enable the database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_min_error_statement` from the drop-down menu and set appropriate value. 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `log_min_error_statement` database flag for every Cloud SQL PosgreSQL database instance using the below command. ``` gcloud sql instances patch  --database-flags log_min_error_statement= ``` ``` Note: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Go to `Configuration` card 4. Under `Database flags`, check the value of `log_min_error_statement` flag is configured as to `ERROR` or stricter.  **From Google Cloud CLI**  1. Use the below command for every Cloud SQL PostgreSQL database instance to verify the value of `log_min_error_statement` is set to `ERROR` or stricter. ``` gcloud sql instances list --format=json | jq '..settings.databaseFlags | select(.name==\"log_min_error_statement\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted. ``` ``` Note: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines. ``` ``` Note: Configuring the above flag does not require restarting the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/postgres/flags:https://www.postgresql.org/docs/9.6/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN"
         }
       ]
     },
     {
       "Id": "6.2.4",
-      "Description": "The value of `log_statement` flag determined the SQL statements that are logged. Valid values are:\n- `none`\n- `ddl`\n- `mod`\n- `all`\n\nThe value `ddl` logs all data definition statements.\nThe value `mod` logs all ddl statements, plus data-modifying statements.\n\nThe statements are logged after a basic parsing is done and statement type is determined, thus this does not logs statements with errors. When using extended query protocol, logging occurs after an Execute message is received and values of the Bind parameters are included.\n\nA value of 'ddl' is recommended unless otherwise directed by your organization's logging policy.",
+      "Description": "The value of `log_statement` flag determined the SQL statements that are logged. Valid values are: - `none` - `ddl` - `mod` - `all`  The value `ddl` logs all data definition statements. The value `mod` logs all ddl statements, plus data-modifying statements.  The statements are logged after a basic parsing is done and statement type is determined, thus this does not logs statements with errors. When using extended query protocol, logging occurs after an Execute message is received and values of the Bind parameters are included.  A value of 'ddl' is recommended unless otherwise directed by your organization's logging policy.",
       "Checks": [
         "cloudsql_instance_postgres_log_statement_flag"
       ],
@@ -1400,19 +1400,19 @@
           "Section": "6.2. PostgreSQL Database",
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
-          "Description": "The value of `log_statement` flag determined the SQL statements that are logged. Valid values are:\n- `none`\n- `ddl`\n- `mod`\n- `all`\n\nThe value `ddl` logs all data definition statements.\nThe value `mod` logs all ddl statements, plus data-modifying statements.\n\nThe statements are logged after a basic parsing is done and statement type is determined, thus this does not logs statements with errors. When using extended query protocol, logging occurs after an Execute message is received and values of the Bind parameters are included.\n\nA value of 'ddl' is recommended unless otherwise directed by your organization's logging policy.",
-          "RationaleStatement": "Auditing helps in forensic analysis. If `log_statement` is not set to the correct value, too many statements may be logged leading to issues in finding the relevant information from the logs, or too few statements may be logged with relevant information missing from the logs. Setting log_statement to align with your organization's security and logging policies facilitates later auditing and review of database activities.\nThis recommendation is applicable to PostgreSQL database instances.",
+          "Description": "The value of `log_statement` flag determined the SQL statements that are logged. Valid values are: - `none` - `ddl` - `mod` - `all`  The value `ddl` logs all data definition statements. The value `mod` logs all ddl statements, plus data-modifying statements.  The statements are logged after a basic parsing is done and statement type is determined, thus this does not logs statements with errors. When using extended query protocol, logging occurs after an Execute message is received and values of the Bind parameters are included.  A value of 'ddl' is recommended unless otherwise directed by your organization's logging policy.",
+          "RationaleStatement": "Auditing helps in forensic analysis. If `log_statement` is not set to the correct value, too many statements may be logged leading to issues in finding the relevant information from the logs, or too few statements may be logged with relevant information missing from the logs. Setting log_statement to align with your organization's security and logging policies facilitates later auditing and review of database activities. This recommendation is applicable to PostgreSQL database instances.",
           "ImpactStatement": "Turning on logging will increase the required storage over time. Mismanaged logs may cause your storage costs to increase. Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the PostgreSQL instance for which you want to enable the database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_statement` from the drop-down menu and set appropriate value.\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `log_statement` database flag for every Cloud SQL PosgreSQL database instance using the below command.\n```\ngcloud sql instances patch  --database-flags log_statement=\n```\n```\nNote: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Go to `Configuration` card\n4. Under `Database flags`, check the value of `log_statement` flag is set to appropriately.\n\n**From Google Cloud CLI**\n\n1. Use the below command for every Cloud SQL PostgreSQL database instance to verify the value of `log_statement`\n```\ngcloud sql instances list --format=json | jq '..settings.databaseFlags | select(.name==\"log_statement\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.\n```\n```\nNote: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n```\n```\nNote: Configuring the above flag does not require restarting the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the PostgreSQL instance for which you want to enable the database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_statement` from the drop-down menu and set appropriate value. 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `log_statement` database flag for every Cloud SQL PosgreSQL database instance using the below command. ``` gcloud sql instances patch  --database-flags log_statement= ``` ``` Note: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Go to `Configuration` card 4. Under `Database flags`, check the value of `log_statement` flag is set to appropriately.  **From Google Cloud CLI**  1. Use the below command for every Cloud SQL PostgreSQL database instance to verify the value of `log_statement` ``` gcloud sql instances list --format=json | jq '..settings.databaseFlags | select(.name==\"log_statement\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted. ``` ``` Note: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines. ``` ``` Note: Configuring the above flag does not require restarting the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/postgres/flags:https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT"
         }
       ]
     },
     {
       "Id": "6.2.9",
-      "Description": "Instance addresses can be public IP or private IP. Public IP means that the instance is accessible through the public internet. In contrast, instances using only private IP are not accessible through the public internet, but are accessible through a Virtual Private Cloud (VPC).\n\nLimiting network access to your database will limit potential attacks.",
+      "Description": "Instance addresses can be public IP or private IP. Public IP means that the instance is accessible through the public internet. In contrast, instances using only private IP are not accessible through the public internet, but are accessible through a Virtual Private Cloud (VPC).  Limiting network access to your database will limit potential attacks.",
       "Checks": [
         "cloudsql_instance_private_ip_assignment"
       ],
@@ -1421,11 +1421,11 @@
           "Section": "6.2. PostgreSQL Database",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "Instance addresses can be public IP or private IP. Public IP means that the instance is accessible through the public internet. In contrast, instances using only private IP are not accessible through the public internet, but are accessible through a Virtual Private Cloud (VPC).\n\nLimiting network access to your database will limit potential attacks.",
+          "Description": "Instance addresses can be public IP or private IP. Public IP means that the instance is accessible through the public internet. In contrast, instances using only private IP are not accessible through the public internet, but are accessible through a Virtual Private Cloud (VPC).  Limiting network access to your database will limit potential attacks.",
           "RationaleStatement": "Setting databases access only to private will reduce attack surface.",
-          "ImpactStatement": "If you set a database IP to private, only host from the same network will have the ability to connect your database.\n\nConfiguring an existing Cloud SQL instance to use private IP causes the instance to restart.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. In the Google Cloud console, go to the `Cloud SQL Instances` page.\n1. Open the `Overview page` of an instance by clicking the instance name.\n1. Select `Connections` from the SQL navigation menu.\n1. Check the `Private IP` checkbox. A drop-down list shows the available networks in your project.\n1. Select the VPC network you want to use:\n If you see `Private service connection required`:\n 1. Click `Set up connection`.\n 1. In the `Allocate an IP range` section, choose one of the following options:\n - Select one or more existing IP ranges or create a new one from the dropdown. The dropdown includes previously allocated ranges, if there are any, or you can select Allocate a new IP range and enter a new range and name.\n - Use an automatically allocated IP range in your network.\n Note: You can specify an address range only for a primary instance, not for a read replica or clone.\n 3. Click Continue.\n 1. Click Create connection.\n 1. Verify that you see the Private service connection for network VPC_NETWORK_NAME has been successfully created status.\n1. Optional step for Private Services Access - review reference links to VPC documents for additional detail If you want to allow other Google Cloud services such as BigQuery to access data in Cloud SQL and make queries against this data over a private IP connection, then select the Private path for Google Cloud services check box.\n1. Click Save\n\n**From Google Cloud CLI**\n\n1. List cloud SQL instances\n```\ngcloud sql instances list --format=\"json\" | jq '. | .connectionName,.ipAddresses'\n```\nNote the `project name` of the instance you want to set to a private IP, this will be \n\nNote the `instance name` of the instance you want to set to a private IP, this will be \n\nExample public instance output:\n\n```\n\"my-project-123456:us-central1:my-instance\"\n\n {\n \"ipAddress\": \"0.0.0.0\",\n \"type\": \"PRIMARY\"\n },\n {\n \"ipAddress\": \"0.0.0.0\",\n \"type\": \"OUTGOING\"\n }\n```\n\n2. run the following command to list the available VPCs \n```\ngcloud compute networks list --format=\"json\" | jq '..name'\n```\nNote the name of the VPC to use for the instance private IP, this will be \n\n3. run the following to set instance to a private IP\n```\ngcloud beta sql instances patch  \\\n--project= \\\n--network=projects//global/networks/ \\\n--no-assign-ip\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. In the Google Cloud console, go to the `Cloud SQL Instances` page.\n1. Open the `Overview page` of an instance by clicking the instance name.\n1. Look for a field labeled `Private IP address` This field will only show if the Private IP option is checked. The IP listed should be in the private IP space.\n\n**From Google Cloud CLI**\n1. List cloud SQL instances\n```\ngcloud sql instances list --format=\"json\" | jq '. | .connectionName,.ipAddresses'\n```\nEach instance listed should have a `type` of `PRIVATE`.\n\n2. If you want to view a specific instance, note the (s) listed and run the following.\n```\ngcloud sql instances describe  --format=\"json\" | jq '.ipAddresses'\n```\n`Type` should be `\"PRIVATE\"`\n```\n {\n \"ipAddress\": \"10.21.0.2\",\n \"type\": \"PRIVATE\"\n }\n```",
+          "ImpactStatement": "If you set a database IP to private, only host from the same network will have the ability to connect your database.  Configuring an existing Cloud SQL instance to use private IP causes the instance to restart.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. In the Google Cloud console, go to the `Cloud SQL Instances` page. 1. Open the `Overview page` of an instance by clicking the instance name. 1. Select `Connections` from the SQL navigation menu. 1. Check the `Private IP` checkbox. A drop-down list shows the available networks in your project. 1. Select the VPC network you want to use:  If you see `Private service connection required`:  1. Click `Set up connection`.  1. In the `Allocate an IP range` section, choose one of the following options:  - Select one or more existing IP ranges or create a new one from the dropdown. The dropdown includes previously allocated ranges, if there are any, or you can select Allocate a new IP range and enter a new range and name.  - Use an automatically allocated IP range in your network.  Note: You can specify an address range only for a primary instance, not for a read replica or clone.  3. Click Continue.  1. Click Create connection.  1. Verify that you see the Private service connection for network VPC_NETWORK_NAME has been successfully created status. 1. Optional step for Private Services Access - review reference links to VPC documents for additional detail If you want to allow other Google Cloud services such as BigQuery to access data in Cloud SQL and make queries against this data over a private IP connection, then select the Private path for Google Cloud services check box. 1. Click Save  **From Google Cloud CLI**  1. List cloud SQL instances ``` gcloud sql instances list --format=\"json\" | jq '. | .connectionName,.ipAddresses' ``` Note the `project name` of the instance you want to set to a private IP, this will be   Note the `instance name` of the instance you want to set to a private IP, this will be   Example public instance output:  ``` \"my-project-123456:us-central1:my-instance\"   {  \"ipAddress\": \"0.0.0.0\",  \"type\": \"PRIMARY\"  },  {  \"ipAddress\": \"0.0.0.0\",  \"type\": \"OUTGOING\"  } ```  2. run the following command to list the available VPCs  ``` gcloud compute networks list --format=\"json\" | jq '..name' ``` Note the name of the VPC to use for the instance private IP, this will be   3. run the following to set instance to a private IP ``` gcloud beta sql instances patch  \\ --project= \\ --network=projects//global/networks/ \\ --no-assign-ip ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. In the Google Cloud console, go to the `Cloud SQL Instances` page. 1. Open the `Overview page` of an instance by clicking the instance name. 1. Look for a field labeled `Private IP address` This field will only show if the Private IP option is checked. The IP listed should be in the private IP space.  **From Google Cloud CLI** 1. List cloud SQL instances ``` gcloud sql instances list --format=\"json\" | jq '. | .connectionName,.ipAddresses' ``` Each instance listed should have a `type` of `PRIVATE`.  2. If you want to view a specific instance, note the (s) listed and run the following. ``` gcloud sql instances describe  --format=\"json\" | jq '.ipAddresses' ``` `Type` should be `\"PRIVATE\"` ```  {  \"ipAddress\": \"10.21.0.2\",  \"type\": \"PRIVATE\"  } ```",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/sql/docs/postgres/configure-private-ip:https://cloud.google.com/vpc/docs/configure-private-services-access#procedure:https://cloud.google.com/vpc/docs/configure-private-services-access#creating-connection"
         }
@@ -1445,9 +1445,9 @@
           "Description": "Ensure `cloudsql.enable_pgaudit` database flag for Cloud SQL PostgreSQL instance is set to `on` to allow for centralized logging.",
           "RationaleStatement": "As numerous other recommendations in this section consist of turning on flags for logging purposes, your organization will need a way to manage these logs. You may have a solution already in place. If you do not, consider installing and enabling the open source pgaudit extension within PostgreSQL and enabling its corresponding flag of `cloudsql.enable_pgaudit`. This flag and installing the extension enables database auditing in PostgreSQL through the open-source pgAudit extension. This extension provides detailed session and object logging to comply with government, financial, & ISO standards and provides auditing capabilities to mitigate threats by monitoring security events on the instance. Enabling the flag and settings later in this recommendation will send these logs to Google Logs Explorer so that you can access them in a central location. to This recommendation is applicable only to PostgreSQL database instances.",
           "ImpactStatement": "Enabling the pgAudit extension can lead to increased data storage requirements and to ensure durability of pgAudit log records in the event of unexpected storage issues, it is recommended to enable the `Enable automatic storage increases` setting on the instance. Enabling flags via the command line will also overwrite all existing flags, so you should apply all needed flags in the CLI command. Also flags may require a restart of the server to be implemented or will break existing functionality so update your servers at a time of low usage.",
-          "RemediationProcedure": "**Initialize the pgAudit flag**\n\n**From Google Cloud Console**\n\n1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Overview` page.\n3. Click `Edit`.\n4. Scroll down and expand `Flags`.\n5. To set a flag that has not been set on the instance before, click `Add item`.\n6. Enter `cloudsql.enable_pgaudit` for the flag name and set the flag to `on`.\n7. Click `Done`.\n8. Click `Save` to update the configuration.\n9. Confirm your changes under `Flags` on the `Overview` page.\n\n**From Google Cloud CLI**\n\nRun the below command by providing `` to enable `cloudsql.enable_pgaudit` flag.\n\n```\ngcloud sql instances patch  --database-flags cloudsql.enable_pgaudit=on\n```\n\nNote: `RESTART` is required to get this configuration in effect.\n\n**Creating the extension**\n\n1. Connect to the the server running PostgreSQL or through a SQL client of your choice.\n2. If SSHing to the server in the command line open the PostgreSQL shell by typing `psql`\n3. Run the following command as a superuser.\n\n```\nCREATE EXTENSION pgaudit;\n```\n\n**Updating the previously created pgaudit.log flag for your Logging Needs**\n\n**From Console:**\n\nNote: there are multiple options here. This command will enable logging for all databases on a server. Please see the customizing database audit logging reference for more flag options. \n\n1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Overview` page.\n3. Click `Edit`.\n4. Scroll down and expand `Flags`.\n5. To set a flag that has not been set on the instance before, click `Add item`.\n6. Enter `pgaudit.log=all` for the flag name and set the flag to `on`.\n7. Click `Done`.\n8. Click `Save` to update the configuration.\n9. Confirm your changes under `Flags` on the `Overview` page.\n\n**From Command Line:**\n\nRun the command\n\n```\ngcloud sql instances patch  --database-flags \\\n cloudsql.enable_pgaudit=on,pgaudit.log=all\n```\n\n**Determine if logs are being sent to Logs Explorer**\n\n1. From the Google Console home page, open the hamburger menu in the top left.\n2. In the menu that pops open, scroll down to Logs Explorer under Operations.\n3. In the query box, paste the following and search\n\nresource.type=\"cloudsql_database\"\nlogName=\"projects//logs/cloudaudit.googleapis.com%2Fdata_access\"\nprotoPayload.request.@type=\"type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry\"\n\n If it returns any log sources, they are correctly setup.",
-          "AuditProcedure": "**Determining if the pgAudit Flag is set to 'on'**\n\n**From Google Cloud Console**\n\n1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Overview` page.\n3. Click `Edit`.\n4. Scroll down and expand `Flags`.\n5. Ensure that `cloudsql.enable_pgaudit` flag is set to `on`.\n\n**From Google Cloud CLI**\n\nRun the command by providing ``. Ensure the value of the flag is `on`.\n\n``` \ngcloud sql instances describe  --format=\"json\" | jq '.settings|.|.databaseFlags|select(.name==\"cloudsql.enable_pgaudit\")|.value' \n```\n\n**Determine if the pgAudit extension is installed**\n\n1. Connect to the the server running PostgreSQL or through a SQL client of your choice.\n2. Via command line open the PostgreSQL shell by typing `psql`\n3. Run the following command\n\n```\nSELECT * \nFROM pg_extension;\n```\n\n4. If pgAudit is in this list. If so, it is installed.\n\n**Determine if Data Access Audit logs are enabled for your project and have sufficient privileges**\n\n1. From the homepage open the hamburger menu in the top left.\n2. Scroll down to `IAM & Admin`and hover over it.\n3. In the menu that opens up, select `Audit Logs`\n4. In the middle of the page, in the search box next to `filter` search for `Cloud Composer API`\n5. Select it, and ensure that both 'Admin Read' and 'Data Read' are checked.\n\n**Determine if logs are being sent to Logs Explorer**\n\n1. From the Google Console home page, open the hamburger menu in the top left.\n2. In the menu that pops open, scroll down to Logs Explorer under Operations.\n3. In the query box, paste the following and search\n```\nresource.type=\"cloudsql_database\"\nlogName=\"projects//logs/cloudaudit.googleapis.com%2Fdata_access\"\nprotoPayload.request.@type=\"type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry\"\n```\n4. If it returns any log sources, they are correctly setup.",
-          "AdditionalInformation": "WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.\n\nNote: Configuring the 'cloudsql.enable_pgaudit' database flag requires restarting the Cloud SQL PostgreSQL instance.",
+          "RemediationProcedure": "**Initialize the pgAudit flag**  **From Google Cloud Console**  1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Overview` page. 3. Click `Edit`. 4. Scroll down and expand `Flags`. 5. To set a flag that has not been set on the instance before, click `Add item`. 6. Enter `cloudsql.enable_pgaudit` for the flag name and set the flag to `on`. 7. Click `Done`. 8. Click `Save` to update the configuration. 9. Confirm your changes under `Flags` on the `Overview` page.  **From Google Cloud CLI**  Run the below command by providing `` to enable `cloudsql.enable_pgaudit` flag.  ``` gcloud sql instances patch  --database-flags cloudsql.enable_pgaudit=on ```  Note: `RESTART` is required to get this configuration in effect.  **Creating the extension**  1. Connect to the the server running PostgreSQL or through a SQL client of your choice. 2. If SSHing to the server in the command line open the PostgreSQL shell by typing `psql` 3. Run the following command as a superuser.  ``` CREATE EXTENSION pgaudit; ```  **Updating the previously created pgaudit.log flag for your Logging Needs**  **From Console:**  Note: there are multiple options here. This command will enable logging for all databases on a server. Please see the customizing database audit logging reference for more flag options.   1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Overview` page. 3. Click `Edit`. 4. Scroll down and expand `Flags`. 5. To set a flag that has not been set on the instance before, click `Add item`. 6. Enter `pgaudit.log=all` for the flag name and set the flag to `on`. 7. Click `Done`. 8. Click `Save` to update the configuration. 9. Confirm your changes under `Flags` on the `Overview` page.  **From Command Line:**  Run the command  ``` gcloud sql instances patch  --database-flags \\  cloudsql.enable_pgaudit=on,pgaudit.log=all ```  **Determine if logs are being sent to Logs Explorer**  1. From the Google Console home page, open the hamburger menu in the top left. 2. In the menu that pops open, scroll down to Logs Explorer under Operations. 3. In the query box, paste the following and search  resource.type=\"cloudsql_database\" logName=\"projects//logs/cloudaudit.googleapis.com%2Fdata_access\" protoPayload.request.@type=\"type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry\"   If it returns any log sources, they are correctly setup.",
+          "AuditProcedure": "**Determining if the pgAudit Flag is set to 'on'**  **From Google Cloud Console**  1. Go to https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Overview` page. 3. Click `Edit`. 4. Scroll down and expand `Flags`. 5. Ensure that `cloudsql.enable_pgaudit` flag is set to `on`.  **From Google Cloud CLI**  Run the command by providing ``. Ensure the value of the flag is `on`.  ```  gcloud sql instances describe  --format=\"json\" | jq '.settings|.|.databaseFlags|select(.name==\"cloudsql.enable_pgaudit\")|.value'  ```  **Determine if the pgAudit extension is installed**  1. Connect to the the server running PostgreSQL or through a SQL client of your choice. 2. Via command line open the PostgreSQL shell by typing `psql` 3. Run the following command  ``` SELECT *  FROM pg_extension; ```  4. If pgAudit is in this list. If so, it is installed.  **Determine if Data Access Audit logs are enabled for your project and have sufficient privileges**  1. From the homepage open the hamburger menu in the top left. 2. Scroll down to `IAM & Admin`and hover over it. 3. In the menu that opens up, select `Audit Logs` 4. In the middle of the page, in the search box next to `filter` search for `Cloud Composer API` 5. Select it, and ensure that both 'Admin Read' and 'Data Read' are checked.  **Determine if logs are being sent to Logs Explorer**  1. From the Google Console home page, open the hamburger menu in the top left. 2. In the menu that pops open, scroll down to Logs Explorer under Operations. 3. In the query box, paste the following and search ``` resource.type=\"cloudsql_database\" logName=\"projects//logs/cloudaudit.googleapis.com%2Fdata_access\" protoPayload.request.@type=\"type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry\" ``` 4. If it returns any log sources, they are correctly setup.",
+          "AdditionalInformation": "WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.  Note: Configuring the 'cloudsql.enable_pgaudit' database flag requires restarting the Cloud SQL PostgreSQL instance.",
           "References": "https://cloud.google.com/sql/docs/postgres/flags#list-flags-postgres:https://cloud.google.com/sql/docs/postgres/pg-audit#enable-auditing-flag:https://cloud.google.com/sql/docs/postgres/pg-audit#customizing-database-audit-logging:https://cloud.google.com/logging/docs/audit/configure-data-access#config-console-enable"
         }
       ]
@@ -1466,9 +1466,9 @@
           "Description": "Enabling the `log_connections` setting causes each attempted connection to the server to be logged, along with successful completion of client authentication. This parameter cannot be changed after the session starts.",
           "RationaleStatement": "PostgreSQL does not log attempted connections by default. Enabling the `log_connections` setting will create log entries for each attempted connection as well as successful completion of client authentication which can be useful in troubleshooting issues and to determine any unusual connection attempts to the server. This recommendation is applicable to PostgreSQL database instances.",
           "ImpactStatement": "Turning on logging will increase the required storage over time. Mismanaged logs may cause your storage costs to increase. Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances.\n2. Select the PostgreSQL instance for which you want to enable the database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_connections` from the drop-down menu and set the value as `on`.\n6. Click `Save`.\n7. Confirm the changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `log_connections` database flag for every Cloud SQL PosgreSQL database instance using the below command.\n```\ngcloud sql instances patch  --database-flags log_connections=on\n```\n```\nNote: \nThis command will overwrite all previously set database flags. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page.\n3. Go to the `Configuration` card.\n4. Under `Database flags`, check the value of `log_connections` flag to determine if it is configured as expected.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns `on` for every Cloud SQL PostgreSQL database instance:\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"log_connections\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.\n```\n```\nNote: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see the Operational Guidelines.\n```\n```\nNote: Configuring the above flag does not require restarting the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances. 2. Select the PostgreSQL instance for which you want to enable the database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_connections` from the drop-down menu and set the value as `on`. 6. Click `Save`. 7. Confirm the changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `log_connections` database flag for every Cloud SQL PosgreSQL database instance using the below command. ``` gcloud sql instances patch  --database-flags log_connections=on ``` ``` Note:  This command will overwrite all previously set database flags. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page. 3. Go to the `Configuration` card. 4. Under `Database flags`, check the value of `log_connections` flag to determine if it is configured as expected.  **From Google Cloud CLI**  1. Ensure the below command returns `on` for every Cloud SQL PostgreSQL database instance: ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"log_connections\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted. ``` ``` Note: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see the Operational Guidelines. ``` ``` Note: Configuring the above flag does not require restarting the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/postgres/flags:https://www.postgresql.org/docs/9.6/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT"
         }
       ]
@@ -1485,11 +1485,11 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "Enabling the `log_disconnections` setting logs the end of each session, including the session duration.",
-          "RationaleStatement": "PostgreSQL does not log session details such as duration and session end by default. Enabling the `log_disconnections` setting will create log entries at the end of each session which can be useful in troubleshooting issues and determine any unusual activity across a time period.\nThe `log_disconnections` and `log_connections` work hand in hand and generally, the pair would be enabled/disabled together. This recommendation is applicable to PostgreSQL database instances.",
+          "RationaleStatement": "PostgreSQL does not log session details such as duration and session end by default. Enabling the `log_disconnections` setting will create log entries at the end of each session which can be useful in troubleshooting issues and determine any unusual activity across a time period. The `log_disconnections` and `log_connections` work hand in hand and generally, the pair would be enabled/disabled together. This recommendation is applicable to PostgreSQL database instances.",
           "ImpactStatement": "Turning on logging will increase the required storage over time. Mismanaged logs may cause your storage costs to increase. Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the PostgreSQL instance where the database flag needs to be enabled.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_disconnections` from the drop-down menu and set the value as `on`.\n6. Click `Save`.\n7. Confirm the changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `log_disconnections` database flag for every Cloud SQL PosgreSQL database instance using the below command:\n```\ngcloud sql instances patch  --database-flags log_disconnections=on\n```\n```\nNote: This command will overwrite all previously set database flags. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Go to the `Configuration` card.\n4. Under `Database flags`, check the value of `log_disconnections` flag is configured as expected.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns `on` for every Cloud SQL PostgreSQL database instance:\n```\ngcloud sql instances list --format=json | jq '..settings.databaseFlags | select(.name==\"log_disconnections\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.\n```\n```\nNote: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n```\n```\nNote: Configuring the above flag does not require restarting the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the PostgreSQL instance where the database flag needs to be enabled. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_disconnections` from the drop-down menu and set the value as `on`. 6. Click `Save`. 7. Confirm the changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `log_disconnections` database flag for every Cloud SQL PosgreSQL database instance using the below command: ``` gcloud sql instances patch  --database-flags log_disconnections=on ``` ``` Note: This command will overwrite all previously set database flags. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Go to the `Configuration` card. 4. Under `Database flags`, check the value of `log_disconnections` flag is configured as expected.  **From Google Cloud CLI**  1. Ensure the below command returns `on` for every Cloud SQL PostgreSQL database instance: ``` gcloud sql instances list --format=json | jq '..settings.databaseFlags | select(.name==\"log_disconnections\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted. ``` ``` Note: some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines. ``` ``` Note: Configuring the above flag does not require restarting the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/postgres/flags:https://www.postgresql.org/docs/9.6/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT"
         }
       ]
@@ -1508,16 +1508,16 @@
           "Description": "The `log_min_duration_statement` flag defines the minimum amount of execution time of a statement in milliseconds where the total duration of the statement is logged. Ensure that `log_min_duration_statement` is disabled, i.e., a value of `-1` is set.",
           "RationaleStatement": "Logging SQL statements may include sensitive information that should not be recorded in logs. This recommendation is applicable to PostgreSQL database instances.",
           "ImpactStatement": "Turning on logging will increase the required storage over time. Mismanaged logs may cause your storage costs to increase. Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the PostgreSQL instance where the database flag needs to be enabled.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_min_duration_statement` from the drop-down menu and set a value of `-1`.\n6. Click `Save`.\n7. Confirm the changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database instances using the following command:\n```\ngcloud sql instances list\n```\n2. Configure the `log_min_duration_statement` flag for every Cloud SQL PosgreSQL database instance using the below command:\n```\ngcloud sql instances patch  --database-flags log_min_duration_statement=-1\n```\n```\nNote: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page.\n3. Go to the `Configuration` card.\n4. Under `Database flags`, check that the value of `log_min_duration_statement` flag is set to `-1`.\n\n**From Google Cloud CLI**\n\n1. Use the below command for every Cloud SQL PostgreSQL database instance to verify the value of `log_min_duration_statement` is set to `-1`.\n```\ngcloud sql instances list --format=json| jq '.settings.databaseFlags | select(.name==\"log_min_duration_statement\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.\n```\n```\nNote: Some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n```\n```\nNote: Configuring the above flag does not require restarting the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the PostgreSQL instance where the database flag needs to be enabled. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_min_duration_statement` from the drop-down menu and set a value of `-1`. 6. Click `Save`. 7. Confirm the changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. List all Cloud SQL database instances using the following command: ``` gcloud sql instances list ``` 2. Configure the `log_min_duration_statement` flag for every Cloud SQL PosgreSQL database instance using the below command: ``` gcloud sql instances patch  --database-flags log_min_duration_statement=-1 ``` ``` Note: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page. 3. Go to the `Configuration` card. 4. Under `Database flags`, check that the value of `log_min_duration_statement` flag is set to `-1`.  **From Google Cloud CLI**  1. Use the below command for every Cloud SQL PostgreSQL database instance to verify the value of `log_min_duration_statement` is set to `-1`. ``` gcloud sql instances list --format=json| jq '.settings.databaseFlags | select(.name==\"log_min_duration_statement\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted. ``` ``` Note: Some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines. ``` ``` Note: Configuring the above flag does not require restarting the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/postgres/flags:https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT"
         }
       ]
     },
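Because `gcloud sql instances patch --database-flags` replaces the entire flag list, one way to apply the `-1` setting above without losing flags that are already configured is to rebuild the list from the live instance first. This is only a sketch, assuming `jq` is available; `my-postgres-instance` is a hypothetical name:

```
# Hypothetical instance name -- substitute your own.
INSTANCE_NAME=my-postgres-instance

# Collect the currently configured flags, excluding the one we are about to set.
existing=$(gcloud sql instances describe "$INSTANCE_NAME" --format=json \
  | jq -r '[.settings.databaseFlags // [] | .[]
            | select(.name != "log_min_duration_statement")
            | "\(.name)=\(.value)"] | join(",")')

# Re-apply everything plus the new value so nothing is silently reset.
if [ -n "$existing" ]; then
  gcloud sql instances patch "$INSTANCE_NAME" --database-flags "$existing,log_min_duration_statement=-1"
else
  gcloud sql instances patch "$INSTANCE_NAME" --database-flags "log_min_duration_statement=-1"
fi
```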
     {
       "Id": "6.2.5",
-      "Description": "The `log_min_messages` flag defines the minimum message severity level that is considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`.\nEach severity level includes the subsequent levels mentioned above. ERROR is considered the best practice setting. Changes should only be made in accordance with the organization's logging policy.",
+      "Description": "The `log_min_messages` flag defines the minimum message severity level that is considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`. Each severity level includes the subsequent levels mentioned above. ERROR is considered the best practice setting. Changes should only be made in accordance with the organization's logging policy.",
       "Checks": [
         "cloudsql_instance_postgres_log_min_messages_flag"
       ],
@@ -1526,12 +1526,12 @@
           "Section": "6.2. PostgreSQL Database",
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
-          "Description": "The `log_min_messages` flag defines the minimum message severity level that is considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`.\nEach severity level includes the subsequent levels mentioned above. ERROR is considered the best practice setting. Changes should only be made in accordance with the organization's logging policy.",
-          "RationaleStatement": "Auditing helps in troubleshooting operational problems and also permits forensic analysis. If `log_min_messages` is not set to the correct value, messages may not be classified as error messages appropriately. An organization will need to decide their own threshold for logging `log_min_messages` flag.\n\nThis recommendation is applicable to PostgreSQL database instances.",
-          "ImpactStatement": "Setting the threshold too low will might result in increased log storage size and length, making it difficult to find actual errors. Setting the threshold to 'Warning' will log messages for the most needed error messages. Higher severity levels may cause errors needed to troubleshoot to not be logged.\n\nNote: To effectively turn off logging failing statements, set this parameter to PANIC.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances)\n2. Select the PostgreSQL instance for which you want to enable the database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_min_messages` from the drop-down menu and set appropriate value.\n6. Click `Save` to save the changes.\n7. Confirm the changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `log_min_messages` database flag for every Cloud SQL PosgreSQL database instance using the below command.\n```\ngcloud sql instances patch  --database-flags log_min_messages=\n```\n```\nNote: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page.\n3. Go to the `Configuration` card.\n4. Under `Database flags`, check the value of `log_min_messages` flag is in accordance with the organization's logging policy.\n\n**From Google Cloud CLI**\n\n1. Use the below command for every Cloud SQL PostgreSQL database instance to verify that the value of `log_min_messages` is in accordance with the organization's logging policy.\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"log_min_messages\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted.\n```\n```\nNote: Some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n```\n```\nNote: Configuring the above flag does not require restarting the Cloud SQL instance.\n```",
+          "Description": "The `log_min_messages` flag defines the minimum message severity level that is considered as an error statement. Messages for error statements are logged with the SQL statement. Valid values include `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, `INFO`, `NOTICE`, `WARNING`, `ERROR`, `LOG`, `FATAL`, and `PANIC`. Each severity level includes the subsequent levels mentioned above. ERROR is considered the best practice setting. Changes should only be made in accordance with the organization's logging policy.",
+          "RationaleStatement": "Auditing helps in troubleshooting operational problems and also permits forensic analysis. If `log_min_messages` is not set to the correct value, messages may not be classified as error messages appropriately. An organization will need to decide their own threshold for logging `log_min_messages` flag.  This recommendation is applicable to PostgreSQL database instances.",
+          "ImpactStatement": "Setting the threshold too low will might result in increased log storage size and length, making it difficult to find actual errors. Setting the threshold to 'Warning' will log messages for the most needed error messages. Higher severity levels may cause errors needed to troubleshoot to not be logged.  Note: To effectively turn off logging failing statements, set this parameter to PANIC.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances) 2. Select the PostgreSQL instance for which you want to enable the database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `log_min_messages` from the drop-down menu and set appropriate value. 6. Click `Save` to save the changes. 7. Confirm the changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `log_min_messages` database flag for every Cloud SQL PosgreSQL database instance using the below command. ``` gcloud sql instances patch  --database-flags log_min_messages= ``` ``` Note: This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page. 3. Go to the `Configuration` card. 4. Under `Database flags`, check the value of `log_min_messages` flag is in accordance with the organization's logging policy.  **From Google Cloud CLI**  1. Use the below command for every Cloud SQL PostgreSQL database instance to verify that the value of `log_min_messages` is in accordance with the organization's logging policy. ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"log_min_messages\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require your instance to be restarted. Check the list of supported flags - https://cloud.google.com/sql/docs/postgres/flags - to see if your instance will be restarted when this patch is submitted. ``` ``` Note: Some database flag settings can affect instance availability or stability and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines. ``` ``` Note: Configuring the above flag does not require restarting the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/postgres/flags:https://www.postgresql.org/docs/9.6/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN"
         }
       ]
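Since the acceptable `log_min_messages` value is policy-defined rather than fixed, an audit can compare each instance against an allowed set. A hedged sketch under the same gcloud/jq assumptions as above; the ERROR/WARNING/NOTICE set below is an assumption, not part of the benchmark:

```
# Report PostgreSQL instances whose log_min_messages value falls outside a
# hypothetical organizational policy (unset flags are reported as "not-set").
gcloud sql instances list --format=json | jq -r '
  .[]
  | select(.databaseVersion | startswith("POSTGRES"))
  | .name as $n
  | ((.settings.databaseFlags // [])
      | map(select(.name == "log_min_messages"))
      | .[0].value // "not-set") as $v
  | select($v | ascii_upcase | IN("ERROR", "WARNING", "NOTICE") | not)
  | "REVIEW: " + $n + " log_min_messages=" + $v'
```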
@@ -1550,9 +1550,9 @@
           "Description": "It is recommended to set `3625 (trace flag)` database flag for Cloud SQL SQL Server instance to `on`.",
           "RationaleStatement": "Microsoft SQL Trace Flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer systems, but they may also be recommended by Microsoft Support to address behavior that is negatively impacting a specific workload. All documented trace flags and those recommended by Microsoft Support are fully supported in a production environment when used as directed. `3625(trace log)` Limits the amount of information returned to users who are not members of the sysadmin fixed server role, by masking the parameters of some error messages using '******'. Setting this in a Google Cloud flag for the instance allows for security through obscurity and prevents the disclosure of sensitive information, hence this is recommended to set this flag globally to on to prevent the flag having been left off, or changed by bad actors. This recommendation is applicable to SQL Server database instances.",
           "ImpactStatement": "Changing flags on a database may cause it to be restarted. The best time to do this is at a time where there is low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the SQL Server instance for which you want to enable to database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `3625` from the drop-down menu, and set its value to `on`.\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `3625` database flag for every Cloud SQL SQL Server database instance using the below command.\n```\ngcloud sql instances patch  --database-flags \"3625=on\"\n```\nNote : \n\nThis command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `3625` that has been set is listed under the `Database flags` section.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns `on` for every Cloud SQL SQL Server database instance\n\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"3625\")|.value'\n```",
-          "AdditionalInformation": "WARNING: \n\nThis patch modifies database flag values, which may require \nyour instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/sqlserver/flags - to see if your \ninstance will be restarted when this patch is submitted.\n\nNote: \n\nsome database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n\nNote: \n\nConfiguring the above flag restarts the Cloud SQL instance.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the SQL Server instance for which you want to enable to database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `3625` from the drop-down menu, and set its value to `on`. 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `3625` database flag for every Cloud SQL SQL Server database instance using the below command. ``` gcloud sql instances patch  --database-flags \"3625=on\" ``` Note :   This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `3625` that has been set is listed under the `Database flags` section.  **From Google Cloud CLI**  1. Ensure the below command returns `on` for every Cloud SQL SQL Server database instance  ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"3625\")|.value' ```",
+          "AdditionalInformation": "WARNING:   This patch modifies database flag values, which may require  your instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/sqlserver/flags - to see if your  instance will be restarted when this patch is submitted.  Note:   some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.  Note:   Configuring the above flag restarts the Cloud SQL instance.",
           "References": "https://cloud.google.com/sql/docs/sqlserver/flags:https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15#trace-flags:https://github.com/ktaranov/sqlserver-kit/blob/master/SQL%20Server%20Trace%20Flag.md"
         }
       ]
@@ -1571,9 +1571,9 @@
           "Description": "It is recommended to set `external scripts enabled` database flag for Cloud SQL SQL Server instance to `off`",
           "RationaleStatement": "`external scripts enabled` enable the execution of scripts with certain remote language extensions. This property is OFF by default. When Advanced Analytics Services is installed, setup can optionally set this property to true. As the External Scripts Enabled feature allows scripts external to SQL such as files located in an R library to be executed, which could adversely affect the security of the system, hence this should be disabled. This recommendation is applicable to SQL Server database instances.",
           "ImpactStatement": "Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the SQL Server instance for which you want to enable to database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `external scripts enabled` from the drop-down menu, and set its value to `off`.\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `external scripts enabled` database flag for every Cloud SQL SQL Server database instance using the below command.\n```\ngcloud sql instances patch  --database-flags \"external scripts enabled=off\"\n```\n\n```\nNote : \n\nThis command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `external scripts enabled` that has been set is listed under the `Database flags` section.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns `off` for every Cloud SQL SQL Server database instance\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"external scripts enabled\")|.value'\n```",
-          "AdditionalInformation": "```\n\"WARNING: This patch modifies database flag values, which may require \nyour instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/sqlserver/flags - to see if your \ninstance will be restarted when this patch is submitted.\n```\n\n```\nNote: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\"\n\n```\n\n```\nNote: Configuring the above flag restarts the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the SQL Server instance for which you want to enable to database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `external scripts enabled` from the drop-down menu, and set its value to `off`. 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `external scripts enabled` database flag for every Cloud SQL SQL Server database instance using the below command. ``` gcloud sql instances patch  --database-flags \"external scripts enabled=off\" ```  ``` Note :   This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `external scripts enabled` that has been set is listed under the `Database flags` section.  **From Google Cloud CLI**  1. Ensure the below command returns `off` for every Cloud SQL SQL Server database instance ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"external scripts enabled\")|.value' ```",
+          "AdditionalInformation": "``` \"WARNING: This patch modifies database flag values, which may require  your instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/sqlserver/flags - to see if your  instance will be restarted when this patch is submitted. ```  ``` Note: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\"  ```  ``` Note: Configuring the above flag restarts the Cloud SQL instance. ```",
           "References": "https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/external-scripts-enabled-server-configuration-option?view=sql-server-ver15:https://cloud.google.com/sql/docs/sqlserver/flags:https://docs.microsoft.com/en-us/sql/advanced-analytics/concepts/security?view=sql-server-ver15:https://www.stigviewer.com/stig/ms_sql_server_2016_instance/2018-03-09/finding/V-79347"
         }
       ]
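Several of the SQL Server flags in this section (`external scripts enabled`, `remote access`, `cross db ownership chaining`) contain spaces, so the whole name=value pair must stay quoted on the command line and in the jq selector. A small set-and-verify sketch with a hypothetical instance name; note that the patch can restart the instance:

```
# Hypothetical instance name -- substitute your own.
INSTANCE_NAME=my-sqlserver-instance

# Quote the pair so the spaces in the flag name survive shell word splitting.
# NOTE: as in the benchmark text, this overwrites any other flags already set.
gcloud sql instances patch "$INSTANCE_NAME" --database-flags "external scripts enabled=off"

# Read the instance back and confirm the flag is now off.
gcloud sql instances describe "$INSTANCE_NAME" --format=json \
  | jq -r '.settings.databaseFlags[] | select(.name == "external scripts enabled") | .value'
```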
@@ -1592,9 +1592,9 @@
           "Description": "It is recommended to set `remote access` database flag for Cloud SQL SQL Server instance to `off`.",
           "RationaleStatement": "The `remote access` option controls the execution of stored procedures from local or remote servers on which instances of SQL Server are running. This default value for this option is 1. This grants permission to run local stored procedures from remote servers or remote stored procedures from the local server. To prevent local stored procedures from being run from a remote server or remote stored procedures from being run on the local server, this must be disabled. The Remote Access option controls the execution of local stored procedures on remote servers or remote stored procedures on local server. 'Remote access' functionality can be abused to launch a Denial-of-Service (DoS) attack on remote servers by off-loading query processing to a target, hence this should be disabled. This recommendation is applicable to SQL Server database instances.",
           "ImpactStatement": "Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the SQL Server instance for which you want to enable to database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `remote access` from the drop-down menu, and set its value to `off`.\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `remote access` database flag for every Cloud SQL SQL Server database instance using the below command\n```\ngcloud sql instances patch  --database-flags \"remote access=off\"\n```\n\n```\nNote : \n\nThis command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `remote access` that has been set is listed under the `Database flags` section.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns `off` for every Cloud SQL SQL Server database instance\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"remote access\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require \nyour instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/sqlserver/flags - to see if your \ninstance will be restarted when this patch is submitted.\n```\n\n```\nNote: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n\n```\n\n```\nNote: Configuring the above flag does not restart the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the SQL Server instance for which you want to enable to database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `remote access` from the drop-down menu, and set its value to `off`. 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `remote access` database flag for every Cloud SQL SQL Server database instance using the below command ``` gcloud sql instances patch  --database-flags \"remote access=off\" ```  ``` Note :   This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `remote access` that has been set is listed under the `Database flags` section.  **From Google Cloud CLI**  1. Ensure the below command returns `off` for every Cloud SQL SQL Server database instance ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"remote access\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require  your instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/sqlserver/flags - to see if your  instance will be restarted when this patch is submitted. ```  ``` Note: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.  ```  ``` Note: Configuring the above flag does not restart the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/sqlserver/flags:https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-remote-access-server-configuration-option?view=sql-server-ver15:https://www.stigviewer.com/stig/ms_sql_server_2016_instance/2018-03-09/finding/V-79337"
         }
       ]
@@ -1613,9 +1613,9 @@
           "Description": "It is recommended to check the `user connections` for a Cloud SQL SQL Server instance to ensure that it is not artificially limiting connections.",
           "RationaleStatement": "The `user connections` option specifies the maximum number of simultaneous user connections that are allowed on an instance of SQL Server. The actual number of user connections allowed also depends on the version of SQL Server that you are using, and also the limits of your application or applications and hardware. SQL Server allows a maximum of 32,767 user connections. Because user connections is by default a self-configuring value, with SQL Server adjusting the maximum number of user connections automatically as needed, up to the maximum value allowable. For example, if only 10 users are logged in, 10 user connection objects are allocated. In most cases, you do not have to change the value for this option. The default is 0, which means that the maximum (32,767) user connections are allowed. However if there is a number defined here that limits connections, SQL Server will not allow anymore above this limit. If the connections are at the limit, any new requests will be dropped, potentially causing lost data or outages for those using the database.",
           "ImpactStatement": "Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the SQL Server instance for which you want to enable to database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `user connections` from the drop-down menu, and set its value to your organization recommended value.\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `user connections` database flag for every Cloud SQL SQL Server database instance using the below command.\n```\ngcloud sql instances patch  --database-flags \"user connections=0-32,767\"\n```\n\n```\nNote : \n\nThis command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `user connections` listed under the `Database flags` section is 0.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns a value of 0, for every Cloud SQL SQL Server database instance.\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"user connections\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require \nyour instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/sqlserver/flags - to see if your \ninstance will be restarted when this patch is submitted.\n```\n\n```\nNote: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n```\n\n```\nNote: Configuring the above flag does not restart the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the SQL Server instance for which you want to enable to database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `user connections` from the drop-down menu, and set its value to your organization recommended value. 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `user connections` database flag for every Cloud SQL SQL Server database instance using the below command. ``` gcloud sql instances patch  --database-flags \"user connections=0-32,767\" ```  ``` Note :   This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `user connections` listed under the `Database flags` section is 0.  **From Google Cloud CLI**  1. Ensure the below command returns a value of 0, for every Cloud SQL SQL Server database instance. ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"user connections\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require  your instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/sqlserver/flags - to see if your  instance will be restarted when this patch is submitted. ```  ``` Note: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines. ```  ``` Note: Configuring the above flag does not restart the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/sqlserver/flags:https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-user-connections-server-configuration-option?view=sql-server-ver15:https://www.stigviewer.com/stig/ms_sql_server_2016_instance/2018-03-09/finding/V-79119"
         }
       ]
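For `user connections`, an absent flag means the default of 0 (self-configuring, up to 32,767 connections), so an audit should treat "not set" and "0" alike and only surface instances where a hard cap has been configured. A sketch under the same gcloud/jq assumptions as above:

```
# Report SQL Server instances where "user connections" artificially caps connections.
gcloud sql instances list --format=json | jq -r '
  .[]
  | select(.databaseVersion | startswith("SQLSERVER"))
  | .name as $n
  | ((.settings.databaseFlags // [])
      | map(select(.name == "user connections"))
      | .[0].value // "0") as $v
  | select($v != "0")
  | "REVIEW: " + $n + " user connections=" + $v'
```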
@@ -1632,11 +1632,11 @@
           "Profile": "Level 1",
           "AssessmentStatus": "Automated",
           "Description": "It is recommended that, `user options` database flag for Cloud SQL SQL Server instance should not be configured.",
-          "RationaleStatement": "The `user options` option specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. The user options option allows you to change the default values of the SET options (if the server's default settings are not appropriate).\n\nA user can override these defaults by using the SET statement. You can configure user options dynamically for new logins. After you change the setting of user options, new login sessions use the new setting; current login sessions are not affected. This recommendation is applicable to SQL Server database instances.",
+          "RationaleStatement": "The `user options` option specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. The user options option allows you to change the default values of the SET options (if the server's default settings are not appropriate).  A user can override these defaults by using the SET statement. You can configure user options dynamically for new logins. After you change the setting of user options, new login sessions use the new setting; current login sessions are not affected. This recommendation is applicable to SQL Server database instances.",
           "ImpactStatement": "Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the SQL Server instance for which you want to enable to database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. Click the X next `user options` flag shown\n6. Click `Save` to save your changes.\n7. Confirm your changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. List all Cloud SQL database Instances\n```\ngcloud sql instances list\n```\n2. Clear the `user options` database flag for every Cloud SQL SQL Server database instance using either of the below commands.\n\n```\n1.Clearing all flags to their default value\n\ngcloud sql instances patch  --clear-database-flags\n\nOR\n2. To clear only `user options` database flag, configure the database flag by overriding the `user options`. Exclude `user options` flag and its value, and keep all other flags you want to configure.\n\ngcloud sql instances patch  --database-flags FLAG1=VALUE1,FLAG2=VALUE2\n```\n\n```\nNote : \n\nThis command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `user options` that has been set is not listed under the `Database flags` section.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns empty result for every Cloud SQL SQL Server database instance\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"user options\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require \nyour instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/sqlserver/flags - to see if your \ninstance will be restarted when this patch is submitted.\n```\n\n```\nNote: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n\n```\n\n```\nNote: Configuring the above flag does not restart the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the SQL Server instance for which you want to enable to database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. Click the X next `user options` flag shown 6. Click `Save` to save your changes. 7. Confirm your changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. List all Cloud SQL database Instances ``` gcloud sql instances list ``` 2. Clear the `user options` database flag for every Cloud SQL SQL Server database instance using either of the below commands.  ``` 1.Clearing all flags to their default value  gcloud sql instances patch  --clear-database-flags  OR 2. To clear only `user options` database flag, configure the database flag by overriding the `user options`. Exclude `user options` flag and its value, and keep all other flags you want to configure.  gcloud sql instances patch  --database-flags FLAG1=VALUE1,FLAG2=VALUE2 ```  ``` Note :   This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags you want set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `user options` that has been set is not listed under the `Database flags` section.  **From Google Cloud CLI**  1. Ensure the below command returns empty result for every Cloud SQL SQL Server database instance ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"user options\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require  your instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/sqlserver/flags - to see if your  instance will be restarted when this patch is submitted. ```  ``` Note: some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.  ```  ``` Note: Configuring the above flag does not restart the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/sqlserver/flags:https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/configure-the-user-options-server-configuration-option?view=sql-server-ver15:https://www.stigviewer.com/stig/ms_sql_server_2016_instance/2018-03-09/finding/V-79335"
         }
       ]
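The two CLI options above are either all-or-nothing (`--clear-database-flags`) or fully manual re-specification. A middle ground is to rebuild the flag list from the live instance minus `user options` and patch with that. This is a sketch with a hypothetical instance name, with the same caveat that a patch may restart the instance:

```
# Hypothetical instance name -- substitute your own.
INSTANCE_NAME=my-sqlserver-instance

# Keep every configured flag except "user options".
remaining=$(gcloud sql instances describe "$INSTANCE_NAME" --format=json \
  | jq -r '[.settings.databaseFlags // [] | .[]
            | select(.name != "user options")
            | "\(.name)=\(.value)"] | join(",")')

if [ -n "$remaining" ]; then
  gcloud sql instances patch "$INSTANCE_NAME" --database-flags "$remaining"
else
  # Nothing else was set, so clearing all flags has the same effect.
  gcloud sql instances patch "$INSTANCE_NAME" --clear-database-flags
fi
```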
@@ -1655,9 +1655,9 @@
           "Description": "It is recommended to set `contained database authentication` database flag for Cloud SQL on the SQL Server instance to `off`.",
           "RationaleStatement": "A contained database includes all database settings and metadata required to define the database and has no configuration dependencies on the instance of the Database Engine where the database is installed. Users can connect to the database without authenticating a login at the Database Engine level. Isolating the database from the Database Engine makes it possible to easily move the database to another instance of SQL Server. Contained databases have some unique threats that should be understood and mitigated by SQL Server Database Engine administrators. Most of the threats are related to the USER WITH PASSWORD authentication process, which moves the authentication boundary from the Database Engine level to the database level, hence this is recommended to disable this flag. This recommendation is applicable to SQL Server database instances.",
           "ImpactStatement": "When `contained database authentication` is off (0) for the instance, contained databases cannot be created, or attached to the Database Engine. Turning on logging will increase the required storage over time. Mismanaged logs may cause your storage costs to increase.Setting custom flags via command line on certain instances will cause all omitted flags to be reset to defaults. This may cause you to lose custom flags and could result in unforeseen complications or instance restarts. Because of this, it is recommended you apply these flags changes during a period of low usage.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the SQL Server instance for which you want to enable to database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `contained database authentication` from the drop-down menu, and set its value to `off`.\n6. Click `Save`.\n7. Confirm the changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `contained database authentication` database flag for every Cloud SQL SQL Server database instance using the below command:\n```\ngcloud sql instances patch  --database-flags \"contained database authentication=off\"\n```\n\n```\nNote: \n\nThis command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `contained database authentication` that has been set is listed under the `Database flags` section.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns `off` for every Cloud SQL SQL Server database instance.\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"contained database authentication\")|.value'\n```",
-          "AdditionalInformation": "```\nWARNING: This patch modifies database flag values, which may require \nyour instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/sqlserver/flags - to see if your \ninstance will be restarted when this patch is submitted.\n```\n```\nNote: Some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n\n```\n```\nNote: Configuring the above flag does not restart the Cloud SQL instance.\n```",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the SQL Server instance for which you want to enable to database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `contained database authentication` from the drop-down menu, and set its value to `off`. 6. Click `Save`. 7. Confirm the changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `contained database authentication` database flag for every Cloud SQL SQL Server database instance using the below command: ``` gcloud sql instances patch  --database-flags \"contained database authentication=off\" ```  ``` Note:   This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\"). ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `contained database authentication` that has been set is listed under the `Database flags` section.  **From Google Cloud CLI**  1. Ensure the below command returns `off` for every Cloud SQL SQL Server database instance. ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"contained database authentication\")|.value' ```",
+          "AdditionalInformation": "``` WARNING: This patch modifies database flag values, which may require  your instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/sqlserver/flags - to see if your  instance will be restarted when this patch is submitted. ``` ``` Note: Some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.  ``` ``` Note: Configuring the above flag does not restart the Cloud SQL instance. ```",
           "References": "https://cloud.google.com/sql/docs/sqlserver/flags:https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/contained-database-authentication-server-configuration-option?view=sql-server-ver15:https://docs.microsoft.com/en-us/sql/relational-databases/databases/security-best-practices-with-contained-databases?view=sql-server-ver15"
         }
       ]
@@ -1676,9 +1676,9 @@
           "Description": "It is recommended to set `cross db ownership chaining` database flag for Cloud SQL SQL Server instance to `off`.",
           "RationaleStatement": "Use the `cross db ownership` for chaining option to configure cross-database ownership chaining for an instance of Microsoft SQL Server. This server option allows you to control cross-database ownership chaining at the database level or to allow cross-database ownership chaining for all databases. Enabling `cross db ownership` is not recommended unless all of the databases hosted by the instance of SQL Server must participate in cross-database ownership chaining and you are aware of the security implications of this setting. This recommendation is applicable to SQL Server database instances.",
           "ImpactStatement": "Updating flags may cause the database to restart. This may cause it to unavailable for a short amount of time, so this is best done at a time of low usage. You should also determine if the tables in your databases reference another table without using credentials for that database, as turning off cross database ownership will break this relationship.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances).\n2. Select the SQL Server instance for which you want to enable to database flag.\n3. Click `Edit`.\n4. Scroll down to the `Flags` section.\n5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `cross db ownership chaining` from the drop-down menu, and set its value to `off`.\n6. Click `Save`.\n7. Confirm the changes under `Flags` on the Overview page.\n\n**From Google Cloud CLI**\n\n1. Configure the `cross db ownership chaining` database flag for every Cloud SQL SQL Server database instance using the below command:\n```\ngcloud sql instances patch  --database-flags \"cross db ownership chaining=off\"\n```\n\nNote: \n\nThis command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to the Cloud SQL Instances page in the Google Cloud Console.\n2. Select the instance to open its `Instance Overview` page\n3. Ensure the database flag `cross db ownership chaining` that has been set is listed under the `Database flags` section.\n\n**From Google Cloud CLI**\n\n1. Ensure the below command returns `off` for every Cloud SQL SQL Server database instance:\n```\ngcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"cross db ownership chaining\")|.value'\n```",
-          "AdditionalInformation": "WARNING: This patch modifies database flag values, which may require \nyour instance to be restarted. Check the list of supported flags - \nhttps://cloud.google.com/sql/docs/sqlserver/flags - to see if your \ninstance will be restarted when this patch is submitted.\n\nNote: Some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.\n\nNote: Configuring the above flag does not restart the Cloud SQL instance.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console by visiting https://console.cloud.google.com/sql/instances(https://console.cloud.google.com/sql/instances). 2. Select the SQL Server instance for which you want to enable to database flag. 3. Click `Edit`. 4. Scroll down to the `Flags` section. 5. To set a flag that has not been set on the instance before, click `Add item`, choose the flag `cross db ownership chaining` from the drop-down menu, and set its value to `off`. 6. Click `Save`. 7. Confirm the changes under `Flags` on the Overview page.  **From Google Cloud CLI**  1. Configure the `cross db ownership chaining` database flag for every Cloud SQL SQL Server database instance using the below command: ``` gcloud sql instances patch  --database-flags \"cross db ownership chaining=off\" ```  Note:   This command will overwrite all database flags previously set. To keep those and add new ones, include the values for all flags to be set on the instance; any flag not specifically included is set to its default value. For flags that do not take a value, specify the flag name followed by an equals sign (\"=\").",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to the Cloud SQL Instances page in the Google Cloud Console. 2. Select the instance to open its `Instance Overview` page 3. Ensure the database flag `cross db ownership chaining` that has been set is listed under the `Database flags` section.  **From Google Cloud CLI**  1. Ensure the below command returns `off` for every Cloud SQL SQL Server database instance: ``` gcloud sql instances list --format=json | jq '.settings.databaseFlags | select(.name==\"cross db ownership chaining\")|.value' ```",
+          "AdditionalInformation": "WARNING: This patch modifies database flag values, which may require  your instance to be restarted. Check the list of supported flags -  https://cloud.google.com/sql/docs/sqlserver/flags - to see if your  instance will be restarted when this patch is submitted.  Note: Some database flag settings can affect instance availability or stability, and remove the instance from the Cloud SQL SLA. For information about these flags, see Operational Guidelines.  Note: Configuring the above flag does not restart the Cloud SQL instance.",
           "References": "https://cloud.google.com/sql/docs/sqlserver/flags:https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/cross-db-ownership-chaining-server-configuration-option?view=sql-server-ver15"
         }
       ]
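The `gcloud sql instances patch` command in the remediation above is reproduced from the benchmark text with the instance identifier left blank. A minimal sketch with a hypothetical instance name filled in, keeping the caveat from the text that `--database-flags` replaces every flag previously set on the instance:

```
# INSTANCE_NAME is a hypothetical placeholder. --database-flags overwrites all
# previously set flags, so list any flags you need to keep alongside this one.
gcloud sql instances patch INSTANCE_NAME \
  --database-flags "cross db ownership chaining=off"
```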
@@ -1695,10 +1695,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "BigQuery by default encrypts the data as rest by employing `Envelope Encryption` using Google managed cryptographic keys. The data is encrypted using the `data encryption keys` and data encryption keys themselves are further encrypted using `key encryption keys`. This is seamless and do not require any additional input from the user. However, if you want to have greater control, Customer-managed encryption keys (CMEK) can be used as encryption key management solution for BigQuery Data Sets.",
-          "RationaleStatement": "BigQuery by default encrypts the data as rest by employing `Envelope Encryption` using Google managed cryptographic keys. This is seamless and does not require any additional input from the user.\n\nFor greater control over the encryption, customer-managed encryption keys (CMEK) can be used as encryption key management solution for BigQuery Data Sets. Setting a Default Customer-managed encryption key (CMEK) for a data set ensure any tables created in future will use the specified CMEK if none other is provided.\n\n```\nNote: Google does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key.\n```",
+          "RationaleStatement": "BigQuery by default encrypts the data as rest by employing `Envelope Encryption` using Google managed cryptographic keys. This is seamless and does not require any additional input from the user.  For greater control over the encryption, customer-managed encryption keys (CMEK) can be used as encryption key management solution for BigQuery Data Sets. Setting a Default Customer-managed encryption key (CMEK) for a data set ensure any tables created in future will use the specified CMEK if none other is provided.  ``` Note: Google does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key. ```",
           "ImpactStatement": "Using Customer-managed encryption keys (CMEK) will incur additional labor-hour investment to create, protect, and manage the keys.",
-          "RemediationProcedure": "**From Google Cloud CLI**\n\nThe default CMEK for existing data sets can be updated by specifying the default key in the `EncryptionConfiguration.kmsKeyName` field when calling the `datasets.insert` or `datasets.patch` methods",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `Analytics`\n2. Go to `BigQuery`\n3. Under `Analysis` click on `SQL Workspaces`, select the project\n4. Select Data Set\n5. Ensure `Customer-managed key` is present under `Dataset info` section.\n6. Repeat for each data set in all projects.\n\n**From Google Cloud CLI**\n\nList all dataset names\n```\nbq ls\n```\nUse the following command to view each dataset details.\n```\nbq show \n```\nVerify the `kmsKeyName` is present.",
+          "RemediationProcedure": "**From Google Cloud CLI**  The default CMEK for existing data sets can be updated by specifying the default key in the `EncryptionConfiguration.kmsKeyName` field when calling the `datasets.insert` or `datasets.patch` methods",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `Analytics` 2. Go to `BigQuery` 3. Under `Analysis` click on `SQL Workspaces`, select the project 4. Select Data Set 5. Ensure `Customer-managed key` is present under `Dataset info` section. 6. Repeat for each data set in all projects.  **From Google Cloud CLI**  List all dataset names ``` bq ls ``` Use the following command to view each dataset details. ``` bq show  ``` Verify the `kmsKeyName` is present.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/bigquery/docs/customer-managed-encryption"
         }
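The remediation above names the field to set but gives no concrete invocation. One way to set a dataset's default CMEK is through the `bq` CLI; the `-d`/`--default_kms_key` flags below are an assumption based on the BigQuery CMEK documentation referenced by this entry, and every name in the command is a hypothetical placeholder:

```
# Hypothetical placeholders: MY_PROJECT, MY_DATASET, and the Cloud KMS key path.
bq update -d \
  --default_kms_key projects/MY_PROJECT/locations/US/keyRings/MY_RING/cryptoKeys/MY_KEY \
  MY_PROJECT:MY_DATASET
```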
@@ -1716,10 +1716,10 @@
           "Profile": "Level 2",
           "AssessmentStatus": "Automated",
           "Description": "BigQuery by default encrypts the data as rest by employing `Envelope Encryption` using Google managed cryptographic keys. The data is encrypted using the `data encryption keys` and data encryption keys themselves are further encrypted using `key encryption keys`. This is seamless and do not require any additional input from the user. However, if you want to have greater control, Customer-managed encryption keys (CMEK) can be used as encryption key management solution for BigQuery Data Sets. If CMEK is used, the CMEK is used to encrypt the data encryption keys instead of using google-managed encryption keys.",
-          "RationaleStatement": "BigQuery by default encrypts the data as rest by employing `Envelope Encryption` using Google managed cryptographic keys. This is seamless and does not require any additional input from the user.\n\nFor greater control over the encryption, customer-managed encryption keys (CMEK) can be used as encryption key management solution for BigQuery tables. The CMEK is used to encrypt the data encryption keys instead of using google-managed encryption keys. BigQuery stores the table and CMEK association and the encryption/decryption is done automatically.\n\nApplying the Default Customer-managed keys on BigQuery data sets ensures that all the new tables created in the future will be encrypted using CMEK but existing tables need to be updated to use CMEK individually.\n\n```\nNote: Google does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key.\n```",
+          "RationaleStatement": "BigQuery by default encrypts the data as rest by employing `Envelope Encryption` using Google managed cryptographic keys. This is seamless and does not require any additional input from the user.  For greater control over the encryption, customer-managed encryption keys (CMEK) can be used as encryption key management solution for BigQuery tables. The CMEK is used to encrypt the data encryption keys instead of using google-managed encryption keys. BigQuery stores the table and CMEK association and the encryption/decryption is done automatically.  Applying the Default Customer-managed keys on BigQuery data sets ensures that all the new tables created in the future will be encrypted using CMEK but existing tables need to be updated to use CMEK individually.  ``` Note: Google does not store your keys on its servers and cannot access your protected data unless you provide the key. This also means that if you forget or lose your key, there is no way for Google to recover the key or to recover any data encrypted with the lost key. ```",
           "ImpactStatement": "Using Customer-managed encryption keys (CMEK) will incur additional labor-hour investment to create, protect, and manage the keys.",
-          "RemediationProcedure": "**From Google Cloud CLI**\nUse the following command to copy the data. The source and the destination needs to be same in case copying to the original table.\n```\nbq cp --destination_kms_key  source_dataset.source_table destination_dataset.destination_table\n```",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `Analytics`\n2. Go to `BigQuery`\n3. Under `SQL Workspace`, select the project\n4. Select Data Set, select the table\n5. Go to `Details` tab\n6. Under `Table info`, verify `Customer-managed key` is present.\n7. Repeat for each table in all data sets for all projects.\n\n**From Google Cloud CLI**\n\nList all dataset names\n```\nbq ls\n```\nUse the following command to view the table details. Verify the `kmsKeyName` is present.\n```\nbq show \n```",
+          "RemediationProcedure": "**From Google Cloud CLI** Use the following command to copy the data. The source and the destination needs to be same in case copying to the original table. ``` bq cp --destination_kms_key  source_dataset.source_table destination_dataset.destination_table ```",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `Analytics` 2. Go to `BigQuery` 3. Under `SQL Workspace`, select the project 4. Select Data Set, select the table 5. Go to `Details` tab 6. Under `Table info`, verify `Customer-managed key` is present. 7. Repeat for each table in all data sets for all projects.  **From Google Cloud CLI**  List all dataset names ``` bq ls ``` Use the following command to view the table details. Verify the `kmsKeyName` is present. ``` bq show  ```",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/bigquery/docs/customer-managed-encryption"
         }
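The `bq cp` command in the remediation above leaves the key argument empty. A minimal sketch with hypothetical placeholders filled in; per the note in the procedure, the source and destination may be the same table, which re-encrypts the existing table under the supplied CMEK:

```
# All names below are hypothetical placeholders.
bq cp \
  --destination_kms_key projects/MY_PROJECT/locations/US/keyRings/MY_RING/cryptoKeys/MY_KEY \
  mydataset.mytable mydataset.mytable
```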
@@ -1739,8 +1739,8 @@
           "Description": "It is recommended that the IAM policy on BigQuery datasets does not allow anonymous and/or public access.",
           "RationaleStatement": "Granting permissions to `allUsers` or `allAuthenticatedUsers` allows anyone to access the dataset. Such access might not be desirable if sensitive data is being stored in the dataset. Therefore, ensure that anonymous and/or public access to a dataset is not allowed.",
           "ImpactStatement": "The dataset is not publicly accessible. Explicit modification of IAM privileges would be necessary to make them publicly accessible.",
-          "RemediationProcedure": "**From Google Cloud Console**\n\n1. Go to `BigQuery` by visiting: https://console.cloud.google.com/bigquery(https://console.cloud.google.com/bigquery).\n2. Select the dataset from 'Resources'.\n3. Click `SHARING` near the right side of the window and select `Permissions`.\n4. Review each attached role.\n5. Click the delete icon for each member `allUsers` or `allAuthenticatedUsers`. On the popup click `Remove`.\n\n**From Google Cloud CLI**\n\nList the name of all datasets.\n```\nbq ls\n```\nRetrieve the data set details: \n```\nbq show --format=prettyjson PROJECT_ID:DATASET_NAME > PATH_TO_FILE\n```\nIn the access section of the JSON file, update the dataset information to remove all roles containing `allUsers` or `allAuthenticatedUsers`.\n\nUpdate the dataset:\n```\nbq update --source PATH_TO_FILE PROJECT_ID:DATASET_NAME\n```\n\n**Prevention:**\n\nYou can prevent Bigquery dataset from becoming publicly accessible by setting up the `Domain restricted sharing` organization policy at: https://console.cloud.google.com/iam-admin/orgpolicies/iam-allowedPolicyMemberDomains .",
-          "AuditProcedure": "**From Google Cloud Console**\n\n1. Go to `BigQuery` by visiting: https://console.cloud.google.com/bigquery(https://console.cloud.google.com/bigquery).\n2. Select a dataset from `Resources`.\n3. Click `SHARING` near the right side of the window and select `Permissions`.\n4. Validate that none of the attached roles contain `allUsers` or `allAuthenticatedUsers`.\n\n**From Google Cloud CLI**\n\nList the name of all datasets.\n```\nbq ls\n```\nRetrieve each dataset details using the following command:\n```\nbq show PROJECT_ID:DATASET_NAME\n```\nEnsure that `allUsers` and `allAuthenticatedUsers` have not been granted access to the dataset.",
+          "RemediationProcedure": "**From Google Cloud Console**  1. Go to `BigQuery` by visiting: https://console.cloud.google.com/bigquery(https://console.cloud.google.com/bigquery). 2. Select the dataset from 'Resources'. 3. Click `SHARING` near the right side of the window and select `Permissions`. 4. Review each attached role. 5. Click the delete icon for each member `allUsers` or `allAuthenticatedUsers`. On the popup click `Remove`.  **From Google Cloud CLI**  List the name of all datasets. ``` bq ls ``` Retrieve the data set details:  ``` bq show --format=prettyjson PROJECT_ID:DATASET_NAME > PATH_TO_FILE ``` In the access section of the JSON file, update the dataset information to remove all roles containing `allUsers` or `allAuthenticatedUsers`.  Update the dataset: ``` bq update --source PATH_TO_FILE PROJECT_ID:DATASET_NAME ```  **Prevention:**  You can prevent Bigquery dataset from becoming publicly accessible by setting up the `Domain restricted sharing` organization policy at: https://console.cloud.google.com/iam-admin/orgpolicies/iam-allowedPolicyMemberDomains .",
+          "AuditProcedure": "**From Google Cloud Console**  1. Go to `BigQuery` by visiting: https://console.cloud.google.com/bigquery(https://console.cloud.google.com/bigquery). 2. Select a dataset from `Resources`. 3. Click `SHARING` near the right side of the window and select `Permissions`. 4. Validate that none of the attached roles contain `allUsers` or `allAuthenticatedUsers`.  **From Google Cloud CLI**  List the name of all datasets. ``` bq ls ``` Retrieve each dataset details using the following command: ``` bq show PROJECT_ID:DATASET_NAME ``` Ensure that `allUsers` and `allAuthenticatedUsers` have not been granted access to the dataset.",
           "AdditionalInformation": "",
           "References": "https://cloud.google.com/bigquery/docs/dataset-access-controls"
         }
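The CLI audit above checks one dataset at a time. A minimal sketch that loops the same `bq show` check over every dataset in a project; the `datasetReference.datasetId` field used for listing is an assumption about the `bq ls --format=json` output rather than part of the benchmark text, and PROJECT_ID is a hypothetical placeholder:

```
# Flag any dataset whose access entries mention allUsers or allAuthenticatedUsers.
PROJECT_ID="my-project"   # hypothetical placeholder
for ds in $(bq ls --project_id "$PROJECT_ID" --format=json | jq -r '.[].datasetReference.datasetId'); do
  if bq show --format=prettyjson "$PROJECT_ID:$ds" | grep -Eq 'allUsers|allAuthenticatedUsers'; then
    echo "Public access found in dataset: $ds"
  fi
done
```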