Prowler 2.7.0 - Brave (#998)

* Extra7161 EFS encryption at rest check

* Added check_extra7162, which checks if log groups have 365-day retention

* fixed code to handle all regions and formatted output

* changed check title, resource type and service name as well as making the code more dynamic

* New check_extra7163 Secrets Manager key rotation enabled

* New check 7160: AutomaticVersionUpgrade enabled on Redshift clusters

* Update ProwlerRole.yaml to have same permissions as util/org-multi-account/ProwlerRole.yaml

* Fix link to quicksight dashboard

* Install detect-secrets (e.g. for check_extra742)

* Updating check_extra7163 with requested changes

* fix(assumed-role): Check if -T and -A options are set

* docs(Readme): `-T` option is not mandatory
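
  A minimal usage sketch, assuming the flags as documented for Prowler 2.x (`-A` target account ID, `-R` role to assume, `-T` session duration in seconds; the account ID and duration below are illustrative):

      # -T omitted: it is not mandatory
      ./prowler -A 123456789012 -R ProwlerRole
      # same, with an explicit one-hour session
      ./prowler -A 123456789012 -R ProwlerRole -T 3600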

* fix(assume-role): Handle AWS STS CLI errors

* Update group25_FTR

When trying to run group 25 (Amazon FTR related security checks), nothing happens. Looking at the code, there is a misconfiguration in two params: GROUP_RUN_BY_DEFAULT[9] and GROUP_CHECKS[9]. Updating the values to 25 fixed the issue.
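
A sketch of the fix, assuming the standard Prowler 2.x group-file layout (only the two array names come from the message above; the other fields and values are placeholders):

    # groups/group25_FTR (hypothetical excerpt): array index must match the group number
    GROUP_ID[25]='ftr'
    GROUP_NUMBER[25]='25'
    GROUP_TITLE[25]='Amazon FTR related security checks'
    GROUP_RUN_BY_DEFAULT[25]='N'   # was GROUP_RUN_BY_DEFAULT[9]
    GROUP_CHECKS[25]='...'         # was GROUP_CHECKS[9]; check list omitted here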

* Update README.md

Fixes a broken link caused by capital letters in the group file name (group25_FTR).

* Fix #938: assume_role being called multiple times

* Label 2.7.0-1December2021 for tests

* Fixed error that appeared if the number of findings was very high.

* Reduced the batch size to 50 findings at a time, since 100 caused capacity issues. Also added a guard for an edge case: when the number of updated findings was an exact multiple of the batch size, the final batch was empty and importing 0 findings threw an error.
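
  In outline, the batching described here works like the following sketch (the real implementation appears in the Security Hub diff further down; `FINDINGS` and `FINDINGS_COUNT` are assumed to hold the findings array and its length):

      # step through the findings array 50 at a time; the final slice can be
      # empty when the total is an exact multiple of 50, so guard on its length
      for i in $(seq 0 50 "$FINDINGS_COUNT"); do
        BATCH=$(echo "$FINDINGS" | jq -c ".[$i:$i+50]")
        if [ "$(echo "$BATCH" | jq 'length')" -gt 0 ]; then
          aws securityhub batch-import-findings --findings "$BATCH"
        fi
      done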

* Added line to delete the temp folder after everything is done.

* New check 7164 Check if CloudWatch log groups are protected by AWS KMS @maisenhe

* updated CHECK_RISK

* Added checks extra7160,extra7161,extra7162,extra7163 to group Extras

* Added issue templates

* New check 7165 DynamoDB: DAX encrypted at rest @Daniel-Peladeau

* Fix #963 check 792 to force JSON output in ELB queries

* Fix #957 check 763 had us-east-1 region hardcoded
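
  The class of bug fixed here, in sketch form (check 763's actual query is not shown; `$REGION` is the per-region variable Prowler's checks already use, as in the diffs below):

      # before: hardcoded region, wrong results outside us-east-1
      LIST=$($AWSCLI <service> <subcommand> $PROFILE_OPT --region us-east-1)
      # after: query the region actually being scanned
      LIST=$($AWSCLI <service> <subcommand> $PROFILE_OPT --region "$REGION")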

* Fix #962 check 7147 ALTERNATE NAME

* Fix #940: handle the error when functions cannot be listed

* Added new checks 7164 and 7165 to group extras

* Added the invalid check or group ID to the error message (#962)

* Fix Broken Link

* Add docker volume example to README.md

* Updated Dockerfile to use amazonlinux container

* Updated Dockerfile with AWS cli v2

* Added an upgrade step to the RUN instruction

* Added cache purge to Dockerfile

* Backup AWS Credentials before AssumeRole and Restore them before CopyToS3

* exporting the ENV variables

* fixed bracket

* Improved documentation for install process

* fix checks with comma issues

* Added -D option to copy to S3 with the initial AWS credentials
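
  A hedged example of the new option (the bucket name is illustrative; `-M` sets the output mode, which must accompany an output bucket per the snippet further down):

      # write csv output and upload it to S3 using the credentials Prowler
      # started with, not the assumed-role credentials
      ./prowler -A 123456789012 -R ProwlerRole -M csv -D my-prowler-results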

* Cosmetic variable name change

* Added $PROFILE_OPT to CopyToS3 commands

* remove commas

* removed file as it is not needed

* Improved the help/usage output (-h)

* Fixed CIS LEVEL on 7163 through 7165

* When restoreInitialAWSCredentials runs, unset the credential ENV variables if they were never set

* New check 7166 Elastic IP addresses with associations are protected by AWS Shield Advanced

* New check 7167 Cloudfront distributions are protected by AWS Shield Advanced

* New check 7168 Route53 hosted zones are protected by AWS Shield Advanced

* New check 7169 Global accelerators are protected by AWS Shield Advanced

* New check 7170 Application load balancers are protected by AWS Shield Advanced

* New check 7171 Classic load balancers are protected by AWS Shield Advanced

* Include example for global resources

* Corrections to the AWS Shield Advanced protection checks

* Added Shield actions GetSubscriptionState and DescribeProtection

* docs(templates): Improve bug template with more info (#982)

* Removed echoes after role chaining fix

* Changed Route53 checks 7152 and 7153 to INFO when no domains found

* Changed Route53 checks 7152 and 7153 title to clarify

* Added passed security groups in output to check 778

* Added passed security groups and updated title to check 777

* Added FAIL as error handling when SCP prevents queries to regions

* Label version 2.7.0-6January2022

* Updated .dockerignore with .github/

* Fix: issues #758 and #984

* Fix: issue #741 CloudFront and real-time logs

* Fix #971: set all as INFO instead of FAIL when there is no access to the resource

* Fix: issue #986

* Add additional action permissions for Glue and Shield Advanced checks @lazize

* Add extra shield action permission

Allows the shield:GetSubscriptionState action

* Add permission actions

Ensure all files that require permission actions list the same set of actions.

* Fix: Credential chaining from environment variables @lazize #996

If a profile is not defined, restore the original credentials from
environment variables, if they exist, before assume-role.

* Label version 2.7.0-24January2022

Co-authored-by: Lee Myers <ichilegend@gmail.com>
Co-authored-by: Chinedu Obiakara <obiakac@amazon.com>
Co-authored-by: Daniel Peladeau <dcpeladeau@gmail.com>
Co-authored-by: Jonathan Lozano <jonloza@amazon.com>
Co-authored-by: Daniel Lorch <dlorch@gmail.com>
Co-authored-by: Pepe Fagoaga <jose.fagoaga@smartprotection.com>
Co-authored-by: Israel <6672089+lopmoris@users.noreply.github.com>
Co-authored-by: root <halfluke@gmail.com>
Co-authored-by: nikirby <nikirby@amazon.com>
Co-authored-by: Joel Maisenhelder <maisenhe@gmail.com>
Co-authored-by: RT <35173068+rtcms@users.noreply.github.com>
Co-authored-by: Andrea Di Fabio <39841198+sectoramen@users.noreply.github.com>
Co-authored-by: Joseph de CLERCK <clerckj@amazon.fr>
Co-authored-by: Michael Dickinson <45626543+michael-dickinson-sainsburys@users.noreply.github.com>
Co-authored-by: Pepe Fagoaga <pepe@verica.io>
Co-authored-by: Leonardo Azize Martins <lazize@users.noreply.github.com>
Committed by Toni de la Fuente on 2022-01-24 13:49:47 +01:00
commit 2b2814723f, parent 42e54c42cf
186 changed files with 1746 additions and 394 deletions

@@ -11,9 +11,16 @@
 # CONDITIONS OF ANY KIND, either express or implied. See the License for the
 # specific language governing permissions and limitations under the License.
-# both variables are mandatory to be set together
 assume_role(){
-  if [[ -z $ROLE_TO_ASSUME ]]; then
+  PROFILE_OPT=$PROFILE_OPT_BAK
+  if [[ "${PROFILE_OPT}" = "" ]]; then
+    # If profile is not defined, restore original credentials from environment variables, if they exists!
+    restoreInitialAWSCredentials
+  fi
+  # Both variables are mandatory to be set together
+  if [[ -z $ROLE_TO_ASSUME || -z $ACCOUNT_TO_ASSUME ]]; then
     echo "$OPTRED ERROR!$OPTNORMAL - Both Account ID (-A) and IAM Role to assume (-R) must be set"
     exit 1
   fi
@@ -28,6 +35,7 @@ assume_role(){
   # temporary file where to store credentials
   TEMP_STS_ASSUMED_FILE=$(mktemp -t prowler.sts_assumed-XXXXXX)
+  TEMP_STS_ASSUMED_ERROR=$(mktemp -t prowler.sts_assumed-XXXXXX)
   # check if role arn or role name
   if [[ $ROLE_TO_ASSUME == arn:* ]]; then
@@ -36,57 +44,70 @@ assume_role(){
     PROWLER_ROLE=arn:${AWS_PARTITION}:iam::$ACCOUNT_TO_ASSUME:role/$ROLE_TO_ASSUME
   fi
-  #Check if external ID has bee provided if so execute with external ID if not ignore
-  if [[ -z $ROLE_EXTERNAL_ID ]]; then
-    # assume role command
-    $AWSCLI $PROFILE_OPT sts assume-role --role-arn $PROWLER_ROLE \
-      --role-session-name ProwlerAssessmentSession \
-      --region $REGION_FOR_STS \
-      --duration-seconds $SESSION_DURATION_TO_ASSUME > $TEMP_STS_ASSUMED_FILE 2>&1
-  else
-    $AWSCLI $PROFILE_OPT sts assume-role --role-arn $PROWLER_ROLE \
-      --role-session-name ProwlerAssessmentSession \
-      --duration-seconds $SESSION_DURATION_TO_ASSUME \
-      --region $REGION_FOR_STS \
-      --external-id $ROLE_EXTERNAL_ID > $TEMP_STS_ASSUMED_FILE 2>&1
-  fi
-  if [[ $(grep AccessDenied $TEMP_STS_ASSUMED_FILE) ]]; then
-    textFail "Access Denied assuming role $PROWLER_ROLE"
-    EXITCODE=1
-    exit $EXITCODE
-  elif [[ "$(grep MaxSessionDuration $TEMP_STS_ASSUMED_FILE)" ]]; then
-    textFail "The requested DurationSeconds exceeds the MaxSessionDuration set for the role ${PROWLER_ROLE}"
-    EXITCODE=1
-    exit $EXITCODE
-  fi
+  # Check if external ID has bee provided if so execute with external ID if not ignore
+  ROLE_EXTERNAL_ID_OPTION=""
+  if [[ -n "${ROLE_EXTERNAL_ID}" ]]; then
+    ROLE_EXTERNAL_ID_OPTION="--external-id ${ROLE_EXTERNAL_ID}"
+  fi
   # assume role command
   #$AWSCLI $PROFILE_OPT sts assume-role --role-arn arn:${AWS_PARTITION}:iam::$ACCOUNT_TO_ASSUME:role/$ROLE_TO_ASSUME \
   #  --role-session-name ProwlerAssessmentSession \
   #  --duration-seconds $SESSION_DURATION_TO_ASSUME > $TEMP_STS_ASSUMED_FILE
   # if previous command fails exit with the given error from aws-cli
   # this is likely to be due to session duration limit of 1h in case
   # of assume role chaining:
   # "The requested DurationSeconds exceeds the 1 hour session limit
   # for roles assumed by role chaining."
   # https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html
-  if [[ $? != 0 ]];then
-    exit 1
+  # Assume role
+  if ! $AWSCLI $PROFILE_OPT sts assume-role --role-arn $PROWLER_ROLE \
+    --role-session-name ProwlerAssessmentSession \
+    --duration-seconds $SESSION_DURATION_TO_ASSUME \
+    --region $REGION_FOR_STS \
+    "${ROLE_EXTERNAL_ID_OPTION}" > $TEMP_STS_ASSUMED_FILE 2>"${TEMP_STS_ASSUMED_ERROR}"
+  then
+    STS_ERROR="$(cat ${TEMP_STS_ASSUMED_ERROR} | tr '\n' ' ')"
+    textFail "${STS_ERROR}"
+    EXITCODE=1
+    exit $EXITCODE
   fi
   # echo FILE WITH TEMP CREDS: $TEMP_STS_ASSUMED_FILE
   # The profile shouldn't be used for CLI
   PROFILE=""
-  PROFILE_OPT=""
+  PROFILE_OPT=""
+  # Set AWS environment variables with assumed role credentials
+  ASSUME_AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' "${TEMP_STS_ASSUMED_FILE}")
+  export AWS_ACCESS_KEY_ID=$ASSUME_AWS_ACCESS_KEY_ID
+  ASSUME_AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' "${TEMP_STS_ASSUMED_FILE}")
+  export AWS_SECRET_ACCESS_KEY=$ASSUME_AWS_SECRET_ACCESS_KEY
+  ASSUME_AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' "${TEMP_STS_ASSUMED_FILE}")
+  export AWS_SESSION_TOKEN=$ASSUME_AWS_SESSION_TOKEN
+  ASSUME_AWS_SESSION_EXPIRATION=$(jq -r '.Credentials.Expiration | sub("\\+00:00";"Z") | fromdateiso8601' "${TEMP_STS_ASSUMED_FILE}")
+  export AWS_SESSION_EXPIRATION=$ASSUME_AWS_SESSION_EXPIRATION
   # echo TEMP AWS_ACCESS_KEY_ID: $ASSUME_AWS_ACCESS_KEY_ID
   # echo TEMP AWS_SECRET_ACCESS_KEY: $ASSUME_AWS_SECRET_ACCESS_KEY
   # echo TEMP AWS_SESSION_TOKEN: $ASSUME_AWS_SESSION_TOKEN
   # echo EXPIRATION EPOCH TIME: $ASSUME_AWS_SESSION_EXPIRATION
-  # set env variables with assumed role credentials
-  export AWS_ACCESS_KEY_ID=$(cat $TEMP_STS_ASSUMED_FILE | jq -r '.Credentials.AccessKeyId')
-  export AWS_SECRET_ACCESS_KEY=$(cat $TEMP_STS_ASSUMED_FILE | jq -r '.Credentials.SecretAccessKey')
-  export AWS_SESSION_TOKEN=$(cat $TEMP_STS_ASSUMED_FILE | jq -r '.Credentials.SessionToken')
-  export AWS_SESSION_EXPIRATION=$(cat $TEMP_STS_ASSUMED_FILE | jq -r '.Credentials.Expiration | sub("\\+00:00";"Z") | fromdateiso8601')
   cleanSTSAssumeFile
 }
 cleanSTSAssumeFile() {
   rm -fr "${TEMP_STS_ASSUMED_FILE}"
-}
+  rm -fr "${TEMP_STS_ASSUMED_ERROR}"
+}
+backupInitialAWSCredentials() {
+  if [[ $(printenv AWS_ACCESS_KEY_ID) && $(printenv AWS_SECRET_ACCESS_KEY) && $(printenv AWS_SESSION_TOKEN) ]]; then
+    INITIAL_AWS_ACCESS_KEY_ID=$(printenv AWS_ACCESS_KEY_ID)
+    INITIAL_AWS_SECRET_ACCESS_KEY=$(printenv AWS_SECRET_ACCESS_KEY)
+    INITIAL_AWS_SESSION_TOKEN=$(printenv AWS_SESSION_TOKEN)
+  fi
+}
+restoreInitialAWSCredentials() {
+  if [[ $INITIAL_AWS_ACCESS_KEY_ID && $INITIAL_AWS_SECRET_ACCESS_KEY && $INITIAL_AWS_SESSION_TOKEN ]]; then
+    export AWS_ACCESS_KEY_ID=$INITIAL_AWS_ACCESS_KEY_ID
+    export AWS_SECRET_ACCESS_KEY=$INITIAL_AWS_SECRET_ACCESS_KEY
+    export AWS_SESSION_TOKEN=$INITIAL_AWS_SESSION_TOKEN
+  else
+    unset AWS_ACCESS_KEY_ID
+    unset AWS_SECRET_ACCESS_KEY
+    unset AWS_SESSION_TOKEN
+  fi
+}

@@ -45,7 +45,8 @@ else
   PROFILE="default"
   PROFILE_OPT="--profile $PROFILE"
 fi
+# Backing up $PROFILE_OPT needed to renew assume_role
+PROFILE_OPT_BAK=$PROFILE_OPT
 # Set default region by aws config, fall back to us-east-1
 REGION_CONFIG=$(aws configure get region)
 if [[ $REGION_OPT ]]; then

@@ -18,7 +18,7 @@ check3x(){
   # be based only on CloudTrail tail with CloudWatchLog configuration.
   DESCRIBE_TRAILS_CACHE=$($AWSCLI cloudtrail describe-trails $PROFILE_OPT --region "$REGION" --query 'trailList[?CloudWatchLogsLogGroupArn != `null`]' 2>&1)
   if [[ $(echo "$DESCRIBE_TRAILS_CACHE" | grep AccessDenied) ]]; then
-    textFail "$REGION: Access Denied trying to describe trails in $REGION" "$REGION"
+    textInfo "$REGION: Access Denied trying to describe trails in $REGION" "$REGION"
     return
   fi

@@ -15,18 +15,18 @@ if [[ $OUTPUT_BUCKET ]]; then
   # output mode has to be set to other than text
   if [[ "${MODES[@]}" =~ "html" ]] || [[ "${MODES[@]}" =~ "csv" ]] || [[ "${MODES[@]}" =~ "json" ]] || [[ "${MODES[@]}" =~ "json-asff" ]]; then
     OUTPUT_BUCKET_WITHOUT_FOLDERS=$(echo $OUTPUT_BUCKET | awk -F'/' '{ print $1 }')
-    OUTPUT_BUCKET_STATUS=$($AWSCLI s3api head-bucket --bucket "$OUTPUT_BUCKET" 2>&1 || true)
-    if [[ ! -z $OUTPUT_BUCKET_STATUS ]]; then
-      echo "$OPTRED ERROR!$OPTNORMAL wrong bucket name or not right permissions."
-      exit 1
-    else
+    # OUTPUT_BUCKET_STATUS=$($AWSCLI s3api head-bucket --bucket "$OUTPUT_BUCKET" 2>&1 || true)
+    # if [[ -z $OUTPUT_BUCKET_STATUS ]]; then
+    #   echo "$OPTRED ERROR!$OPTNORMAL wrong bucket name or not right permissions."
+    #   exit 1
+    # else
       # need to make sure last / is not set to avoid // in S3
       if [[ $OUTPUT_BUCKET != *"/" ]]; then
        OUTPUT_BUCKET="$OUTPUT_BUCKET"
       else
        OUTPUT_BUCKET=${OUTPUT_BUCKET::-1}
       fi
-    fi
+    # fi
   else
     echo "$OPTRED ERROR!$OPTNORMAL - Mode (-M) has to be set as well. Use -h for help."
     exit 1
@@ -38,19 +38,19 @@ copyToS3(){
   # and processing by Quicksight or others.
   if [[ $OUTPUT_BUCKET ]]; then
     if [[ "${MODES[@]}" =~ "csv" ]]; then
-      $AWSCLI s3 cp $OUTPUT_DIR/prowler-output-${ACCOUNT_NUM}-${OUTPUT_DATE}.$EXTENSION_CSV \
+      $AWSCLI $PROFILE_OPT s3 cp $OUTPUT_DIR/prowler-output-${ACCOUNT_NUM}-${OUTPUT_DATE}.$EXTENSION_CSV \
       s3://$OUTPUT_BUCKET/csv/ --acl bucket-owner-full-control
     fi
     if [[ "${MODES[@]}" =~ "html" ]]; then
-      $AWSCLI s3 cp $OUTPUT_DIR/prowler-output-${ACCOUNT_NUM}-${OUTPUT_DATE}.$EXTENSION_HTML \
+      $AWSCLI $PROFILE_OPT s3 cp $OUTPUT_DIR/prowler-output-${ACCOUNT_NUM}-${OUTPUT_DATE}.$EXTENSION_HTML \
       s3://$OUTPUT_BUCKET/html/ --acl bucket-owner-full-control
     fi
     if [[ "${MODES[@]}" =~ "json" ]]; then
-      $AWSCLI s3 cp $OUTPUT_DIR/prowler-output-${ACCOUNT_NUM}-${OUTPUT_DATE}.$EXTENSION_JSON \
+      $AWSCLI $PROFILE_OPT s3 cp $OUTPUT_DIR/prowler-output-${ACCOUNT_NUM}-${OUTPUT_DATE}.$EXTENSION_JSON \
       s3://$OUTPUT_BUCKET/json/ --acl bucket-owner-full-control
     fi
     if [[ "${MODES[@]}" =~ "json-asff" ]]; then
-      $AWSCLI s3 cp $OUTPUT_DIR/prowler-output-${ACCOUNT_NUM}-${OUTPUT_DATE}.$EXTENSION_ASFF \
+      $AWSCLI $PROFILE_OPT s3 cp $OUTPUT_DIR/prowler-output-${ACCOUNT_NUM}-${OUTPUT_DATE}.$EXTENSION_ASFF \
       s3://$OUTPUT_BUCKET/json-asff/ --acl bucket-owner-full-control
     fi
   fi

@@ -42,29 +42,48 @@ checkSecurityHubCompatibility(){
 resolveSecurityHubPreviousFails(){
   # Move previous check findings RecordState to ARCHIVED (as prowler didn't re-detect them)
+  SH_TEMP_FOLDER="$PROWLER_DIR/SH-$ACCOUNT_NUM"
+  if [[ ! -d $SH_TEMP_FOLDER ]]; then
+    # this folder is deleted once the security hub update is completed
+    mkdir "$SH_TEMP_FOLDER"
+  fi
   for regx in $REGIONS; do
+    REGION_FOLDER="$SH_TEMP_FOLDER/$regx"
+    if [[ ! -d $REGION_FOLDER ]]; then
+      mkdir "$REGION_FOLDER"
+    fi
     local check="$1"
     NEW_TIMESTAMP=$(get_iso8601_timestamp)
     FILTER="{\"GeneratorId\":[{\"Value\": \"prowler-$check\",\"Comparison\":\"EQUALS\"}],\"RecordState\":[{\"Value\": \"ACTIVE\",\"Comparison\":\"EQUALS\"}],\"AwsAccountId\":[{\"Value\": \"$ACCOUNT_NUM\",\"Comparison\":\"EQUALS\"}]}"
-    NEW_FINDING_IDS=$(echo -n "${SECURITYHUB_NEW_FINDINGS_IDS[@]}" | jq -cRs 'split(" ")')
-    SECURITY_HUB_PREVIOUS_FINDINGS=$($AWSCLI securityhub --region "$regx" $PROFILE_OPT get-findings --filters "${FILTER}" | jq -c --argjson ids "$NEW_FINDING_IDS" --arg updated_at $NEW_TIMESTAMP '[ .Findings[] | select( .Id| first(select($ids[] == .)) // false | not) | .RecordState = "ARCHIVED" | .UpdatedAt = $updated_at ]')
+    NEW_FINDING_FILE="$REGION_FOLDER/findings.json"
+    NEW_FINDING_IDS=$(echo -n "${SECURITYHUB_NEW_FINDINGS_IDS[@]}" | jq -cRs 'split(" ")' > $NEW_FINDING_FILE)
+    EXISTING_FILE="$REGION_FOLDER/existing.json"
+    EXISTING_FINDINGS=$($AWSCLI securityhub --region "$regx" $PROFILE_OPT get-findings --filters "${FILTER}" > $EXISTING_FILE)
+    SECURITY_HUB_PREVIOUS_FINDINGS=$(for id in $(comm -23 <(jq '[.Findings[].Id] | sort | .[]' $EXISTING_FILE) <(jq '[.[]] | sort | .[]' $NEW_FINDING_FILE));
+    do
+      jq --arg updated_at $NEW_TIMESTAMP '.Findings[] | select(.Id == '"$id"') | .RecordState = "ARCHIVED" | .UpdatedAt = $updated_at ' < $EXISTING_FILE
+    done | jq -s '.')
     if [[ $SECURITY_HUB_PREVIOUS_FINDINGS != "[]" ]]; then
       FINDINGS_COUNT=$(echo $SECURITY_HUB_PREVIOUS_FINDINGS | jq '. | length')
-      for i in `seq 0 100 $FINDINGS_COUNT`;
+      for i in $(seq 0 50 $FINDINGS_COUNT);
       do
-        BATCH_FINDINGS=$(echo $SECURITY_HUB_PREVIOUS_FINDINGS | jq -c '.['"$i:$i+100"']')
-        BATCH_IMPORT_RESULT=$($AWSCLI securityhub --region "$regx" $PROFILE_OPT batch-import-findings --findings "${BATCH_FINDINGS}")
-        if [[ -z "${BATCH_IMPORT_RESULT}" ]] || jq -e '.FailedCount >= 1' <<< "${BATCH_IMPORT_RESULT}" > /dev/null 2>&1; then
-          echo -e "\n$RED ERROR!$NORMAL Failed to send check output to AWS Security Hub\n"
+        BATCH_FINDINGS=$(echo $SECURITY_HUB_PREVIOUS_FINDINGS | jq -c '.['"$i:$i+50"']')
+        BATCH_FINDINGS_COUNT=$(echo $BATCH_FINDINGS | jq '. | length')
+        if [ "$BATCH_FINDINGS_COUNT" -gt 0 ]; then
+          BATCH_IMPORT_RESULT=$($AWSCLI securityhub --region "$regx" $PROFILE_OPT batch-import-findings --findings "${BATCH_FINDINGS}")
+          if [[ -z "${BATCH_IMPORT_RESULT}" ]] || jq -e '.FailedCount >= 1' <<< "${BATCH_IMPORT_RESULT}" > /dev/null 2>&1; then
+            echo -e "\n$RED ERROR!$NORMAL Failed to send check output to AWS Security Hub\n"
+          fi
         fi
       done
     fi
   done
+  rm -rf "$SH_TEMP_FOLDER"
 }
 sendToSecurityHub(){
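
The set-difference at the heart of the hunk above can be exercised on its own; a minimal sketch using the same two (hypothetical) per-region files the diff writes:

    # print IDs present in existing.json but absent from findings.json;
    # comm -23 keeps lines unique to the first input, and both jq invocations
    # emit sorted, one-per-line ID lists as comm requires
    EXISTING_FILE="existing.json"     # securityhub get-findings output
    NEW_FINDING_FILE="findings.json"  # JSON array of this run's finding IDs
    for id in $(comm -23 <(jq '[.Findings[].Id] | sort | .[]' "$EXISTING_FILE") \
                         <(jq '[.[]] | sort | .[]' "$NEW_FINDING_FILE")); do
      echo "would archive finding ${id}"
    done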