...
Code Block |
---|
CONSUMPTION_BILLABLE_USAGE_PATH=s3a://<bucket>/<path_prefix>/billable-usage/csv
STORAGE_AWS_S3_REGION=<bucket_region> |
b) Using an Assume Role for S3: for AWS-managed KMS keys
Permission policy for the S3/KMS access role in the AWS account that owns the S3 bucket:
Code Block |
---|
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3ReadObject",
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<bucket>/<path_prefix>/*"
},
{
"Sid": "S3ListBucket",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<bucket>",
"Condition": {
"StringLike": {
"s3:prefix": "<path_prefix>/*"
}
}
},
{
"Sid": "DecryptKMSbucket",
"Action": [
"kms:Decrypt"
],
"Effect": "Allow",
"Resource": "arn:aws:kms:<bucket_region>:<s3_aws_account_id>:key/*"
}
]
} |
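If the same policy must be produced for several buckets or path prefixes, it can be templated rather than hand-edited. A minimal stdlib-only Python sketch of the policy shown above; the bucket, prefix, region, and account-ID values in the example call are placeholders, not real identifiers:

```python
import json

def s3_kms_read_policy(bucket: str, path_prefix: str,
                       bucket_region: str, s3_account_id: str) -> dict:
    """Build the S3 read + KMS decrypt permission policy shown above."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "S3ReadObject",
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/{path_prefix}/*",
            },
            {
                "Sid": "S3ListBucket",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": f"{path_prefix}/*"}},
            },
            {
                "Sid": "DecryptKMSbucket",
                "Effect": "Allow",
                "Action": ["kms:Decrypt"],
                "Resource": f"arn:aws:kms:{bucket_region}:{s3_account_id}:key/*",
            },
        ],
    }

# Placeholder values for illustration only:
policy = s3_kms_read_policy("my-billing-bucket", "usage",
                            "us-east-1", "111122223333")
print(json.dumps(policy, indent=2))
```

The generated document can be passed directly to `aws iam put-role-policy` or an IaC tool.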
|
Trust policy for the S3 role (this version trusts only a specific remote role; for trusting by account ID or PrincipalOrgID, see the examples above):
Code Block |
---|
# Exactly the IAM Role of the LHM Application in the AWS Account hosting it
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<LHM_App_Host_AWS_Account_Id>:role/<LHM_App_IAM_Role>"
},
"Action": "sts:AssumeRole"
}
]
}
|
|
LHM Application IAM Role permission policy:
Code Block |
---|
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::<s3_aws_account_id>:role/<s3_role_name>"
}
]
} |
|
Configuring the Lakehouse Monitor to read from S3:
Code Block |
---|
CONSUMPTION_BILLABLE_USAGE_PATH=s3a://<bucket>/<path_prefix>/billable-usage/csv
STORAGE_AWS_S3_REGION=<bucket_region>
CROSS_ACCOUNT_ASSUME_IAM_ROLE_S3_DBX_BILLING_APP=arn:aws:iam::<s3_aws_account_id>:role/<s3_role_name> |
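Before deploying, it can help to sanity-check that the cross-account role ARN in the configuration is well-formed. A small stdlib-only sketch; the configuration keys match the block above, while the bucket, region, account-ID, and role-name values are placeholders:

```python
import re

# Matches a cross-account IAM role ARN like arn:aws:iam::123456789012:role/MyRole
ROLE_ARN_RE = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@/-]+$")

def parse_config(text: str) -> dict:
    """Parse KEY=VALUE configuration lines into a dict, skipping blank lines."""
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {k.strip(): v.strip() for k, v in pairs}

config = parse_config("""
CONSUMPTION_BILLABLE_USAGE_PATH=s3a://my-billing-bucket/usage/billable-usage/csv
STORAGE_AWS_S3_REGION=us-east-1
CROSS_ACCOUNT_ASSUME_IAM_ROLE_S3_DBX_BILLING_APP=arn:aws:iam::111122223333:role/lhm-s3-reader
""")

role_arn = config["CROSS_ACCOUNT_ASSUME_IAM_ROLE_S3_DBX_BILLING_APP"]
assert ROLE_ARN_RE.match(role_arn), f"malformed role ARN: {role_arn}"
print("config OK:", role_arn)
```

A malformed ARN (wrong account-ID length, missing `role/` segment) fails fast here instead of surfacing later as an STS AssumeRole error.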
|
DynamoDB and SQS:
Both the LHM Application and the LHM Agent running in the Databricks workspaces require access to DynamoDB tables and an SQS queue created in the same AWS account as the LHM Application; we will refer to this account as “LHM_App_AWS_Account_Id” in the permission policies below:
...