...
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "iam:DeleteServiceLinkedRole",
        "iam:CreateInstanceProfile",
        "iam:AddRoleToInstanceProfile",
        "iam:DeleteInstanceProfile",
        "iam:GetUser",
        "iam:AttachRolePolicy",
        "iam:GetInstanceProfile",
        "iam:PassRole",
        "iam:CreatePolicy",
        "iam:ListEntitiesForPolicy",
        "iam:AttachUserPolicy",
        "iam:CreatePolicyVersion",
        "iam:ListAttachedUserPolicies",
        "iam:ListPolicies",
        "iam:DetachUserPolicy",
        "iam:ListUsers",
        "iam:ListGroups",
        "iam:CreateRole",
        "iam:GetPolicy",
        "iam:GetPolicyVersion",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:DeleteRole",
        "iam:DeletePolicy",
        "iam:ListPolicyVersions",
        "iam:ListInstanceProfilesForRole",
        "iam:ListRolePolicies",
        "iam:ListAttachedRolePolicies",
        "iam:GetRole",
        "iam:ListRoles",
        "iam:DetachRolePolicy",
        "organizations:DescribeOrganization",
        "account:ListRegions"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "dynamodb:*",
        "route53:*",
        "rds:*",
        "s3:*",
        "cloudshell:*",
        "resource-groups:*",
        "secretsmanager:TagResource",
        "secretsmanager:CreateSecret",
        "secretsmanager:DescribeSecret",
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "kms:CreateKey",
        "kms:DescribeKey",
        "kms:GetKeyPolicy",
        "kms:ScheduleKeyDeletion",
        "kms:GetKeyRotationStatus",
        "secretsmanager:DeleteSecret",
        "secretsmanager:PutSecretValue",
        "kms:ListResourceTags"
      ],
      "Resource": "*"
    }
  ]
}
```
Databricks metastore admin permission
Besides the AWS permissions listed above, the deploying user needs to be a Metastore Admin of the Databricks Unity Catalog. We recommend creating a group, configuring that group as the Metastore Admin, and adding the admin users to it.
Supported AWS Regions
The Lakehouse Optimizer supports any region where all the required AWS services can be deployed. You can check which services are available in each region on the AWS Regional Services page.
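As an alternative to the web page, service availability can also be queried from the public Systems Manager parameters that AWS maintains under `/aws/service/global-infrastructure`. A sketch, using `cloudshell` as an example service (any service from the policy above could be substituted); it requires the AWS CLI with valid credentials:

```shell
# Query AWS's public SSM parameter tree for the regions where a service
# is available. SERVICE is an example value; substitute any service name.
SERVICE="cloudshell"
QUERY_PATH="/aws/service/global-infrastructure/services/${SERVICE}/regions"

if command -v aws >/dev/null 2>&1; then
  aws ssm get-parameters-by-path --path "$QUERY_PATH" \
    --query 'Parameters[].Value' --output text \
    || echo "Query failed; check that the AWS CLI is configured." >&2
else
  echo "AWS CLI not found; the path that would be queried is: $QUERY_PATH"
fi
```

Repeating the query for each service the deployment uses confirms a candidate region before you commit to it.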
...
Databricks Service Principal
Ensure the Databricks service principal was created according to the prerequisites linked below.
...
dns_record_name
- Friendly, descriptive DNS name for the application, e.g. lho-app-dev.yourdomain.com. An 'A' record is created in an AWS Hosted Zone in the account this deployment is running in. Ensure that the hosted zone exists in that AWS account.
name_prefix
- Prefix for the names of the AWS resources created by the deployment, e.g. lho-dev. Note that this prefix is also used to name the S3 bucket. Bucket names must be globally unique across all of AWS, so we recommend a specific prefix such as lho-<your company name here> instead of a generic one like lho.
acr_username
- Container registry username to authenticate and pull down the LHO app container. Contact Blueprint support if you do not have this information.
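Because name_prefix ends up in the S3 bucket name, it is worth checking a candidate prefix against S3's bucket naming rules before deploying. A minimal local check; the `is_s3_compatible` helper and the `lho-acme` example value are ours for illustration, not part of the deployment scripts:

```shell
# S3 bucket names must be 3-63 characters, using lowercase letters,
# digits, and hyphens (dots are legal but best avoided), and must start
# and end with a letter or digit. Keep the prefix short enough to leave
# room for whatever suffix the deployment appends.
is_s3_compatible() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'
}

name_prefix="lho-acme"   # example company-specific prefix
if is_s3_compatible "$name_prefix"; then
  echo "name_prefix '${name_prefix}' is usable in an S3 bucket name"
else
  echo "name_prefix '${name_prefix}' violates S3 bucket naming rules" >&2
fi
```

Uppercase letters, underscores, and very short prefixes all fail this check, which mirrors the most common reasons a bucket creation is rejected at deployment time.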
...
Configuring option 2, using role trusts
The code snippet below creates a role trust file: a newline-separated list of all the instance profile role ARNs or AWS accounts in use in your Databricks environment. You may also simply add the account's root ARN to have the LHO agent role trust all roles in the given account. This file's location is passed in as an argument to the deployment script.
Copy the snippet to a text editor and change the ARN values defined in the role_array variable to suit your needs. After updating, run it in AWS CloudShell to create the role trust file.
```shell
# Declare the array of trusted instance profile role ARNs
role_array=(
  "arn:aws:iam::{aws account id}:role/rolename"
  "arn:aws:iam::{aws account id}:role/anotherrolename"
)

# OR, to trust every role in the account, use the root ARN instead:
# role_array=( "arn:aws:iam::{aws account id}:root" )

# Create the role trust file
mkdir -p ~/lho
printf "%s\n" "${role_array[@]}" > ~/lho/agent_trusted_roles.txt
```
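Before passing the file to the deployment script, you can sanity-check that every line is a well-formed role or root ARN. A sketch under our own assumptions: the validation regex is ours, and the example file contents mirror the snippet above.

```shell
# Write an example trust file, then count any lines that do not look like
# an IAM role ARN or an account root ARN.
mkdir -p ~/lho
trust_file="$HOME/lho/agent_trusted_roles.txt"
printf '%s\n' \
  "arn:aws:iam::123456789012:role/rolename" \
  "arn:aws:iam::123456789012:root" > "$trust_file"

# grep -Evc counts the non-matching lines; '|| true' because grep exits
# non-zero when the count is 0 (i.e. when the file is clean).
bad_lines=$(grep -Evc '^arn:aws:iam::[0-9]{12}:(role/.+|root)$' "$trust_file" || true)
echo "malformed lines in ${trust_file}: ${bad_lines}"
```

A non-zero count usually points at a typo in the 12-digit account ID or a missing `role/` segment, both of which would otherwise only surface as a trust failure at runtime.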
...