Step 1) Prerequisites:
Create a Databricks service principal, or select an existing one.
Determine a DNS name for the application VM and register a domain name if applicable.
If you are using Azure AD as an identity provider, create an app registration in your AAD tenant of choice.
Also create a client secret and save the secret value; it is supplied as input when running the infrastructure setup script.
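If you prefer scripting this step, a minimal Azure CLI sketch follows; the display name is only an example, and the "password" field in the second command's output is the secret value the setup script asks for:

# Create the app registration (display name is an example)
az ad app create --display-name "bplm-lakehouse-monitor"
# Add a client secret to the registration; note the returned password value
az ad app credential reset --id <application (client) ID> --append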
Step 2) Running the install script:
Open an AWS CloudShell session and upload the install archive.
# --version is the Lakehouse Monitor application version
./deploy-lm.sh --databricks_principal "example@domain.com" \
  --databricks_account_id <GUID for Databricks account> \
  --databricks_principal_guid <GUID for Databricks principal> \
  --version "2.3" \
  --email_certbot "it_admin@example.com" \
  --aws_region us-east-1 \
  --service_principal <Azure service principal client ID> \
  --tenant_id <Azure Tenant ID>
Step 3) After ./deploy-lm.sh
Log in to the AWS Management Console.
The virtual machine needs the policies described below assigned to it. One suggested approach is to create a dedicated role for the VM and attach the created policies to that role. The instructions below use the 'JSON' view to enable faster policy creation.
If you are creating new policies, prefixing them all with the same string makes them easier to find when creating the role and selecting the policies to attach.
Allow read of cost and usage data
Navigate to the IAM console and create a new policy with the JSON permissions described below:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "ce:GetCostAndUsage", "Resource": "*" } ] }
Allow read of the created secret
Find the secret name in the script output, then replace {SecretNameHere} with that name and {AWS Account ID} with your AWS account ID in the policy definition below.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "secretsmanager:GetSecretValue", "Resource": "arn:aws:secretsmanager:*:{AWS Account ID}:secret:{SecretNameHere}*" } ] }
Allow management of DynamoDB and Simple Queue Service
Create the third policy:
You will need to update this policy definition with your AWS account ID.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "dynamodb:CreateTable", "sqs:DeleteMessage", "sqs:GetQueueUrl", "dynamodb:UpdateTimeToLive", "dynamodb:DescribeTable", "sqs:ReceiveMessage", "dynamodb:Scan", "dynamodb:Query", "sqs:CreateQueue" ], "Resource": [ "arn:aws:dynamodb:*:{AWS account ID}:table/*bplm*", "arn:aws:sqs:*:{AWS account ID}:*bplm*" ] } ] }
Allow S3 bucket tag get/set
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "s3:GetBucketTagging", "s3:PutBucketTagging" ], "Resource": "arn:aws:s3:::*" } ] }
Allow EC2 tag management
You will need to update this policy definition with your AWS account ID.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "ec2:DeleteTags", "ec2:CreateTags" ], "Resource": "arn:aws:ec2:*:{AWS Account ID}:natgateway/*" }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "ec2:DescribeVpcs", "ec2:DescribeNatGateways" ], "Resource": "*" } ] }
Once the role is created, navigate to the EC2 instance and assign the IAM role:
Actions → Security → Modify IAM role
From here, search for and select the IAM role, then click 'Update IAM role'.
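The same assignment can be done from the CLI; this assumes the example instance profile from the sketch above:

aws ec2 associate-iam-instance-profile \
  --instance-id <EC2 instance ID> \
  --iam-instance-profile Name=bplm-vm-profile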
Step 4) Create DNS Entry
Navigate to the Route 53 service page, then to the hosted zone you wish to manage. Create an 'A' record for the application using the IP address output at the end of script execution.
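If you manage the zone from the CLI instead, an UPSERT such as the following can create the record (the zone ID, record name, and IP are placeholders):

aws route53 change-resource-record-sets \
  --hosted-zone-id <hosted zone ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "lakehouse-monitor.company.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "<VM public IP>" }]
      }
    }]
  }'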
Run setup.sh
deploy-lm.sh opens the VM's security group to the current IP of the AWS CloudShell session. If you have to restart your session and can no longer connect via SSH, determine the IP address of the new CloudShell session and update the IP allowed on port 22.
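One way to re-open port 22 for a new CloudShell IP (the security group ID is a placeholder; remove the stale rule separately if desired):

# Current public IP of the CloudShell session
MYIP=$(curl -s https://checkip.amazonaws.com)
# Allow SSH from that address
aws ec2 authorize-security-group-ingress \
  --group-id <security group ID> \
  --protocol tcp --port 22 \
  --cidr "${MYIP}/32"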
Navigate back to your CloudShell session and SSH into the VM to run the rest of the setup:
ssh -i ~/.ssh/ec2key ubuntu@<vm public IP or DNS>
Run ./setup.sh, providing the domain you wish to create an SSL certificate for, the version of the Lakehouse Monitor, and an admin email that will be used to configure certbot's notifications when creating the SSL certificate.
If you do not currently have a registered DNS entry for the Lakehouse Monitor, you can skip setting up SSL certificates by omitting the cert_domain and email_certbot arguments.
chmod +x setup.sh
e.g.: ./setup.sh --cert_domain "lakehouse-monitor.company.com" --version 2.3 --email_certbot notifications@company.com
Update docker-compose.yml with the SQL password. You can find the password in Secrets Manager; it is stored as one of the key-value pairs under the configured secret name.
vi docker-compose.yml
Find the line:
environment:
  SA_PASSWORD:
and update it as follows:
  SA_PASSWORD: yourpasswordhere
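If you would rather pull the value from the command line, something like this prints the secret's key/value JSON (the secret name is a placeholder); copy the SQL password field into docker-compose.yml:

aws secretsmanager get-secret-value \
  --secret-id <SecretNameHere> \
  --query SecretString --output text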
Post setup.sh steps
Edit the app registration in Azure, changing the Redirect URI to https://<configured VM DNS>/login/oauth2/code/azure
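This can also be scripted with the Azure CLI; the --web-redirect-uris flag assumes a reasonably recent CLI version:

az ad app update \
  --id <application (client) ID> \
  --web-redirect-uris "https://<configured VM DNS>/login/oauth2/code/azure"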
Run start.sh
After the setup script completes, run start.sh to pull down the application container and start it.
start.sh takes the ACR username and ACR password Docker uses to pull the BPLM images from the container registry (bplm-acr-token / <password to be provided upon deployment>):
./start.sh <ACRUser> <ACRPass>
where ACRUser is the Blueprint Docker Registry user and ACRPass is the Blueprint Docker Registry password.
chmod +x start.sh
e.g.: ./start.sh example-acr-user someStrongPassword
All done! After initialization is complete, you should be able to access the homepage at the configured DNS name.