Set up AWS infrastructure with CloudFormation templates.
This repository helps you set up networking resources such as VPC (Virtual Private Cloud), Internet Gateway, Route Tables and Routes.
We will use AWS CloudFormation for infrastructure setup and tear-down.
NOTE: Provide unique names to the resources (wherever supported). You should be able to create multiple networks in the same account.
Use the following instructions to set up dev, prod and root profiles for resource creation using AWS CloudFormation.
- Sign in to your AWS `root` account console.
- Navigate to the IAM console.
- Create a user group named `csye6225-ta` with `ReadOnlyAccess` privileges.
- Follow the above two steps for the `dev` and `prod` accounts.
- Sign in to your AWS `root` account console.
- Navigate to the IAM console.
- Create a user by providing the `username`.
- Add the `username` user to the `csye6225-ta` user group created above.
- Do not configure credentials for the users. Leave the default "Autogenerated password" setting checked and copy the generated password. AWS does not email autogenerated passwords; you need to send the email with the password manually.
- Provide appropriate tag(s); they're highly recommended.
- Install and configure the AWS Command Line Interface (CLI) on your development machine (laptop). See Install the AWS Command Line Interface for detailed instructions on using the AWS CLI with Windows, macOS, or Linux.
- Below are the steps to download and use the AWS CLI on macOS:
- Download the file using the `curl` command:

```shell
# On macOS only
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
```

- Run the macOS installer to install the AWS CLI:

```shell
# On macOS only
sudo installer -pkg ./AWSCLIV2.pkg -target /
```

- Verify that `zsh` can find and run the `aws` command using the following commands:

```shell
which aws
# /usr/local/bin/aws
aws --version
# aws-cli/2.8.2 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off
```

NOTE: Alternatively, you can use Homebrew to install AWS CLI v2 on your Mac. See detailed instructions here.
- Create a `CLI` group in your `dev` and `prod` accounts, on the AWS Console.
- Attach the `AdministratorAccess` policy to this group.
- Add the `dev-cli` and `prod-cli` users to their respective user groups.
- In the terminal, create a `dev` user profile for your dev AWS account and a `prod` user profile for your production AWS account. Do not set up a `default` profile.
- Both the `dev` and `prod` AWS CLI profiles should be set to use the `us-east-1` region or the region closest to you.
- To create a profile, use the following command:

```shell
aws configure --profile <profile-name>
```

The above command will ask you to fill out the following:
- AWS Access Key ID
- AWS Secret Access Key
- Region
- Output format

- To change the region on any profile, use the following commands:

```shell
# change the region
aws configure set region <region-name> --profile dev

# you can omit --profile dev if you have the AWS_PROFILE environment variable set (see below)
aws configure set region <region-name>
```

- To use a particular profile, use the command:

```shell
# For prod profile
export AWS_PROFILE=prod

# For dev profile
export AWS_PROFILE=dev
```

- To stop using a profile, use the following command:

```shell
# To stop using a profile
export AWS_PROFILE=
```

Configure the networking infrastructure setup using AWS CloudFormation:
- Create a CloudFormation template `csye6225-infra.json` or `csye6225-infra.yml` that can be used to set up the required networking resources.
- Do not hardcode values for your VPCs and their networking resources.
- Create a Virtual Private Cloud (VPC).
- Create subnets in your VPC. You must create `3` subnets, each in a different availability zone in the same region, in the same VPC.
- Create an Internet Gateway resource and attach the Internet Gateway to the VPC.
- Create a public route table. Attach all subnets created to the route table.
- Create a public route in the public route table created above with the destination CIDR block `0.0.0.0/0` and the internet gateway created above as the target.
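The networking resources above might be sketched in the template roughly as follows. This is a minimal sketch; the logical names, CIDR blocks, and AZ selection are illustrative assumptions, not required values:

```yaml
# Minimal sketch -- names and CIDR blocks are illustrative assumptions.
Resources:
  myVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true

  mySubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref myVPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
      MapPublicIpOnLaunch: true
  # ...repeat for mySubnet2 / mySubnet3 in the second and third AZs

  myInternetGateway:
    Type: AWS::EC2::InternetGateway

  myGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref myVPC
      InternetGatewayId: !Ref myInternetGateway

  myPublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref myVPC

  myPublicRoute:
    Type: AWS::EC2::Route
    DependsOn: myGatewayAttachment
    Properties:
      RouteTableId: !Ref myPublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref myInternetGateway

  mySubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref mySubnet1
      RouteTableId: !Ref myPublicRouteTable
  # ...repeat the association for mySubnet2 / mySubnet3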
To create a default VPC in case you deleted the default VPC in your AWS account, use the following command:

```shell
aws ec2 create-default-vpc
```

To create a stack with a custom AMI, replace the default value of the `AMI` parameter with the id of the custom AMI created using Packer:
```yaml
Parameters:
  AMI:
    Type: String
    Default: "<your-ami-id>"
    Description: "The custom AMI built using Packer"
```

NOTE: For more details on how we'll be using HCP Packer, refer here.
To launch the EC2 AMI at CloudFormation stack creation, we need to have a few configurations in place.
We need to create a custom security group for our application with the following ingress rules to allow TCP traffic on our VPC:
- `SSH` protocol on port `22`.
- `HTTP` protocol on port `80`.
- `HTTPS` protocol on port `443`.
- Port `1337` for our webapp to be hosted on. (This can vary according to developer needs.)
- These ports should be accessible from anywhere in the world (`0.0.0.0/0`).
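A minimal sketch of such a security group, assuming the VPC's logical name is `myVPC` (the logical names here are illustrative assumptions):

```yaml
# Minimal sketch -- logical names are illustrative assumptions.
ApplicationSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Security group for the web application
    VpcId: !Ref myVPC
    SecurityGroupIngress:
      - { IpProtocol: tcp, FromPort: 22,   ToPort: 22,   CidrIp: 0.0.0.0/0 }
      - { IpProtocol: tcp, FromPort: 80,   ToPort: 80,   CidrIp: 0.0.0.0/0 }
      - { IpProtocol: tcp, FromPort: 443,  ToPort: 443,  CidrIp: 0.0.0.0/0 }
      - { IpProtocol: tcp, FromPort: 1337, ToPort: 1337, CidrIp: 0.0.0.0/0 }
```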
To launch the custom EC2 AMI using the CloudFormation stack, we need to configure the EC2 instance with the custom security group we created above, and then define the EBS volumes with the following properties:
- Custom AMI ID (created using Packer)
- Instance type: `t2.micro`
- Protected against accidental termination: no
- Root volume size: `50` GiB
- Root volume type: General Purpose SSD (`GP2`)
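The instance properties above might look like this in the template. This is a sketch under the assumption that the security group and subnet logical names match the earlier examples; the root device name depends on your AMI:

```yaml
# Minimal sketch -- logical names and the root device name are illustrative assumptions.
MyEC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref AMI                # custom AMI built with Packer
    InstanceType: t2.micro
    DisableApiTermination: false     # not protected against accidental termination
    SecurityGroupIds:
      - !Ref ApplicationSecurityGroup
    SubnetId: !Ref mySubnet1
    BlockDeviceMappings:
      - DeviceName: /dev/sda1        # root device name varies by AMI
        Ebs:
          VolumeSize: 50
          VolumeType: gp2
          DeleteOnTermination: true
```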
To use RDS and S3 on AWS, we need to configure the following:
- `AWS::S3::Bucket`
  - Default encryption for the bucket.
  - Lifecycle policy to change the storage class from `STANDARD` to `STANDARD_IA` after 30 days.
- `AWS::RDS::DBParameterGroup` - DB engine config.
- `AWS::RDS::DBSubnetGroup`
- `AWS::EC2::SecurityGroup`
  - Ingress rule on port `5432` for Postgres, with the `Application Security Group` as the source for traffic.
- `AWS::IAM::Role`
- `AWS::IAM::InstanceProfile`
- `AWS::IAM::Policy`

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:*"],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
```

NOTE: Replace `s3:*` with the appropriate set of permissions for the S3 bucket when creating security policies.

- `AWS::RDS::DBInstance` - configure the following:
  - Database Engine: MySQL/PostgreSQL
  - DB Instance Class: `db.t3.micro`
  - Multi-AZ deployment: No
  - DB instance identifier: `csye6225`
  - Master username: `csye6225`
  - Master password: pick a strong password
  - Subnet group: private subnet for RDS instances
  - Public accessibility: No
  - Database name: `csye6225`
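The RDS instance configuration above might be sketched as follows, assuming PostgreSQL and that the subnet group, database security group, and a `NoEcho` password parameter are defined elsewhere in the template (those logical names are illustrative assumptions):

```yaml
# Minimal sketch -- logical names and references are illustrative assumptions.
MyRDSInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: postgres
    DBInstanceClass: db.t3.micro
    MultiAZ: false
    DBInstanceIdentifier: csye6225
    MasterUsername: csye6225
    MasterUserPassword: !Ref DBPassword   # pass as a NoEcho parameter, never hardcode
    DBSubnetGroupName: !Ref MyDBSubnetGroup
    PubliclyAccessible: false
    DBName: csye6225
    VPCSecurityGroups:
      - !Ref DatabaseSecurityGroup
```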
NOTE: To run the application on a custom bucket, we need to update the `UserData` field in the `AWS::EC2::Instance`.

- To hard delete a bucket, you can use the following command:

```shell
aws s3 rm s3://<bucket-name> --recursive
```

To configure the Domain Name System (DNS), we need to do the following from the AWS Console:
- Register a domain with a domain registrar (Namecheap). Namecheap offers a free domain for a year with the GitHub Student Developer Pack.
- Configure AWS Route 53 for DNS service:
  - Create a `HostedZone` for the root AWS account, where we create a public hosted zone for the domain `yourdomainname.tld`.
  - Configure Namecheap with the custom `Name Servers` provided by AWS Route 53 to use Route 53 name servers.
  - Create a public hosted zone in the dev AWS account with the subdomain `dev.yourdomainname.tld`.
  - Create a public hosted zone in the prod AWS account with the subdomain `prod.yourdomainname.tld`.
  - Configure the name servers and subdomains in the root AWS account (for both dev and prod).
- AWS Route 53 is updated from the CloudFormation template. We need to add an `A` record to the Route 53 zone so that your domain points to your EC2 instance and your web application is accessible through `http://your-domain-name.tld/`.
- The application must be accessible using the root context, i.e. `http://your-domain-name.tld/` and not `http://your-domain-name.tld/app-0.1/`.
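The `A` record might be added to the template roughly like this, assuming the EC2 instance's logical name is `MyEC2Instance` and the dev subdomain is used (both are illustrative assumptions):

```yaml
# Minimal sketch -- zone name and instance reference are illustrative assumptions.
MyDNSRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: dev.yourdomainname.tld.   # note the trailing dot
    Name: dev.yourdomainname.tld.
    Type: A
    TTL: '60'
    ResourceRecords:
      - !GetAtt MyEC2Instance.PublicIp
```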
The following steps are done manually and only for the subdomain in the prod AWS account:
- Verify the domain in Amazon SES.
- Authenticate email with DKIM in Amazon SES.
- Move out of the Amazon SES sandbox by requesting production access.

Once you have production access, you can send out more than 50,000 emails per day. We need to create a custom MAIL FROM domain in the AWS account where we have Amazon SES production access. We need to publish the MX and TXT records to Route 53 so that DNS can resolve our mail servers and, in turn, SES can send out emails on behalf of our domain.
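The MAIL FROM records could be published to the hosted zone roughly as follows. This is a sketch with assumed zone and MAIL FROM domain names; the exact MX value comes from the SES console and depends on your region (`us-east-1` assumed here):

```yaml
# Minimal sketch -- zone reference, domain names, and region are illustrative assumptions.
MailFromMXRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId: !Ref MyHostedZone
    Name: mail.prod.yourdomainname.tld.
    Type: MX
    TTL: '600'
    ResourceRecords:
      - '10 feedback-smtp.us-east-1.amazonses.com'

MailFromTXTRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneId: !Ref MyHostedZone
    Name: mail.prod.yourdomainname.tld.
    Type: TXT
    TTL: '600'
    ResourceRecords:
      - '"v=spf1 include:amazonses.com ~all"'
```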
Add the following resources with appropriate properties and rules to the CloudFormation template to get Amazon DynamoDB and Amazon SNS set up:
- `AWS::DynamoDB::Table`
  - ReadCapacity: 1
  - WriteCapacity: 1
  - TimeToLive: 5 minutes
- `AWS::Lambda::Function`
- `AWS::Lambda::Permission`
- `AWS::IAM::Role` for the Lambda function, with `ManagedPolicyArns`:
  - `arn:aws:iam::aws:policy/AmazonSESFullAccess`
  - `arn:aws:iam::aws:policy/CloudWatchLogsFullAccess`
  - `arn:aws:iam::aws:policy/AmazonS3FullAccess`
  - `arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole`
  - `arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess`
- `AWS::SNS::Topic`
- `AWS::SNS::TopicPolicy`
Once the user hits the /v1/account/ endpoint to create an account, DynamoDB stores a unique token for that username, and the SNS topic will trigger the AWS Lambda function that will send out the mail to the user (via AWS SES) asking them to verify their account by clicking on a verifyUserEmail route in the REST API.
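The token table described above might be sketched as follows. The table, key, and TTL attribute names are illustrative assumptions; the application would write a `ttl` value of roughly the current epoch time plus five minutes:

```yaml
# Minimal sketch -- table and attribute names are illustrative assumptions.
TokenTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: csye6225-tokens
    AttributeDefinitions:
      - AttributeName: username
        AttributeType: S
    KeySchema:
      - AttributeName: username
        KeyType: HASH
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
    TimeToLiveSpecification:
      AttributeName: ttl      # epoch seconds ~5 minutes in the future
      Enabled: true
```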
Once we have changes to be deployed to our webapp, we will refresh/replace the instances currently running in the auto-scaling group that were created from the previous AMI. When the new AMI is ready, we will refresh the current instance(s) with new ones created using the latest AMI. This workflow is to be executed using CI/CD pipelines in GitHub Actions.
For reference, we'll be using the `start-instance-refresh` command.
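A sketch of that call, assuming an auto-scaling group name of `<asg-name>`; the `--preferences` values shown are illustrative assumptions, not required settings:

```shell
# Trigger a rolling replacement of the instances in the ASG with
# instances launched from the latest launch template version.
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name <asg-name> \
  --preferences '{"MinHealthyPercentage": 90, "InstanceWarmup": 60}'
```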
Add the following resources with appropriate properties and rules to the CloudFormation template to get autoscaling and load balancing set up:
- `AWS::EC2::LaunchTemplate`
- `AWS::AutoScaling::AutoScalingGroup`
- `AWS::AutoScaling::ScalingPolicy`
- `AWS::CloudWatch::Alarm`
- `AWS::EC2::SecurityGroup` for the load balancer
- `AWS::ElasticLoadBalancingV2::TargetGroup`
- `AWS::ElasticLoadBalancingV2::LoadBalancer`
- `AWS::ElasticLoadBalancingV2::Listener`
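Two of the central resources above might be wired together roughly as follows. This is a sketch; the launch template, target group, load balancer, subnet, and certificate references are illustrative assumptions for resources defined elsewhere in the template:

```yaml
# Minimal sketch -- all logical names and references are illustrative assumptions.
MyAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: '1'
    MaxSize: '5'
    DesiredCapacity: '1'
    LaunchTemplate:
      LaunchTemplateId: !Ref MyLaunchTemplate
      Version: !GetAtt MyLaunchTemplate.LatestVersionNumber
    VPCZoneIdentifier:
      - !Ref mySubnet1
      - !Ref mySubnet2
      - !Ref mySubnet3
    TargetGroupARNs:
      - !Ref MyTargetGroup

MyListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyLoadBalancer
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref SSLCertificateId
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref MyTargetGroup
```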
To secure our EBS volumes and RDS instance, we will use Amazon KMS (Key Management Service) to encrypt them with customer-managed keys.
Add the following resources with appropriate properties and rules to the CloudFormation template to get the encrypted EBS and RDS setup:
- `AWS::KMS::Key`
- `AWS::KMS::Alias`
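A minimal sketch of the key and alias; the key policy below only grants the account root full access, and the alias name is an illustrative assumption:

```yaml
# Minimal sketch -- key policy scope and alias name are illustrative assumptions.
MyKMSKey:
  Type: AWS::KMS::Key
  Properties:
    Description: Customer-managed key for EBS/RDS encryption
    KeyPolicy:
      Version: '2012-10-17'
      Statement:
        - Sid: AllowRootAccountAdmin
          Effect: Allow
          Principal:
            AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
          Action: 'kms:*'
          Resource: '*'

MyKMSAlias:
  Type: AWS::KMS::Alias
  Properties:
    AliasName: alias/csye6225-ebs-rds
    TargetKeyId: !Ref MyKMSKey
```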
To get an SSL certificate for your domain, visit ZeroSSL. Follow the instructions to set up SSL for Amazon Web Services.
You may need to add the CNAME record to Amazon Route 53 to get the SSL certificate validated.
To import the SSL certificate and private key that you download from ZeroSSL, use the following command:

```shell
aws acm import-certificate \
  --certificate fileb://certificate.crt \
  --certificate-chain fileb://ca_bundle.crt \
  --private-key fileb://private.key
```

Validate the CloudFormation template using the following command:
```shell
aws cloudformation validate-template --template-body file://<path-to-template-file>.yaml
```

To create the stack, run the following command:

```shell
aws cloudformation create-stack --stack-name <stack-name> --template-body file://<path-to-template-file>.yaml
```

To create a stack with custom parameters:

```shell
aws cloudformation create-stack --stack-name app-stack \
  --template-body file://templates/<your-template>.yaml \
  --parameters ParameterKey=Environment,ParameterValue=prod \
    ParameterKey=AMI,ParameterValue=<ami-id> \
    ParameterKey=SSLCertificateId,ParameterValue=<your-certificate-id> \
  --capabilities CAPABILITY_NAMED_IAM
```

If you want to use a separate file that stores these parameters, you'll need to specify the path to this parameter file when creating (or updating) the stack.
This parameter file should have the extension `.json` or `.yaml`. However, support for YAML parameter files in the AWS CLI is not yet implemented; please refer to this issue for more details. Native support for a JSON parameters file is present and easy to use:

```shell
aws cloudformation create-stack --stack-name <your-stack-name> \
  --template-body file://templates/<your-template>.yaml \
  --parameters file://./<params-file>.json \
  --capabilities CAPABILITY_NAMED_IAM
```

However, as a best practice, it's better not to mix markups for AWS CloudFormation configurations. Since our base template is in YAML, we'll use a parameter file written in YAML. The only hack here is that we need to install a separate package called `yq`, which will help us parse our `.yaml` file into a valid parameter list for the AWS CLI CloudFormation command.
To install `yq`:

```shell
# macOS install only
# Refer to https://github.com/mikefarah/yq for installation options on other OS platforms
brew install yq
```

To use the `.yaml` parameter file:

```shell
aws cloudformation create-stack --stack-name <your-stack-name> \
  --template-body file://templates/<your-template>.yaml \
  --parameters $(yq eval -o=j ./<params-file>.yaml) \
  --capabilities CAPABILITY_NAMED_IAM
```

Refer to this issue for more details on how to use YAML for the AWS CLI CloudFormation parameters option.
To update the stack, run the following command:

```shell
aws cloudformation update-stack --stack-name <stack-name> --template-body file://<path-to-template-file>.yml
```

To delete the stack, run the following command:

```shell
aws cloudformation delete-stack --stack-name <stack-name>
```

- To list all the stacks in the current `AWS_PROFILE`, use the following command:

```shell
aws cloudformation list-stacks --output table
```

- To view the details of the stack created, run the following command:

```shell
# displays the result in a table format
aws cloudformation describe-stacks --stack-name <stack-name> --output table
```

- To view the details of the VPCs created, run the following command:

```shell
# displays the result in a table format
aws ec2 describe-vpcs --output table
```

- To get the AZs (Availability Zones) of a region, use the following command:

```shell
aws ec2 describe-availability-zones [--region <region-name>] --output table
```

To SSH into the instance built using the custom AMI and CloudFormation stack, use the following command:

```shell
ssh <username>@<ip-address> -v -i ~/.ssh/<key-name>

# Example:
# ssh ubuntu@10.0.0.0 -v -i ~/.ssh/ec2-user
```
To access your database from the EC2 instance, use the following command:

```shell
psql --host=<your-rds.amazonaws.com-host> --port=5432 --username=<your-username> --password --dbname=<your-db-name>
```

Running the above command will prompt you to enter the password to your database.