AWS Cloud Services Guide
Introduction
Amazon Web Services (AWS) is the leading cloud platform. This guide covers essential AWS services including EC2, S3, RDS, Lambda, VPC, and CloudFront, along with best practices for security, scalability, and cost optimization.
1. AWS CLI & IAM Setup
# Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# Configure credentials
aws configure
# AWS Access Key ID: YOUR_KEY
# AWS Secret Access Key: YOUR_SECRET
# Default region: us-east-1
# Default output format: json
# Test configuration
aws sts get-caller-identity
# IAM User with CLI access
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "s3:*",
        "rds:*"
      ],
      "Resource": "*"
    }
  ]
}
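The wildcard policy above is convenient for a sandbox but works against the least-privilege practice covered later in this guide. As a sketch (the bucket name is hypothetical), a scoped-down policy document can be generated in Python and passed to the CLI via `file://`:

```python
import json

# Hypothetical bucket name for illustration; substitute your own.
BUCKET = "my-app-data"

# Least-privilege alternative to the wildcard policy above:
# object read/write in one bucket plus listing, nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

# Serialize for e.g. `aws iam put-user-policy --policy-document file://policy.json`
policy_json = json.dumps(policy, indent=2)
```

Note that object-level actions use the `/*` resource ARN while `s3:ListBucket` applies to the bucket ARN itself; mixing these up is a common source of AccessDenied errors.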
# Create IAM role for EC2
aws iam create-role --role-name EC2-S3-Access \
--assume-role-policy-document file://trust-policy.json
# Attach policy
aws iam attach-role-policy --role-name EC2-S3-Access \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
2. EC2 - Elastic Compute Cloud
# Launch EC2 instance
aws ec2 run-instances \
--image-id ami-0c55b159cbfafe1f0 \
--instance-type t3.micro \
--key-name my-key-pair \
--security-group-ids sg-0123456789 \
--subnet-id subnet-0123456789 \
--iam-instance-profile Name=EC2-S3-Access \
--user-data file://user-data.sh \
--tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=WebServer}]'
# user-data.sh - Initialize on launch
#!/bin/bash
yum update -y
yum install -y docker
systemctl start docker
systemctl enable docker
docker run -d -p 80:80 nginx
# List instances
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
--query 'Reservations[*].Instances[*].[InstanceId,PublicIpAddress,Tags[?Key==`Name`].Value|[0]]' \
--output table
# Stop instance
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
# Terminate instance
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
# Create AMI
aws ec2 create-image \
--instance-id i-1234567890abcdef0 \
--name "WebServer-AMI-$(date +%Y%m%d)" \
--description "Production web server"
# Auto Scaling Group (launch configurations are deprecated; prefer a launch template)
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name web-asg \
--launch-template LaunchTemplateName=web-lt,Version=1 \
--min-size 2 \
--max-size 10 \
--desired-capacity 3 \
--target-group-arns arn:aws:elasticloadbalancing:... \
--health-check-type ELB \
--health-check-grace-period 300
3. S3 - Simple Storage Service
# Create bucket
aws s3 mb s3://my-unique-bucket-name --region us-east-1
# Upload file
aws s3 cp file.txt s3://my-bucket/
aws s3 cp ./local-folder s3://my-bucket/folder/ --recursive
# Download file
aws s3 cp s3://my-bucket/file.txt ./
aws s3 sync s3://my-bucket ./local-folder
# List objects
aws s3 ls s3://my-bucket/
aws s3 ls s3://my-bucket/ --recursive --human-readable
# Delete object
aws s3 rm s3://my-bucket/file.txt
aws s3 rm s3://my-bucket/folder/ --recursive
# Bucket policy - public read (requires the bucket's Block Public Access settings to be disabled)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
# Enable versioning
aws s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled
# Lifecycle policy - delete old versions (each rule needs a Filter or Prefix)
{
  "Rules": [
    {
      "Id": "DeleteOldVersions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30
      }
    }
  ]
}
# S3 with Node.js SDK
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
const s3 = new S3Client({ region: 'us-east-1' });
// Upload
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'file.txt',
  Body: Buffer.from('Hello World'),
  ContentType: 'text/plain'
}));
// Download
const response = await s3.send(new GetObjectCommand({
  Bucket: 'my-bucket',
  Key: 'file.txt'
}));
const content = await response.Body.transformToString();
4. RDS - Relational Database Service
# Create PostgreSQL database
aws rds create-db-instance \
--db-instance-identifier mydb \
--db-instance-class db.t3.micro \
--engine postgres \
--engine-version 15.3 \
--master-username admin \
--master-user-password MyPassword123 \
--allocated-storage 20 \
--storage-type gp3 \
--vpc-security-group-ids sg-0123456789 \
--db-subnet-group-name my-db-subnet \
--backup-retention-period 7 \
--preferred-backup-window "03:00-04:00" \
--multi-az \
--no-publicly-accessible \
--tags Key=Environment,Value=production
# Describe instance
aws rds describe-db-instances \
--db-instance-identifier mydb \
--query 'DBInstances[0].[Endpoint.Address,Endpoint.Port]'
# Create read replica
aws rds create-db-instance-read-replica \
--db-instance-identifier mydb-read-replica \
--source-db-instance-identifier mydb \
--db-instance-class db.t3.micro
# Create snapshot
aws rds create-db-snapshot \
--db-instance-identifier mydb \
--db-snapshot-identifier mydb-snapshot-$(date +%Y%m%d)
# Restore from snapshot
aws rds restore-db-instance-from-db-snapshot \
--db-instance-identifier mydb-restored \
--db-snapshot-identifier mydb-snapshot-20240101
# Connect to RDS
psql -h mydb.abc123.us-east-1.rds.amazonaws.com \
-U admin \
-d postgres
# Node.js connection
import { Pool } from 'pg';
const pool = new Pool({
  host: process.env.RDS_HOST,
  port: 5432,
  database: 'mydb',
  user: 'admin',
  password: process.env.RDS_PASSWORD,
  // For production, verify the RDS CA bundle instead of disabling TLS checks
  ssl: { rejectUnauthorized: false }
});
const result = await pool.query('SELECT NOW()');
5. Lambda - Serverless Compute
# Create Lambda function
aws lambda create-function \
--function-name MyFunction \
--runtime nodejs18.x \
--role arn:aws:iam::123456789012:role/lambda-role \
--handler index.handler \
--zip-file fileb://function.zip \
--environment Variables="{NODE_ENV=production}" \
--memory-size 512 \
--timeout 30
# Update function code
aws lambda update-function-code \
--function-name MyFunction \
--zip-file fileb://function.zip
# Invoke function (CLI v2 expects base64 payloads by default, hence the flag)
aws lambda invoke \
--function-name MyFunction \
--cli-binary-format raw-in-base64-out \
--payload '{"key":"value"}' \
response.json
# Create API Gateway trigger
aws apigatewayv2 create-api \
--name MyAPI \
--protocol-type HTTP \
--target arn:aws:lambda:us-east-1:123456789012:function:MyFunction
# Lambda with DynamoDB stream
aws lambda create-event-source-mapping \
--function-name MyFunction \
--event-source-arn arn:aws:dynamodb:us-east-1:123456789012:table/MyTable/stream/... \
--starting-position LATEST
# Lambda layers
aws lambda publish-layer-version \
--layer-name my-dependencies \
--zip-file fileb://layer.zip \
--compatible-runtimes nodejs18.x
aws lambda update-function-configuration \
--function-name MyFunction \
--layers arn:aws:lambda:us-east-1:123456789012:layer:my-dependencies:1
6. VPC - Virtual Private Cloud
# Create VPC
aws ec2 create-vpc \
--cidr-block 10.0.0.0/16 \
--tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=MyVPC}]'
# Create subnets
# Public subnet
aws ec2 create-subnet \
--vpc-id vpc-0123456789 \
--cidr-block 10.0.1.0/24 \
--availability-zone us-east-1a \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=Public-1a}]'
# Private subnet
aws ec2 create-subnet \
--vpc-id vpc-0123456789 \
--cidr-block 10.0.2.0/24 \
--availability-zone us-east-1a \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=Private-1a}]'
# Internet Gateway
aws ec2 create-internet-gateway \
--tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=MyIGW}]'
aws ec2 attach-internet-gateway \
--vpc-id vpc-0123456789 \
--internet-gateway-id igw-0123456789
# Route table
aws ec2 create-route-table \
--vpc-id vpc-0123456789 \
--tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=PublicRT}]'
aws ec2 create-route \
--route-table-id rtb-0123456789 \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id igw-0123456789
# Security group
aws ec2 create-security-group \
--group-name web-sg \
--description "Web server security group" \
--vpc-id vpc-0123456789
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789 \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789 \
--protocol tcp \
--port 443 \
--cidr 0.0.0.0/0
7. CloudFront - CDN
# Create CloudFront distribution
aws cloudfront create-distribution \
--origin-domain-name my-bucket.s3.amazonaws.com \
--default-root-object index.html
# Distribution config (Comment and ForwardedValues are required fields)
{
  "CallerReference": "my-distribution-2024",
  "Comment": "S3 static site distribution",
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "S3-my-bucket",
        "DomainName": "my-bucket.s3.amazonaws.com",
        "S3OriginConfig": {
          "OriginAccessIdentity": ""
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "S3-my-bucket",
    "ViewerProtocolPolicy": "redirect-to-https",
    "AllowedMethods": {
      "Quantity": 2,
      "Items": ["GET", "HEAD"]
    },
    "ForwardedValues": {
      "QueryString": false,
      "Cookies": { "Forward": "none" }
    },
    "MinTTL": 0,
    "DefaultTTL": 86400,
    "MaxTTL": 31536000
  },
  "Enabled": true
}
# Invalidate cache
aws cloudfront create-invalidation \
--distribution-id E1234567890ABC \
--paths "/*"
8. Cost Optimization
# Use AWS Cost Explorer
aws ce get-cost-and-usage \
--time-period Start=2024-01-01,End=2024-01-31 \
--granularity MONTHLY \
--metrics UnblendedCost \
--group-by Type=DIMENSION,Key=SERVICE
# Cost optimization strategies:
1. Right-sizing EC2
- Use AWS Compute Optimizer
- Monitor CPU/memory usage
- Use t3/t4g instances for variable workloads
2. Reserved Instances
- 1-year or 3-year commitment
- Up to 75% savings
- Good for predictable workloads
3. Savings Plans
- Flexible pricing model
- Commitment to compute usage
- Covers EC2, Lambda, Fargate
4. Spot Instances
- Up to 90% savings
- For fault-tolerant workloads
- Batch processing, CI/CD
5. S3 Storage Classes
- Standard: Frequent access
- Intelligent-Tiering: Auto-optimization
- Glacier: Archive (cheap)
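The discount figures above (up to 75% for Reserved Instances, up to 90% for Spot) translate into rough monthly estimates with simple arithmetic. A sketch, using a hypothetical on-demand hourly rate (check current AWS pricing for real numbers):

```python
HOURS_PER_MONTH = 730  # common AWS billing approximation

def monthly_cost(on_demand_hourly: float, discount: float = 0.0) -> float:
    """Rough monthly cost after a fractional discount off on-demand."""
    return on_demand_hourly * HOURS_PER_MONTH * (1 - discount)

# Hypothetical on-demand rate for illustration, USD/hour.
rate = 0.0416

on_demand = monthly_cost(rate)
reserved = monthly_cost(rate, discount=0.75)  # "up to 75%" RI savings
spot = monthly_cost(rate, discount=0.90)      # "up to 90%" Spot savings
```

Even at the best-case discounts, the cheaper options only pay off if the workload tolerates the trade-offs: long commitments for RIs, interruption for Spot.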
# S3 lifecycle policy
{
  "Rules": [
    {
      "Id": "Archive",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        }
      ]
    }
  ]
}
6. Delete unused resources
- Unattached EBS volumes
- Old snapshots
- Unused Elastic IPs
- Orphaned load balancers
# Find unattached volumes
aws ec2 describe-volumes \
--filters Name=status,Values=available \
--query 'Volumes[*].[VolumeId,Size,CreateTime]'
7. Enable AWS Budgets
aws budgets create-budget \
--account-id 123456789012 \
--budget file://budget.json \
--notifications-with-subscribers file://notifications.json
9. Monitoring & Logging
# CloudWatch Logs
aws logs create-log-group --log-group-name /aws/lambda/MyFunction
# Put metric
aws cloudwatch put-metric-data \
--namespace MyApp \
--metric-name RequestCount \
--value 1 \
--unit Count
# Create alarm
aws cloudwatch put-metric-alarm \
--alarm-name high-cpu \
--alarm-description "Alert when CPU > 80%" \
--metric-name CPUUtilization \
--namespace AWS/EC2 \
--statistic Average \
--period 300 \
--threshold 80 \
--comparison-operator GreaterThanThreshold \
--evaluation-periods 2 \
--alarm-actions arn:aws:sns:us-east-1:123456789012:MyTopic
# CloudWatch Logs Insights query
aws logs start-query \
--log-group-name /aws/lambda/MyFunction \
--start-time $(date -d '1 hour ago' +%s) \
--end-time $(date +%s) \
--query-string 'fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc'
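The Insights query above keeps only messages containing ERROR and sorts newest first. The same logic applied to exported log events in plain Python (event shape assumed to mirror CloudWatch's epoch-millisecond `timestamp` and `message` fields):

```python
# Sample events shaped like CloudWatch Logs output (timestamps are epoch ms).
events = [
    {"timestamp": 1700000000000, "message": "START RequestId abc"},
    {"timestamp": 1700000001000, "message": "ERROR timeout calling RDS"},
    {"timestamp": 1700000002000, "message": "ERROR unhandled exception"},
]

# Equivalent of: filter @message like /ERROR/ | sort @timestamp desc
errors = sorted(
    (e for e in events if "ERROR" in e["message"]),
    key=lambda e: e["timestamp"],
    reverse=True,
)
```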
# X-Ray tracing (AWSXRay.captureAWS wraps AWS SDK v2 clients)
import AWS from 'aws-sdk';
import AWSXRay from 'aws-xray-sdk-core';
const AWS_SDK = AWSXRay.captureAWS(AWS);
const s3 = new AWS_SDK.S3();
10. Infrastructure as Code
# CloudFormation template
AWSTemplateFormatVersion: '2010-09-09'
Description: Web application stack

Parameters:
  InstanceType:
    Type: String
    Default: t3.micro
    AllowedValues:
      - t3.micro
      - t3.small
      - t3.medium

Resources:
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable HTTP and HTTPS
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0

  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-0c55b159cbfafe1f0
      SecurityGroups:
        - !Ref WebServerSecurityGroup
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          yum update -y
          yum install -y httpd
          systemctl start httpd
          systemctl enable httpd

  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${AWS::StackName}-bucket'
      VersioningConfiguration:
        Status: Enabled

Outputs:
  WebsiteURL:
    Description: URL of the website
    Value: !GetAtt WebServer.PublicDnsName
# Deploy stack
aws cloudformation create-stack \
--stack-name my-stack \
--template-body file://template.yaml \
--parameters ParameterKey=InstanceType,ParameterValue=t3.small
# Update stack
aws cloudformation update-stack \
--stack-name my-stack \
--template-body file://template.yaml
# Delete stack
aws cloudformation delete-stack --stack-name my-stack
11. Best Practices
AWS Best Practices:
- ✓ Use IAM roles instead of access keys
- ✓ Enable MFA for root account
- ✓ Use multiple availability zones
- ✓ Implement automated backups
- ✓ Tag all resources for cost tracking
- ✓ Use VPC for network isolation
- ✓ Enable CloudTrail for auditing
- ✓ Implement least privilege access
- ✓ Use Systems Manager Parameter Store for secrets
- ✓ Enable encryption at rest and in transit
- ✓ Use Auto Scaling for elasticity
- ✓ Implement monitoring and alerting
- ✓ Use Infrastructure as Code (CloudFormation/Terraform)
- ✓ Regular security audits with AWS Config
- ✓ Enable AWS Shield for DDoS protection
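The tagging practice above is easiest to keep if enforced in code, for example in a CI check or a cleanup script. A minimal sketch (the required tag keys are a hypothetical policy) that checks an EC2-style `Key`/`Value` tag list against a required set:

```python
# Hypothetical tagging policy; adjust the required keys to your organization.
REQUIRED_TAGS = {"Name", "Environment", "CostCenter"}

def missing_tags(tags: list[dict]) -> set[str]:
    """Return the required tag keys absent from an EC2-style tag list."""
    present = {t["Key"] for t in tags}
    return REQUIRED_TAGS - present

# Example resource tags, as returned by e.g. describe-instances.
instance_tags = [
    {"Key": "Name", "Value": "WebServer"},
    {"Key": "Environment", "Value": "production"},
]
```

A resource missing a cost-allocation tag never shows up correctly in Cost Explorer groupings, so catching gaps early is worth the few lines.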
Conclusion
AWS provides comprehensive cloud services for scalable applications. Master EC2, S3, RDS, Lambda, and VPC for production workloads. Always implement proper security, monitoring, and cost optimization strategies.
💡 Pro Tip: Use AWS Well-Architected Tool to review your architecture against AWS best practices. It provides recommendations for operational excellence, security, reliability, performance, and cost optimization. Free to use and invaluable for production systems.