Migrating Aurora Postgres Cluster To Graviton Instances
In this post, we’ll see how to migrate an Aurora PostgreSQL cluster to Graviton (ARM-based) instances, which are more cost-effective than comparable x86 instances.
Step 1: Create a single-AZ Aurora PostgreSQL version 14.15 cluster with a db.r5.large instance.
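If you prefer the CLI, the same cluster can be created roughly as follows (the identifiers and password are placeholders; networking and security-group options are omitted and would use your account defaults):

```shell
# Create the Aurora PostgreSQL 14.15 cluster (identifiers/password are placeholders)
aws rds create-db-cluster \
    --db-cluster-identifier graviton-lab-cluster \
    --engine aurora-postgresql \
    --engine-version 14.15 \
    --master-username postgres \
    --master-user-password '<PASSWORD>'

# Add a single writer instance on the Intel-based db.r5.large class
aws rds create-db-instance \
    --db-instance-identifier graviton-lab-writer \
    --db-cluster-identifier graviton-lab-cluster \
    --db-instance-class db.r5.large \
    --engine aurora-postgresql
```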
Step 2: Launch a t3.medium EC2 instance with the Amazon Linux 2 AMI. Connect to it and execute the following commands:
sudo yum install -y postgresql-contrib postgresql postgresql-devel libpq-devel sysbench gcc gcc-devel python3-devel
sudo pip3 install --upgrade pip
sudo pip3 install awscli
sudo pip3 install requests
sudo pip3 install boto3
sudo pip3 install aws_encryption_sdk
sudo pip3 install pycrypto
sudo pip3 install psycopg2
Download the scripts provided by AWS using the following commands:
curl https://auroraworkshopassets.s3.us-west-2.amazonaws.com/labs-scripts/aslv2_scaling.tar.gz > /home/ec2-user/aslv2_scaling.tar.gz
tar xvzf aslv2_scaling.tar.gz
cd aslv2_scaling
Open the setup_schema.sql file and add the following command before the \c mylab; line:
CREATE database mylab;
Then execute:
psql -f setup_schema.sql -U postgres -d postgres -h <RDS_CLUSTER_WRITER_ENDPOINT>
If you get the error ‘psql: SCRAM authentication requires libpq version 10 or above’, execute the following commands:
sudo yum install -y amazon-linux-extras
sudo amazon-linux-extras install postgresql10
pip3 install --force-reinstall psycopg2==2.9.3
Now connect to the mylab database and create a table with the following definition:
CREATE TABLE accounts (EMPLOYEE VARCHAR(50) PRIMARY KEY);
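For example, the table can be created directly from the EC2 instance with psql (the endpoint is a placeholder; you will be prompted for the password):

```shell
# Connect to the mylab database on the writer endpoint and create the table
psql -h <RDS_CLUSTER_WRITER_ENDPOINT> -U postgres -d mylab \
     -c "CREATE TABLE accounts (EMPLOYEE VARCHAR(50) PRIMARY KEY);"
```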
Step 3: Create a Python script with the following code. Replace the values for <PASSWORD>, <WRITER_ENDPOINT> and <READER_ENDPOINT> accordingly.
import psycopg2
import random
import string
import time

# Once per second: write one row via the writer endpoint,
# then read the row count via the reader endpoint.
while True:
    print("------------------")
    time.sleep(1)
    conn = None
    conn1 = None
    # Insert a random 50-character name through the writer endpoint
    try:
        conn = psycopg2.connect(database="mylab", user='postgres',
                                password='<PASSWORD>', host='<WRITER_ENDPOINT>',
                                port='5432', connect_timeout=3)
        conn.autocommit = True
        cursor = conn.cursor()
        letters = string.ascii_letters
        name = ''.join(random.choice(letters) for i in range(50))
        cursor.execute("INSERT INTO accounts(EMPLOYEE) VALUES (%s);", (name,))
        print("Record inserted: " + name)
    except Exception as e:
        print("Error writing data")
        print(e)
    finally:
        if conn:
            conn.close()
    # Read the current row count through the reader endpoint
    try:
        conn1 = psycopg2.connect(database="mylab", user='postgres',
                                 password='<PASSWORD>', host='<READER_ENDPOINT>',
                                 port='5432', connect_timeout=3)
        cursor = conn1.cursor()
        cursor.execute("SELECT count(*) FROM accounts;")
        result = cursor.fetchone()
        print("Table count is " + str(result[0]))
    except Exception as e:
        print("Error reading data")
        print(e)
    finally:
        if conn1:
            conn1.close()
Run this script with python3 and leave it running; it prints one write and one read per second.
Step 4: Now modify the RDS instance and change its instance class to db.r8g.large.
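The same modification can be made from the AWS CLI (the instance identifier is a placeholder):

```shell
# Change the instance class to a Graviton-based one; --apply-immediately
# applies the change right away instead of at the next maintenance window
aws rds modify-db-instance \
    --db-instance-identifier graviton-lab-writer \
    --db-instance-class db.r8g.large \
    --apply-immediately
```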
After some time, the script will encounter connection errors for a couple of minutes.
Once the instance has been modified, the script will resume working normally.
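Our script rides out the downtime on its own simply by retrying every second. In production code you would typically wrap connection attempts in a retry helper with exponential backoff instead; here is a minimal sketch (the helper name and parameters are illustrative, not part of any library):

```python
import time

def with_retry(fn, attempts=5, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on any exception.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # 0.5s, 1s, 2s, 4s, ... between attempts
            time.sleep(base_delay * (2 ** attempt))

# Usage with psycopg2 (endpoint/password placeholders as in the script above):
# conn = with_retry(lambda: psycopg2.connect(host='<WRITER_ENDPOINT>', ...), attempts=10)
```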
Step 5: Now create a multi-AZ Aurora PostgreSQL version 14.15 cluster with db.r5.large instances.
Step 6: First upgrade the reader instance to db.r8g.large; this causes no downtime because the writer keeps serving traffic.
Then perform a failover so that the upgraded reader becomes the writer, and upgrade the original writer instance to db.r8g.large. The failover causes downtime of only a few seconds.
Once both instances are upgraded, the cluster migration is complete. From now on, any auto-scaled read replicas you launch will also use Graviton instances.
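The upgrade-failover-upgrade sequence from Step 6 can also be scripted with the AWS CLI (all identifiers are placeholders):

```shell
# 1. Upgrade the reader to Graviton (no downtime; the writer keeps serving traffic)
aws rds modify-db-instance \
    --db-instance-identifier graviton-lab-reader \
    --db-instance-class db.r8g.large \
    --apply-immediately

# 2. Fail over so the upgraded reader becomes the writer (a few seconds of downtime)
aws rds failover-db-cluster \
    --db-cluster-identifier graviton-lab-cluster \
    --target-db-instance-identifier graviton-lab-reader

# 3. Upgrade the old writer (now a reader) with no further downtime
aws rds modify-db-instance \
    --db-instance-identifier graviton-lab-writer \
    --db-instance-class db.r8g.large \
    --apply-immediately
```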