In this post, we’ll see how we can create an EFS File System and Mount Targets using Terraform.

Note: We’ll run our setup in the us-east-1 region. Terraform must be installed and AWS credentials must be configured before proceeding.

Step 1: Create the following Terraform files:

efs.tf

resource "aws_efs_file_system" "efs" {
creation_token = "efs"
performance_mode = "generalPurpose"
throughput_mode = "bursting"
encrypted = "true"
tags = {
Name = "EFS"
}
}


resource "aws_efs_mount_target" "efs-mt" {
count = length(data.aws_availability_zones.available.names)
file_system_id = aws_efs_file_system.efs.id
subnet_id = aws_subnet.subnet[count.index].id
security_groups = [aws_security_group.efs.id]
}

network.tf

data "aws_availability_zones" "available" {}

resource "aws_vpc" "vpc" {
cidr_block = "10.0.0.0/16"…


In this post, we’ll see how we can use VPC Reachability Analyzer to debug networking issues in an AWS environment.

Step 1: Create 2 VPCs and connect them using VPC peering. Next, launch one instance in each VPC. In the security group settings of these instances, allow access on port 22 only from the respective VPC CIDR.

For both instances, we’ll also deny all traffic at the NACL level.
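With SSH allowed in the security groups but all traffic denied at the NACLs, a Reachability Analyzer path between the two instances should come back unreachable and point at the blocking NACL rule. A rough sketch of running the analysis from the AWS CLI (the instance, path and analysis IDs below are placeholders):

#define a path from the source instance to the destination instance on TCP port 22
aws ec2 create-network-insights-path --source i-0aaa1111bbb2222cc --destination i-0ddd3333eee4444ff --protocol tcp --destination-port 22
#run the analysis against the path returned by the previous command
aws ec2 start-network-insights-analysis --network-insights-path-id nip-0123456789abcdef0
#inspect the result; the explanations should call out the NACL denying the traffic
aws ec2 describe-network-insights-analyses --network-insights-analysis-ids nia-0123456789abcdef0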


In this post, we’ll set up Vault on AWS EKS with TLS and Persistent Storage.

Step 1: Launch a CloudShell terminal in the us-east-1 region. We’ll use it as our workstation and execute all commands there. Create an IAM user with Administrator permissions and set its access keys using the aws configure command.
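Optionally, verify the credentials are in place before moving on, for example:

#configure access key, secret key and default region for the IAM user
aws configure
#confirm the active identity is the newly created user
aws sts get-caller-identity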

Step 2: Install eksctl, kubectl, cfssl, cfssljson, consul and helm using the following commands:

#eksctl installation
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/bin
sudo chmod +x /usr/bin/eksctl
#kubectl installation
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo cp kubectl /usr/bin/kubectl
#helm installation
sudo yum install -y openssl
curl -fsSL…
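After the installation commands complete, a quick way to confirm the binaries are available is to print their versions, for example:

#confirm the tools are installed and on the PATH
eksctl version
kubectl version --client
cfssl version
helm version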


In this post, we’ll see how we can use LDAP credentials to connect to CentOS instances.

Step 1: Launch an instance using an Ubuntu 18.04 AMI and follow https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-openldap-and-phpldapadmin-on-ubuntu-16-04 to set up an OpenLDAP server along with phpLdapAdmin.

Step 2: When you try to access http://<your_public_ip>/phpldapadmin, you’ll see some PHP-related errors.

Follow the steps given at https://stackoverflow.com/questions/50698477/cant-create-new-entry-phpldapadmin to fix these errors.

Step 3: Log in to the phpLdapAdmin console using cn=admin,dc=example,dc=com and the password you set during installation.
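Before moving on to the CentOS client configuration, you can confirm the admin bind works, for example with an ldapsearch against the base DN used above (assuming dc=example,dc=com):

#bind as the admin user and list entries under the base DN
ldapsearch -x -H ldap://localhost -D "cn=admin,dc=example,dc=com" -W -b "dc=example,dc=com"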


In this post, we’ll see how we can use AWS Managed Microsoft AD to connect to Linux instances.

Step 1: Go to Directory Service and create an AWS Managed Microsoft AD.
You can specify the Directory DNS name as directory.example.com and set the admin password. Next, select the VPC where you want to create this AD and specify 2 private subnets.
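The same directory can also be created from the AWS CLI; a rough sketch with a placeholder password and placeholder VPC/subnet IDs:

#create an AWS Managed Microsoft AD (Standard edition) in two private subnets
aws ds create-microsoft-ad --name directory.example.com --password 'Sup3rSecretAdminPwd!' --edition Standard --vpc-settings VpcId=vpc-0abc1234def567890,SubnetIds=subnet-0aaa1111bbb2222cc,subnet-0ddd3333eee4444ff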

Step 2: Create a private hosted zone named example.com and add a record named directory.example.com of type A, specifying the directory DNS IPs as the value.
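If you prefer the CLI for this step, the hosted zone and record can be created roughly like this (the zone ID, VPC ID and DNS IPs below are placeholders; use the values from your directory):

#create a private hosted zone for example.com associated with the VPC
aws route53 create-hosted-zone --name example.com --vpc VPCRegion=us-east-1,VPCId=vpc-0abc1234def567890 --caller-reference "$(date +%s)"
#point directory.example.com at the directory DNS IPs
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"directory.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"10.0.1.10"},{"Value":"10.0.2.10"}]}}]}'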

Step 3: Follow https://docs.aws.amazon.com/directoryservice/latest/admin-guide/microsoftadbasestep3.html to create an EC2 instance to manage our AD.

Step…


Recently we had a scenario where we needed to allow developers access to the Parameter Store so that they could check and update parameters. Since developers may not be that comfortable working with the AWS CLI, we decided to generate temporary IAM users with console access. In this post, we’ll see how we can create these temporary users and then delete them after some time.
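Under the hood, the lifecycle of such a temporary user boils down to a handful of IAM calls, which we’ll wire into Lambda and Step Functions. A rough sketch with a hypothetical user name and password (the parameter policy is created in Step 1 below):

#create the temporary user and give it console access
aws iam create-user --user-name temp-dev-user
aws iam create-login-profile --user-name temp-dev-user --password 'Temp0raryPwd!' --password-reset-required
#attach the Parameter Store policy created in Step 1
aws iam attach-user-policy --user-name temp-dev-user --policy-arn arn:aws:iam::123456789012:policy/parameter
#later, tear the user down
aws iam detach-user-policy --user-name temp-dev-user --policy-arn arn:aws:iam::123456789012:policy/parameter
aws iam delete-login-profile --user-name temp-dev-user
aws iam delete-user --user-name temp-dev-user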

Note: CloudTrail must be enabled in the us-east-1 region, and the CloudWatch Events rule, Lambda function and Step Function also need to be set up in us-east-1.

Step 1: Create a policy named parameter with the following permissions. …


In https://faun.pub/using-iam-authorizer-with-api-gateway-4f3ae2292491 we saw how we can use an IAM authorizer with API Gateway to allow users to invoke a Lambda function to start/stop an EC2 instance. In this post, we’ll see how we can implement the same solution with Cognito, ALB and Lambda.

Step 1: Launch an EC2 instance and add an Env: Dev tag to it. Our Lambda function will check for this tag to start/stop the instance.
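The tag can be added from the console or the CLI, for example (the instance ID is a placeholder):

#tag the instance so the Lambda function can identify it
aws ec2 create-tags --resources i-0abc1234def567890 --tags Key=Env,Value=Dev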

Step 2: Create a Lambda function which we’ll use as our ALB target. Use the Python 3.7 runtime and the code given at https://raw.githubusercontent.com/vinycoolguy2015/awslambda/master/alb_ec2_start_stop.py

Make sure you set the…


In this post, we’ll see how we can utilize AWS Endpoint Services to securely expose our applications to other AWS accounts. Traditionally we used to do this with VPC peering, but I find Endpoint Services a more elegant solution for this use case.

Note: We’ll need 2 AWS accounts for this setup. Account A will be used for creating our application and the Endpoint Service, while Account B will be used for creating the Endpoint and accessing the application running in Account A.
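At a high level, Account A publishes an Endpoint Service backed by a Network Load Balancer, and Account B creates an interface endpoint against it. A rough sketch of the two key calls (the ARNs and IDs below are placeholders):

#Account A: publish the endpoint service backed by the internal NLB
aws ec2 create-vpc-endpoint-service-configuration --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/app-nlb/0123456789abcdef --acceptance-required
#Account B: create an interface endpoint to that service
aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 --vpc-id vpc-0abc1234def567890 --subnet-ids subnet-0aaa1111bbb2222cc --security-group-ids sg-0bbb2222ccc3333dd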

Steps to be performed in Account A

1: Launch an instance using Amazon Linux 2 AMI in the private subnet, using…


In https://vinayakpandey-7997.medium.com/configure-openvpn-server-to-access-private-ec2-instances-2b0a51970042, we saw how we can use OpenVPN to access our private AWS resources. However, in that setup we exposed our OpenVPN server’s public IP, which is not a good fit for a production setup.

In this post, we’ll see how we can run our VPN server behind an ALB and use an NLB for client-server communication. This way, we don’t need to associate or expose a public IP for our OpenVPN server.

Step 1: Launch an OpenVPN Access Server from the AWS Marketplace. We don’t need any public IP for this instance. …
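For reference, putting an NLB in front of the instance generally looks something like the following; the names, IDs and port below are placeholders, and the port should match whatever your OpenVPN daemon listens on:

#create an internet-facing network load balancer in the public subnets
aws elbv2 create-load-balancer --name openvpn-nlb --type network --scheme internet-facing --subnets subnet-0aaa1111bbb2222cc subnet-0ddd3333eee4444ff
#create a TCP target group in the VPC and register the OpenVPN instance
aws elbv2 create-target-group --name openvpn-tg --protocol TCP --port 443 --target-type instance --vpc-id vpc-0abc1234def567890
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-0abc1234def567890
#forward TCP 443 on the NLB to the target group
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 443 --default-actions Type=forward,TargetGroupArn=<target-group-arn>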


In this post, we’ll see how we can use the AWS IAM auth method provided by Vault to authenticate our client with the Vault server and receive a token to access some secrets.

Step 1: Launch 2 EC2 instances with the Amazon Linux 2 AMI. Allow all traffic between these 2 instances. We’ll use one of them as the Vault server and the other as the Vault client.

Step 2: Install Vault on both instances using the following commands:

#install yum-utils (provides yum-config-manager) in case it is not already present, then add the HashiCorp repo and install Vault
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install vault

Step 3: Start Vault on the Vault server instance using the following command. …
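For context, once the server is initialized and unsealed, the AWS IAM auth flow we’ll build in the later steps typically looks like this (the role name, policy and IAM ARN below are hypothetical):

#on the server: enable the AWS auth method and bind a role to the client's IAM principal
vault auth enable aws
vault write auth/aws/role/dev-role auth_type=iam bound_iam_principal_arn=arn:aws:iam::123456789012:role/vault-client-role policies=dev-secrets
#on the client: log in using its instance credentials and receive a Vault token
vault login -method=aws role=dev-role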

Vinayak Pandey
