Thursday, April 30, 2015

AlertLogic Whitepaper: Understanding AWS Shared Security Model

As you may know, AWS shares security responsibility with the consumers of its IaaS services. In terms of ownership, the diagram below essentially outlines the parts that AWS is responsible for and the parts that consumers of its services are responsible for:-

[Diagram: AWS Shared Responsibility Model]

The whitepaper from AlertLogic outlines the following seven best practices:-

SEVEN BEST PRACTICES FOR CLOUD SECURITY

There are seven key best practices for cloud security that you should implement in order to protect yourself from the next vulnerability or wide-scale attack:

1. SECURE YOUR CODE
Securing code is 100% your responsibility, and hackers are continually looking for ways to compromise your applications. Code that has not been thoroughly tested and secured makes it all the easier for them to do harm. Make sure that security is part of your software development lifecycle: test your libraries, scan your plugins, etc.

2. CREATE AN ACCESS MANAGEMENT POLICY
Logins are the keys to your kingdom and should be treated as such. Make sure you have a solid access management policy in place, especially concerning those who are granted access on a temporary basis. Integrating all applications and cloud environments into your corporate AD or LDAP centralized authentication model will help with this process, as will two-factor authentication.

3. ADOPT A PATCH MANAGEMENT APPROACH
Unpatched software and systems can lead to major issues; keep your environment secure by outlining a process for updating your systems on a regular basis. Consider developing a checklist of important procedures, and test all updates to confirm that they do not break anything or create vulnerabilities before rolling them into your live environment.

4. LOG MANAGEMENT
Log reviews should be an essential component of your organization's security protocols. Logs are now useful for far more than compliance; they become a powerful security tool. You can use log data to monitor for malicious activity and to support forensic investigation.

5. BUILD A SECURITY TOOLKIT
No single piece of software is going to handle all of your security needs. You have to implement a defence-in-depth strategy that covers all of your responsibilities in the stack: iptables, web application firewalls, antivirus, intrusion detection, encryption and log management (a short iptables sketch follows this list of practices).

6. STAY INFORMED
Stay informed of the latest vulnerabilities that may affect you; the internet is a wealth of information. Use it to your advantage and search for the breaches and exploits that are happening in your industry.

7. UNDERSTAND YOUR CLOUD SERVICE PROVIDER SECURITY MODEL
Finally, as discussed, get to know your provider, understand where the lines of responsibility are drawn, and plan accordingly. Cyber attacks are going to happen; vulnerabilities and exploits are going to be identified. A solid defence-in-depth strategy, coupled with the right tools and people who understand how to respond, will put you in a position to minimise your exposure and risk.
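
As a small illustration of the host-level layer in practice 5, a few iptables rules might look like the sketch below. The ports and source ranges are assumptions (an admin network of 10.0.0.0/24 and a web workload on 80/443); adapt them to your own environment and pair them with the other tools listed above.

*************
# allow return traffic for established connections
$ sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow SSH only from an assumed admin network (10.0.0.0/24 is a placeholder)
$ sudo iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/24 -j ACCEPT
# allow inbound HTTP/HTTPS for the web tier
$ sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
# drop everything else inbound
$ sudo iptables -P INPUT DROP
*************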

Monday, April 27, 2015

Resizing an XFS root volume of an EC2 HVM instance

On many occasions you may find that the originally allocated size of the root volume is not sufficient and you will need to resize the volume. For the purposes of this example, we will begin by creating a new instance with a 10G root volume, which we will expand to 20G.

1. Create a new instance

*************
$aws ec2 run-instances --image-id ami-12663b7a --count 1 --instance-type t2.micro --key-name <key_name> --security-group-ids sg-7ad9a61e --subnet-id subnet-4d8df83a --associate-public-ip-address
*************

2. From the output of the "run-instances" command, note the volume id (vol-292a653f), snapshot id (snap-a2948fc3) and availability zone (us-east-1d) of the volume and instance
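
The snapshot id and availability zone can also be read back from the volume itself with "describe-volumes"; the volume id below is the one noted above, and the --query expression is just one convenient way to filter the JSON output.

*************
$aws ec2 describe-volumes --volume-ids vol-292a653f --query 'Volumes[0].[SnapshotId,AvailabilityZone,Size]'
*************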

3. Now create a new 20g volume from the snapshot in step #2

*************
$aws ec2 create-volume --snapshot-id snap-a2948fc3 --size 20 --availability-zone us-east-1d --volume-type gp2 
{
    "AvailabilityZone": "us-east-1d",
    "Encrypted": false,
    "VolumeType": "gp2",
    "VolumeId": "vol-b8347bae",
    "State": "creating",
    "Iops": 60,
    "SnapshotId": "snap-a2948fc3",
    "CreateTime": "2015-04-26T05:05:24.865Z",
    "Size": 20
}
*************

4. Attach the new 20G volume to a running instance as a secondary device

*************
$aws ec2 attach-volume --volume-id vol-b8347bae --instance-id i-6a452e97 --device /dev/xvdf 
{
    "AttachTime": "2015-04-26T05:08:26.001Z",
    "InstanceId": "i-6a452e97",
    "VolumeId": "vol-b8347bae",
    "State": "attaching",
    "Device": "/dev/xvdf"
}
*************

5. Describe instances to check whether the new volume is attached

*************
$aws ec2 describe-instances --filters Name=instance-id,Values=i-6a452e97 --output table

+------------------------+-------------------------------
||                  BlockDeviceMappings                 |
|+---------------------------+--------------------------+
||  DeviceName               |  /dev/sda1               |
|+---------------------------+--------------------------+
|||                         Ebs                        ||
||+----------------------+-----------------------------+|
|||  AttachTime          |  2015-04-26T04:23:35.000Z   ||
|||  DeleteOnTermination |  True                       ||
|||  Status              |  attached                   ||
|||  VolumeId            |  vol-292a653f               ||
||+----------------------+-----------------------------+|
||                  BlockDeviceMappings                 |
|+---------------------------+--------------------------+
||  DeviceName               |  /dev/xvdf               |
|+---------------------------+--------------------------+
|||                         Ebs                        ||
||+----------------------+-----------------------------+|
|||  AttachTime          |  2015-04-26T05:08:25.000Z   ||
|||  DeleteOnTermination |  False                      ||
|||  Status              |  attached                   ||
|||  VolumeId            |  vol-b8347bae               ||
||+----------------------+-----------------------------+|
*************

6. Now log into the EC2 instance where the two volumes are attached and confirm the volume attachments

*************
$ sudo cat /proc/partitions
major minor  #blocks  name

 202        0   10485760 xvda
 202        1       1024 xvda1
 202        2   10483695 xvda2
 202       80   20971520 xvdf
 202       81       1024 xvdf1
 202       82   10483695 xvdf2
$ ls -l /dev/xvd*
brw-rw----. 1 root disk 202,  0 Apr 26 00:24 /dev/xvda
brw-rw----. 1 root disk 202,  1 Apr 26 00:24 /dev/xvda1
brw-rw----. 1 root disk 202,  2 Apr 26 00:24 /dev/xvda2
brw-rw----. 1 root disk 202, 80 Apr 26 01:08 /dev/xvdf
brw-rw----. 1 root disk 202, 81 Apr 26 01:08 /dev/xvdf1
brw-rw----. 1 root disk 202, 82 Apr 26 01:08 /dev/xvdf2
*************

7. Since the block device has two partitions, you can run the fdisk command to inspect them

*************
$ sudo fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/xvda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


#         Start          End    Size  Type            Name
 1         2048         4095      1M  BIOS boot parti
 2         4096     20971486     10G  Microsoft basic

Disk /dev/xvdf: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
*************

8. Check the size of the partitions using the "lsblk" command and see whether the second partition can be expanded

*************
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  10G  0 disk
+-xvda1 202:1    0   1M  0 part
+-xvda2 202:2    0  10G  0 part /
xvdf    202:80   0  20G  0 disk
+-xvdf1 202:81   0   1M  0 part
+-xvdf2 202:82   0  10G  0 part
**************

9. Now if you try to mount the data partition of the second EBS volume, you may get an error

**************
$ sudo mount /dev/xvdf2 /vol
mount: /dev/xvdf1 is write-protected, mounting read-only
mount: unknown filesystem type '(null)'
**************

10. You can check the dmesg output for more information about the above mount error

**************
$ dmesg |tail
[ 5233.363312] XFS (xvdf2): Filesystem has duplicate UUID 6785eb86-c596-4229-85fb-4d30c848c6e8 - can't mount
**************

11. Since the above error indicates a duplicate UUID on the second volume, you can generate a new one

**************
$ sudo xfs_admin -U generate /dev/xvdf2
Clearing log and setting UUID
writing all SBs
new UUID = 59c3b4c4-ca99-45f0-9c25-ffd7bbc93581
**************

NOTE - If you would like to temporarily mount the volume without changing its UUID, you can run the "mount -o nouuid /dev/xvdf2 /vol" command.

12. Verify that unique UUIDs are now present on both EBS volumes

**************
$ blkid
/dev/xvda2: UUID="6785eb86-c596-4229-85fb-4d30c848c6e8" TYPE="xfs" PARTUUID="e8c8ba12-3669-4698-b59b-2db878461f9a"
/dev/xvdf2: UUID="59c3b4c4-ca99-45f0-9c25-ffd7bbc93581" TYPE="xfs" PARTUUID="e8c8ba12-3669-4698-b59b-2db878461f9a"
**************

13. Verify that xfs_info recognizes the filesystem on the volume

**************
$ sudo xfs_info /dev/xvdf2
meta-data=/dev/xvdf2             isize=256    agcount=7, agsize=393216 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2620923, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
**************

14. Unmount the volume before running gdisk or parted to expand the partition

*************
$ sudo umount /dev/xvdf /vol
umount: /dev/xvdf: not mounted
*************

15. Follow the steps outlined in AWS documentation for expanding the volume using gdisk (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage_expand_partition.html#expanding-partition-gdisk)

16. Run gdisk on /dev/xvdf

*************
$ sudo gdisk /dev/xvdf
GPT fdisk (gdisk) version 0.8.6

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/xvdf: 41943040 sectors, 20.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): E804A732-997D-4B6A-B0F4-652EBF839AFB
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 20971486
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02
   2            4096        20971486   10.0 GiB    0700

Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y

Command (? for help): p
Disk /dev/xvdf: 41943040 sectors, 20.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 922D5EF3-D83A-4395-AEEE-0019A36FB2E0
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 41943006
Partitions will be aligned on 2048-sector boundaries
Total free space is 41942973 sectors (20.0 GiB)


Number  Start (sector)    End (sector)  Size       Code  Name

Command (? for help): n
Partition number (1-128, default 1): 1
First sector (34-41943006, default = 2048) or {+-}size{KMGTP}: 2048
Last sector (2048-41943006, default = 41943006) or {+-}size{KMGTP}: 4095
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): EF02
Changed type of partition to 'BIOS boot partition'

Command (? for help): n
Partition number (2-128, default 2): 2
First sector (34-41943006, default = 4096) or {+-}size{KMGTP}: 4096
Last sector (4096-41943006, default = 41943006) or {+-}size{KMGTP}: 41943006
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 0700
Changed type of partition to 'Microsoft basic data'

Command (? for help): x

Expert command (? for help): g
Enter the disk's unique GUID ('R' to randomize): E804A732-997D-4B6A-B0F4-652EBF839AFB
The new disk GUID is E804A732-997D-4B6A-B0F4-652EBF839AFB

Expert command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/xvdf.
The operation has completed successfully.

*************

Instead of gdisk, you can also use a tool like "parted"; the steps would be as below:-

*************
$sudo parted /dev/xvdf
GNU Parted 3.1
Using /dev/xvdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) p
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 41943040s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End    Size   File system  Name                 Flags
 1      2048s  4095s  2048s               BIOS boot partition  bios_grub

(parted) mkpart Linux 4096s 100%
(parted) p
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 41943040s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End        Size       File system  Name                 Flags
 1      2048s  4095s      2048s                   BIOS boot partition  bios_grub

 2      4096s  41940991s  41936896s  xfs          Linux

(parted) q
Information: You may need to update /etc/fstab.

*************
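
As parted hints, you may want the resized volume mounted persistently rather than by hand. A sketch of an /etc/fstab entry is below; the UUID is the one generated by xfs_admin in step 11, and the nofail option is an assumption to keep the instance bootable if the volume is ever detached.

*************
# hypothetical /etc/fstab entry: mount the resized XFS partition at /vol by UUID
UUID=59c3b4c4-ca99-45f0-9c25-ffd7bbc93581   /vol   xfs   defaults,nofail   0 2
*************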

17. Confirm whether the volume partitions have been resized

*************
$sudo lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   10G  0 disk
├─xvda1 202:1    0  512B  0 part
└─xvda2 202:2    0   10G  0 part /
xvdf    202:80   0   20G  0 disk
├─xvdf1 202:81   0    1M  0 part
└─xvdf2 202:82   0   20G  0 part
*************

18. Now you can mount the volume back to the instance

*************
$mount -n /dev/xvdf2 /vol
*************

19. Now grow the filesystem using the xfs_growfs utility

*************
$sudo xfs_growfs /dev/xvdf2
meta-data=/dev/xvdf2             isize=256    agcount=14, agsize=393216 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=5242112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
*************

20. Confirm that the mount point (/vol) is now reflecting the correct size

*************
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       10G  867M  9.2G   9% /
devtmpfs        480M     0  480M   0% /dev
tmpfs           497M     0  497M   0% /dev/shm
tmpfs           497M   13M  484M   3% /run
tmpfs           497M     0  497M   0% /sys/fs/cgroup
/dev/xvdf2       20G  866M   20G   5% /vol

*************

Thursday, April 23, 2015

Running Nginx+ on AWS

Nginx+ attempts to fill some of the gaps present in the vanilla ELB offering from AWS. Nginx+ is the enterprise version of the Nginx open source reverse proxy. For details on Nginx+ you can look up:

http://nginx.com/products/

Scott Ward @AWS published a whitepaper on running Nginx+ on AWS; the link is below:-

http://d0.awsstatic.com/whitepapers/AWS_NGINX_Plus-whitepaper-final_v4.pdf


Sunday, April 12, 2015

VPC Peering in Amazon AWS

VPC peering is a useful option in cases where a dedicated VPN tunnel need not be established between two VPC networks. Typically, the use-case this scenario fits best is where there are multiple products and each product is operationally managed by a different ops team. In such cases the two ops teams can install their products in a common region in two separate VPCs and peer the two, so that instances in either VPC can talk to each other. However, before proceeding, it would be good to refer to the AWS VPC peering docs below:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html

VPC Peering Limitations

To create a VPC peering connection with another VPC, you need to be aware of the following limitations and rules:
  • You cannot create a VPC peering connection between VPCs that have matching or overlapping CIDR blocks.
  • You cannot create a VPC peering connection between VPCs in different regions.
  • You have a limit on the number of active and pending VPC peering connections that you can have per VPC. For more information about VPC limits, see Amazon VPC Limits.
  • VPC peering does not support transitive peering relationships; in a VPC peering connection, your VPC will not have access to any other VPCs that the peer VPC may be peered with. This includes VPC peering connections that are established entirely within your own AWS account. For more information and examples of peering relationships that are supported, see the Amazon VPC Peering Guide.
  • You cannot have more than one VPC peering connection between the same two VPCs at the same time.
  • The Maximum Transmission Unit (MTU) across a VPC peering connection is 1500 bytes.
  • A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs. For more information about placement groups, see Placement Groups in the Amazon EC2 User Guide for Linux Instances.
  • Unicast reverse path forwarding in VPC peering connections is not supported. For more information, see Routing for Response Traffic in the Amazon VPC Peering Guide.
  • You cannot reference a security group from the peer VPC as a source or destination for ingress or egress rules in your security group. Instead, reference CIDR blocks of the peer VPC as the source or destination of your security group's ingress or egress rules.
  • Private DNS values cannot be resolved between instances in peered VPCs.

To establish VPC peering, you can use the following set of steps:-

1. Create a VPC peering connection

****************
$aws ec2 create-vpc-peering-connection --vpc-id <source vpc id> --peer-vpc-id <peer vpc id> --peer-owner-id <aws account number of peer vpc> 
{
    "VpcPeeringConnection": {
        "Status": {
            "Message": "Initiating Request to <dest vpc aws account #>",
            "Code": "initiating-request"
        },
        "Tags": [],
        "RequesterVpcInfo": {
            "OwnerId": "<aws account number of source vpc>",
            "VpcId": "<source vpc id>",
            "CidrBlock": "198.162.0.0/28"
        },
        "VpcPeeringConnectionId": "pcx-2f5ba346",
        "ExpirationTime": "2015-04-19T04:54:55.000Z",
        "AccepterVpcInfo": {
            "OwnerId": "<aws account number of source vpc>",
            "VpcId": "<peer vpc id>"
        }
    }
}
****************

2. Accept the VPC peering connection from the second VPC

****************
$aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-2f5ba346 
{
    "VpcPeeringConnection": {
        "Status": {
            "Message": "Provisioning",
            "Code": "provisioning"
        },
        "Tags": [],
        "AccepterVpcInfo": {
            "OwnerId": "<aws account number of source vpc>",
            "VpcId": "<peer vpc id>",
            "CidrBlock": "10.98.0.0/16"
        },
        "VpcPeeringConnectionId": "pcx-2f5ba346",
        "RequesterVpcInfo": {
            "OwnerId": "<aws account number of source vpc>",
            "VpcId": "<source vpc id>",
            "CidrBlock": "198.162.0.0/28"
        }
    }
}
****************

3. Add a route for the peering connection to the route tables of both subnets in which the instances are located

****************
$aws ec2 create-route --route-table-id rtb-348ebf51 --destination-cidr-block 10.98.0.0/16 --vpc-peering-connection-id pcx-2f5ba346

$aws ec2 create-route --route-table-id rtb-54df1f31 --destination-cidr-block 198.162.0.0/28 --vpc-peering-connection-id pcx-2f5ba346 

****************

4. Check if the peering connection is active

****************
$aws ec2 describe-vpc-peering-connections --profile apix
{
    "VpcPeeringConnections": [
        {
            "Status": {
                "Message": "Active",
                "Code": "active"
            },
            "Tags": [],
            "AccepterVpcInfo": {
                "OwnerId": "<peer aws a/c #>",
                "VpcId": "<peer vpc id>",
                "CidrBlock": "10.98.0.0/16"
            },
            "VpcPeeringConnectionId": "pcx-2f5ba346",
            "RequesterVpcInfo": {
                "OwnerId": "<src aws a/c #>",
                "VpcId": "<source vpc id>",
                "CidrBlock": "198.162.0.0/28"
            }
        }
    ]
}
****************

5. Modify the ingress security group rules for each instance to add the other instance's **private IP** address

****************
$aws ec2 authorize-security-group-ingress --group-id sg-6e9c550a --protocol tcp --port 22 --cidr 198.162.0.7/32
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 22 --cidr 10.98.1.108/32
****************

NOTE - Some of the instances that you try to connect to in the peer VPC may be in a subnet that has an internet gateway (igw) in its route table. In such cases, the instances may have a public or private DNS name. However, VPC peering will not resolve private DNS names, as per the limitation documented in the AWS docs linked above. If you would still like to have a hostname for the instances, you will need to add a mapping in the /etc/hosts file of each instance in the peered VPCs.
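
For example, an entry of the following form on the requester-side instance would work; the hostname is purely illustrative, and the IP is the peer instance's private address used in the steps above.

****************
# /etc/hosts entry (hostname is illustrative)
10.98.1.108   peer-app-host
****************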

6. Check using nmap whether the SSH port is open and not "filtered"

****************
[ec2-user@ip-198-162-0-7 ~]$ nmap -p 22 10.98.1.108 -P0

Starting Nmap 6.40 ( http://nmap.org ) at 2015-04-12 16:08 EDT
Nmap scan report for 10.98.1.108
Host is up (0.00077s latency).
PORT   STATE SERVICE
22/tcp open  ssh

****************
While VPC peering is useful where the overhead of a VPN tunnel is unnecessary, it is possible to run into the 1500-byte MTU limitation on data transfer between the peered VPCs.
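
One quick way to confirm the usable MTU across the peering connection is a do-not-fragment ping between the peered instances; 1472 bytes of ICMP payload plus 28 bytes of headers corresponds to a full 1500-byte packet. The target below is the peer instance's private IP from the earlier steps.

****************
$ ping -M do -s 1472 -c 3 10.98.1.108
****************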


Saturday, April 11, 2015

Creating and launching instances into Amazon VPC through AWS CLI

1. Create VPC:-

***************
$aws ec2 create-vpc --cidr-block 198.162.0.0/28 --query Vpc.VpcId 
"vpc-89e1b9ec"

***************
NOTE - A /28 is the smallest CIDR block allowed for an Amazon VPC. You cannot usefully subdivide this VPC into private and public subnets, because each subnet has 5 reserved host addresses (16 - 5 = 11 available hosts in this VPC). Refer to the AWS documentation - http://aws.amazon.com/vpc/faqs/

2. Create Internet Gateway:-

***************
$aws ec2 create-internet-gateway 
{
    "InternetGateway": {
        "Tags": [],
        "InternetGatewayId": "igw-958019f0",
        "Attachments": []
    }
}
***************

3. Attach the Internet Gateway to the VPC:-

***************
$aws ec2 attach-internet-gateway --internet-gateway-id igw-958019f0 --vpc-id vpc-89e1b9ec 
***************

4. Create a Subnet:-

***************
$aws ec2 create-subnet --vpc-id vpc-89e1b9ec --cidr-block 198.162.0.0/28 
{
    "Subnet": {
        "VpcId": "vpc-89e1b9ec",
        "CidrBlock": "198.162.0.0/28",
        "State": "pending",
        "AvailabilityZone": "us-east-1d",
        "SubnetId": "subnet-4d8df83a",
        "AvailableIpAddressCount": 11
    }
}
***************
NOTE - The first 4 and the last host addresses in each subnet are reserved for AWS use. As expected, "AvailableIpAddressCount" shows 11 available host addresses instead of 16.


5. Create a Route Table that will be associated with the subnet

***************
$aws ec2 create-route-table --vpc-id vpc-89e1b9ec 
{
    "RouteTable": {
        "Associations": [],
        "RouteTableId": "rtb-348ebf51",
        "VpcId": "vpc-89e1b9ec",
        "PropagatingVgws": [],
        "Tags": [],
        "Routes": [
            {
                "GatewayId": "local",
                "DestinationCidrBlock": "198.162.0.0/28",
                "State": "active",
                "Origin": "CreateRouteTable"
            }
        ]
    }
}
***************

6. Associate the Route Table with the subnet in that VPC

***************
$aws ec2 associate-route-table --route-table-id rtb-348ebf51 --subnet-id subnet-4d8df83a 
{
    "AssociationId": "rtbassoc-bd3ddcd9"
}
***************

7. Add a default route to the route table pointing at the Internet Gateway so that traffic is allowed between the internet and the instances in the VPC.

***************
$aws ec2 create-route --route-table-id rtb-348ebf51 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-958019f0 

***************

8. Create security group within VPC

***************
$aws ec2 create-security-group --vpc-id vpc-89e1b9ec --group-name TestVPC-sg --description "Test VPC sg" 
{
    "GroupId": "sg-7ad9a61e"
}
***************

9. Create inbound security group rules 

***************
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 22 --cidr 0.0.0.0/0 
***************

10. Launch an instance into the test VPC

***************
$aws ec2 run-instances --image-id ami-12663b7a --count 1 --instance-type t2.micro --key-name <key-name> --security-group-ids sg-7ad9a61e --subnet-id subnet-4d8df83a --associate-public-ip-address
{
    ...
    "Instances": [
        {
          ...
            "State": {
                "Code": 0,
                "Name": "pending"
            },
            ...
            "InstanceId": "i-52a60faf",
            "ImageId": "ami-12663b7a",
            ....
            "InstanceType": "t2.micro",
            ....
            "RootDeviceName": "/dev/sda1",
            "VirtualizationType": "hvm",
            ...
        }
    ]
}

***************
11. Run describe-instances to check whether the instance is in the "running" state

***************
$aws ec2 describe-instances --instance-ids i-52a60faf --query Reservations[0].Instances[0].{State:State}
{
    "State": {
        "Code": 16,
        "Name": "running"
    }
}
***************
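
Alternatively, the CLI ships a built-in waiter that polls until the instance reaches the "running" state (assuming a reasonably recent version of awscli):

***************
$aws ec2 wait instance-running --instance-ids i-52a60faf
***************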

12. Get the public IP address of the instance

***************
$aws ec2 describe-instances --instance-ids i-52a60faf --query Reservations[0].Instances[0].PublicIpAddress --output text

***************

Now you should be able to log into the instance using the key pair and its public IP address, for example:
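
For example, assuming an AMI whose default login user is ec2-user (true of Amazon Linux and RHEL images) and the key pair supplied at launch:

***************
$ssh -i <key-name>.pem ec2-user@<public ip>
***************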