Monday, March 30, 2015

Granting billing access to IAM users

If you are the root account holder, IAM users in that account cannot, by default, view billing or account preference information (even if you grant them administrator access). As mentioned in the AWS docs:

http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/control-access-billing.html

the relevant checkbox under My Account -> Account Settings needs to be enabled if you want IAM users to view billing. Access can be further restricted with user-level IAM inline policies or group-level policies.



Saturday, March 28, 2015

Sorting contents of an S3 bucket via the AWS CLI since the AWS console doesn't allow sorting today

In the AWS console, the columns do not sort when you click on them.


So the current way to sort is to use the AWS CLI with the "sort_by" function as below:

$aws s3api list-objects --bucket sample-bucket --max-items 3 --query "sort_by(Contents, &LastModified)[*].{Key:Key,Size:Size,LastModified:LastModified}"
[
    {
        "LastModified": "2015-02-04T10:55:29.000Z",
        "Key": "sample1.csv",
        "Size": 27848
    },
    {
        "LastModified": "2015-02-04T10:55:29.000Z",
        "Key": "sample2.csv",
        "Size": 469266754
    },
    {
        "LastModified": "2015-02-04T10:55:44.000Z",
        "Key": "sample3.csv",
        "Size": 469529478
    }
]

Since s3api does not have a "sort-order" switch, unlike some of the other APIs, you will have to use the "reverse" function as below:

$aws s3api list-objects --bucket sample-bucket --max-items 3 --query "reverse(sort_by(Contents, &LastModified)[*].[Key, LastModified])" --output table

sample3.csv  2015-02-04T10:55:44.000Z
sample2.csv  2015-02-04T10:55:29.000Z
sample1.csv  2015-02-04T10:55:29.000Z
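
Both queries can be approximated in plain Python, which makes the sorting semantics explicit. A sketch over a hypothetical sample of the "Contents" array returned by list-objects:

```python
# Hypothetical sample of the "Contents" array from list-objects.
contents = [
    {"Key": "sample3.csv", "Size": 469529478, "LastModified": "2015-02-04T10:55:44.000Z"},
    {"Key": "sample1.csv", "Size": 27848, "LastModified": "2015-02-04T10:55:29.000Z"},
    {"Key": "sample2.csv", "Size": 469266754, "LastModified": "2015-02-04T10:55:29.000Z"},
]

# Equivalent of sort_by(Contents, &LastModified): ISO-8601 timestamps sort
# correctly as plain strings, so a lexicographic key is sufficient.
ascending = sorted(contents, key=lambda obj: obj["LastModified"])

# Equivalent of reverse(sort_by(...)): newest objects first.
descending = list(reversed(ascending))
```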

For more details on the JMESPath JSON query language specification, please refer to

http://jmespath.org/

Four ways to secure your S3 object storage


  1. IAM user policies - http://docs.aws.amazon.com/AmazonS3/latest/dev/example-policies-s3.html
  2. ACLs - http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
  3. Bucket policies - http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
  4. Query String authentication - http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
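
To illustrate option 4, the core of Signature Version 4 query-string authentication is deriving a per-date signing key from your secret access key via a chain of HMAC-SHA256 operations. A minimal sketch (the secret below is a throwaway example, and a real presigned URL additionally needs a canonical request and string-to-sign):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Chain HMAC-SHA256 over date (YYYYMMDD), region, and service name."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Throwaway example secret, not a real credential.
key = sigv4_signing_key("EXAMPLEKEY", "20150328", "us-east-1", "s3")
```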

Sunday, March 22, 2015

IAM policy to allow AWS console view to "Preferences"

If you are using tags on your EC2 instances, you will want to look at the "manage resources and tags" option under Preferences. If you would like to give a particular user (e.g., a linked user account) access to the consolidated billing account, but restrict all other views, you can use an IAM policy like the one below to allow access:


*******************

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aws-portal:ViewBilling",
                "aws-portal:ModifyBilling",
                "aws-portal:ViewAccount"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "aws-portal:ViewPaymentMethods",
                "aws-portal:ModifyPaymentMethods",
                "aws-portal:ModifyAccount"
            ],
            "Resource": "*"
        }
    ]
}


****************************
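
As a quick sanity check, the policy above can be loaded with the standard json module to confirm that the billing views are allowed while the payment-method actions are denied (a sketch; the policy text is copied from above):

```python
import json

policy = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aws-portal:ViewBilling",
                "aws-portal:ModifyBilling",
                "aws-portal:ViewAccount"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "aws-portal:ViewPaymentMethods",
                "aws-portal:ModifyPaymentMethods",
                "aws-portal:ModifyAccount"
            ],
            "Resource": "*"
        }
    ]
}
""")

# Collect actions by effect; in IAM an explicit Deny always wins over an Allow.
allowed = {a for s in policy["Statement"] if s["Effect"] == "Allow" for a in s["Action"]}
denied = {a for s in policy["Statement"] if s["Effect"] == "Deny" for a in s["Action"]}
```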

Wednesday, March 18, 2015

Debugging a http 504 (Gateway Timeout) issue from Amazon ELB logs

Sometimes when you are testing an application that is hosted on AWS and fronted by an ELB, you may see a "504 Gateway Timeout" in the browser.


The sequence of events that leads up to the 504 client-side error is as follows:

1) Client connects to the ELB and submits an HTTP request
2) ELB picks a backend and forwards the request to it.
3) The backend receives the request and starts processing it
4) While the backend is still processing the request, the client gives up waiting and closes the connection (this can sometimes be reported as a 504 by proxies and other HTTP libraries)
5) The backend finishes processing the request and replies to the ELB
6) As the client has closed the connection, the ELB can't pass the response on to the client and instead logs a 408 error.
7) A network packet capture on the client machine should show the client socket timing out.

For details on the various ELB error codes, you can refer to

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ts-elb-error-message.html

In the ELB logs, you will see

*************
2015-03-18T14:55:24.507386Z <ELB NAME> <Client IP>:51305 - -1 -1 -1 408 0 0 0 "GET <URL> HTTP/1.1"
*************

Note the "-1" values reported by the ELB - they indicate that the client socket was closed, so the request/response timings are reported as "-1" and the byte counts as "0". The AWS ELB team is looking into addressing this in the future. A similar log entry for an HTTP 404 looks like

************
2015-03-18T14:00:15.618471Z <ELB NAME> <Client IP>:50855 <Instance in ELB pool IP>:<Listen Port> 0.000071 0.010108 0.000021 404 404 0 2096 "GET <URL> HTTP/1.1"

************
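
The difference between the two log entries can be checked programmatically. A sketch of a parser for the classic ELB access-log fields (the sample values below are hypothetical placeholders for the redacted ones above):

```python
def parse_elb_log_line(line: str) -> dict:
    """Split a classic ELB access-log line into its fields.
    Format: timestamp elb client:port backend:port request_time backend_time
    response_time elb_status backend_status received_bytes sent_bytes "request"."""
    fields = line.split(" ", 11)
    return {
        "timestamp": fields[0],
        "elb": fields[1],
        "client": fields[2],
        "backend": fields[3],
        "request_time": float(fields[4]),
        "backend_time": float(fields[5]),
        "response_time": float(fields[6]),
        "elb_status": int(fields[7]),
        "backend_status": fields[8],
        "request": fields[11].strip('"'),
    }

def client_gave_up(entry: dict) -> bool:
    # A closed client socket shows up as backend "-", -1 timings, and a 408.
    return (entry["backend"] == "-" and entry["request_time"] == -1
            and entry["elb_status"] == 408)

line_408 = '2015-03-18T14:55:24.507386Z my-elb 10.0.0.1:51305 - -1 -1 -1 408 0 0 0 "GET /index.html HTTP/1.1"'
line_404 = '2015-03-18T14:00:15.618471Z my-elb 10.0.0.1:50855 10.0.1.5:8080 0.000071 0.010108 0.000021 404 404 0 2096 "GET /missing HTTP/1.1"'
```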

Friday, March 13, 2015

Current Limitations of the Amazon ELB

If you are using Amazon ELB to handle production workloads for your SaaS/PaaS customers, you may have encountered the below limitations of Amazon Elastic Load Balancers:

  • No iRules mapping - If you have worked with F5 load balancers, you will already be familiar with iRules (https://devcentral.f5.com/articles/irules-101-01-introduction-to-irules). They allow you to inspect incoming packets and execute rules that redirect traffic. Today, ELB is not "application aware", which means that its load balancing is fairly basic and can only do round-robin balancing (as opposed to weighted load balancing, etc.). There is no option to send traffic to a particular instance based on the client IP header. NOTE - The good news is that the Amazon ELB team is working on a potential solution with high priority, which should be available before the end of this year.

  • No fixed IP for ELB - There are situations where your customer might restrict outbound traffic from their firewall to only certain IP addresses on port 80 or 443. ELBs today have a domain name and public IP addresses (possibly multiple if the ELB spans availability zones). If public IPs like the ones resolved below are whitelisted in the corporate firewall, they will change when ELBs are scaled out or rebooted for maintenance. NOTE - Amazon's workaround today is to whitelist the entire CIDR block of EC2 addresses (which includes ELB addresses - https://aws.amazon.com/blogs/aws/aws-ip-ranges-json/). The other option is to pre-warm the ELBs to estimate the load and then whitelist up to 10 public IP addresses for the 10 ELB instances.

***********
$ host Test-ELB-669438560.us-east-1.elb.amazonaws.com
Test-ELB-669438560.us-east-1.elb.amazonaws.com has address 107.23.x.x
Test-ELB-669438560.us-east-1.elb.amazonaws.com has address 54.236.x.x
***********
  • No mutual authentication for SSL - One of the nice features of ELBs is that they can be made the SSL termination point instead of your reverse proxy or application. There are several benefits to this approach, including reduced configuration on the applications/reverse proxies. However, the downside is that currently we cannot perform mutual authentication as part of the SSL handshake, i.e., we cannot perform client certificate verification, which is necessary to ensure that only valid clients access the application. NOTE - Amazon has indicated that they are aware of this limitation and are working on a potential solution before the end of this year.

Tuesday, March 10, 2015

IOPS as measured by Amazon

AWS doc - http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html
EBS benchmarking for PIOPS - http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/benchmark_piops.html

"IOPS are input/output operations per second. Amazon EBS measures each I/O operation per second (that is 256 KB or smaller) as one IOPS. I/O operations that are larger than 256 KB are counted in 256 KB capacity units. For example, a 1,024 KB I/O operation would count as 4 IOPS. When you provision a 4,000 IOPS volume and attach it to an EBS-optimized instance that can provide the necessary bandwidth, you can transfer up to 4,000 chunks of data per second (provided that the I/O does not exceed the 128 MB/s per volume throughput limit of General Purpose (SSD) and Provisioned IOPS (SSD) volumes).
This configuration could transfer 4,000 32 KB chunks, 2,000 64 KB chunks, or 1,000 128 KB chunks of data per second as well, before hitting the 128 MB/s per volume throughput limit. If your I/O chunks are very large, you may experience a smaller number of IOPS than you provisioned because you are hitting the volume throughput limit. For more information, see Amazon EBS Volume Types.
For 32 KB or smaller I/O operations, you should see the amount of IOPS that you have provisioned, provided that you are driving enough I/O to keep the drives busy. For smaller I/O operations, you may even see an IOPS value that is higher than what you have provisioned (when measured on the client side), and this is because the client may be coalescing multiple smaller I/O operations into a smaller number of large chunks.
If you are not experiencing the expected IOPS or throughput you have provisioned, ensure that your EC2 bandwidth is not the limiting factor; your instance should be EBS-optimized (or include 10 Gigabit network connectivity) and your instance type EBS dedicated bandwidth should exceed the I/O throughput you intend to drive."
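
The 256 KB accounting rule and the per-volume throughput cap from the quote can be written down directly (a sketch; I/O sizes in KB, throughput limit in MB/s):

```python
import math

def ebs_iops_for_io(io_size_kb: float) -> int:
    """Amazon EBS counts each I/O in 256 KB units: an operation of 256 KB or
    smaller is 1 IOPS; larger operations count as ceil(size / 256)."""
    return max(1, math.ceil(io_size_kb / 256))

def max_chunks_per_second(provisioned_iops: int, chunk_kb: int,
                          volume_limit_mb_s: int = 128) -> int:
    """Chunks/sec is capped by both the provisioned IOPS and the per-volume
    throughput limit (128 MB/s for the volume types discussed here)."""
    throughput_cap = volume_limit_mb_s * 1024 // chunk_kb
    return min(provisioned_iops, throughput_cap)
```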

ECC vs non-ECC RAM on Amazon Linux

If you would like to check whether your EC2 instance is running ECC or non-ECC memory, you will have to install the "dmidecode" utility:

$sudo yum install -y dmidecode

Next, you can run

$sudo dmidecode --type memory
# dmidecode 2.12
SMBIOS 2.4 present.

Handle 0x1000, DMI type 16, 15 bytes
Physical Memory Array
        Location: Other
        Use: System Memory
        Error Correction Type: Multi-bit ECC
        Maximum Capacity: 3840 MB
        Error Information Handle: Not Provided
        Number Of Devices: 1

Handle 0x1100, DMI type 17, 21 bytes
Memory Device
        Array Handle: 0x1000
        Error Information Handle: 0x0000
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 3840 MB
        Form Factor: DIMM
        Set: None
        Locator: DIMM 0
        Bank Locator: Not Specified
        Type: RAM
        Type Detail: None

NOTE - If you are enabling ZFS deduplication (the dedup table is stored in RAM) and trying to determine whether to enable it on your EC2 instance with ECC RAM, you may want to read this blog first - http://nex7.blogspot.ch/2013/03/readme1st.html

Sunday, March 8, 2015

Benchmarking zfs on linux using IOZone3 utility

IOZone is a filesystem benchmarking tool. It can be compiled on a variety of operating systems, including Linux. The steps are fairly straightforward.


  • Download the source file - http://www.iozone.org/src/current/iozone3_430.tar
  • tar xvf iozone3_430.tar; cd iozone3_430/src/current
  • make && make linux
After you have compiled successfully, you can run a throughput test

$ ./iozone -t 1
        Iozone: Performance Test of File I/O
                Version $Revision: 3.430 $
                Compiled for 64 bit mode.
                Build: linux

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,

                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                     Vangel Bojaxhi, Ben England, Vikentsi Lapa.

        Run began: Sun Mar  8 21:19:35 2015

        Command line used: ./iozone -t 1
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 1 process
        Each process writes a 512 kByte file in 4 kByte records

        Children see throughput for  1 initial writers  = 1209982.38 kB/sec
        Parent sees throughput for  1 initial writers   =   65886.25 kB/sec
        Min throughput per process                      = 1209982.38 kB/sec
        Max throughput per process                      = 1209982.38 kB/sec
        Avg throughput per process                      = 1209982.38 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for  1 rewriters        = 2089386.88 kB/sec
        Parent sees throughput for  1 rewriters         =   60742.64 kB/sec
        Min throughput per process                      = 2089386.88 kB/sec
        Max throughput per process                      = 2089386.88 kB/sec
        Avg throughput per process                      = 2089386.88 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for  1 readers          = 5067142.00 kB/sec
        Parent sees throughput for  1 readers           = 1614885.28 kB/sec
        Min throughput per process                      = 5067142.00 kB/sec
        Max throughput per process                      = 5067142.00 kB/sec
        Avg throughput per process                      = 5067142.00 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for 1 re-readers        = 5177083.50 kB/sec
        Parent sees throughput for 1 re-readers         = 1645826.39 kB/sec
        Min throughput per process                      = 5177083.50 kB/sec
        Max throughput per process                      = 5177083.50 kB/sec
        Avg throughput per process                      = 5177083.50 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for 1 reverse readers   = 3822466.75 kB/sec
        Parent sees throughput for 1 reverse readers    = 1500891.18 kB/sec
        Min throughput per process                      = 3822466.75 kB/sec
        Max throughput per process                      = 3822466.75 kB/sec
        Avg throughput per process                      = 3822466.75 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for 1 stride readers    = 3556580.50 kB/sec
        Parent sees throughput for 1 stride readers     = 1438555.37 kB/sec
        Min throughput per process                      = 3556580.50 kB/sec
        Max throughput per process                      = 3556580.50 kB/sec
        Avg throughput per process                      = 3556580.50 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for 1 random readers    = 4091959.25 kB/sec
        Parent sees throughput for 1 random readers     = 1514653.00 kB/sec
        Min throughput per process                      = 4091959.25 kB/sec
        Max throughput per process                      = 4091959.25 kB/sec
        Avg throughput per process                      = 4091959.25 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for 1 mixed workload    = 4099771.25 kB/sec
        Parent sees throughput for 1 mixed workload     = 1550747.27 kB/sec
        Min throughput per process                      = 4099771.25 kB/sec
        Max throughput per process                      = 4099771.25 kB/sec
        Avg throughput per process                      = 4099771.25 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for 1 random writers    = 1904125.62 kB/sec
        Parent sees throughput for 1 random writers     =   70243.93 kB/sec
        Min throughput per process                      = 1904125.62 kB/sec
        Max throughput per process                      = 1904125.62 kB/sec
        Avg throughput per process                      = 1904125.62 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for 1 pwrite writers    = 1276891.62 kB/sec
        Parent sees throughput for 1 pwrite writers     =   68956.01 kB/sec
        Min throughput per process                      = 1276891.62 kB/sec
        Max throughput per process                      = 1276891.62 kB/sec
        Avg throughput per process                      = 1276891.62 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for 1 pread readers     = 4447925.50 kB/sec
        Parent sees throughput for 1 pread readers      = 1551867.91 kB/sec
        Min throughput per process                      = 4447925.50 kB/sec
        Max throughput per process                      = 4447925.50 kB/sec
        Avg throughput per process                      = 4447925.50 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for  1 fwriters         = 1992459.00 kB/sec
        Parent sees throughput for  1 fwriters          =   71191.70 kB/sec
        Min throughput per process                      = 1992459.00 kB/sec
        Max throughput per process                      = 1992459.00 kB/sec
        Avg throughput per process                      = 1992459.00 kB/sec
        Min xfer                                        =     512.00 kB

        Children see throughput for  1 freaders         = 4237292.00 kB/sec
        Parent sees throughput for  1 freaders          = 1594501.46 kB/sec
        Min throughput per process                      = 4237292.00 kB/sec
        Max throughput per process                      = 4237292.00 kB/sec
        Avg throughput per process                      = 4237292.00 kB/sec
        Min xfer                                        =     512.00 kB


iozone test complete.

Now you can run an automated test with different record sizes and file sizes

$./iozone -a -b output.xls

and you will have an output spreadsheet whose results (e.g., writes) can be charted.
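
If you prefer to post-process the text output instead of the spreadsheet, the throughput figures can be scraped with a small parser. A sketch (the sample is trimmed from the run above):

```python
import re

def parse_iozone_throughput(text: str) -> dict:
    """Pull the 'Children see throughput for ...' figures (kB/sec) out of
    iozone's text output into a {test_name: throughput} dict."""
    pattern = re.compile(
        r"Children see throughput for\s+\d+\s+(.+?)\s*=\s*([\d.]+)\s*kB/sec")
    return {name.strip(): float(value) for name, value in pattern.findall(text)}

sample = """\
        Children see throughput for  1 initial writers  = 1209982.38 kB/sec
        Children see throughput for  1 readers          = 5067142.00 kB/sec
"""
```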



Saturday, March 7, 2015

Creating ZFS RAID10 on Amazon linux

There are several considerations to weigh before embarking on any particular RAID configuration. As a first step, the Amazon documentation provides some good insights:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html

If you have zfs running on Linux (the same can be achieved under xfs or ext3/ext4 with mdadm), there are several nifty features that one comes to like. For example, a RAID10 (striped mirrors) array can be set up with six EBS volumes (NOTE - they should ideally be of equal size). With six 1 GiB volumes, you get 3 GiB of usable capacity in the RAID10 configuration.
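
The capacity arithmetic for a striped-mirror pool can be sketched as follows (a hypothetical helper; it assumes 2-way mirrors built from consecutive volume pairs):

```python
def raid10_usable_gib(volume_sizes_gib: list) -> int:
    """Usable capacity of a striped-mirror (RAID10-style) pool: each 2-way
    mirror contributes only its smaller member to the stripe."""
    if len(volume_sizes_gib) % 2:
        raise ValueError("RAID10 needs an even number of volumes")
    pairs = zip(volume_sizes_gib[::2], volume_sizes_gib[1::2])
    return sum(min(a, b) for a, b in pairs)
```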

First, we have to create the EBS volumes using the AWS CLI:

$aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1a --volume-type gp2
{
    "AvailabilityZone": "us-east-1a",
    "Attachments": [],
    "Tags": [],
    "VolumeType": "gp2",
    "VolumeId": "vol-e7d33aa0",
    "State": "creating",
    "Iops": 300,
    "SnapshotId": null,
    "CreateTime": "2015-02-19T22:26:54.738Z",
    "Size": 1
}

NOTE - With GP2 SSDs, the default IOPS performance level is 3 IOPS per GB, with the ability to burst to 3,000 IOPS for 30 minutes. Amazon recommends provisioned IOPS volumes where I/O-bound applications are running; from the EBS documentation: "Each volume receives an initial I/O credit balance of 5,400,000 I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for 30 minutes. This initial credit balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications. Volumes earn I/O credits every second at a baseline performance rate of 3 IOPS per GiB of volume size. For example, a 100 GiB General Purpose (SSD) volume has a baseline performance of 300 IOPS."

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_gp2

"In general, if your bottleneck is IOPS, provisioned IOPS are going to be a better choice than GP2 in most use cases. Once the burstable I/O is expended, you go back down to the baseline IOPS - for example, on a 100 GB GP2 volume, that would be 300 IOPS. If your application is pushing 3,000 IOPS and drops to 300 abruptly, you can experience applications and even the OS hanging due to the sudden drop in I/O.

On the other hand, with provisioned IOPS, you can push 4,000 IOPS per volume, all day, every day. It's a bit more expensive, but the performance difference is often worth it."
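
The burst-credit arithmetic from the quoted documentation works out as follows (a sketch using the documented figures):

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 volumes earn credits at a baseline of 3 IOPS per GiB."""
    return 3 * size_gib

def gp2_burst_seconds(burst_iops: int = 3000,
                      initial_credits: int = 5_400_000) -> float:
    """Seconds the initial credit balance sustains full burst, ignoring the
    credits earned back at the baseline rate while bursting."""
    return initial_credits / burst_iops
```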

Once the volumes have been created, they can be attached to the instance using the CLI:

$aws ec2 attach-volume --volume-id vol-e7d33aa0 --instance-id i-9e169771 --device /dev/xvdb
{
    "AttachTime": "2015-02-19T22:45:08.368Z",
    "InstanceId": "i-9e169771",
    "VolumeId": "vol-e7d33aa0",
    "State": "attaching",
    "Device": "/dev/xvdb"
}

NOTE - If you create a volume in a different zone than where your instance is running, then you will get an error like - "A client error (InvalidVolume.ZoneMismatch) occurred when calling the AttachVolume operation: The volume 'vol-24de8556' is not in the same availability zone as instance 'i-9e169771'".

After you attach all six volumes, you should see something like the below:

$ ls -l /dev/sd*
lrwxrwxrwx 1 root root 4 Mar  3 05:50 /dev/sda -> xvda
lrwxrwxrwx 1 root root 5 Mar  3 05:50 /dev/sda1 -> xvda1
lrwxrwxrwx 1 root root 4 Mar  7 18:42 /dev/sdb -> xvdb
lrwxrwxrwx 1 root root 4 Mar  3 20:43 /dev/sdc -> xvdc
lrwxrwxrwx 1 root root 4 Mar  3 20:43 /dev/sdd -> xvdd
lrwxrwxrwx 1 root root 4 Mar  3 20:43 /dev/sde -> xvde
lrwxrwxrwx 1 root root 4 Mar  3 20:43 /dev/sdf -> xvdf
lrwxrwxrwx 1 root root 4 Mar  3 20:43 /dev/sdg -> xvdg

You can also check the details of each disk as below:

$ sudo fdisk -l /dev/xvdb
Disk /dev/xvdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Now that we are ready to create the RAID10 array using zfs, we can use the below command:

$ sudo zpool create -f testraid10 mirror sdb sdc mirror sdd sde mirror sdf sdg

Now we can check the status of the RAID10 pool:

$ sudo zpool status
  pool: testraid10
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testraid10  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

You can confirm that the pool has been mounted using "df -h":

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  1.5G  6.2G  20% /
devtmpfs        3.7G   80K  3.7G   1% /dev
tmpfs           3.7G     0  3.7G   0% /dev/shm
testraid10      3.0G     0  3.0G   0% /testraid10

To check the I/O performance of the individual volumes in the pool, you can use the "zpool iostat" command as below:

$ sudo zpool iostat testraid10 -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testraid10   129K  2.98T      0      0     40    657
  mirror      48K  1016G      0      0     21    235
    sdb         -      -      0      0     76  4.71K
    sdc         -      -      0      0     95  4.71K
  mirror      33K  1016G      0      0      0    186
    sdd         -      -      0      0     74  4.66K
    sde         -      -      0      0     74  4.66K
  mirror      48K  1016G      0      0     18    235
    sdf         -      -      0      0     88  4.71K
    sdg         -      -      0      0     79  4.71K
----------  -----  -----  -----  -----  -----  -----

NOTE - the performance of the RAID10 pool will be in line with the slowest disk in the pool.

With the above steps, you will have a functioning RAID10 pool on Amazon Linux using zfs.


Thursday, March 5, 2015

CVE-2015-0204 - "FREAK" Openssl vulnerability

OpenSSL clients accepted EXPORT-grade (insecure) keys even when the client had not initially asked for them. This could be exploited using a man-in-the-middle attack, which would intercept the client's initial request for a standard key and ask the server for an EXPORT-grade key. The client would then accept the weak key, allowing the attacker to factor it and decrypt communication between the client and the server.

Links:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-0204
https://access.redhat.com/articles/1369543

Amazon has released a security bulletin that explains how to mitigate the issue in the ELB load balancer if the ELB is the SSL termination point.


First, check whether the openssl version installed on the OS contains a fix by searching the changelog, e.g.

$sudo rpm -qa openssl --changelog |grep CVE-2015-0204
- fix CVE-2015-0204 - remove support for RSA ephemeral keys for non-export

Then check whether your ELB is using the default ELB security policy, "ELBSecurityPolicy-2015-02". This policy disables the below two ciphers:


ECDHE-RSA-RC4-SHA
RC4-SHA
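
You can also check from the client side that no RC4 or export-grade suites are enabled in your own TLS stack, for example with Python's ssl module (assumes a reasonably recent Python and a patched OpenSSL build):

```python
import ssl

# Enumerate the cipher suites enabled in the default client TLS context; on a
# patched OpenSSL build, RC4 and export-grade (EXP) suites should be absent.
ctx = ssl.create_default_context()
enabled = [cipher["name"] for cipher in ctx.get_ciphers()]
weak = [name for name in enabled if "RC4" in name or "EXP" in name]
```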

Monday, March 2, 2015

Installing ZFS on Amazon Linux

Now that zfs on Linux is considered stable, it makes a great alternative to the xfs or ext3/ext4 file systems. zfs is a volume manager and file system rolled into one. There are many excellent sites covering zfs, some of which you should peruse before you start:

https://www.freebsd.org/doc/handbook/zfs-term.html
https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
https://sysadmincasts.com/episodes/35-zfs-on-linux-part-1-of-2
https://sysadmincasts.com/episodes/37-zfs-on-linux-part-2-of-2

Once you have a good understanding of the benefits zfs offers, you can proceed with the installation. The below steps are relevant to Amazon Linux:


  • sudo yum update -y

After you run the package manager update, make sure your kernel and kernel headers are the same version:
***************
$ sudo rpm -qa |grep kernel
kernel-3.14.33-26.47.amzn1.x86_64
kernel-tools-3.14.33-26.47.amzn1.x86_64
kernel-headers-3.14.33-26.47.amzn1.x86_64
kernel-devel-3.14.33-26.47.amzn1.x86_64
***************
  • sudo yum remove -y <kernel-3.14.27-25.47.amzn1.x86_64 | old kernel>
  • sudo yum repolist enabled
  • sudo yum install -y gcc
  • sudo yum install kernel-devel zlib-devel libuuid-devel
  • download spl-0.6.3(http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.3.tar.gz)
  • download zfs-0.6.3(http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.3.tar.gz)
  • cd spl;./configure;sudo make && sudo make install
  • cd zfs;./configure --with-spl=/usr/local/src/spl-0.6.3; sudo make && sudo make install
  • export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
  • edit /etc/sudoers and add /usr/local/sbin to Defaults secure_path variable
  • sudo modprobe zfs
  • $lsmod |grep zfs
***************
zfs                  1170768  0
zunicode              323435  1 zfs
zavl                    6874  1 zfs
zcommon                45353  1 zfs
znvpair                81478  2 zfs,zcommon
spl                   165402  5 zfs,zavl,zunicode,zcommon,znvpair
***************

  • now you can run a zpool command to verify that it works
***************
$sudo zpool status
no pools available
***************