Wednesday, August 10, 2016

Downloading Oracle 11g XE on headless machines

To download Oracle products like Oracle XE or the JDK, you normally have to visit the Oracle site from a browser, accept the license, and in some cases also log into OTN. On headless machines, you can still download using "curl" or "wget" by setting the appropriate cookies. To get the correct cookie, open the "download.oracle.com" site from a machine with a browser, copy the cookie value for "Host: edelivery.oracle.com" using Chrome -> Developer Tools, and pass it to curl on the headless machine. E.g.


curl -v -j -k -L -H "Cookie: _ga=GA1.2.281280113.1444226476; s_fid=4F07C7A4FC8D19B8-3F875E44865B80E4; ORA_WWW_MRKT=v:1~g:1510B7015351B4A1E050E60A8D7F2297~t:EMPLOYEE~c:MP05; ORA_WWW_PERSONALIZE=v:1~i:NOT_FOUND~r:NOT_FOUND~g:NOT_FOUND~l:NOT_FOUND~cs:NOT_FOUND~cn:NOT_FOUND; s_nr=1469210337690; atgRecVisitorId=131Dsnb..A16; oraclelicense=accept-sqldev-cookie; gpw_e24=http%3A%2F%2Fwww.oracle.com%2Ftechnetwork%2Fdatabase%2Fdatabase-technologies%2Fexpress-edition%2Fdownloads%2Findex.html; s_sq=%5B%5BB%5D%5D; ORASSO_AUTH_HINT=v1.0~20160811010208; ORA_UCM_INFO=3~1510B7015351B4A1E050E60A8D7F2297~<OTN Login>; OHS-edelivery.oracle.com-443=539...B4~" http://download.oracle.com/otn/linux/oracle11g/xe/oracle-xe-11.2.0-1.0.x86_64.rpm.zip -o oracle-xe-11.2.0-1.0.x86_64.rpm.zip
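For Oracle downloads that are gated only by the license-acceptance cookie (and not by an OTN login), a much shorter header may be enough. The below is only a sketch that reuses the URL and the "oraclelicense" cookie from the example above; the cookie name differs per product, and downloads that do require an OTN session will still need the full cookie string:

curl -L -H "Cookie: oraclelicense=accept-sqldev-cookie" http://download.oracle.com/otn/linux/oracle11g/xe/oracle-xe-11.2.0-1.0.x86_64.rpm.zip -o oracle-xe-11.2.0-1.0.x86_64.rpm.zip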

Monday, June 6, 2016

Check what kind of hypervisor is running in a VM


If you are unsure which virtualization layer you are running on, you can run the 'virt-what' command

[root@sample-box]# virt-what
xen
xen-hvm

If you are running on 'vmware', your output will be similar to what is shown in this link
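If 'virt-what' is not already installed, it is usually available from the distribution's package manager. A quick sketch, assuming a yum-based system like the box above:

$ sudo yum install -y virt-what
$ sudo virt-what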

Wednesday, June 1, 2016

Vagrant up error on VirtualBox 5.0.16

After cloning a vagrant box, vagrant up resulted in the below error:-

Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:

mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3`,dmode=777,fmode=777 vagrant /vagrant

mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant`,dmode=777,fmode=777 vagrant /vagrant

The error output from the last command was:

/sbin/mount.vboxsf: mounting failed with the error: No such device


Steps to resolve the issue were found on the vagrant discussion board - https://github.com/mitchellh/vagrant/issues/3341

Resolution:-

1. On vagrant host - $vagrant plugin install vagrant-vbguest
2. $vagrant ssh
3. On guest box - $sudo ln -s /opt/VBoxGuestAdditions-5.0.16/lib/VBoxGuestAdditions /usr/lib/VBoxGuestAdditions
4. On vagrant host - $vagrant reload
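After step 3 (and before the reload), it can also help to verify from inside the guest that the guest additions and the vboxsf kernel module are actually in place. A rough sketch of the checks, assuming a stock Linux guest:

$ vagrant ssh
$ lsmod | grep -i vbox                # vboxsf/vboxguest should be listed
$ ls -l /usr/lib/VBoxGuestAdditions   # the symlink created in step 3
$ sudo modprobe vboxsf                # should load without "No such device"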



Monday, December 21, 2015

brew update seems to break brew cask update

First I did a brew update; subsequently, when I tried to update brew-cask, it threw the below error:-

********
==> Homebrew Origin:
https://github.com/Homebrew/homebrew
==> Homebrew-cask Version:
0.57.0
test-mac:~ test$ brew-cask update
==> Error: Could not link caskroom/cask manpages to:
==>   /usr/local/share/man/man1/brew-cask.1
==> 
==> Please delete these files and run `brew tap --repair`.
*******

The "list" of software on brew-cask was broken

*******
test-mac:~ test$ brew-cask list
Error: Cask 'sublime-text' definition is invalid: Bad header line: parse failed
*******

Basically, 'brew cask doctor' and 'brew cask cleanup' did not help, and brew-cask had to be reinstalled:-

*******
test-mac:~ test$ brew unlink brew-cask
Unlinking /usr/local/Cellar/brew-cask/0.57.0... 2 symlinks removed
test-mac:~ test$ brew install brew-cask
==> Installing brew-cask from caskroom/cask
==> Cloning https://github.com/caskroom/homebrew-cask.git
Updating /Library/Caches/Homebrew/brew-cask--git
==> Checking out tag v0.60.0
==> Caveats
You can uninstall this formula as `brew tap Caskroom/cask` is now all that's
needed to install Homebrew Cask and keep it up to date.
==> Summary
🍺  /usr/local/Cellar/brew-cask/0.60.0: 3 files, 12K, built in 78 seconds
test-mac:~ test$ brew-cask list
-bash: /usr/local/bin/brew-cask: No such file or directory
test-mac:~ test$ brew cask list
sublime-text virtualbox
test-mac:~ test$ brew cask cleanup
==> Removing dead symlinks
Nothing to do
==> Removing cached downloads
Nothing to do
test-mac:~ test$ brew cask update
Already up-to-date.
test-mac:~ test$ brew update
Already up-to-date.
*******

Friday, September 25, 2015

Upgrade to OSX 10.10 messed up the "Terminal"

Recently, an automatic patch update of OSX Yosemite 10.10 left my terminal window unable to find any bash commands, e.g. 

*********
Last login: Fri Sep 25 18:46:40 on ttys000
test-mac:~ test$ ls
-bash: ls: command not found
test-mac:~ test$ vi
-bash: vi: command not found
test-mac:~ test$ 
*********

I tried the solution suggested on Stack Exchange, and that did the trick:

$export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
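To make the fix stick for new terminal sessions, the same export can be appended to the shell profile. A minimal sketch, assuming bash and ~/.bash_profile as the profile file:

$ echo 'export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin' >> ~/.bash_profile
$ source ~/.bash_profile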

Thursday, May 28, 2015

Mounting XFS RAID 10 volume over NFS

In certain situations, you may want to share a RAID10 volume over NFS as a shared mount point across the instances in a VPC. You can follow the steps below.

NFS Server instance:-

1. Install "nfs-utils" package

************
$sudo yum install -y nfs-utils
************

2. Enable the below services to start at instance boot time

************
$sudo chkconfig --levels 345 nfs on
$sudo chkconfig --levels 345 nfslock on
$sudo chkconfig --levels 345 rpcbind on
************

3. Export the mounted volume to the machines in the VPC CIDR block

************
$ cat /etc/exports
/mnt/md0    <VPC_CIDR>(rw)
************

4. Set the permissions on the mount point and any sub-folders

************
$ ls -l
total 0
drwxrwxrwx 2 root root 69 May 28 06:22 md0
************

NOTE - I had to give 777 permissions to the /mnt/md0 folders. Without appropriate permissions, there will be a mount error on the client. 766 doesn't work either, because without the execute (search) bit other users cannot traverse the directory.
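The permission change itself is not shown in the listing above; a minimal sketch, assuming the RAID volume is mounted at /mnt/md0:

************
$ sudo chmod -R 777 /mnt/md0
$ ls -ld /mnt/md0
************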

5. Start the services

*************
$ sudo service rpcbind start
Starting rpcbind:                                          [  OK  ]
$ sudo service nfs start
Initializing kernel nfsd:                                  [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
$ sudo service nfslock start
Starting NFS statd:                                        [  OK  ]
*************

6. Export the mounted RAID volume to all the instances in the VPC

*************
$ sudo exportfs -av
exporting <VPC_CIDR>:/mnt/md0
*************

7. Allow ingress rules on the NFS server instance's security group for TCP and UDP ports 2049 and 111, for NFS and rpcbind respectively

*************
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 111 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 111 --cidr <VPC_CIDR>
*************

NFS client instance:-

1. Install "nfs-utils" package

************
$sudo yum install -y nfs-utils
************

2. Create a mount point on the instance

************
$sudo mkdir /vol
************

3. Allow ingress rules for TCP and UDP ports 2049 and 111 for NFS and rpcbind communication

*************
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 2049 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol tcp --port 111 --cidr <VPC_CIDR>
$aws ec2 authorize-security-group-ingress --group-id sg-7ad9a61e --protocol udp --port 111 --cidr <VPC_CIDR>
*************

4. Mount the NFS volume on the client machine

*************
$sudo mount -t nfs <private ip of nfs server>:/mnt/md0 /vol
*************
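You can also verify from the client that the export is visible before or after mounting; 'showmount' comes with the nfs-utils package. A quick sketch:

*************
$ showmount -e <private ip of nfs server>
*************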

5. Confirm the mounted RAID volume shows available disk space

*************
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            7.8G  1.1G  6.6G  15% /
devtmpfs              490M   56K  490M   1% /dev
tmpfs                 499M     0  499M   0% /dev/shm
<private ip>:/mnt/md0  3.0G   33M  3.0G   2% /vol
*************

6. Test by writing a file to the mounted NFS volume from the client instance (using 'tee', since with 'sudo echo ... >> file' the redirection itself runs as the unprivileged user)

*************
$ echo "this is a test" | sudo tee -a /vol/test.txt
*************

7. Also check the system logs using dmesg

*************
$ sudo dmesg |tail
[  360.660410] FS-Cache: Loaded
[  360.773794] RPC: Registered named UNIX socket transport module.
[  360.777793] RPC: Registered udp transport module.
[  360.779718] RPC: Registered tcp transport module.
[  360.781867] RPC: Registered tcp NFSv4.1 backchannel transport module.
[  360.845503] FS-Cache: Netfs 'nfs' registered for caching
[  443.240670] Key type dns_resolver registered
[  443.251609] NFS: Registering the id_resolver key type
[  443.253882] Key type id_resolver registered
[  443.255682] Key type id_legacy registered
*************



Tuesday, May 26, 2015

Creating XFS RAID10 on Amazon Linux

Typically, AWS support recommends sticking with an ext(x)-based file system, but for performance reasons you may want to create a RAID10 volume with an XFS file system. To create an XFS-based RAID10, you can follow the steps below:-

1. Create an Amazon Linux instance within a subnet in a VPC

************
$aws ec2 run-instances --image-id ami-1ecae776 --count 1 --instance-type t2.micro --key-name aminator --security-group-ids sg-7ad9a61e --subnet-id subnet-4d8df83a --associate-public-ip-address
************

2. Create EBS volumes. For RAID10, you will need 6 block storage devices, each created similar to the one shown below (a loop over all six is sketched after the note)

************
$aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1d --volume-type gp2
************
NOTE - The EBS volumes must be in the same region and availability zone as the instance. 
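Since six identical volumes are needed, a small loop saves some typing. A sketch, reusing the same size, region, availability zone, and volume type as above:

************
$ for i in $(seq 1 6); do aws ec2 create-volume --size 1 --region us-east-1 --availability-zone us-east-1d --volume-type gp2; done
************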

3. Attach each of the created volumes to the instance as shown below (a loop over all six devices is sketched after the example)

************
$aws ec2 attach-volume --volume-id vol-c33a982d --instance-id i-120d96c2 --device /dev/xvdb
************
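A sketch of attaching all six volumes in one go, looping over the device names xvdb through xvdg; the volume IDs below are placeholders and must be replaced with the IDs returned by create-volume:

************
$ DEVICES=(b c d e f g)
$ VOLUMES=(vol-11111111 vol-22222222 vol-33333333 vol-44444444 vol-55555555 vol-66666666)   # placeholder IDs
$ for i in "${!DEVICES[@]}"; do aws ec2 attach-volume --volume-id "${VOLUMES[$i]}" --instance-id i-120d96c2 --device "/dev/xvd${DEVICES[$i]}"; done
************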

4. Confirm that the devices have been attached successfully

************
$ ls -l /dev/sd*
lrwxrwxrwx 1 root root 4 May 27 00:44 /dev/sda -> xvda
lrwxrwxrwx 1 root root 5 May 27 00:44 /dev/sda1 -> xvda1
lrwxrwxrwx 1 root root 4 May 27 00:57 /dev/sdb -> xvdb
lrwxrwxrwx 1 root root 4 May 27 00:57 /dev/sdc -> xvdc
lrwxrwxrwx 1 root root 4 May 27 00:58 /dev/sdd -> xvdd
lrwxrwxrwx 1 root root 4 May 27 00:59 /dev/sde -> xvde
lrwxrwxrwx 1 root root 4 May 27 00:59 /dev/sdf -> xvdf
lrwxrwxrwx 1 root root 4 May 27 01:00 /dev/sdg -> xvdg
************

5. Check the block device I/O characteristics using "fdisk"

************
 $sudo fdisk -l /dev/xvdc

Disk /dev/xvdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
************

6. Create RAID10 using "mdadm" command

************
$sudo mdadm --create --verbose /dev/md0 --level=raid10 --raid-devices=6 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 1047552K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
************
NOTE - In case only striping or mirroring is required, you can specify either "raid0" or "raid1" for the "level" parameter
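To make sure the array comes back under the same /dev/md0 name after a reboot (instead of being renamed to something like /dev/md127), the array definition can be recorded in mdadm.conf. A sketch, assuming /etc/mdadm.conf is the config file location on this distribution:

************
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
************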

7. Confirm that the RAID10 array has been created successfully

************
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE   MOUNTPOINT
xvda    202:0    0   8G  0 disk
+-xvda1 202:1    0   8G  0 part   /
xvdb    202:16   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdc    202:32   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdd    202:48   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvde    202:64   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdf    202:80   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
xvdg    202:96   0   1G  0 disk
+-md0     9:0    0   3G  0 raid10
************
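The kernel's view of the array can also be checked through /proc/mdstat, which shows the RAID level, the member devices, and any resync progress:

************
$ cat /proc/mdstat
************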

8. Since Amazon Linux does not come with the mkfs.xfs program, you will have to install the "xfsprogs" package from the package manager

************
$sudo yum install -y xfsprogs
$ ls -la /sbin/mkfs*
-rwxr-xr-x 1 root root   9496 Jul  9  2014 /sbin/mkfs
-rwxr-xr-x 1 root root  28808 Jul  9  2014 /sbin/mkfs.cramfs
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext2
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext3
-rwxr-xr-x 4 root root 103520 Feb 10 19:17 /sbin/mkfs.ext4
-rwxr-xr-x 1 root root 328632 Sep 12  2014 /sbin/mkfs.xfs
************

9. Create XFS file system on RAID10 volume

************
$ sudo mkfs.xfs /dev/md0
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md0               isize=256    agcount=8, agsize=98176 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=785408, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
************

10. Create a mount point to mount the raid device

************
$sudo mkdir /mnt/md0
************

11. Mount the raid volume to the mount point

************
$sudo mount -t xfs /dev/md0 /mnt/md0
************

12. Confirm the mount has been successful using "df" command

************
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  1.1G  6.6G  14% /
devtmpfs        490M   88K  490M   1% /dev
tmpfs           499M     0  499M   0% /dev/shm
/dev/md0        3.0G   33M  3.0G   2% /mnt/md0
************

13. Check the I/O characteristics of the RAID10 volume

************
$sudo fdisk -l /dev/md0

Disk /dev/md0: 3218 MB, 3218079744 bytes, 6285312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
************

14. To mount the volume at system boot, add an entry for it to /etc/fstab.

************
$ sudo vi /etc/fstab 
$ cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/md0    /mnt/md0    xfs     defaults,nofail 0   2
************

15. Run "mount -a" to confirm that there are no errors in the fstab

************
$ sudo mount -a
************

You could follow a similar set of steps for setting up an ext4-based RAID volume as per the AWS docs link below:-