IT Functions braindump
Below is a collection of notes, kept as a knowledge base for myself and for others to use. Most of it is in English, because the notes were made in an international setting and because most searches will be in English.
Any additions and/or improvements are of course welcome!
Linux
LVM resizing
When using SAN storage and importing a snapshot of that storage on Linux, the LVM administration can get out of sync: when the storage on the main host is in LVM and a Logical Volume gets resized, then after refreshing the snapshot, the LVM administration on the target host is out of sync.
This can be repaired with the following commands, assuming /dev/dm-5 is the device-mapper name for the volume on the source and /dev/dm-8 is the device-mapper name on the target.
On the source machine display the device mapper table of the volume.
# dmsetup table /dev/dm-5
On the target machine modify the device mapper table of the volume with the corresponding figures. Do not modify the major and minor device numbers, as they will be different on that machine.
# dmsetup table /dev/dm-8 > table-dm-8.old
# cp table-dm-8.old table-dm-8.new
# vi table-dm-8.new
# dmsetup suspend /dev/dm-8
# dmsetup reload /dev/dm-8 table-dm-8.new
# dmsetup resume /dev/dm-8
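For orientation while editing the table: a linear device-mapper table line has the form "start length target device offset", where the length is counted in 512-byte sectors; that length field is the figure to carry over from the source. A quick sanity check of what a sector count means (the numbers below are example values, not from the case above):

```shell
# Example dmsetup table line; the second field is the length in 512-byte sectors.
table="0 41943040 linear 8:16 0"
sectors=$(echo "$table" | awk '{ print $2 }')
awk -v s="$sectors" 'BEGIN { printf "%d sectors = %d GiB\n", s, s * 512 / 1024^3 }'
# -> 41943040 sectors = 20 GiB
```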
Resize a SAN Volume and resize the Linux Volume
When a SAN Volume has been increased in size and the volume is already in use on a Linux host, then the new size needs to be made known to the host. In the example, the Volume is made available through a multipathed LUN and is in use as an LVM Physical Volume.
- Increase a VV in size (the example uses VGNAME):
3par cli% stoprcopygroup VGNAME
Stopping group VGNAME. If volumes in the group are still synchronizing, then their snapshots on the secondary side may get promoted.
Do you wish to continue?
select q=quit y=yes n=no: y
3par cli% growvv VGNAME_PRI_001 20G
3par cli% showvv VGNAME_PRI_001
                                             ---Rsvd(MB)---  -(MB)-
 Id Name           Prov Type CopyOf BsId Rd -Detailed_State- Adm  Snp   Usr VSize
871 VGNAME_PRI_001 tpvv base ---     871 RW normal           256 3072 12800 40960
----------------------------------------------------------------------------------
  1 total                                                    256 3072 12800 40960
3par cli% startrcopygroup VGNAME
- On the Linux host:
[root@linuxhost ~]# multipath -l VGNAME001
VGNAME001 (350002acb0456789a) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 8:0:2:4  sdi 8:128  active undef running
  |- 9:0:0:4  sdm 8:192  active undef running
  |- 9:0:2:4  sdq 65:0   active undef running
  |- 8:0:0:4  sde 8:64   active undef running
  |- 10:0:1:4 sdy 65:128 active undef running
  `- 10:0:0:4 sdu 65:64  active undef running
[root@linuxhost ~]# for i in /sys/block/dm-5/slaves/*; do echo 1 > ${i}/device/rescan; done
[root@linuxhost ~]# for i in i m q e y u
> do
>   multipathd -k"del path sd${i}"
>   multipathd -k"add path sd${i}"
> done
[root@linuxhost ~]# multipathd -k'resize map VGNAME001'
ok
[root@linuxhost ~]# multipath -ll VGNAME001
VGNAME001 (350002acb0456789a) dm-5 3PARdata,VV
size=40G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 8:0:2:4  sdi 8:128  active ready running
  |- 9:0:0:4  sdm 8:192  active ready running
  |- 9:0:2:4  sdq 65:0   active ready running
  |- 8:0:0:4  sde 8:64   active ready running
  |- 10:0:1:4 sdy 65:128 active ready running
  `- 10:0:0:4 sdu 65:64  active ready running
[root@linuxhost ~]# pvresize /dev/mapper/VGNAME001
  Physical volume "/dev/mapper/VGNAME001" changed
[root@linuxhost ~]# pvscan
  PV /dev/mapper/VGNAME001   VG vg_VGNAME   lvm2 [60.00 GiB / 40.00 GiB free]
  PV /dev/sda2               VG vg0         lvm2 [278.59 GiB / 234.59 GiB free]
  Total: 2 [274.59 GiB] / in use: 2 [274.59 GiB] / in no VG: 0 [0   ]
[root@linuxhost ~]# df -h | grep VGNAME
/dev/mapper/vg_VGNAME-admin     1008M  193M  765M  21% /oracle/admin/VGNAME
/dev/mapper/vg_VGNAME-oradata     14G  7.2G  6.0G  55% /oracle/oradata/VGNAME
/dev/mapper/vg_VGNAME-safeset1  1008M  643M  315M  68% /oracle/safeset1/VGNAME
/dev/mapper/vg_VGNAME-arch       4.0G  887M  2.9G  24% /oracle/arch/VGNAME
[root@linuxhost ~]# lvextend -L+20G /dev/mapper/vg_VGNAME-oradata
  Extending logical volume oradata to 34.00 GiB
  Logical volume oradata successfully resized
[root@linuxhost ~]# resize2fs /dev/mapper/vg_VGNAME-oradata
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg_VGNAME-oradata is mounted on /oracle/oradata/VGNAME; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 3
Performing an on-line resize of /dev/mapper/vg_VGNAME-oradata to 8911872 (4k) blocks.
The filesystem on /dev/mapper/vg_VGNAME-oradata is now 8911872 blocks long.
[root@linuxhost ~]# df -h | grep VGNAME
/dev/mapper/vg_VGNAME-admin     1008M  193M  765M  21% /oracle/admin/VGNAME
/dev/mapper/vg_VGNAME-oradata     34G  7.2G   25G  23% /oracle/oradata/VGNAME
/dev/mapper/vg_VGNAME-safeset1  1008M  643M  315M  68% /oracle/safeset1/VGNAME
/dev/mapper/vg_VGNAME-arch       4.0G  887M  2.9G  24% /oracle/arch/VGNAME
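The host-side steps can be condensed into a small dry-run script, using the same DO=echo trick as elsewhere in these notes. The map name, sd path letters, and sizes are taken from the example output and would differ per host:

```shell
resize_mpath_pv() {
    DO=echo                  # dry run: print the commands; remove for the real run
    MAP=VGNAME001            # multipath map name from the example
    PATHS="i m q e y u"      # sdX suffixes from the multipath -l output
    # (the sysfs rescan of the slave devices would come first, as shown above)
    for i in $PATHS; do
        $DO multipathd -k"del path sd$i"
        $DO multipathd -k"add path sd$i"
    done
    $DO multipathd -k"resize map $MAP"
    $DO pvresize /dev/mapper/$MAP
    $DO lvextend -L+20G /dev/mapper/vg_VGNAME-oradata
    $DO resize2fs /dev/mapper/vg_VGNAME-oradata
}
resize_mpath_pv
```

With DO=echo the script only prints what it would do, so the sequence can be reviewed before running it for real.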
HP-UX
Add a new SAN Virtual Volume to HP-UX
To add a new (3PAR) SAN Virtual Volume to HP-UX and create a Veritas filesystem on it, the following steps need to be performed. You will need to substitute all $ variables with your actual values.
- Create VV and VLUN on SAN
3par: showvlun -lvw -t -v NAME*
hpux# ioscan -C disk
hpux# scsimgr -p get_attr all_lun -a hw_path -a device_file -a wwid -a instance | grep -i $WWN
hpux# scsimgr lun_map -D $FOUNDDISK
hpux# pvcreate $FOUNDDISK
hpux# mkdir /dev/vg_NAME
hpux# mknod /dev/vg_NAME/group c 64 $MINOR
hpux# vgcreate -s 256m vg_NAME $FOUNDDISK
hpux# lvcreate -L $SIZEm -n NAME vg_NAME
hpux# newfs -F vxfs /dev/vg_NAME/rNAME    # create on the character device
hpux# vi /etc/fstab
hpux# grep vg_NAME /etc/fstab | while read dev i rest
do
    mkdir -p $i
    mount $i
    chmod 02775 $i
    chown $USER:$GROUP $i
done
Extending a Volume Group with a new SAN Volume
- Create a Volume on the SAN and assign the LUN to the HP-UX host
3par: showvlun -lvw -t -v NAME*
hpux# ioscan -C disk
hpux# scsimgr -p get_attr all_lun -a hw_path -a device_file -a wwid -a instance | grep -i $WWN
hpux# pvcreate -f /dev/rdisk/disk
hpux# vgextend vg_VGNAME /dev/disk/disk
AIX
IBM AIX is the only UNIX system that has a 'registry'. Do not edit files like /etc/passwd directly; they will be overwritten, because everything is contained in the object repository. Use smitty for maintenance, or, if you are experienced enough (but why should you?), modify the object repository itself.
For monitoring and generating performance data and graphs, use nmon.
AIX 5.3 and Oracle
Some random notes from my work on AIX 5.3, in relation to Oracle performance tuning.
vmo -p -o strict_maxclient=1
vmo -p -o minperm%=3
vmo -p -o maxclient%=8
vmo -p -o maxperm%=8
vmo -p -o v_pinshm=1
ioo -p -o j2_maxRandomWrite=32
no -p -o tcp_sendspace=262144
no -p -o tcp_recvspace=262144
chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE USERNAME

See CIO-mounted files: lsof64 +fg

In the pfile:
*.filesystemio_options=setall
*.lock_sga=true

In sqlplus:
show parameter spfile;
create spfile from pfile;

In /etc/tunables/nextboot:
vmo:
    maxpin% = "15"
    v_pinshm = "1"
    maxperm% = "8"
    maxclient% = "8"
    minperm% = "3"
    strict_maxclient = "1"
no:
    tcp_recvspace = "262144"
    tcp_sendspace = "262144"
    routerevalidate = "1"
    ipsrcrouteforward = "1"
    ipsrcrouterecv = "1"
    ipsrcroutesend = "1"
    bcastping = "1"
    nonlocsrcroute = "1"
ioo:
    j2_maxRandomWrite = "32"

Pinning shared memory to prevent SGA pageouts:
vmo -p -o v_pinshm=1
Leave maxpin% at the default of 80% unless the SGA exceeds 77% of real memory:
vmo -p -o maxpin%=((total mem - SGA size)*100/total mem)+3

Oracle parameters:
LOCK_SGA = TRUE

Large page support:
vmo -r -o lgpg_size=16777216 -o lgpg_regions=(SGA size / 16 MB)
Allow Oracle to use large pages:
chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

lsattr -El aio0
maxservers should be (10 * number of disks accessed concurrently) / number of CPUs

Oracle files:
o database datafiles should be stored on JFS2 + CIO
o redologs should be stored on a raw device or JFS2 + CIO (block size 512 bytes)
o controlfiles should be stored on JFS2 (without DIO or CIO)
o binaries should be stored on JFS2 (no mount option)
Stripe and mirror everything.
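The maxpin% rule of thumb above is plain arithmetic; a quick sketch with made-up sizes (65536 MB real memory, 57344 MB SGA — hypothetical example values):

```shell
# maxpin% = ((total mem - SGA size) * 100 / total mem) + 3, per the note above.
# Sizes below are hypothetical examples, in MB.
maxpin=$(awk -v total=65536 -v sga=57344 'BEGIN {
    print int((total - sga) * 100 / total + 3)
}')
echo "vmo -p -o maxpin%=$maxpin"
# -> vmo -p -o maxpin%=15
```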
- vmstat  : useful for obtaining an overall picture of CPU, paging, and memory usage
- lvmstat : useful to get logical volume I/O statistics
- iostat  : allows getting statistics on disk activity (and also on terminals, CPU, adapters)
- nmon    : complete tool which gives information on all the components of the system
- filemon : uses the trace facility to report on the I/O activity of physical volumes, logical volumes, individual files, and the Virtual Memory Manager
- xmperf  : allows defining monitoring environments to supervise the performance of local and remote systems
- netstat : allows monitoring network activity
Add a disk to a Linux client on an LPAR
Create the disk on the SAN and wait until it is initialized.
Create the mapping: assign it to the host VIO and assign a unique LUN.

Log in to the VIO server:

ssh padmin@drvio
lspv
cfgmgr
lspv
oem_setup_env
lsattr -El hdiskX                 # verify ieee_volname
^D
chdev -dev hdiskX -attr pv=yes
lspv
lsmap -all                        # find vhostX for assigning hdiskX to
mkvdev -vadapter vhostX -vdev hdiskX -dev logical_name_for_vhost
lsmap -vadapter vhostX

Log in to the Linux client:

sudo su
rescan-scsi-bus.sh
dmesg
cfdisk /dev/sdX                   # create 1 LVM partition
pvcreate /dev/sdX1
vgs                               # get the VG name
vgextend vgname /dev/sdX1
vgs                               # see that there is free space
df -h                             # get the device name for the partition
lvresize -LXXG /dev/vg/name
# resize the filesystem with the right tools
# ext3:
resize2fs /dev/vg/name
# jfs:
mount -o remount,resize /mountpoint
Script to create an LPAR host on the HMC from the command line
In order to do this, shell access to the HMC is required. If you do not know how to get that, send me a mail and maybe I will explain it to you.
DO=echo
for ID in 10 11 12 13 14 15
do
    LPAR=$(expr $ID + 3)
    SCSI=$(expr 40 + $ID)
    $DO mksyscfg -r lpar -m DR_Server-550-SN65E0999 -i "'profile_name=normal_boot,name=OpenSuse $ID,lpar_id=$LPAR,lpar_env=aixlinux,all_resources=0,min_mem=256,desired_mem=1024,max_mem=32768,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0,proc_mode=shared,min_proc_units=0.1,desired_proc_units=0.2,max_proc_units=4.0,min_procs=1,desired_procs=2,max_procs=4,sharing_mode=cap,uncap_weight=0,shared_proc_pool_name=DefaultPool,io_slots=none,lpar_io_pool_ids=none,max_virtual_slots=70,\"virtual_serial_adapters=1/server/1/any//any/1,0/server/1/any//any/1\",\"virtual_scsi_adapters=$SCSI/client/2/DR_VIO/$SCSI/1\",\"virtual_eth_adapters=2/0/1//0/1/ETHERNET0\",hca_adapters=none,boot_mode=norm,conn_monitoring=0,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,redundant_err_path_reporting=0,bsr_arrays=0,lpar_proc_compat_mode=default'"
    $DO chsyscfg -r prof -m DR_Server-550-SN65E0999 -i "'name=normal_boot,lpar_name=DR_VIO,\"virtual_scsi_adapters+=$SCSI/server/$LPAR/OpenSuse $ID/$SCSI/0\"'"
done
SUN Solaris
How to replace a broken software RAID1 disk
Check for errors:
- DiskSuite errors : metastat | grep maint
- Physical errors  : format

Preparations (not needed if the new disk is the same size):
- Backup the partition table : format -> 0 -> save c1t0d0-part-layout.dat

Detach the broken disk:
- Detach the failed submirrors from the mirrors:
  - metadetach -f d0 d20
  - metadetach -f d1 d21
  - metadetach -f d3 d23
  - metadetach -f d4 d24
  - metadetach -f d5 d25
  - metadetach -f d6 d26
- Clear the failed metadevices:
  - metaclear d20
  - metaclear d21
  - metaclear d23
  - metaclear d24
  - metaclear d25
  - metaclear d26
- Clear the failed metadb replicas: metadb -d c1t1d0s7

Replace the disk.

Partition the new disk:
- prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

Attach the new disk:
- Recreate and activate the metadb replicas: metadb -a -c 3 c1t1d0s7
- Reinitialize the metadevices as submirrors:
  - metainit d20 1 1 c1t1d0s0
  - metainit d21 1 1 c1t1d0s1
  - metainit d23 1 1 c1t1d0s3
  - metainit d24 1 1 c1t1d0s4
  - metainit d25 1 1 c1t1d0s5
  - metainit d26 1 1 c1t1d0s6
- Attach the new submirrors:
  - metattach d0 d20
  - metattach d1 d21
  - metattach d3 d23
  - metattach d4 d24
  - metattach d5 d25
  - metattach d6 d26
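Since the names follow a fixed pattern in this example (mirror dN has submirror d2N on slice sN of the new disk), the repetitive detach/clear and reinit/attach phases can be scripted. A dry-run sketch in the DO=echo style used elsewhere in these notes; the device names are the example's:

```shell
replace_mirror_disk() {
    DO=echo                 # dry run: print the commands; remove for the real run
    MIRRORS="0 1 3 4 5 6"   # mirror suffixes from the example above
    # Detach and clear the failed submirrors
    for i in $MIRRORS; do
        $DO metadetach -f d$i d2$i
        $DO metaclear d2$i
    done
    $DO metadb -d c1t1d0s7
    # ... physically replace the disk and copy the partition table here ...
    $DO metadb -a -c 3 c1t1d0s7
    # Reinitialize and reattach the submirrors (slice number equals the suffix)
    for i in $MIRRORS; do
        $DO metainit d2$i 1 1 c1t1d0s$i
        $DO metattach d$i d2$i
    done
}
replace_mirror_disk
```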
Info: http://www.math.uwaterloo.ca/mfcf/internal/procedures/OS/Solaris/mirror-fix.shtml
HP 3PAR
Useful CLI Commands
showalert -f   # fixed alerts
showalert -n   # new alerts
showalert -a   # acknowledged alerts
Create Adaptive Optimization configuration
Starting from InForm OS 3.1.2 (MU1), Adaptive Optimization runs on the nodes. The example below creates some AO configurations.
createaocfg -t0cpg CPG_SSD_001 -t1cpg CPG_FC_001 -t2cpg CPG_NL_001 -mode Performance AO_PERFORM_SFN_001
setcpg -sdgw 240g -sdgl 250g CPG_SSD_001
setcpg -sdgw 1000g CPG_FC_001

createaocfg -t0cpg CPG_FC_002 -t1cpg CPG_NL_002 -mode Balanced AO_BALANCE_FN_002
setcpg -sdgw 9000g CPG_FC_002

createaocfg -t0cpg CPG_FC_003 -t1cpg CPG_NL_003 -mode Cost AO_COST_FN_003
setcpg -sdgw 2000g CPG_FC_003

createaocfg -t0cpg CPG_SSD_004 -t1cpg CPG_FC_004 -t2cpg CPG_NL_004 -mode Balanced AO_BALANCE_SFN_004
setcpg -sdgw 200g -sdgl 220g CPG_SSD_004
setcpg -sdgw 8000g CPG_FC_004

# sample between 06:00 and 23:00, exclude backups and nightly jobs
createsched "startao -maxrunh 4 -btsecs -20h -etsecs -3h AO_PERFORM_SFN_001" "00 02 * * 2-6" AO_PERFORM_SFN_001
createsched "startao -maxrunh 3 -btsecs -21h -etsecs -4h AO_BALANCE_SFN_004" "00 03 * * 2-6" AO_BALANCE_SFN_004
createsched "startao -maxrunh 2 -btsecs -22h -etsecs -5h AO_BALANCE_FN_002" "00 04 * * 2-6" AO_BALANCE_FN_002
createsched "startao -maxrunh 2 -btsecs -23h -etsecs -6h AO_COST_FN_003" "30 04 * * 2-6" AO_COST_FN_003
Certificates
Various commands and tricks regarding OpenSSL, certificates, etc.
How to check if csr, key and cert match each other
Sometimes it is unclear whether (mainly) a CSR and key match each other. It is possible to check this with the following commands.
[root@unixhost httpd]# function x() { openssl $1 -noout -modulus -in $2 | openssl md5; }
[root@unixhost httpd]# x req csr/unixhost.it-functions.nl.csr ; x x509 certs/unixhost.it-functions.nl.crt ; x rsa private/unixhost.it-functions.nl.key
(stdin)= 5f9891cce94b75b031fb7667352145ba
(stdin)= 5f9891cce94b75b031fb7667352145ba
(stdin)= 5f9891cce94b75b031fb7667352145ba
If the checksums match, then the files 'belong' to each other.
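The trick can be tried out safely with a throw-away key and CSR; this self-contained sketch generates both and compares their moduli the same way (the file names demo.key/demo.csr and the subject are arbitrary examples):

```shell
# Compare the md5 of the modulus of a key and a CSR generated from it.
modsum() { openssl "$1" -noout -modulus -in "$2" | openssl md5; }

openssl genrsa -out demo.key 2048 2>/dev/null
openssl req -new -key demo.key -subj "/CN=demo.example" -out demo.csr

if [ "$(modsum rsa demo.key)" = "$(modsum req demo.csr)" ]; then
    echo "demo.key and demo.csr match"
else
    echo "demo.key and demo.csr DO NOT match"
fi
```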
Generate a csr and key
#!/bin/bash
##
DO=echo
FQDN="${1}"
SUBJECT="/C=NL/ST=Zuid Holland/L=Leiden/O=My Company/OU=IT Operations/CN=$FQDN/emailAddress=cert@mycompany.com"
$DO openssl req -new -newkey rsa:2048 -nodes -keyout $FQDN.key -subj "$SUBJECT" -out $FQDN.csr
What if
What if your child gets a challenge: on a board, put marbles, pebbles, or something similar, starting with one and ending with one hundred. How do you know you have the precise number of objects to fulfill this challenge?
Good old AWK
I like awk. It has served me well. The syntax looks like C, it feels like C, but easier. I have written quite some difficult stuff in Perl, for EDI with TLS encryption and such. But, to be honest, I do not like Perl. The syntax is a mess.
So, below are some examples in awk, easily translated into Java, JavaScript, PHP, and C because of the similar syntax. And it solves the problems easily.
Triangular numbers
Actually, the problem we are trying to solve is about triangular numbers. We have a 10 by 10 pallet. The children need to put 1 up to 100 marbles on the pallet. How many marbles does the teacher need to provide? Exactly how many?
awk 'BEGIN {
    sum=0
    for(i=1; i<=100; i++) {
        sum += i
        printf("%3d: %d\n", i, sum)
    }
    sum=0
    for(i=1; i<=100; i++) {
        sum += i
        printf("%d ", sum)
    }
    printf("%d\n", sum)
    exit
}'
  1: 1
  2: 3
  3: 6
  4: 10
  5: 15
( ... )
 98: 4851
 99: 4950
100: 5050
1 3 6 10 15 21 28 36 45 55 66 78 91 105 120 136 153 171 190 210 231 253 276 300 325 351 378 406 435 465 496 528 561 595 630 666 703 741 780 820 861 903 946 990 1035 1081 1128 1176 1225 1275 1326 1378 1431 1485 1540 1596 1653 1711 1770 1830 1891 1953 2016 2080 2145 2211 2278 2346 2415 2485 2556 2628 2701 2775 2850 2926 3003 3081 3160 3240 3321 3403 3486 3570 3655 3741 3828 3916 4005 4095 4186 4278 4371 4465 4560 4656 4753 4851 4950 5050 5050
So the teacher needs 5050 marbles. That's a lot...
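There is also a closed form: the n-th triangular number is n*(n+1)/2, which gives the same answer without the loop.

```shell
# Closed form for triangular numbers: T(n) = n*(n+1)/2
answer=$(awk 'BEGIN { n = 100; print n * (n + 1) / 2 }')
echo "T(100) = $answer"
# -> T(100) = 5050
```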
The chessboard and King Sheram (powers of two)
How to calculate the amount of rice on a chessboard, doubling per square, as told in the story of King Sheram? With plain awk... (see http://en.wikipedia.org/wiki/Wheat_and_chessboard_problem).
We use the following awk code, but note the --bignum flag.
Without it, the calculations can be incorrect (as correctly noticed by Marc Kooij).
awk --bignum 'BEGIN {
    rows=cols=8
    board=cols*rows
    for(sum=i=0; i < board; i++) {
        grains = 2**i
        sum += grains
        row = (i / rows) + 1
        col = (i % cols)
        printf("%d%c: %d grain%s, total %d grain%s\n", row \
            , 65 + col, grains, i?"s":"", sum, i?"s":"")
    }
    exit
}'
1A: 1 grain, total 1 grain
1B: 2 grains, total 3 grains
1C: 4 grains, total 7 grains
1D: 8 grains, total 15 grains
1E: 16 grains, total 31 grains
1F: 32 grains, total 63 grains
1G: 64 grains, total 127 grains
1H: 128 grains, total 255 grains
2A: 256 grains, total 511 grains
2B: 512 grains, total 1023 grains
2C: 1024 grains, total 2047 grains
2D: 2048 grains, total 4095 grains
2E: 4096 grains, total 8191 grains
2F: 8192 grains, total 16383 grains
2G: 16384 grains, total 32767 grains
2H: 32768 grains, total 65535 grains
3A: 65536 grains, total 131071 grains
3B: 131072 grains, total 262143 grains
3C: 262144 grains, total 524287 grains
3D: 524288 grains, total 1048575 grains
3E: 1048576 grains, total 2097151 grains
3F: 2097152 grains, total 4194303 grains
3G: 4194304 grains, total 8388607 grains
3H: 8388608 grains, total 16777215 grains
4A: 16777216 grains, total 33554431 grains
4B: 33554432 grains, total 67108863 grains
4C: 67108864 grains, total 134217727 grains
4D: 134217728 grains, total 268435455 grains
4E: 268435456 grains, total 536870911 grains
4F: 536870912 grains, total 1073741823 grains
4G: 1073741824 grains, total 2147483647 grains
4H: 2147483648 grains, total 4294967295 grains
5A: 4294967296 grains, total 8589934591 grains
5B: 8589934592 grains, total 17179869183 grains
5C: 17179869184 grains, total 34359738367 grains
5D: 34359738368 grains, total 68719476735 grains
5E: 68719476736 grains, total 137438953471 grains
5F: 137438953472 grains, total 274877906943 grains
5G: 274877906944 grains, total 549755813887 grains
5H: 549755813888 grains, total 1099511627775 grains
6A: 1099511627776 grains, total 2199023255551 grains
6B: 2199023255552 grains, total 4398046511103 grains
6C: 4398046511104 grains, total 8796093022207 grains
6D: 8796093022208 grains, total 17592186044415 grains
6E: 17592186044416 grains, total 35184372088831 grains
6F: 35184372088832 grains, total 70368744177663 grains
6G: 70368744177664 grains, total 140737488355327 grains
6H: 140737488355328 grains, total 281474976710655 grains
7A: 281474976710656 grains, total 562949953421311 grains
7B: 562949953421312 grains, total 1125899906842623 grains
7C: 1125899906842624 grains, total 2251799813685247 grains
7D: 2251799813685248 grains, total 4503599627370495 grains
7E: 4503599627370496 grains, total 9007199254740991 grains
7F: 9007199254740992 grains, total 18014398509481983 grains
7G: 18014398509481984 grains, total 36028797018963967 grains
7H: 36028797018963968 grains, total 72057594037927935 grains
8A: 72057594037927936 grains, total 144115188075855871 grains
8B: 144115188075855872 grains, total 288230376151711743 grains
8C: 288230376151711744 grains, total 576460752303423487 grains
8D: 576460752303423488 grains, total 1152921504606846975 grains
8E: 1152921504606846976 grains, total 2305843009213693951 grains
8F: 2305843009213693952 grains, total 4611686018427387903 grains
8G: 4611686018427387904 grains, total 9223372036854775807 grains
8H: 9223372036854775808 grains, total 18446744073709551615 grains
So much rice; I wonder how much it would weigh, and how long, and how many people could eat from it.
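A rough answer to the weight question, assuming about 25 mg per grain of rice (a made-up ballpark figure, not from any source here):

```shell
# Total grains on the board is 2^64 - 1; at ~25 mg per grain, estimate the weight.
summary=$(awk 'BEGIN {
    total = 2^64 - 1                   # ~1.8e19 grains; double precision suffices here
    tonnes = total * 0.000025 / 1000   # 25 mg per grain -> kg -> tonnes
    printf "%.2e grains, about %.0f billion tonnes", total, tonnes / 1e9
}')
echo "$summary"
```

Hundreds of billions of tonnes, so many times a whole year's world rice harvest.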