Testing our new secret filesystem... no caching or silly business going on here, all raw spindles and extremely suspect but reproducible fork bombs courtesy of tcsh.
[root@compute001]# foreach h (`cat hosts`)
foreach? ssh $h /mnt/share/jcuff/dd.sh $h &
foreach? end
Where dd.sh is simply a 16-way fork bomb:
#!/bin/tcsh
foreach h (`seq 1 16`)
  dd if=/dev/zero of=/mnt/jcuff.$h.$1 bs=1024k count=64000 >& /mnt/jcuff.$h.$1.out &
end
Which makes for THREE HUNDRED AND TWENTY concurrent jobs (16 per host across 20 hosts), each writing out 64GB, or in layman's terms basically TWENTY TERABYTES of output.
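If you don't trust my shouty capitals, the arithmetic sanity-checks in a couple of lines of shell (plain Bourne shell this time, sorry; the 20-host count is from our hosts file, and the 16 processes and 64GB per dd are straight out of dd.sh):

```shell
# Sanity check of the totals: 20 hosts x 16 dd processes x 64GB each.
hosts=20       # entries in the hosts file
per_host=16    # dd processes forked by dd.sh on each host
gb_per_dd=64   # bs=1024k count=64000 -> 64000MB, i.e. 64GB

jobs=$((hosts * per_host))
total_tb=$((jobs * gb_per_dd / 1024))
echo "$jobs concurrent jobs, about ${total_tb}TB of output"
```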
How would your current network attached storage hold up under this type of crazy load?
Could it?
Oh and I should say that this file system has another slightly unique property:
[root@localhost ~]# df -H
Filesystem             Size  Used Avail Use% Mounted on
10.10.187.101:/export  1.2P  1.7T  1.1P   1% /mnt/
This one seemed to: in only 25 minutes we had generated our 20 Terabytes!
As an aggregate speed we saw:
[root@compute001]# cat *.out | grep bytes \
  | awk '{print (gbpersecond=gbpersecond+$8)/1024 " GB/s"}' \
  | tail -1
14.1553 GB/s
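For anyone squinting at the `$8` in that awk: dd's summary line looks like `67108864000 bytes (67 GB) copied, 1480.1 s, 45.3 MB/s`, so field 8 is each process's MB/s figure, and a running sum divided by 1024 gives aggregate GB/s. A wee demo on two made-up dd lines (the numbers are hypothetical, the mechanics are the same):

```shell
# Two hypothetical dd summary lines; $8 is the MB/s field, so the
# accumulating sum divided by 1024 yields an aggregate GB/s figure.
printf '%s\n' \
  '67108864000 bytes (67 GB) copied, 1480.1 s, 45.3 MB/s' \
  '67108864000 bytes (67 GB) copied, 1490.7 s, 45.0 MB/s' \
  | grep bytes \
  | awk '{print (gbpersecond=gbpersecond+$8)/1024 " GB/s"}' \
  | tail -1
```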
We also confirmed speeds with a fancier MPI run (still powered by tcsh ;-)):
[root@compute001 jcuff]# cat mpi.sh
#!/bin/tcsh
foreach h (`cat hosts`)
  ssh $h "/usr/bin/mpirun -host $h -np 36 /mnt/share/openmpi_IOR/src/C/IOR -b 2g -t 1m -F -C -w -k -e -vv -o /mnt/file1_p36_3.$h >& /mnt/file1_p36_3.$h.out" &
end
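For the curious, here's my flag-by-flag reading of that IOR invocation, going by the IOR documentation (this is an annotated fragment, not something to paste and run unless you also have IOR built where we do):

```shell
# What the IOR one-liner above is asking for, per the IOR docs:
#   -b 2g   block size: each task writes 2GB in total
#   -t 1m   transfer size: 1MB per write call
#   -F      file-per-process rather than a single shared file
#   -C      reorder tasks so read-back misses the local page cache
#   -w      write test only
#   -k      keep the test files afterwards
#   -e      fsync at the end of the write phase
#   -vv     extra verbose output
#   -o ...  test file path (one per host here)
/usr/bin/mpirun -host $h -np 36 /mnt/share/openmpi_IOR/src/C/IOR \
  -b 2g -t 1m -F -C -w -k -e -vv -o /mnt/file1_p36_3.$h
```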
Needs a whole lot more testing, with like real science and stuff, but as a start with a dodgy three-line tcsh script, this looks to be a mighty fine file system ;-) This is a teaser: I'm not giving out makes and models or any of that, I just wanted to let folks know we are building an awesome monster here! Oh, and also to see if folks will ever stop making fun of me for using tcsh. Hehehe!
Summary: 320 jobs, 20 hosts, 20TB output @ 14GB/s to 1.2PB