Irresistible force? Meet immovable object ...
There is a strong push (well, at least the articles tell us so, and you know, it's not like they are ever wrong … nosiree) to move computing into a cloud. This is sometimes a good idea; there are specific profiles which fit the cloud paradigm. Quite a few profiles, actually. But there are some speed bumps. Literally. Bandwidth has been, and will be, an issue for the foreseeable future. Clouds have limited bandwidth in and out.
Sometimes being right isn't a happy thing
In this post, I wrote
and today, we get confirmation that we hit 15.2% in June. More layoffs have happened since then. GM cut another 1000, Chrysler and Ford have been cutting hard. So when can we start using the correct name for this economic condition? What I can say is that I am seeing signs of life … significant signs of life in the HPC auto community. No, not necessarily the big 3’s large machines.
OT: bad ideas
Saw this, this morning. Then another one. No, this will not impact me/us right now. And as we are converting the LLC into a C-corp, it should have no real impact until we sell the company. But it will impact any small business owner with enough revenue to matter. Heck, if you look at tax returns, it appears that lots of small business owners are wealthy. They aren't, but it would appear that way.
Sun reports Q4 numbers
At the Reg. Not good. In a nutshell? Revenues cratered.
Thursday, Sun shareholders vote on selling Sun to Oracle. I’d call it a foregone conclusion that this deal will be approved. If the regulators don’t approve it … well …
The Reg opines more.
Yeah. We have a reasonably good idea of what is and is not toast. It would not surprise me to see Oracle sell off the hardware business in bits and pieces.
The business side of HPC
John West at InsideHPC has, as usual, an interesting article on rumors of SGI abandoning a bid because of “margin games.”
The market is changing as business gets more challenging. The push toward ever lower, and even negative, margins will drive vendors from the market. At some point those purchasing gear will have to decide whether the cost they pay for driving prices and margins into the ground is worth the benefit they get from doing it.
Twitter Updates for 2009-07-14
* JR5 96TB #[hpc](http://search.twitter.com/search?q=%23hpc) #[storage](http://search.twitter.com/search?q=%23storage) unit ([http://bit.ly/2UhAFV](http://bit.ly/2UhAFV)) hits 2.5GB/s (http://scalability.org/?p=1706) sustained w/256GB file #
* FYI: gigabyte per second #NFS on a single cost-effective box. See http://scalability.org/?p=1708 about this #hpc #storage unit #
Powered by Twitter Tools.
Who says you can't do Gigabyte per second NFS?
I keep hearing this. It's not true though. See below.
NFS client: Scalable Informatics Delta-V (ΔV) 4 unit
NFS server: Scalable Informatics JackRabbit 4 unit
(you can buy these units today from Scalable Informatics and its partners)
10GbE: single XFP fibre between two 10GbE NICs.
This is NOT a clustered NFS result.
root@dv4:~# mount | grep data2
10.1.3.1:/data on /data2 type nfs (rw,intr,rsize=262144,wsize=262144,tcp,addr=10.1.3.1)
root@dv4:~# mpirun -np 4 ./io-bm.exe -n 32 -f /data2/test/file -r -d -v
N=32 gigabytes will be written in total
each thread will output 8.
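That "gigabyte per second" figure is nothing exotic to compute: it is just bytes moved over the NFS mount divided by wall-clock time. A minimal stand-alone sketch of that measurement (this is not io-bm.c; the path and the 8 MB block size are placeholders) would look something like this:

```c
/* Minimal sketch (not io-bm.c): time a large sequential read over the NFS
 * mount and report sustained throughput.  Path and block size are
 * placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

int main(void)
{
    const char  *path  = "/data2/test/file";   /* hypothetical NFS path */
    const size_t block = 8UL * 1024 * 1024;    /* 8 MB reads            */
    char *buf = malloc(block);
    int fd = open(path, O_RDONLY);
    if (fd < 0 || !buf) { perror("setup"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    ssize_t n;
    unsigned long long total = 0;
    while ((n = read(fd, buf, block)) > 0)     /* stream the whole file */
        total += (unsigned long long)n;

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;

    printf("read %.2f GB in %.2f s = %.2f GB/s\n",
           total / 1e9, secs, total / 1e9 / secs);
    close(fd);
    free(buf);
    return 0;
}
```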
It's all in how you do the IO ...
JackRabbit 5U (JR5) 96TB unit, with 8 threads writing to the same file (each one writing to a different section of the file to reduce contention). Write performance is below.
[root@jr5 ~]# mpirun -np 8 ./io-bm.exe -n 128 -f /data/file -w -s -d -v
N=128 gigabytes will be written in total
each thread will output 16.000 gigabytes
page size ... 4096 bytes
number of elements per buffer ... 2097152
number of buffers per file .
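The "different section of the file" part is plain offset partitioning: each writer owns a disjoint byte range of the one shared file, so nobody stomps on anyone else's extent. A minimal sketch of that idea (this is not the actual io-bm.c; the 1 GiB-per-rank size and the /data/file path are just placeholders) could look like:

```c
/* Minimal sketch (not the author's io-bm.c): each MPI rank writes its own
 * disjoint section of one shared file via pwrite(), so ranks do not contend
 * for the same byte range.  Sizes and path are placeholders. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const long long per_rank = 1LL << 30;      /* 1 GiB per rank      */
    const size_t    bufsz    = 8UL << 20;      /* 8 MiB buffers       */
    char *buf = malloc(bufsz);
    memset(buf, rank, bufsz);

    int fd = open("/data/file", O_WRONLY | O_CREAT, 0644);
    off_t off = (off_t)rank * per_rank;        /* this rank's section */

    double t0 = MPI_Wtime();
    for (long long done = 0; done < per_rank; done += bufsz)
        pwrite(fd, buf, bufsz, off + done);    /* write at our offset */
    fsync(fd);
    double secs = MPI_Wtime() - t0;

    printf("rank %d: %.2f GB in %.2f s = %.2f GB/s\n",
           rank, per_rank / 1e9, secs, per_rank / 1e9 / secs);

    close(fd);
    free(buf);
    MPI_Finalize();
    return 0;
}
```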
Puzzle solved ... now good results
Ok, io-bm.c is fixed. I had a typo in a define. That did a pretty good job of removing all the MPI goodness … Fixed, and ran it. Looks like we see good performance, with none of the strange loss of IO that bonnie++ has. This is what we see with verbose mode on.
Writing: 4 threads
[root@jr5 ~]# mpirun -np 4 ./io-bm.exe -n 128 -f /data/file -w -d -v
N=128 gigabytes will be written in total
each thread will output 32.
A mystery within a puzzle ...
In some previous posts I had been discussing bonnie++ (not bonnie, sorry Tim) and its seeming inability to keep the underlying file system busy. So I hauled out something I wrote a while ago for precisely these purposes (I'll get it onto our external Mercurial repository soon). Its job is to push the box(es) as hard as possible, in IO. I built this using OpenMPI on the JackRabbit (JR5 96TB unit) and ran it.
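The overall shape of a tool like that is straightforward: start every MPI rank at the same time, let each rank hammer the storage, take the slowest rank's elapsed time, and divide the aggregate bytes moved by it. A rough skeleton of that structure (not the actual io-bm.c; the per-rank IO is stubbed out here) might be:

```c
/* Rough skeleton of an MPI IO pounder (not the author's io-bm.c):
 * synchronize, time the IO phase on every rank, take the slowest rank's
 * time, and report aggregate bandwidth. */
#include <mpi.h>
#include <stdio.h>

/* stand-in for the real per-rank IO work; returns bytes moved */
static long long do_io(int rank) { (void)rank; return 0; }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    MPI_Barrier(MPI_COMM_WORLD);          /* start everyone together       */
    double t0 = MPI_Wtime();
    long long bytes = do_io(rank);        /* each rank hammers the storage */
    double elapsed = MPI_Wtime() - t0;

    long long total_bytes = 0;
    double    slowest     = 0.0;
    MPI_Reduce(&bytes,   &total_bytes, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(&elapsed, &slowest,     1, MPI_DOUBLE,    MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0 && slowest > 0.0)
        printf("aggregate: %.2f GB in %.2f s = %.2f GB/s\n",
               total_bytes / 1e9, slowest, total_bytes / 1e9 / slowest);

    MPI_Finalize();
    return 0;
}
```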