8x 9800 GX2! MIT researchers build 16-GPU monster-class computer

Kornowski

VIP Member
[Photos of the 16-GPU build]


SOURCE
 
Um, why? It is not like that is going to boost performance that much. All that may do is increase the data throughput of the video cards, but to take full advantage of it the developers would have to code their products to be multithreaded out to multiple GPUs like that.

Basically, this is just a concept that would have no real-world advantage and would mainly be a waste of money.
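To give an idea of what that kind of coding actually looks like, here is a rough CUDA sketch (the kernel and the sizes are made up purely for illustration, not taken from any real product) of an application splitting its own work across every GPU it can find:

[code]
#include <algorithm>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Hypothetical kernel just for illustration: scale one GPU's chunk of the data.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);       // would report 16 on an 8x 9800 GX2 box
    if (deviceCount == 0) return 1;

    const int N = 1 << 24;                  // total work, size picked arbitrarily
    std::vector<float> host(N, 1.0f);
    const int chunk = (N + deviceCount - 1) / deviceCount;

    // Nothing below happens automatically: the application itself has to pick
    // a device and hand it a slice of the work. (A real app would also use one
    // host thread or stream per GPU so the devices run concurrently; this loop
    // drives them one after another just to keep the sketch short.)
    for (int dev = 0; dev < deviceCount; ++dev) {
        const int offset = dev * chunk;
        const int count  = std::min(chunk, N - offset);
        if (count <= 0) break;

        cudaSetDevice(dev);                 // all following calls target this GPU
        float *d = 0;
        cudaMalloc((void **)&d, count * sizeof(float));
        cudaMemcpy(d, &host[offset], count * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(count + 255) / 256, 256>>>(d, count, 2.0f);
        cudaMemcpy(&host[offset], d, count * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d);
    }

    printf("host[0] = %f\n", host[0]);      // expect 2.0
    return 0;
}
[/code]

Point being, none of that partitioning comes for free the way SLI rendering does; the app has to be written for it.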
 
Um, why? It is not like that is going to boost performance that much. All that may do is increase the data throughput of the video cards, but to take full advantage of it the developers would have to code their products to be multithreaded out to multiple GPUs like that.

Basically, this is just a concept that would have no real-world advantage and would mainly be a waste of money.


Amen to that :) Now, of course it does look impressive, and it'll heat your room up to a decent temperature fairly quickly, but I don't see any use for it at all.
 
I noticed in the last picture they have a DVI-to-VGA converter. They can't afford a simple DVI cord when they have the money to spend $20K on a computer? :P
 
would have to code their products to be multithreaded out to multiple GPUs like that.
While poorly coded games gain no performance at all in SLI, and a bit of optimization is needed to get the most out of an SLI setup, games don't need to actually be "multithreaded" to support multi-GPU setups. It doesn't work exactly like multiple CPUs/cores, where you need to make your applications multithreaded to see the full potential of your system. Also, I've seen that you're no big fan of multi-GPU setups, but with recent advances in technology, SLI/CrossFire bring pretty impressive gains in many games/applications, way, WAY better than back when those technologies were introduced.
 
Um, why? It is not like that is going to boost performance that much. All that may do is increase the data throughput of the video cards, but to take full advantage of it the developers would have to code their products to be multithreaded out to multiple GPUs like that.

Basically, this is just a concept that would have no real-world advantage and would mainly be a waste of money.
This was for MIT research; I'm sure it wasn't built just to be someone's gaming machine.
 
*Cough* They have no life *Cough* :P

That is just insane, and a complete waste of money IMO. With the close to $20K they used to build that thing, they could have built a nice rig for about $2K (which is more than enough IMO), spent about $1,100 on a nice 32" 1080i HD LCD TV (that's still only about $3.1K spent), and hell, they could even buy a new car with what's left (nothing special or fancy, but nonetheless) :P

But I have to admit, it's really cool :D
 
*Cough* They have no life *Cough* :P

That is just insane, and a complete waste of money IMO. With the close to $20K they used to build that thing, they could have built a nice rig for about $2K (which is more than enough IMO), spent about $1,100 on a nice 32" 1080i HD LCD TV (that's still only about $3.1K spent), and hell, they could even buy a new car with what's left (nothing special or fancy, but nonetheless) :P

But I have to admit, it's really cool :D
Again, this was built by MIT. They could afford thousands of those setups, and this was for research.



In order to study a real-time artificial vision system, a few students and researchers at MIT (Massachusetts Institute of Technology) set about building this 16-GPU monster-class computer out of eight GeForce 9800 GX2s for research purposes. It is assembled entirely from commercial, off-the-shelf products (MSI K9A2 motherboards, GeForce 9800 GX2s, and so on). Compared with the IBM Cell-based high-throughput computing or elastic cloud computing approaches we have come across, there is no clear picture of how the computing power of the machine's different parts gets integrated. And although this project's computing power is only a drop in the bucket next to those, at least the MIT students ended up with a well-assembled computer built from readily available parts.
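For what it's worth, each 9800 GX2 is really two GPUs on one card, so for compute work the CUDA runtime should report a box like this as 16 separate devices. A minimal sketch (my own, not the researchers' code) that just enumerates whatever GPUs it finds:

[code]
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable devices found\n");
        return 1;
    }
    printf("CUDA sees %d device(s)\n", count);   // 16 expected on 8x 9800 GX2

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("  %2d: %s, %d multiprocessors, %.0f MB\n",
               dev, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}
[/code]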
 
Um, why? It is not like that is going to boost performance that much. All that may do is increase the data throughput of the video cards, but to take full advantage of it the developers would have to code their products to be multithreaded out to multiple GPUs like that.

Basically, this is just a concept that would have no real-world advantage and would mainly be a waste of money.

It's MIT, who cares about a few thousand bucks? :P Plus, they'll definitely find a way to use all that computing power.
 