Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-09-20 13:01:53
last modified: 2008-10-26 03:09:52


As the years of multi-core processing begin to fly by, a clear trend is emerging: more can be far better. Applications now have access to enormous amounts of affordable parallel compute power that was simply unavailable five years ago.

While extreme high-end specialty accelerator cards had been available for a few years, mainstream high-end video card makers began to recognize that their products, too, offered orders of magnitude more compute capability than even the highest-end CPUs.

In this article, we look at five high-end contenders priced to appeal to enthusiast, enterprise power and professional users: ATI's highest-end graphics card, the RV770-based 4870; Nvidia's highest-end offerings, the GTX 280 and T10P Tesla; Clearspeed's CSX700; and Tilera's Tile64.

All of these products are available today, and all of them have proven their ability to significantly increase performance through parallel computing.



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-08 16:06:48
last modified: 2008-10-08 16:07:14

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-12 15:51:01


. . . people may soon get their own supercomputing facilities by exploiting their PC's graphics processors.

Nvidia has been talking up GPUs for a couple of years, and its chief scientist David Kirk told a conference: "If you think about it, this is a massively parallel supercomputer on your desktop. It is truly the democratisation of supercomputing."

The potential is already visible in the results from cooperative projects such as Folding@Home (bit.ly/fathome). The problem is that programs written for CPUs don't usually have any way of accessing GPU power, except for mundane, repetitive tasks such as pixel shading and playing DVDs.

But that could change now that AMD (Intel's main rival) owns ATI (Nvidia's main rival). According to Ian McNaughton, a senior manager with AMD in the UK, AMD is planning multi-core processors that will have both CPUs and GPUs on the same chip.

For consumers who need fast graphics processing for video and games, it would make more sense to have a quad-core chip with two CPUs and two GPUs than four CPUs, he says. No doubt Intel and Via are thinking along the same lines.

It should certainly be possible to enable an on-chip GPU to handle the sort of repetitive parallel processing that would race through an Excel spreadsheet. In which case, the market for deskside supercomputers may not be as big or as profitable as Microsoft and its PC partners hope.



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-25 12:08:35


GPU architectures have steadily become more general-purpose, and the latest models do double-precision floating-point math, which is needed for many scientific applications. In 2007, NVIDIA released a system called CUDA that allows GPUs to be programmed in the C language, making it much easier for scientists to develop and port applications to run on GPUs.
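
As a minimal, illustrative sketch of what those C extensions look like (the kernel and launch parameters below are invented for illustration, not taken from any project mentioned here):

    // saxpy.cu -- illustrative sketch; compile with NVIDIA's nvcc
    // Each GPU thread computes one element of y[i] = a * x[i] + y[i].
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // Host side: launch enough 256-thread blocks to cover all n elements.
    // saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);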

Scientists have already used CUDA for applications in molecular dynamics, protein structure prediction, climate and weather modeling, medical imaging, and many other areas.

BOINC has recently added support for GPU computing. The BOINC client detects and reports GPUs, and the BOINC server schedules and dispatches jobs appropriately. If configured to do so, BOINC can even use a PC's GPU 'in the background' while the computer is in use. Already, one BOINC-based project (http://GPUgrid.net) has CUDA-based applications, and several other projects will follow suit shortly.
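
A hedged sketch of the kind of GPU detection such a client performs (this is not BOINC's actual code, just the standard CUDA runtime calls it could rely on):

    // gpu_detect.cu -- illustrative only; not BOINC's detection code.
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("No CUDA-capable GPU found\n");
            return 1;
        }
        for (int i = 0; i < count; i++) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);               // query each device
            printf("GPU %d: %s, %d multiprocessors, %lu MB global memory\n",
                   i, prop.name, prop.multiProcessorCount,
                   (unsigned long)(prop.totalGlobalMem / (1024 * 1024)));
        }
        return 0;
    }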


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-25 18:15:52


The basic premise is the use of Nvidia's CUDA programming platform (itself closely related to the C programming language) to unlock the increasingly programmable architecture of the latest graphics chips.

On paper, it's extremely plausible. In terms of raw parallel compute power, 3D chips put CPUs to shame. A good recent example is the new room-sized, high density computing cluster installed by Reading University.

Designed to tackle the impossibly complex task of climate modeling, it weighs in at no less than 20 teraflops. That sounds impressive until you realise that just a single example of Nvidia's next big GPU, due this summer, could deliver as much as 1 TFlop. Five four-way Nvidia GPU nodes would therefore match that cluster's raw throughput on paper, so a handful of such nodes will soon offer the same raw compute power as a supercomputer built using scores of CPU-based racks.

Intel's intriguing new GPU, known as Larrabee, is due out in late 2009 or early 2010. Apart from the fact that it will be based on an array of cut-down X86 processor cores, little is known about its detailed architecture. But as Intel's first serious effort to compete in the GPU market, it's a game-changing product.

For [Nvidia's VP of Content Relations Roy] Taylor, of course, the Larrabee project merely confirms that the GPU is where the action is. “Why does Larrabee exist? Why is Intel coming for us? They're coming for us because they can see the performance advantage of our GPUs,” Taylor says.



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-26 03:08:07




This isn't just another article about the latest and greatest video card or how well it handles the latest game titles; it is also meant to explain why the GTX 200 graphics processor is going to change the way we all use computer hardware now and in the future.

Even before the GeForce GT200 GPU, NVIDIA had been consistently dominating the graphics card industry. These days it seems as if the only products that manage to outperform its video cards are other GeForce graphics cards. Industry competitors have been largely unsuccessful at beating NVIDIA, and very recently its biggest rival waved a white flag in surrender and relegated itself to feeding off a low-end market segment just to maintain an identity.

Sometimes, though, I think that you become so good at what you do that you begin to compete with yourself. Not surprisingly, NVIDIA has already anticipated this problem and planned for a solution, which is why this article will cover a lot more than just video game frame rates for the new compute-ready GT200 graphics processor.


More . . .



NVIDIA GPU Computing & CUDA FAQ



Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-28 10:51:17
last modified: 2008-10-28 11:00:25



Accelerators can help address the need for higher performance with more efficient use of power and space, saving data center resources, says Glenn Lupton, engineering team leader in HP's Accelerator Program.


Question: There are a number of accelerators in the market today. Can you give us a breakdown by type?


Lupton: Among the most common accelerators in use today are Graphics Processing Units (GPUs), highly parallel processors capable of hundreds of gigaflops that were originally designed to accelerate graphics applications but now include special features for high-performance parallel computing. Competition among the GPU vendors for market share in the personal computer (PC) graphics gaming market has driven technological advancements in graphics cards.

Researchers have been investigating their use for high-performance computing for a number of years, with success in many areas and large-scale deployments starting this year. Both AMD and NVIDIA have product lines specialized for high-performance computing, specifically NVIDIA Tesla and AMD Stream.



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-28 11:18:15



The first GPGPU (General-Purpose computation on GPUs) technologies for 3D graphics cards appeared several years ago. Modern GPUs contain hundreds of arithmetic units, and their power can be used to accelerate many compute-intensive applications. The current generation of GPUs has a flexible architecture, and together with high-level programming languages and platforms such as the ones described in this article, that flexibility is exposed and made much more accessible.

GPGPU was inspired by the appearance of relatively fast and flexible shader programs that can be executed by modern GPUs. Developers decided to employ GPUs not only for rendering in 3D applications, but also for other parallel computations. GPGPU used graphics APIs for this purpose: OpenGL and Direct3D. Data were fed to the GPU in the form of textures, and computing programs were loaded as shaders. This method had its shortcomings -- relatively high programming complexity, a low data exchange rate between the CPU and the GPU, and other limitations described below.

GPU-assisted computing has been developing rapidly. At a later stage, the two main GPU manufacturers, NVIDIA and AMD, announced their platforms -- CUDA (Compute Unified Device Architecture) and CTM (Close To Metal, or AMD Stream Computing) respectively.

Unlike previous GPU programming models, these were designed with direct access to the hardware capabilities of graphics cards in mind. The platforms are not compatible with each other -- CUDA is an extension of the C programming language, while CTM is a virtual machine that executes assembler code.
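
To make the contrast with the old texture-and-shader plumbing concrete, here is a hedged sketch of a typical CUDA host-side flow (the array size and kernel are invented for illustration): data moves through ordinary memory copies and a kernel launch instead of being packed into textures and driven through a graphics API.

    // cuda_flow.cu -- illustrative host-side flow, not any vendor's sample code.
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 2.0f;                               // stand-in for real work
    }

    void run_on_gpu(float *host_data, int n)
    {
        float *dev_data;
        cudaMalloc(&dev_data, n * sizeof(float));          // allocate GPU memory
        cudaMemcpy(dev_data, host_data, n * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(n + 255) / 256, 256>>>(dev_data, n);      // launch the kernel
        cudaMemcpy(host_data, dev_data, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev_data);                                // release GPU memory
    }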



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-28 11:48:46



Nvidia has shipped close to 100 million processors with the new programmable interface, called Compute Unified Device Architecture, or CUDA. The programming kit has been downloaded more than 150,000 times.

Nvidia’s competitors are more dismissive. Executives at A.M.D. and Intel argue that a rather small set of very sophisticated software can take advantage of the CUDA design. “They are severely restricted and limited,” said Dave Hofer, a director of marketing at Intel. “In the short term, it is not a massive threat.”

Intel plans to release a competing product called Larrabee in 2009 or 2010. A.M.D. is promoting a fledgling programming layer from Apple called OpenCL, or Open Computing Language, which A.M.D. hopes will blunt CUDA’s momentum should it be ready for widespread use as expected in 2009.

Mr. Huang, however, said the competition is underestimating his company’s lead. “We will have shipped 300 million units with CUDA by the time those other guys are ready,” Mr. Huang said. “We probably have a four-year lead on Intel.”


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-10-30 11:10:17
last modified: 2008-10-30 12:29:27


Seeking The Best Performance per Watt for Folding@Home


We got so excited about participating in the Folding@Home project that we built as many high-performance systems as we could, running both the SMP and GPU clients. We were very happy with the results until we received our first electricity bill: our energy consumption more than doubled – and we hadn't even had our systems running 24/7 for 30 days!

Since we still wanted to contribute as much as we could to Folding@Home, we decided to go on a quest to find out whether there is a way to score lots of points at Folding@Home without going bankrupt. We gathered all the video cards we had available in our lab to see which one provided the best performance/consumption ratio.


We found out several interesting things in our investigation. Here is a summary:


  • Of the video cards we analyzed, the GeForce 8800 GT is the one that provides the best cost/benefit and performance/kWh ratios for running Folding@Home. Of course you will get a higher score with a GeForce GTX 260 or GeForce GTX 280, but they are more expensive and will also consume more power. If you think only about the points/kWh ratio (i.e. efficiency; see the short calculation sketch after this list), then the GeForce GTX 260 is the best: it produces more points per kWh consumed than any other video card.

  • A “weaker” video card won’t necessarily consume less power than a “stronger” one. Just see how the GeForce 8800 GTS produces a lower score and consumes more power than the GeForce 8800 GT.

  • ATI video cards should not be used for running Folding@Home: they have a far lower points/kWh ratio compared to nVidia cards. A GeForce 8800 GT provides almost double the efficiency of a Radeon HD 4870. If you are building dedicated systems for running Folding@Home, stick with nVidia: you will get a higher score and a lower electricity bill.

  • Very low-end video cards like the Radeon HD 3450 and GeForce 8500 GT are not efficient for running Folding@Home and should be avoided. From the mainstream market, the GeForce 9500 GT had the best performance and efficiency index (points/kWh), making it our recommendation in this segment.

  • The PlayStation 3 achieved one of the lowest points/kWh ratios, meaning that you will feel an increase in your electricity bill without a meaningful increase in your Folding@Home score. We see lots of people praising the math performance of the PS3, but that performance isn’t converted into a huge Folding@Home score because each PS3 work unit doesn’t give a lot of points.
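
For reference, the points/kWh figure above is simple arithmetic; a rough sketch (the sample numbers below are placeholders, not the article's measurements):

    /* efficiency.c -- rough sketch of the points-per-kWh calculation. */
    #include <stdio.h>

    int main(void)
    {
        double points_per_day = 5000.0;   /* hypothetical daily Folding@Home score */
        double system_watts   = 250.0;    /* hypothetical draw under load */

        double kwh_per_day = system_watts * 24.0 / 1000.0;   /* watts -> kWh per day */
        printf("Efficiency: %.1f points per kWh\n", points_per_day / kwh_per_day);
        return 0;
    }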





More . . .


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-11-06 02:47:04


GeForce GTX 200 GPUs - BEYOND GAMING



More than just games - NVIDIA® GeForce® GTX GPUs accelerate the latest consumer applications - Video transcoding from Elemental Technologies and the Stanford Folding@home project




Steven Pletsch
 
BOINCstats SOFA member
BAM!ID: 36730
Joined: 2007-10-16
Posts: 1306
Credits: 80,949,457
World-rank: 13,627

2008-11-10 19:53:36


http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/11-10-2008/0004921461&EDATE=

I wonder if it comes with a plastic drool guard screen?

4GB
240 CUDA processors

Guest

2008-11-10 20:08:46

I wonder if it comes with a plastic drool guard screen?


It's protected by a large fence of Franklins: "The Quadro FX 5800 graphics board has an MSRP of $3499 USD."
Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-11-15 03:31:44


AMD to Open Up Graphics Processors' Streaming Floodgates


In December, AMD will release a software update that unlocks the ATI Stream acceleration capabilities already built into its ATI Radeon graphics cards. The result, according to the company, will be enhanced performance for applications optimized for the technology. AMD is already working with ISVs to develop versions of software applications that make full use of ATI Stream.

Both AMD and rivals like Nvidia have taken steps recently that extend the usefulness of the GPU and harness its compute power beyond simply rendering graphics.

The sophistication of GPUs has evolved to the point where general computations can be done directly on them, and not just by the CPU alone, Markedon pointed out.

"ATI Stream allows applications, outside of gaming, to automatically run on either processor, depending on whichever one is capable of running it faster," he added. The technology is traditionally used by early adopters in academic, scientific and financial institutions.


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-11-15 23:30:45


AMD's FireStream 9270 Processor Looks to Boost the Chip Maker's HPC Offerings


Advanced Micro Devices is preparing to release its latest FireStream general-purpose GPU for high-performance computing. The AMD FireStream 9270 GPU offers an additional performance boost to better handle high-performance computing and scientific applications.

In addition, AMD is updating its software development kit to allow developers to write more applications that use the FireStream GPU. The AMD FireStream GPU competes against Nvidia’s Tesla 10 series GPU and what Intel’s Larrabee processor will offer when the chip is released.

In addition to FireStream and other developments concerning HPC, AMD also plans to release a new driver for its line of ATI Radeon HD 4000 series graphics cards that will allow consumers to take advantage of ATI Stream technologies. For example, in terms of applications, this driver update will allow a system that uses these discrete ATI Radeon graphics cards to convert high-definition video faster. This ATI Catalyst driver is set for a Dec. 10 release.


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-11-20 14:52:13

nVidia graduates from games to HPC


As part of his speech, Michael Dell talked up the new Tesla Personal Supercomputer, introduced at the show. It's a desktop PC with four GPU processors in it that can generate four teraflops of performance. Just two years ago that would have got it on the Top 500 list.

A high-end, dual processor, quad-core Xeon workstation can manage, at best, 192 gigaflops, according to Samit Gupta, senior product manager for Tesla at nVidia. The price is $9,995, fairly cheap for that level of processing power.



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-11-20 15:57:25


The history of CUDA


Ian Buck talks about his background developing Brook for GPUs at Stanford University and the paths taken in developing a C platform for GPUs.



Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-11-20 16:24:44


Darren Schmidt of National Instruments talks about initial CUDA experience


Darren Schmidt pioneered the first work at National Instruments with CUDA for LabVIEW. He talks about his initial work and what he found in moving his programs over to the GPU.




Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-11-22 15:27:26



Larrabee, Intel's daring attempt at a discrete GPU, still seems to be on track for a potential release to market this time next year, according to recent reports.

According to Intel, Larrabee will be the industry’s first many-core x86 Intel architecture, with the first Larrabee products targeting discrete graphics applications.

Larrabee will support DirectX and OpenGL and will be capable of running existing games and applications. Highly parallel applications, such as scientific and engineering software, will also see great benefits from Larrabee's native C/C++ programming model.

While Larrabee does seem to offer some interesting benefits that give it some distinction, it will still face competition from Nvidia and ATI upon its release, and not simply on graphical gaming performance.


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-12-10 02:38:49



Most PCs today have a minimum of two extraordinarily powerful processing mechanisms, and I'm not talking about "cores." The first is a compilation of multiple cores -- today, more often as many as four per unit, for 16 in a four-way server. The second uses a fundamentally different architecture, designed for pipelining identical instructions that are repeated tens of thousands of times, to be executed in parallel, in a process that on paper resembles the stretching and folding of taffy.

But for most computer users, only the core structure is utilized for everyday tasks. And in retrospect, it's obvious that a huge opportunity has been missed for as long as the past decade -- a chance to use the GPU, which for everyday tasks can be relatively dormant, for more than just graphics. To take advantage of that opportunity, however, the complexities of pairing any one manufacturer's set of cores with any other's assembly of pipelines must be masked from the software developer, who must pay attention to the task of making his application crunch numbers.

So today, Khronos Group -- the consortium of manufacturers and developers born out of the project that made the OpenGL 3D graphics libraries -- has formally published its 1.0 specification for Open Computing Language (OpenCL). Essentially, this is a toolset for constructing applications that can leverage any grouping of CPUs and GPUs, running any operating system, to perform high-speed parallel processing tasks.
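
A minimal sketch against the OpenCL 1.0 C API, just enumerating the CPU and GPU devices an application could target (error handling trimmed; the buffer sizes are arbitrary):

    /* opencl_enum.c -- illustrative device enumeration with the OpenCL 1.0 API. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_uint num_platforms = 0;
        if (clGetPlatformIDs(1, &platform, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
            printf("No OpenCL platform found\n");
            return 1;
        }

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

        for (cl_uint i = 0; i < num_devices; i++) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("Device %u: %s (%s)\n", i, name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU or other");
        }
        return 0;
    }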


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2008-12-30 02:54:09


Nvidia GPUs Report for Grid Computing Duty


BOINC provides the distributed computing grid layer for a number of scientific projects that aim to cure diseases, study global warming and explore space, with the help of volunteer home PCs. Tapping Nvidia GPUs is adding even more power to the massive grid.

The GPUGRID project uses Nvidia-based graphics cards in participating PCs to compute high-performance biomolecular simulations for scientific research. Adding support for Nvidia GPUs led to 1,000 GPUs delivering the same amount of computing power as 20,000 CPUs, the project said.

Gianni De Fabritiis, researcher at the Research Unit on Biomedical Informatics at the Municipal Institute of Medical Research and Pompeu Fabra University in Barcelona, stated that running GPUGRID on Nvidia GPUs "innovates volunteer computing by delivering supercomputing-class applications on a cost-effective infrastructure which will greatly impact the way biomedical research is performed."

Einstein@Home also expects an "order of magnitude" improvement from Nvidia technology, said Bruce Allen, director of the Max Planck Institute for Gravitational Physics and Einstein@Home leader for the LIGO Scientific Collaboration. "This would permit deeper and more sensitive searches for continuous-wave sources of gravitational waves," he said.



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2009-06-17 13:13:13


Jacket is AccelerEyes' GPU engine for MATLAB.

It allows users to run MATLAB code on the GPU: Jacket takes that code and compiles it down to CUDA for GPU acceleration.


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2009-06-19 11:59:03
last modified: 2009-06-21 13:03:28

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2009-06-20 17:00:50


Graphics cards


One of the most interesting trends in high-performance computing will be the increased appearance of GPUs (graphics processing units) in high-end systems. Typically GPUs are used to render high-end graphics on desktop PCs and workstations. But, says Top500.org, as GPUs increase in performance they will become increasingly important to supercomputer makers, as they typically offer a better price/performance ratio than most multi-core CPUs.

The one limitation of GPUs is that they are only good for crunching numbers. They are designed to process graphics, which means processing data streams, and if that capability can be harnessed alongside CPUs, GPUs offer system makers attractive options.

The Tsubame supercomputer at the Tokyo Institute of Technology is already proving the value of GPUs and is the first Top500 supercomputer running Nvidia's Tesla graphics chips. The cluster consists of 170 Tesla S1070 systems running at 170 teraflops.


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,088,993
World-rank: 3,482

2009-06-21 01:23:39


NVIDIA Tesla C1060 Computing Processor


The NVIDIA Tesla C1060 transforms a workstation into a high-performance computer that can dramatically outperform a small cluster. This gives technical professionals a dedicated computing resource at their desk-side that is much faster and more energy-efficient than a shared cluster in the data center. The Tesla C1060 is based on the massively parallel, many-core Tesla processor, which is coupled with the standard CUDA C programming environment to simplify many-core programming.




Index :: Gadgets, Games and Gizmos :: GPU Computing: the Essential Guide