I don't think it's entirely fair to assume that wasn't understood. Even most people with a basic understanding of computers understand that a graphics co-processor (video card) will not work unless you load drivers. Now, there is a lot of general ignorance out there, and every time I have to deal with "I *need* a monitor with *lots* of RAM!" and "I want one of *those* keyboards with lights and buttons so my computer will be as fast as hers" a little piece of my soul implodes.
right - "general ignorance out there". of course installing some thing, loading it's drivers and hoping to see some benefit simply depends on ones ability to get a thing one can make use of.
Keep in mind the context we are discussing here is data crunching on GPUs for high-performance computing and supercomputers. These are areas where the IT techs and project managers will hopefully know better. (Not high-level managers: "We can't have a green data center because corporate colors are blue and gray.")
Are we? AFAIK W7, as a so-called mainstream OS, does use the PhysX extensions if they are available. Don't ask me what the benefits are, I switched it off right away.
In my post, I figured it would cut costs to make an nVidia "co-processor card" without video capabilities. If nothing else, it eliminates the need for video port loop-backs. Right now, I know there is a cost issue. The cards are being mass-produced, so it may not be cost-effective now to make a new, specialized card. However, in the future, it makes sense. Look at SLI with Nvidia. Why not have an "SLI model" that cuts cost by removing the video port, and perhaps... how do I say this... "direct video rendering" capabilities? Just make an SLI co-card that's the same model, but only acts as a co-processor. To the PC and operating system, it's still a video card, just without displaying pretty pictures on a monitor. Similarly with heterogeneous supercomputers. Why not just have cards that don't have DVI ports and don't need loop-backs? At that point, we would essentially have a co-processor card that looks like a video card to the PC.
That's getting things mangled up. First: those video ports are only a very, very small part of production costs, so what the heck. Second, it's an OS thing (for Windoze) that you need to have a monitor attached to a GPU to make use of it. And third - of course it would be possible to tunnel around that and use the card no matter what some silly OS thinks.
Note that I'm still talking about a CUDA or OpenCL environment. The difference is more streamlined, and therefore potentially somewhat cheaper, hardware for the same chipset. No one is talking about duct-taping a CUDA or Cell processor to the motherboard and expecting the data gnomes to suddenly work faster.
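To make that concrete, here is a minimal sketch of how such a card would look to software. This assumes the CUDA toolkit and driver are installed; it is just the standard runtime device query, nothing specific to a hypothetical display-less board. The runtime simply enumerates compute devices, whether or not anything is plugged into a DVI port, and that is all a number-crunching application cares about.

/* Sketch: enumerate CUDA compute devices (compile with nvcc, or gcc plus -lcudart). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        /* No driver loaded or no CUDA-capable device visible to the OS. */
        printf("CUDA not usable: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* Report each device the runtime sees, display attached or not. */
        printf("Device %d: %s, %d multiprocessors, %.0f MB global memory\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}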
Even if some kind of co-processor (and those IGPs are exactly that) has entered the chipset - you either have the software to make use of it or you have a nice piece of sillycone.
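A rough sketch of what "have the software" boils down to in practice - assuming an OpenCL driver/ICD is actually installed, the standard platform query below will report the chip; if nothing answers, the hardware might as well not exist as far as your number-crunching code is concerned (compile with -lOpenCL):

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, NULL, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        /* No ICD/driver installed: the co-processor is invisible to us. */
        printf("No OpenCL platforms found.\n");
        return 1;
    }
    if (num_platforms > 8)
        num_platforms = 8;
    cl_platform_id platforms[8];
    clGetPlatformIDs(num_platforms, platforms, NULL);
    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256] = "";
        cl_uint num_devices = 0;
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        /* Count every device type the platform exposes (CPU, GPU, accelerator). */
        if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices) != CL_SUCCESS)
            num_devices = 0;
        printf("Platform %u: %s, %u device(s)\n", i, name, num_devices);
    }
    return 0;
}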
To sum it up: Intel, AMD, Nvidia or whoever else may come up with anything they like - for the mainstream, it's either x86-compliant or it's special and has to wait until software is available to use it.