It is easy to verify that the Wii's Hollywood is capable of stream processing and is therefore a GPU natively capable of running GPGPU software. Among the proofs: Hollywood can do HDR+AA in real-time graphics such as Monster Hunter 3, Cursed Mountain, Resident Evil: The Darkside Chronicles, Silent Hill, etc. Not even the 24-pipeline Nvidia 7 series could do HDR+AA, something that only became possible with the ATI X1000 series thanks to its new architecture, despite it having only 16 pipelines.
Nvidia 7 series not capable of HDR+AA, but the ATI X1000 is (English):
http://www.xbitlabs.com/articles/video/display/radeon-x1000_7.html
Wikipedia carries the news in Spanish, but without details as to why:
http://es.wikipedia.org/wiki/Serie_Radeon_X
"
Radeon X1000 series
The 0.09-micron Radeon X1000 series appeared in October 2005. It is a completely new architecture that has little in common with previous generations of ATI GPUs. ATI's new Radeon X1000 series includes support for Shader Model 3.0, a new memory controller similar to IBM's Cell, a redesigned cache system, AVIVO, and adaptive antialiasing. In addition, these cards could run antialiasing and HDR simultaneously, something that NVIDIA's 7-series processors cannot do.
"
ATI Hollywood demonstrates HDR+AA in Monster Hunter 3:
http://news.vgchartz.com/news.php?id=3996
"
The demo consists of two quests, hunt either a Kurubekko (little wyvern much like Kut-Ku) or an alpha carnivore, Dosjagi. You can choose between 5 weapons: Great Sword, Sword & Shield, Hammer, Light Crossbow and Heavy Crossbow. Make your selections and off you go! The first thing you'll notice is the spectacular graphics; no, that's no hyperbole. The lighting, the texture detail, anti-aliasing (yes, you read that right, no jaggies), HDR (High dynamic range rendering for us geeks) and other technologies make this one of the prettiest Wii games on the market, period.
"
In fact, GPGPU algorithms can also be used to reduce the performance cost of HDR:
http://transporter-game.googlecode.com/files/RealtimeHDRImageBasedLighting.pdf
"
Computing luminance statistics for a certain scene
might become a heavy operation for real-time applications.
Therefore, a General Purpose GPU parallel
reduction technique has been used to compute in parallel
the minimum, maximum and log average luminance.
"
By the way, HDR is not possible on fixed pipelines without programmable shaders:
http://courses.csusm.edu/cs536xz/student_presentation/HDR%20Rendering.ppt
"
OpenGL fixed-functions cannot be used for HDR.
The programmable pipeline must be used to achieve HDR lighting effects.
HDR is achieved in OpenGL using a fragment shader, written in GLSL or Cg.
"
If some programmers insist that the Wii has no shaders, I think they should get hold of engines like the Athena engine used in Cursed Mountain, or Unity, Gamebryo LightSpeed, etc. I can only suppose two things: they say the Wii has no programmable shaders perhaps because they are using an outdated SDK, such as the first ones released before Hollywood was finished; the other reason must be that they only know DirectX's Shader Model, when it is known that the Wii uses the GX API, which is very similar to OpenGL, so they should learn the OpenGL Shading Language to better understand the Wii's shading system.
Although if they don't want to learn Nintendo's GX API, they can resort to tricks such as using a driver like Mesa to translate the OpenGL calls into the Wii's GX:
http://www.phoronix.com/scan.php?page=news_item&px=NzAyMA
"
A Mesa (OpenGL) Driver For The Nintendo Wii?
Posted by Michael Larabel on January 28, 2009
There is now talk on the Mesa 3D development list about the possibility of having a Mesa driver for the Nintendo Wii. Those working on developing custom games for this console platform have already experienced some success in bringing OpenGL to the Wii through the use of Mesa.
Nintendo has its own graphics API (GX) for the Wii, which is resemblant of OpenGL but still different enough that some work is required to get OpenGL running. A way to handle this though is by having a Mesa driver translate the OpenGL calls into the Wii's GX API. This is similar to gl2gx, which is an open-source project that serves as a software wrapper for the Nintendo Wii and GameCube.
Their first OpenGL instructions running on the Nintendo Wii is shown below (it's not much, but at least it's better than seeing glxgears).
"
Further proof is that the Wii has used tools like Havok FX (available in Havok 4.0) to run physics on the GPU rather than the CPU:
http://www.gamasutra.com/php-bin/news_index.php?story=9308
"
"We wanted the action in our game to focus on interactive elements in a highly intuitive manner," says David Nadal, Game Director at Eden Games. "We knew Havok Physics could help us do that for game-play elements, but we wanted to push the envelope even further to add persistent effects that could interact with game-play elements. Havok's GPU-accelerated physics effects middleware helped us achieve that in surprisingly little time."
Through the use of Havok FX and GPU technology, game developers are able to implement a range of physical effects like debris, smoke, and fluids that add detail and believability to Havok’s physics system. Havok FX is cross-platform, takes advantage of current and next-generation GPU technology, and utilizes the native power of Shader Model 3 class graphics cards to deliver effect physics that integrate seamlessly with Havok’s physics technology found in Havok Complete.
“With Havok FX we can explore new types of visual effects that add realism into Hellgate: London,” commented Tyler Thompson, Technical Director, Flagship Studios. “Given the widespread installed base of GPUs and the incredible performance of the new Nvidia GeForce 7900 boards, Havok FX was a natural choice."
"
The reason Hollywood has not been exploited to even 50% of its capacity can be explained by Tim Sweeney of Epic Games:
http://www.tomshardware.com/news/Sweeney-Epic-GPU-GPGPU,8461.html
"
Tim Sweeney: GPGPU Too Costly to Develop
3:11 PM - August 14, 2009 by Kevin Parrish
Epic Games' chief executive officer Tim Sweeney recently spoke during the keynote presentation of the High Performance Graphics 2009 conference, saying that it is "dramatically" more expensive for developers to create software that relies on GPGPU (general purpose computing on graphics processing units) than those programs created for CPUs.
He thus provides an example, saying that it costs "X" amount of money to develop an efficient single-threaded algorithm for CPUs. To develop a multithreaded version, it will cost double the amount; three times the amount to develop for the Cell/PlayStation 3, and a whopping ten times the amount for a current GPGPU version. He said that developing anything over 2X is simply "uneconomic" for most software companies. To harness today's technology, companies must lengthen development time and dump more money into the project, two factors that no company can currently afford.
"
By the way, in one of Nintendo's displacement mapping patents I found something that revealed the exact name of the technique they can use for displacement mapping. It is known as vertex displacement mapping, and it consists of performing all the necessary work, including the floating-point and arithmetic calculations, on the GPU, while the CPU does only the minimal work of supplying the original vertex positions and nothing more. I verified this when reading the patent and noticed it said something about a texture cache and vertex texture fetching, capabilities only available starting with the ATI HD 2000 series, such as the HD 2900, and which are used for vertex displacement mapping.
Patent Storm
http://www.patentstorm.us/patents/6980218/fulltext.html
"
Command processor 200 receives display commands from main processor 110 and parses them—obtaining any additional data necessary to process them from shared memory 112. The command processor 200 provides a stream of vertex commands to graphics pipeline 180 for 2D and/or 3D processing and rendering. Graphics pipeline 180 generates images based on these commands. The resulting image information may be transferred to main memory 112 for access by display controller/video interface unit 164—which displays the frame buffer output of pipeline 180 on display 56.
FIG. 5 is a logical flow diagram of graphics processor 154. Main processor 110 may store graphics command streams 210, display lists 212 and vertex arrays 214 in main memory 112, and pass pointers to command processor 200 via bus interface 150. The main processor 110 stores graphics commands in one or more graphics first-in-first-out (FIFO) buffers 210 it allocates in main memory 110. The command processor 200 fetches: command streams from main memory 112 via an on-chip FIFO memory buffer 216 that receives and buffers the graphics commands for synchronization/flow control and load balancing, display lists 212 from main memory 112 via an on-chip call FIFO memory buffer 218, and vertex attributes from the command stream and/or from vertex arrays 214 in main memory 112 via a vertex cache 220.
Command processor 200 performs command processing operations 200a that convert attribute types to floating point format, and pass the resulting complete vertex polygon data to graphics pipeline 180 for rendering/rasterization. A programmable memory arbitration circuitry 130 (see FIG. 4) arbitrates access to shared main memory 112 between graphics pipeline 180, command processor 200 and display controller/video interface unit 164.
FIG. 4 shows that graphics pipeline 180 may include: a transform unit 300, a setup/rasterizer 400, a texture unit 500, a texture environment unit 600, and a pixel engine 700.
Transform unit 300 performs a variety of 2D and 3D transform and other operations 300a (see FIG. 5). Transform unit 300 may include one or more matrix memories 300b for storing matrices used in transformation processing 300a. Transform unit 300 transforms incoming geometry per vertex from object space to screen space; and transforms incoming texture coordinates and computes projective texture coordinates (300c). Transform unit 300 may also perform polygon clipping/culling (300d). Lighting processing 300e also performed by transform unit 300b provides per vertex lighting computations for up to eight independent lights in one example embodiment. As discussed herein in greater detail, Transform unit 300 also performs texture coordinate generation (300c) for emboss-style bump mapping effects.
"
Well, if you have any questions, just ask.
Hey
I have the same PDF the author made, in Spanish xD
And now Nintendo has registered the Zii; what could that be?
I am the very same tapionvslink.
http://www.tapionvslink.weebly.com
I haven't yet updated the page with the data I just gave as proof, much less the PDF in Spanish, since I can't find the evidence in that language.
Pardon
I don't usually memorize things like names, but I always keep the original files (since once you edit them they keep changing, then someone else edits them, and in the end you're left with different info).
I'm going to read the post.
With a Nintendo Switch ;) (And my little white one too)
-Greetings from 2022: thanks, SceneBeta, for so many memories and friendships
Now hacking from the hospital :D