
GeForce 6800 (NV40) + Radeon X800 (R420) news topic.



[quote:751af98178="VTec"]Answer to Duketheboss's question: [img:751af98178]http://www.hardware.info/images/news/XFXGF6.jpg[/img:751af98178] XFX 6800 Ultra.[/quote:751af98178]Cool, I'd love to have one but I don't have enough money :x :cry: :cry: Maybe I'll get Red Dragon's 9800XT when he buys a new one? :P just kidding :wink:


I'm also looking around for a new card myself (which is why I'm suddenly hanging around here again ;)) and I'm curious what it's going to be :) I've always been an nVidia fan, but I'm currently using an ATI 9800 PRO with complete satisfaction. Ah well, I'll just follow Red Dragon's lead, then I can't go wrong ;)
[quote:49152a301d="VTec"]I'm also looking around for a new card myself (which is why I'm suddenly hanging around here again ;)) and I'm curious what it's going to be :) I've always been an nVidia fan, but I'm currently using an ATI 9800 PRO with complete satisfaction. Ah well, I'll just follow Red Dragon's lead, then I can't go wrong ;)[/quote:49152a301d] Then it will probably be the Asus V999999999999 ;) But that is based on speculation; let's wait until the beginning of next month, by then I expect the reviews to be online.
I'm indeed thinking about that Asus V9999 or a Leadtek A400, one of those two. But first I'm waiting to see how fast the X800 XT turns out to be; the stories on rage3d sound nice, but I'll wait until I've seen some reliable sources. Oh yes, I've already promised my Radeon 9800XT to a mate of mine, so sorry folks, it's no longer for sale.
[quote:7a42310c0d]very nice find!! keep in mind, these frame rates of 68 and the 70% gain increase is on only the R420 Pro, image the XT [/quote:7a42310c0d] 68 FPS in FarCry, and everyone is already wondering at what resolution, but this is only the Pro! Ah well, let's first wait for some reliable sources, yes.
[quote:1a0e6745cc="Red Dragon"]Also nice is this retrospective from Anandtech; it looks back on the NV3x series and is well worth browsing through: http://www.anandtech.com/video/showdoc.html?i=2031[/quote:1a0e6745cc] Worth browsing through? I think this response is really well put again ;) (from b3d, of course) [quote:1a0e6745cc="WaltC"][quote:1a0e6745cc="IST"]http://www.anandtech.com/video/showdoc.html?i=2031 Or how I stopped worrying and learned to love just how biased Anandtech really is.[/quote:1a0e6745cc] I just read it and was going to do a post here myself about it, but you beat me to it...;) I agree, and I thought the whole thing was little more than an apology for nV3x that might've been penned (well, at least coached from the sidelines) by nVidia itself. The whole problem with it is that the article seems to depend on nV40 as a reference for "what was wrong with nV3x" when the R3x0 architecture has been serving the purpose of illustrating what was wrong with nV3x for the past 18 months...;) This article would have been timely and somewhat informative 8-12 months ago, using R3x0 as a reference (nV40 not required.) As it is, though, it's just an apology the purpose of which is to try and promote nV40 because it is "so much better" than nV3x. The truth is that "what was wrong" with nV3x is that it fell far short of R3x0, and what is "right" about nV40 is that its design is much closer to that of R3x0 than nV3x ever was. I think a good sidebar option for this article might be: "What took us so long at AnandTech to figure out what was wrong with nV3x?" The article did, finally, put some facts into writing that AT should have written about long ago. Notable were these: [quote:1a0e6745cc="Anandtech"]Behind NV3x is a 4x2 pixel pipe (though there was some confusion over this we will get to later). ... ...rather than coloring a pixel, a z or stencil operation can be performed in the color unit. This allows NV3x to perform 8 z or stencil [b:1a0e6745cc]ops[/b:1a0e6745cc] per clock and NV40 to perform 32 z or stencil [b:1a0e6745cc]ops[/b:1a0e6745cc] per clock. NVIDIA has started to call this "8x0" and "32x0", respectively, as [b:1a0e6745cc]no new pixels are drawn.[/b:1a0e6745cc][/quote:1a0e6745cc] (Emphasis mine.) Finally, the fact that "ops" per clock in no way resemble pixels per clock is at last sinking into the mainstream hardware press enough to actually see its way to print...;) Only took them--let's see--about 18 months to understand the very simple premise. I guess we can rejoice over small miracles whenever they occur. So why did so many people trip over their shoelaces over something so small as understanding the differences between "ops" and "pixels," differences which are substantial, fundamental, and obvious? I think the following quote from the article explains it: [quote:1a0e6745cc="AnandTech"]Of course, there is more to graphics performance than how many pixel pipes are under the hood.[/quote:1a0e6745cc] Of course there is, and no one has ever said differently. The problem was that some interpreted that since there's "more to graphics performance in a gpu than the number of pixel pipes," it means, somehow, that the number of pixel pipes in a given gpu [i:1a0e6745cc]isn't important[/i:1a0e6745cc] or germane to a discussion of that gpu's performance, or that the number of pixel pipes in a given gpu were [i:1a0e6745cc]flexible[/i:1a0e6745cc] to the degree that pixel pipes and the number of ops per clock were freely interchangeable in a discussion if not indistinguishable in reality. 
Which of course is sheer nonsense...;) This "confusion," in my opinion, originated after nVidia misrepresented the pixel pipeline organization of nV30 as "8x1" when it was later revealed that nV30 was clearly 4x2 (despite nVidia's official representation of the 8x1 spec.) nVidia apologists came out of the woodwork to "explain it" away under a blanket of confusion which did little more for me except to underscore that they themselves didn't know the difference between a pixel and an "op." To me, it was analogous to not knowing the difference in the human body between a leg and a toe...;) [i:1a0e6745cc] Prior to those events[/i:1a0e6745cc], I do not recall any such general confusion existing as to the fact that all gpus have a fixed number of pixel pipes in their physical architectures, and that knowing that number was a decent baseline from which to begin a meaningful investigation into the target gpu's overall performance profile. It's just a baseline, but it is a fundamental baseline to have if enlightenment is expected. Without such unambiguous knowledge of baseline specifications, confusion will reign. Where the article here well and truly falls short of the mark is in the fact that the blame for the "confusion" has to go to nVidia, since nVidia was content to allow such misrepresentations and misunderstandings to fester and grow so long as nVidia perceived some direct PR advantage in doing so. That was obviously the company's motivation behind dishonestly representing the 8x1 organization in the first place (since R300 was indeed 8x1 and nVidia was cognizant of the difference): nVidia was not in the least confused about why it would be better to state an 8x1 organization for nV30 (even if it was in reality 4x2.) Had nVidia not thought such a statement, even if dishonest, would help it in some fashion, it would never have made the bogus 8x1 claim for nV30 in the first place. No one who designed nV3x is in the slightest "confused" about the fundamental differences between an op and a pixel, I'm quite sure. I thought this next comment was particularly insightful, and was glad to see this make its way into print as well: [quote:1a0e6745cc="AnandTech"]They needed the performance leap, and now they will be in a tough position when it comes to making money on this chip. [b:1a0e6745cc]Yields will be lower than NV3x[/b:1a0e6745cc], but retail prices are not going to move beyond the $500 mark.[/quote:1a0e6745cc] (Emphasis mine.) If the yield picture is indeed worse than it was for nV30 (which was cancelled) or nV35/38 (which didn't see the light of day in any sort of "quantity" until late in the 3rd calendar quarter of '03 and beyond), then the picture for nV40 is extremely bleak. Time will tell about this, though, and if nVidia can successfully yield mass-market quantities of nV40, a gpu manufactured on the same .13 process as nV35/8, but with nearly *double* the transistor count, my hat will be off to them as this will be quite an achievement, indeed. Because, just as the question, "If a tree falls in the forest where no one can hear it, does it make a sound?" is an interesting one in some respects, so is this question: "Will nV40 actually matter if nVidia is unable to ship it in meaningful quantities?" In 30-90 days we should have an answer to the vexing philosophical question: "How many nV40s can you fit on the head of a pin?" 
The answer must be "all of them" or "none"...;) If the answer is "all of them" it will mean yields are too poor for the nV40 to sell in meaningful quantities; if it is "none" it will mean yields are OK and even one nV40 is too large to fit...Heh...;) OK, enough philosophy...:D [/quote:1a0e6745cc] That bit about yields is quite interesting, by the way.
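The distinction WaltC keeps hammering on, ops per clock versus pixels per clock, is easy to see in back-of-the-envelope fill-rate arithmetic. Below is a minimal Python sketch: the pipe and op counts follow the quoted Anandtech description, while the clock speeds are assumed round numbers used purely for illustration, not official specs.

[code]
# Back-of-the-envelope fill rates: why "ops per clock" and "pixels per clock"
# give very different headline numbers. Clock speeds below are assumed round
# figures for illustration only.

def per_second(per_clock, clock_mhz):
    """Millions of pixels or ops per second for a given per-clock count."""
    return per_clock * clock_mhz

# NV3x-style 4x2 layout: 4 pixels/clock, but 8 z/stencil ops/clock ("8x0").
nv3x_clock = 475  # MHz, assumed
print("NV3x Mpixels/s:       ", per_second(4, nv3x_clock))
print("NV3x z/stencil Mops/s:", per_second(8, nv3x_clock))

# NV40-style 16x1 layout: 16 pixels/clock, 32 z/stencil ops/clock ("32x0").
nv40_clock = 400  # MHz, assumed
print("NV40 Mpixels/s:       ", per_second(16, nv40_clock))
print("NV40 z/stencil Mops/s:", per_second(32, nv40_clock))
[/code]

Quoting the ops figure as if it were a pixel figure doubles the apparent throughput of a 4x2 design, which is exactly the 8x1-versus-4x2 confusion described above.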
I also noticed on that screenshot with 68 FPS that it was taken with FRAPS! There is a minimum size for the FRAPS overlay window that cannot be changed, so someone figured out that the resolution was probably 1600x1200, with 2x AA and 4x AF I think. Those are superb scores :S Edit: just spotted on tweakers: http://www.tweakers.net/nieuws/32091 'Radeon X800 XT pas in juni, meer pipelines dan X800 Pro' (Radeon X800 XT not until June, more pipelines than X800 Pro) - the XT has 16 pipelines enabled :D:D
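For what it's worth, the reasoning above can be sketched out: the FRAPS counter is drawn at a fixed pixel size, so its apparent size in a rescaled photo hints at the original render resolution. A hypothetical Python sketch of that deduction follows; the overlay height and the measurements are made-up placeholders, not values taken from the actual screenshot.

[code]
# Hypothetical sketch of the deduction described above: the FRAPS counter has
# a fixed on-screen pixel height, so its apparent height in a downscaled
# screenshot reveals the scale factor and hence the original resolution.
# All numbers here are placeholders, not measurements from the real photo.

FRAPS_COUNTER_HEIGHT = 16                      # px, assumed fixed overlay height
CANDIDATE_HEIGHTS = [768, 1024, 1200, 1536]    # common vertical resolutions

def guess_height(measured_counter_px, measured_image_height_px):
    """Scale the image height by the same factor the overlay was shrunk by."""
    scale = FRAPS_COUNTER_HEIGHT / measured_counter_px
    estimated = measured_image_height_px * scale
    # Snap to the nearest plausible vertical resolution.
    return min(CANDIDATE_HEIGHTS, key=lambda h: abs(h - estimated))

# e.g. the counter measures 10 px tall in a 750-px-tall downscaled screenshot:
print(guess_height(10, 750))  # -> 1200, i.e. 1600x1200
[/code]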
[img:5467337d5d]http://www.oc.com.tw/article/0404/imgs/ati-RANCH-012.jpg[/img:5467337d5d] [quote:5467337d5d="CJ"] Some more info... The X800 will use 500 and even 600 MHz GDDR3 memory. Seems to me the Pro uses 500 MHz and the XT 600 MHz? Edit: A new 3Dc technology is also mentioned. It appears to have something to do with (texture?) compression, and both Half Life 2 and Far Cry are going to support it. After E3 more games that support 3Dc will be announced.

// ANS: In fact, Pixel Shader 3.0 with 2.0 effects, to the naked eye said is no difference, simple saying, the majority of games the instruction collection which need with the formula, in Pixel Shader 2.0 then sufficiently deals with, in addition within the short time cannot have any support Pixel the Shader 3.0 software urgent needs use such specification, much less present Pixel the Shader 2.0 specifications by no means completely display, if demands guides Pixel Shader 3.0, then draws a chart the chip operation ability is can carry out smoothly, or the undecided number, looked from the ATi angle, The increase operation ability can compare increases other functions to come to have the significance.

// There is no difference between an image rendered by PS 2.0 and an image rendered by PS 3.0. PS 2.0 is good enough to handle most of the shader instructions in games. Also, there won't be many games that will support PS 3.0 in the foreseeable future. And PS 2.0 is still not fully coded to show its strength. If forced to produce a chip that can render PS 3.0 code, the performance of the chip may not be high enough to produce smooth frame rates. From our point of view, greatly enhancing the PS 2.0 units on our products sounds more meaningful to consumers.[/quote:5467337d5d]
I would just wait for the benchmarks first instead of reading FPS off a picture taken at the ATI presentation day; that really doesn't say much. As for that photo from hjs that I see here: same cooler as the 9800XT and one molex connector, so I gather the card runs less hot than the 6800 Ultra, which already gives it an advantage there. Power supply requirements will probably be the same as for the 9800XT, I guess.
Grabbed another bit from CJ (handy guy to have around ;) ) [quote:22284319aa][b:22284319aa]R420, Radeon X800PRO/XT features new compression[/b:22284319aa] 3Dc revealed By Fuad Abazovic: Thursday 22 April 2004, 21:06 THERE'S ONLY one thing about the new ATI marchitecture that we haven't reported on yet. ATI calls it 3Dc and it's a new way of compression that this chip will include. I guess ATI didn't want us in Canada because it didn't want us to learn about its new chips and its capabilities too soon. The last piece of the puzzle is this 3Dc compression that ATI is willing to supply as an open standard and this marchitecture looks very impressive, at least on paper. 3Dc will be able to perform at a similar level of quality as normal non-compressed textures, we learn, but it won't be able to compress all data. It might lose information about light and shadow stuff. It will certainly speed things up, and the R300 marchitecture tweaked up to R420 runs things faster with it than without. ATI is claiming 8.8 to 1 compression for R300 if I remember correctly and it was always rather good with these optimisations. However, I'd urge caution about these claims, as the R300 could compress data at an 8.8 to 1 ratio only if you were talking about a black screen with information about only the black shade. It's another piece of BlueCrystalKit to make your 950/1000MHz memory look better, that's for sure.[/quote:22284319aa] http://www.theinq.net/?article=15487 [quote:22284319aa] I've always said that it doesn't make sense to upgrade your video card until you've seen what both manufacturers have put forth. This methodology works extremely well since ATI and NVIDIA have relatively similarly timed product cycles. I've seen a number of people preparing to upgrade their computers based on the GeForce 6800 Ultra results we've published recently, but what I'd caution here is the same thing we've been saying for a long time - wait. At worst, you'll wait less than a month and be greeted with another NV30; at best, you'll save yourself some money. I won't tell you which one the outcome will be, but I'd just caution you to wait, just a little bit longer - we're finally upon the home stretch. Now if we could only get some good next-generation content to take advantage of these cards.[/quote:22284319aa] http://www.anandtech.com/weblog/index.html?bid=90 Well, if Anand says wait, then I guess we'll have to wait. :roll: I think the X800 is going to be a beast.
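3Dc later became documented as a two-channel format aimed mainly at normal maps: only the X and Y components of a tangent-space normal are stored, and Z is rebuilt in the pixel shader. As a loose illustration of that idea (not ATI's actual encoder or shader code), a small Python sketch:

[code]
import math

# Loose illustration of the idea behind 3Dc-style normal map compression:
# store only the X and Y components of a unit tangent-space normal and
# reconstruct Z at runtime as sqrt(1 - x^2 - y^2).
# This sketch ignores the block-compression part of the real format.

def encode_normal(nx, ny, nz):
    """Keep two channels, drop Z (assumes tangent-space normals with z >= 0)."""
    return (nx, ny)

def decode_normal(nx, ny):
    """Rebuild the dropped Z component from the unit-length constraint."""
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    return (nx, ny, nz)

# Example: a normal tilted slightly away from straight up.
n = (0.3, 0.1, math.sqrt(1.0 - 0.3**2 - 0.1**2))
print(decode_normal(*encode_normal(*n)))  # close to the original normal
[/code]

Storing two channels instead of three or four, plus block compression of those channels, is where the bandwidth saving would come from.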
http://www.driverheaven.net/articles/driverIQ/ IQ not entirely okay on the 6800? [quote:f33c3ae869]When I first read the reviews of the 6800 Ultra I was very impressed to see the problems which affected the NV3x being resolved. There were a few issues with IQ mentioned – such as those in Farcry – however these were to be fixed in a future driver and appear to be legitimate bugs. I feel this is not the case however for the issues described above. 3Dmark as we all know is a synthetic benchmark, the purpose of which is to compare cards on a level playing field. Based on the changes in mipmap/textures on the NV40 this is not the case. It is disappointing that with such a technology lead over anything currently released that Nvidia have driver issues such as those described above. Max Payne 2 raises a different question altogether. It's not synthetic and it's really a user experience issue. When you're playing away at 100+ fps I have to be honest and say that it's not a noticeable change in IQ over the Radeon. You'd be hard pushed to say which image is optimised with the mipmap square. To me though, it's not a matter of what I can see so much as the fact that this changing of textures is happening behind the user's back. I have selected maximum quality on my £400 graphics card and I fully expect it to have Max Quality…not lesser quality than a much cheaper competitor's product. The final issue is why have Futuremark stated that these drivers should be considered as approved? It's very clear to me that the 6800 Ultra is not rendering the reference image as desired by 3Dmark, regardless of any performance impact, and therefore isn't comparable to any other card. This in result is no different to the optimisations in previous drivers which Futuremark have frowned upon. We have forwarded Nvidia and Futuremark links to this article and have requested their comments on the findings. Hopefully the answers to the issues raised will resolve our concerns.[/quote:f33c3ae869]
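The comparison DriverHeaven describes boils down to diffing the card's output against 3DMark's reference image. As a rough sketch of that kind of per-pixel check (file names are placeholders; this is not DriverHeaven's actual tooling), something like the following with Pillow would do:

[code]
# Minimal sketch of a reference-image comparison: diff a card's 3DMark frame
# grab against the reference image pixel by pixel.
# File names are placeholders, not real captures.
from PIL import Image, ImageChops

ref = Image.open("reference_frame.png").convert("RGB")
card = Image.open("nv40_frame.png").convert("RGB")

diff = ImageChops.difference(ref, card)
bbox = diff.getbbox()  # None if the images are identical

if bbox is None:
    print("Frame matches the reference exactly.")
else:
    # Count pixels that differ in any channel to get a rough mismatch figure.
    mismatched = sum(1 for px in diff.getdata() if px != (0, 0, 0))
    total = diff.width * diff.height
    print(f"{mismatched}/{total} pixels differ (bounding box {bbox})")
    diff.save("difference_map.png")
[/code]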
