ATI tries shared memory again

Alexvrb

Established Member
Fancified Shared Memory with a new name. Thanks ATI.

Alright, it's not exactly your typical shared memory solution. They're using the superior speed of PCI Express, combined with some on-card memory (hopefully at least half), and smart algorithms to try to get the textures onto the card when they need to be there. The only problem is that now it's borrowing from your main memory, which isn't cheap when you're banking on the fast DDR/DDR2 of tomorrow. I'd rather spend the extra $30-50 on the card than have to buy another memory stick. Plus it could incur a performance hit on everything else, even assuming it works properly and the graphics card itself doesn't take a hit. And what happens when the textures in use exceed on-card memory?

There are good and bad things about this, I just hope they leave it for lower end solutions only. I mean, it still beats the tar out of older shared memory designs. Either way, it is pretty interesting.
 
Hmm, I have to say I agree with you. If I paid for 2GB of memory, I want to be able to USE 2GB of memory. OTOH, if it's fast enough, it could let you increase the amount of texture memory available to your card as you need it. I don't think we'll see anything using more than 512MB for quite a while, though (and that's quite a hefty chunk of RAM to dedicate to your video card).
 
Well, it used to be that you could assume main memory would be pretty much saturated by CPU activity, but with dual-channel designs there can - in principle - actually be a surplus of throughput. If it's structured properly, I don't see why it wouldn't be a good idea to try to take advantage of that.
 
Originally posted by ExCyber@Sep 18, 2004 @ 12:25 AM

Well, it used to be that you could assume main memory would be pretty much saturated by CPU activity, but with dual-channel designs there can - in principle - actually be a surplus of throughput. If it's structured properly, I don't see why it wouldn't be a good idea to try to take advantage of that.

Again, it has its ups and downs. I think it should be an OPTIONAL feature that can be enabled/disabled in the drivers. The other thing is, I think a lot of what they're talking about should be the responsibility of the game developers, not the GPU manufacturers; the game engine knows its own needs best, including how and when to move textures to and from main memory.

There are other problems with relying on fast, modern dual-channel setups. The main one that comes to mind is that those aren't the systems that need to save money on the GPU, if you get my drift. But the cheaper systems that need help on this front... let's just say your typical Celerons aren't going to have super ultra DDR-9 quad-channel with leet sauce. They probably still ship them with PC133 and just aren't telling anyone. Sharing memory on one of those would probably suck.

I think it could be very useful for integrated graphics or perhaps very compact laptop cards.

Oh! One inexpensive setup that has a massive surplus is dual-channel Socket A systems. They have almost half their bandwidth left after the CPU bus is saturated. It could be great for one of those.
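For the curious, here's the back-of-the-envelope math behind that surplus, assuming an Athlon XP with a 400 MT/s front-side bus and dual-channel DDR-400 (peak numbers only; real-world throughput will be lower):

```python
# Rough peak numbers in MB/s; both buses are 8 bytes (64 bits) wide.
fsb_peak = 400 * 8            # 400 MT/s front-side bus -> 3200 MB/s max CPU draw
ram_peak = 2 * (200 * 2 * 8)  # two channels of DDR-400 -> 6400 MB/s total
surplus = ram_peak - fsb_peak
print(surplus, surplus / ram_peak)  # 3200 0.5 -> about half the bandwidth is spare
```

So even with the CPU bus fully saturated, roughly half the memory bandwidth is left over for something like a shared-memory graphics scheme to use.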
 
Really? I didn't know that. Would that apply to my nforce2 board with two sticks of PC3200?
 
I don't know if its cool for desktop performance...

I have a Qbic Q3401 (Soltek) P4 3GHz with 512MB of DDR400 (shared with the graphics chip), and some desktop apps like Paint Shop Pro slow down strangely when I zoom in on big pictures or copy/paste images (on my old P3 500MHz with a standalone graphics card, I never ran into this kind of slowdown...).

That's pretty dramatic: I bought a computer 6 times faster, with 32-times-faster RAM, and I'm slower in Paint Shop Pro!
 
Originally posted by it290@Sep 18, 2004 @ 08:44 AM

Really? I didn't know that. Would that apply to my nforce2 board with two sticks of PC3200?

Yes. Granted, you will see some performance increase, especially when other things are accessing main memory, and that minor boost shows up in benchmarks. But it is not drastic. On the other hand, you will see a marked improvement comparing a Socket 754 chip against an otherwise identical Socket 940 chip with its dual-channel memory controller. That's because you can be using the same chipset, CPU clock, and bus speeds, but the dual-channel memory interface is in the CPU itself.

Oh, one more thing: I assume your CPU is running at a 400MHz FSB, yes?

Fonzie: Well, first off, the technology I'm talking about isn't out yet, and it would still be vastly superior to the shared memory you're using. But anyway, when you spend that much money on a computer, you should set aside a little cash for a discrete graphics solution. If you don't game, something like a cheap Radeon 9200/9600 would be great.

Err, one more thing. I guarantee you that your "DDR400" memory is NOT 32 times faster than the memory in your Pentium III. Even if it's set up for dual-channel operation (two sticks of the exact same speed and size), it would be between 8 and 12 times faster at most.
 
Yes, it's running at a 400MHz FSB. Actually, I just built another machine for my neighbor (yesterday) with an Athlon 64, two sticks of PC3200, and an nForce3 mobo... that thing is smokin' fast. Shame it's running XP right now; I'd like to see how it would fly with Gentoo installed.
 
It's a Socket 754 chip though, right? I'd love to eventually have a 940 with the next nForce (last I heard they still weren't sure if it was going to be called nForce 4, but the Nvidia APU is back!). Still, any Athlon 64 is pretty solid. I don't know what the hell they're doing with Sempron pricing, since for only a little more than a Socket 754 Sempron you can get a "low-end" retail Athlon 64 2800+ for $142 shipped. As for the Socket A sempr0ns, they're often pitted price-wise against superior Athlon XP models (you have to look at FSB and clock frequency instead of the rating, since Semprons are rated against Celerys).
 
it would be between 8 and 12 times faster at most.

OK, I was just calculating PC3200 (DDR) / PC100 (single data rate) = 32... So, sorry for the lack of knowledge.

If you don't game, something like a cheap Radeon 9200/9600 would be great.

So you think the fact that I'm only using my Intel graphics chip is slowing down my desktop apps a lot?

I can't stand that my 3GHz is 10+ times slower than my crappy 500MHz in Paint Shop Pro!!! (I sometimes get one-second freezes just pasting a small pic :omg: :( :/ ).

Thx
 
Originally posted by fonzievoltonov@Sep 21, 2004 @ 05:06 PM

So you think the fact that I'm only using my Intel graphics chip is slowing down my desktop apps a lot?

I can't stand that my 3GHz is 10+ times slower than my crappy 500MHz in Paint Shop Pro!!! (I sometimes get one-second freezes just pasting a small pic :omg: :( :/ ).

Thx

It could be part of the problem; I don't know how much it affects things, though. It shouldn't be slower for most tasks, so hopefully nothing else is wrong with your system. You do have the latest mainboard drivers (including graphics) and other drivers, right?

As for the memory speed, the naming scheme changed between SDR and DDR. PC100 means it runs at 100MHz, and its bandwidth is 800MB/sec. PC3200 is often called "DDR400" because it runs at 200MHz and transfers data on both clock edges, for an effective speed of 400MHz. But its bandwidth is 3.2GB/sec, as the name PC3200 implies - that's how all DDR memory is named, starting at PC1600. So if you have single-channel PC3200, your memory is really only 4 times as fast.
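A quick sketch of that naming arithmetic (a minimal illustration; the 8-byte factor is the standard 64-bit DIMM bus width):

```python
# Peak bandwidth = clock (MHz) x transfers per clock x bus width (8 bytes per DIMM).
def peak_mb_s(clock_mhz, transfers_per_clock):
    return clock_mhz * transfers_per_clock * 8

pc100 = peak_mb_s(100, 1)   # SDR at 100 MHz -> 800 MB/s
pc3200 = peak_mb_s(200, 2)  # DDR at 200 MHz (effective 400) -> 3200 MB/s
print(pc100, pc3200, pc3200 // pc100)  # 800 3200 4
```

The DDR module names are just that bandwidth figure: 3200 MB/s gives "PC3200", which is why single-channel PC3200 works out to 4x PC100, not 32x.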
 