Mac vs. PC thread

Originally posted by ExCyber@Oct 5, 2003 @ 08:18 AM

Yeah, I'm still baffled by a lot of the PC games that get rave reviews; sometimes it seems like any mediocre FPS with nice graphics gets lauded by the press and the fans; the last PC game I played and thought was really good is Starcraft. Maybe that means I don't play enough PC games, but I think it's mostly that the PC scene is driven more by technology than by artistry - everyone seems to talk about whose engine a game is based on or what graphics card you need to run it or how it's too easy for cheaters to ruin the game, but the level of real appreciation seems to be weak. I mean, when was the last time you saw a phenomenon like Ikaruga on PC, with people more or less swooning at its feet and writing sonnets about how impressive it is? This sort of thing seems to happen at least once every couple of years in the console scene...

Not only that: we see more detailed and complex 3D models in most console games than in PC games, but there is always the perfect excuse that "PC games run at higher resolutions". I believe that is one of the cheapest arguments a person can fall back on.
 
A 64-bit processor doesn't grant you any real performance advantages over a 32-bit processor unless a) you use software that needs to do 64-bit integer math or b) you use software that could take advantage of >4GB of RAM.

Or c) OS developers take advantage of the 64-bit virtual address space to develop more efficient memory management / filesystem code. I guess that would technically fall under "software that needs to do 64-bit integer math", but I'd argue that it's a special case since it could affect the performance of 32-bit applications as well.
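
To put a face on "software that needs to do 64-bit integer math": it's anything whose numbers outgrow 32 bits. A minimal C sketch (my own illustration, not from any real codebase):

```c
#include <stdint.h>

/* Totalling bytes across big files overflows 32 bits past 4GB, so
   the accumulator has to be 64-bit. On a 64-bit CPU each addition is
   one instruction; a 32-bit CPU emulates it with an add plus an
   add-with-carry. */
uint64_t total_bytes(const uint32_t *file_sizes, int count)
{
    uint64_t total = 0;
    for (int i = 0; i < count; i++)
        total += file_sizes[i];
    return total;
}
```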
 
I do not see that point as valid; it is like stating that a 32-bit processor is not more powerful than a 16-bit processor because you are working under a 16-bit environment.
 
Is it really that hard to read?

Come on... I know you can make the effort, at least for me... ^.~

If it really does "bother" you that much I will stop writing 100% in italics.
 
Am I the only one who doesn't care about the whole PC vs. Mac thing?

When either breaks, people call me and I get money :D

It's a win-win situation. :devil
 
Originally posted by Des-ROW@Oct 6, 2003 @ 12:48 AM

I do not see that point as valid; it is like stating that a 32-bit processor is not more powerful than a 16-bit processor because you are working under a 16-bit environment.

What good is power you can't use?

Anyway, in regards to the PC vs. console issue, from a graphical prettiness perspective things tend to run in cycles. When the next generation of consoles comes out, PCs are at a disadvantage. Even if there are PC systems with greater or equal power in the graphics department, there aren't any games that take advantage of it; however, more towards the end of a console's life, PCs start to become dominant again. Whether consoles, PCs, or both are better for any particular gamer varies from person to person depending on their unique needs.
 
Basically...

[pvc-time.gif - a diagram of PC vs. console graphical power over a console's life cycle]
 
OK this is gonna be a long one...I just know it.

1. 64-bit

64-BIT DOES NOT OFFER ANY SIGNIFICANT PERFORMANCE INCREASE UNLESS YOU ACTUALLY NEED THE EXTRA PRECISION!!!!

Once again for those who can't read: 64-bit does JACK SHIT for performance unless you need to have numbers that can't be stored in one 32-bit value. (Which incidentally is about 0.00000000000000001% of the time. Give or take.)

There's a good reason why processors have been 32-bit for so long. The only reason why the jump from 16-bit to 32-bit was so dramatic was that processors no longer needed to load two individual 16-bit values per instruction. Going to 32-bit helped (mainly) by greatly reducing memory delays as the CPU waited for another part of an instruction to arrive from memory. It has been shown that in some instances 64-bit CPUs can actually DECREASE performance compared to 32-bit ones. Plus you end up with programs that in the great majority of cases just end up bigger (64-bit pointers and operands take twice the space) without actually getting any benefit.
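
If you want to see what I mean, here's a rough C sketch (just an illustration) -- compile it for a 32-bit and a 64-bit target and compare:

```c
#include <stdint.h>

/* Case A: genuinely needs 64 bits. A 64-bit CPU does the multiply in
   one instruction; a 32-bit CPU emulates it with several 32-bit
   multiplies and adds (GCC emits a call to a helper like __muldi3). */
uint64_t mul64(uint64_t a, uint64_t b)
{
    return a * b;
}

/* Case B: plain 32-bit math, i.e. the overwhelmingly common case.
   A 64-bit CPU runs this with exactly the same number of
   instructions as a 32-bit one -- no free speedup anywhere. */
uint32_t scale(uint32_t x)
{
    return x * 3 + 7;
}
```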

2. 64-bit OS

A program can take advantage of 64-bit instructions WITHOUT the need for the OS itself to be 64-bit. All the program has to do is simply use the 64-bit assembly instructions instead of their 32-bit versions. The OS couldn't care less which set the program uses. This is also why old as fuck 16-bit Windows programs will (for the most part) run happily on a modern version of Windows. As long as the program uses OS libraries of the correct bitness it will run just fine.

BTW there IS a 64-bit version of Windows XP, and all versions of Server 2k3 have 64-bit support built in (at least for Itanium chips for now).

3. Amount of RAM

Any version of Windows (and for the rest of this post assume it's some flavor of XP) can access up to 4GB of RAM. If for some reason you need more (and pay for the hardware that supports it) you can get Windows Server 2k3, which ranges from 4GB to 64GB (who would need that much?). That number jumps to 512GB of RAM if you use a 64-bit CPU. No consumer I've ever met will ever need more than 4GB (heck, most don't need more than 512MB). And just for reference, here you go.
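
Those limits aren't arbitrary, by the way; they fall straight out of address-space arithmetic. A quick C illustration (mine):

```c
#include <stdio.h>

int main(void)
{
    /* An N-bit address can name 2^N distinct bytes, which is where
       the magic numbers come from. 36 bits is what x86 PAE gives
       you; 39 bits is one example of a wider physical bus. */
    printf("32-bit: %llu GB\n", (1ULL << 32) >> 30); /* 4   */
    printf("36-bit: %llu GB\n", (1ULL << 36) >> 30); /* 64  */
    printf("39-bit: %llu GB\n", (1ULL << 39) >> 30); /* 512 */
    return 0;
}
```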

4. CISC vs. RISC

It has been shown many times in the past that a properly written program (in assembly) will run at about the same speed on both a RISC and a CISC CPU, assuming both are running at comparable clock speeds. Basically what happens is that on a CISC CPU each instruction takes longer to execute, BUT it has to execute fewer instructions than a RISC CPU would. In the end this offset (lots of small instructions vs. few large instructions) generally evens out the field. Any properly written compiler (as with many of the current ones) will generate extremely efficient code for its particular CPU type and will use all the instructions it can. This wasn't true in the past, which is why oftentimes a CISC CPU did worse than a RISC one.
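
Here's the tradeoff in miniature -- one line of C and the rough shape of code each camp's CPU ends up running (the instruction sequences are illustrative, not exact compiler output):

```c
/* total += *x;  -- two very different shapes of machine code.

   CISC (x86), pointers already in registers:
       mov  eax, [esi]        ; load *x
       add  [edi], eax        ; read-modify-write *total in one go

   RISC (MIPS-style), same assumption:
       lw   $t0, 0($a1)       ; load *x
       lw   $t1, 0($a0)       ; load *total
       add  $t1, $t1, $t0     ; add
       sw   $t1, 0($a0)       ; store back

   Two fat instructions vs. four simple ones: at comparable clock
   speeds it tends to come out in the wash, as described above. */
void accumulate(int *total, const int *x)
{
    *total += *x;
}
```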

5. UNIX more stable than Windows

BULLSHIT. Unix is just as prone to crashes and fuckups as Windows ever was. You just end up hearing more about Windows because it has more users and more diverse (and often poorly written) software for it. If some shitty little util that looks like crap and barely works crashes on me... I don't blame the OS so much as the inability of the programmer to write good code.

Unix is incidentally just as insecure as Windows. Both suffer from similar flaws and have their backdoors et al. Windows is just a bigger target and hence more popular with trojans, viruses, etc.

I have used a Mac for many years at school and have made them crash at frighteningly regular intervals. Heck, I had a whole system (at the time a brand new G3) die on me because a website had too many images on it. The only way to get back into the system was to unplug the thing and reboot. On the other hand, while the old Windows 9x OSes were unstable, they were not that bad if you used good hardware and avoided glitchy programs. With Windows XP I have yet to have a program crash and take the whole system with it. Yes, programs crash... but they do it on all systems. I once saw a Sun UNIX server come to a halt because some student accidentally wrote an app that forked itself in an infinite loop... not a pretty sight.
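
(For the curious: the program in that last anecdote was presumably some variant of the classic fork bomb. A sketch -- obviously don't run it on anything you care about:)

```c
#include <unistd.h>

/* Each process endlessly clones itself until the process table
   fills up and the machine grinds to a halt. */
int main(void)
{
    for (;;)
        fork();
    return 0; /* never reached */
}
```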

6. Multiprocessor

Windows XP Pro allows up to 2 CPUs. (Home is limited to 1, but then again it was meant for normal people who will only ever need that much.) The Server 2k3 versions can go as high as 64 CPUs. Windows has had multiprocessor support all the way since Windows NT 3.5 way back in '94. Mac just got it a couple years ago with OSX.

(BTW, when comparing a Dual G5 with shitloads of RAM... don't compare it to a "standard" PC. There is nothing "standard" about a dual-CPU computer with more RAM than anyone would need.)

Well, that's everything for now. Can't think of more as I write this at 2AM. And I had to use quite some restraint to keep my Mac bashing in check.

Oh, and Cloud, if all you want to do is run 7+ year old games, then that spiffy new G5 is just for you. :p
 
Originally posted by gameboy900@Oct 6, 2003 @ 01:47 AM

1. 64-bit

64-BIT DOES NOT OFFER ANY SIGNIFICANT PERFORMANCE INCREASE UNLESS YOU ACTUALLY NEED THE EXTRA PRECISION!!!!

Once again for those who can't read: 64-bit does JACK SHIT for performance unless you need to have numbers that can't be stored in one 32-bit value. (Which incidentally is about 0.00000000000000001% of the time. Give or take.)

There's a good reason why processors have been 32-bit for so long. The only reason why the jump from 16-bit to 32-bit was so dramatic was that processors no longer needed to load two individual 16-bit values per instruction. Going to 32-bit helped (mainly) by greatly reducing memory delays as the CPU waited for another part of an instruction to arrive from memory. It has been shown that in some instances 64-bit CPUs can actually DECREASE performance compared to 32-bit ones. Plus you end up with programs that in the great majority of cases just end up bigger (64-bit pointers and operands take twice the space) without actually getting any benefit.

I thought so; that is most likely the reason why most processors are starting to be 64-bit instead of 32-bit: because 64-bit decreases performance. I believe we should call several companies and tell them about this... we should start with SGI.

3. Amount of RAM

Any version of Windows (and for the rest of this post assume it's some flavor of XP) can access up to 4GB of RAM. If for some reason you need more (and pay for the hardware that supports it) you can get Windows Server 2k3, which ranges from 4GB to 64GB (who would need that much?). That number jumps to 512GB of RAM if you use a 64-bit CPU. No consumer I've ever met will ever need more than 4GB (heck, most don't need more than 512MB). And just for reference, here you go.

Perfect, XP supports it, but does current PC hardware support over 4GB? Oh, and in that case we shouldn't even consider the G5 as "consumer" hardware; how should we label it? "Professional"? Your choice.

4. CISC vs. RISC

It has been shown many times in the past that a properly written program (in assembly) will run at about the same speed on both a RISC and a CISC CPU, assuming both are running at comparable clock speeds. Basically what happens is that on a CISC CPU each instruction takes longer to execute, BUT it has to execute fewer instructions than a RISC CPU would. In the end this offset (lots of small instructions vs. few large instructions) generally evens out the field. Any properly written compiler (as with many of the current ones) will generate extremely efficient code for its particular CPU type and will use all the instructions it can. This wasn't true in the past, which is why oftentimes a CISC CPU did worse than a RISC one.

Again, we should give both IBM and SGI a call.

5. UNIX more stable than Windows

BULLSHIT. Unix is just as prone to crashes and fuckups as Windows ever was. You just end up hearing more about Windows because it has more users and more diverse (and often poorly written) software for it. If some shitty little util that looks like crap and barely works crashes on me... I don't blame the OS so much as the inability of the programmer to write good code.

Unix is incidentally just as insecure as Windows. Both suffer from similar flaws and have their backdoors et al. Windows is just a bigger target and hence more popular with trojans, viruses, etc.

I have used a Mac for many years at school and have made them crash at frighteningly regular intervals. Heck, I had a whole system (at the time a brand new G3) die on me because a website had too many images on it. The only way to get back into the system was to unplug the thing and reboot. On the other hand, while the old Windows 9x OSes were unstable, they were not that bad if you used good hardware and avoided glitchy programs. With Windows XP I have yet to have a program crash and take the whole system with it. Yes, programs crash... but they do it on all systems. I once saw a Sun UNIX server come to a halt because some student accidentally wrote an app that forked itself in an infinite loop... not a pretty sight.

Once more, we should call SGI and tell them to make a new release of IRIX 6.5, but we should tell them to replace the UNIX System V core with a standard Windows 2000 kernel.

I will also add that if a PC can play audio and video files, navigate the internet and burn optical media, it has already fulfilled its purpose, disregarding the technical specs or whatever. Other than that, if I really wanted to work and make the best out of a system, I would switch to another platform: more stable, powerful and reliable.

Oh, and as a side note, I hate PC games with all my little and beautiful heart, unless they are console ports, obviously.

Ja ne!
 
Following GB's logic, however, replacing the Unix core with the Windows one would not be more or less reliable. You'd have a different set of problems, but both cores have their fundamental flaws.

64-bit computing is an evolutionary change. It opens doors that 32-bit computing closes. Anything that uses large numbers will benefit, from databases to high-resolution 3D meshes. Currently there are workarounds for dealing with huge numbers, but the trade-off is speed.
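
A typical workaround looks like this: split one big number across two machine words and propagate the carry by hand (a minimal C sketch of my own, not from any particular database or 3D package):

```c
#include <stdint.h>

/* A 64-bit counter kept as two 32-bit halves on a 32-bit machine. */
typedef struct { uint32_t lo, hi; } big_counter;

void big_add(big_counter *c, uint32_t n)
{
    uint32_t old_lo = c->lo;
    c->lo += n;             /* unsigned math wraps around... */
    if (c->lo < old_lo)     /* ...and a wraparound means a carry */
        c->hi++;
}

/* Every addition costs extra compares and branches -- the speed
   trade-off mentioned above. A 64-bit CPU does it in one add. */
```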


I will also add that if a PC can play audio and video files, navigate the internet and burn optical media, it has already fulfilled its purpose, disregarding the technical specs or whatever. Other than that, if I really wanted to work and make the best out of a system, I would switch to another platform: more stable, powerful and reliable.



The same can be said about a Mac. There is nothing to indicate that a Mac is more stable, powerful or reliable than any other system. Absolutely nothing. I've seen Linux, Solaris, Windows XP and OSX all crash and generally behave stupidly. There is no advantage to using one over the other, unless there is a specific application you need to use that isn't available on the other.

I will reiterate - my choices are based on what I'm comfortable with. I don't like Mac designs, I don't like the OS, I am unimpressed with their "designer" hardware.



I like PC games
 
Originally posted by gameboy900@Oct 6, 2003 @ 04:47 PM

Windows has had multiprocessor support all the way since Windows NT 3.5 way back in '94. Mac just got it a couple years ago with OSX.

That's not strictly true.

You were able to buy dual-processor PowerMacs back in August '96. You could also get dual- and quad-processor Mac clones from DayStar Digital.

However, I believe the software support was rather poor.
 
A program can take advantage of 64-bit instructions WITHOUT the need for the OS itself to be 64-bit. All the program has to do is simply use the 64-bit assembly instructions instead of their 32-bit versions. The OS couldn't care less which set the program uses.

Not true for AMD64; the OS has to enable/disable AMD64 extensions on a per-task basis.

BULLSHIT. Unix is just as prone to crashes and fuckups as Windows ever was. You just end up hearing more about Windows because it has more users and more diverse (and often poorly written) software for it.

It's not just that; it's that sometimes that poorly written software is part of the OS. Last I heard, the console could still bring the whole system down from an unprivileged printf call (or a text file containing the same characters). There are still over 30 known unpatched IE security flaws. The core of the OS has been stable for quite a while, but the periphery is still full of holes. Sure, most normal users don't run into these types of problems, but then what "normal users" do is hardly the measure of the robustness of an OS - a good OS has to handle the exceptions, because nothing else will.
 
Originally posted by Des-ROW@Oct 6, 2003 @ 07:33 AM

I thought so; that is most likely the reason why most processors are starting to be 64-bit instead of 32-bit: because 64-bit decreases performance. I believe we should call several companies and tell them about this... we should start with SGI.

With a 64-bit CPU, your datapath is twice as wide, so any unit that supports 64-bit operations needs twice as many transistors (probably fewer in practice) to support 64-bit over 32-bit. Those are transistors that could have been used to add another integer or floating point unit. This is why consumer processors have been 32-bit for so long. Now if you need to do a lot of 64-bit math, then the tradeoff is well worth it, and that's why companies like IBM and SGI have built machines with 64-bit processors much earlier than in the consumer market.

QUOTE

4. CISC vs. RISC

It has been shown many times in the past that a properly written program (in assembly) will run at about the same speed on both a RISC and a CISC CPU, assuming both are running at comparable clock speeds. Basically what happens is that on a CISC CPU each instruction takes longer to execute, BUT it has to execute fewer instructions than a RISC CPU would. In the end this offset (lots of small instructions vs. few large instructions) generally evens out the field. Any properly written compiler (as with many of the current ones) will generate extremely efficient code for its particular CPU type and will use all the instructions it can. This wasn't true in the past, which is why oftentimes a CISC CPU did worse than a RISC one.

Again, we should give both IBM and SGI a call.

RISC chips are easier to design, and in theory they are easier for a C compiler to optimize for; however, Intel and AMD have dumped enough R&D into x86 to keep up with the times. They certainly have been doing a lot better than Motorola and the G4. Of course, IBM has now taken the upper hand with the G5, but it's hard to say how long that will last.

I hate PC games with all my little and beautiful heart

Any particular reason?

It's not just that; it's that sometimes that poorly written software is part of the OS. Last I heard, the console could still bring the whole system down from an unprivileged printf call (or a text file containing the same characters). There are still over 30 known unpatched IE security flaws. The core of the OS has been stable for quite a while, but the periphery is still full of holes. Sure, most normal users don't run into these types of problems, but then what "normal users" do is hardly the measure of the robustness of an OS - a good OS has to handle the exceptions, because nothing else will.

I can't comment on the holes you mentioned; however, my boss manages to crash the GUI in OSX on his Quicksilver G4 several times a week, and no matter how hard I push my PC at home it rarely ever crashes. Make your own judgements.

EDIT: And I find that diagram to be quite inaccurate. It implies that PCs merely catch up to consoles in graphical power by the end of the console's life span, when in fact PCs far surpass consoles in power by the end of a console's life. Just look at the numbers. The Radeon 9800 Pro can render up to 380 million polys per second and has 128MB of texture memory. The Xbox can only render about 125 million polys per second, and it has only 64MB shared between the video and the main processor.
 
my boss manages to crash the GUI in OSX on his Quicksilver G4 several times a week

Then that's probably a flaw in the GUI of OSX, which I made no claims about.

no matter how hard I push my PC at home it rarely ever crashes

Then I suppose you have stable hardware with decent cooling. "How hard" you push a computer has little to do with flaws in any OS code other than memory management code.
 
Originally posted by Mask of Destiny@Oct 6, 2003 @ 08:39 AM

And I find that diagram to be quite inaccurate. It implies that PCs merely catch up to consoles in graphical power by the end of the console's life span, when in fact PCs far surpass consoles in power by the end of a console's life. Just look at the numbers. The Radeon 9800 Pro can render up to 380 million polys per second and has 128MB of texture memory. The Xbox can only render about 125 million polys per second, and it has only 64MB shared between the video and the main processor.

I think it is pretty accurate, especially considering that the diagram itself was made by SCEE employees, not by me.

The Radeon 9800 Pro can move 380 million polygons per second? Are you sure? Polygons and vertices are not the same thing. According to nVidia, their GeForce FX 5900 Ultra, which is the ATI Radeon 9800 Pro's main competitor, can process up to 338 million vertices per second, while the ATI site does not give any figures regarding their GPU's polygon/vertex performance. My point is, I doubt the Radeon 9800 Pro can process 380 million polygons per second, while, on the other hand, I believe it may move 380 million vertices per second.
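
The difference matters because vertices are shared between polygons, so the two rates are only comparable once you fix how the geometry is submitted. A quick back-of-the-envelope in C (my arithmetic, using nVidia's figure from above):

```c
#include <stdio.h>

int main(void)
{
    double verts_per_sec = 338e6; /* GeForce FX 5900 Ultra, per nVidia */

    /* Independent triangles: 3 vertices each. */
    printf("independent triangles: %.0f million/s\n",
           verts_per_sec / 3.0 / 1e6);

    /* Long triangle strips: roughly 1 new vertex per triangle, so the
       vertex rate and the triangle rate nearly coincide. */
    printf("stripped triangles:   ~%.0f million/s\n",
           verts_per_sec / 1e6);
    return 0;
}
```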

Anyway, 380 million vertices per second still means more polygon processing power than either the Sony PlayStation 2, the Nintendo GameCube or the Microsoft Xbox, but that is not the point; I have yet to see a PC game that looks as good as Team Ninja's Dead or Alive 3, AM2's Virtua Fighter 4: Evolution, Konami's Silent Hill 3, AM2's OutRun2, Polyphony Digital's Gran Turismo 4 and other console games.

I will not even talk about texture memory, because that is out of the question: the Sony PlayStation 2 has only 4MB of embedded DRAM for texture memory (but with 48GB per second of bandwidth) and still offers competition.


Any particular reason?

I was exaggerating a bit, I know, but a particular reason... yes. The PC offers a very small range of different game genres and is plagued with FPS and strategy games; there are even genres that practically never make it to the PC.

~ Oh, and, Curtis, I love your way of writing ^.~
 
I'm just saying that I can choose not to see people's ugly, oversized animated signature pictures, and I can choose not to see those inane smileys from people who think that going clickclickclick over the pretty pictures is an alternative to actually putting some content into their posts, but I can't choose not to see posts from people who seem to think that making their posts hard to read somehow makes them cool.
 
Do you really believe I am a person who thinks that making my posts "hard to read" renders me "cool"?
 