CPU Comparison question

Message boards : Number crunching : CPU Comparison question



Profile ejuel
Joined: 8 Feb 07
Posts: 78
Credit: 4,447,069
RAC: 0
Message 46287 - Posted: 15 Sep 2007, 17:38:48 UTC

Ok...although I'm extremely computer-knowledgeable, I have been out of touch with the inner workings of CPUs for the past 5-10 years. Obviously CPUs get faster and cheaper over time...which is why I'm posting.

I have a 3+ year old Dell pc with this P4 chip:
Intel(R) Pentium(R) 4 CPU 2.80GHz [x86 Family 15 Model 2 Stepping 9]

I have a brand new Mac mini with this Intel Core 2 Duo chip:
Intel(R) Core(TM)2 CPU T7200 @ 2.00GHz [x86 Family 6 Model 15 Stepping 6]


What I find striking is that the new chip is only about 50% faster than the old chip at best. That's it. Is it me, or is that bizarre? My old Dell cost about $1000 and honestly wasn't the fastest machine on the market back then. A lot has changed in the years since the P4, including multi-core processing, which is, in effect, simply multiple CPUs on one chip...the cores aren't necessarily faster, but the more of them there are, the more the operating system can divvy up the work and (in theory) assign a dedicated core to each application/thread.

So what gives? Anybody have a layman-type answer as to why I feel my Mini isn't much faster than my dusty Dell? Or was my Dell really that snazzy to begin with (I doubt it)? I know that, since the dawn of the personal computer, CPUs have been idle about 80% of the time...we just don't have the software (especially now) that would challenge a CPU (except for something like BOINC). So I understand the motto: "hey, we have plenty of power...let's put more cores on the chip so each application can have its own dedicated core rather than timeslicing."

My Dell's CPU performance from BOINC is:
1482 floating point MIPS
2721 integer MIPS


My Mac's CPU (and there are 2 cores) performance is:
1621 floating point MIPS
4613 integer MIPS

I was really expecting to see at least a 100% improvement in both numbers above. However, I do not know the layman's-terms difference between "integer MIPS" and "floating point MIPS"...I always thought floating point was the one to gauge a CPU by...
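For anyone curious, here is a quick sanity check of those ratios (a throwaway Python sketch using the figures quoted above; as confirmed later in the thread, BOINC reports them per core):

```python
# BOINC benchmark figures quoted above (both are per-core numbers)
dell_fp, dell_int = 1482, 2721   # Pentium 4 2.8 GHz
mac_fp, mac_int = 1621, 4613     # Core 2 Duo T7200, per core

# Per-core improvement
fp_gain = (mac_fp / dell_fp - 1) * 100     # ~9%
int_gain = (mac_int / dell_int - 1) * 100  # ~70%
print(f"Per core: +{fp_gain:.0f}% FP, +{int_gain:.0f}% integer")

# With both cores crunching, total throughput is double the per-core figure
fp_total = (2 * mac_fp / dell_fp - 1) * 100     # ~119%
int_total = (2 * mac_int / dell_int - 1) * 100  # ~239%
print(f"Whole machine: +{fp_total:.0f}% FP, +{int_total:.0f}% integer")
```

So per core the floating point gain really is small; the big win is in integer work and in having two cores.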


-Eric
ID: 46287
Mod.Sense
Volunteer moderator
Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 46295 - Posted: 15 Sep 2007, 18:18:08 UTC

In a nutshell, and in layman's terms, you can't judge CPU performance purely on GHz ratings. It would be interesting for you to crank up both machines 24/7 for a couple of weeks and see which earns the higher RAC.

But in general, yes, we've hit a period in time where Moore's Law is proving a bit of a challenge to perpetuate. The basic bottleneck to greater speed is heat. That's why you will continue to see more multi-core solutions rather than higher GHz in future CPUs.

But recently there have been several technology advances in both power consumption (and if I use less power, I'm producing less heat) and heat dissipation. So, as always, the next 5 years of CPU announcements will be very interesting. Perhaps someone could dig up some links for me on the advances using copper and the self-aligning lattice of cool spots. I'm not remembering the proper terms.
Rosetta Moderator: Mod.Sense
ID: 46295
JEklund
Joined: 24 Sep 06
Posts: 7
Credit: 105,447
RAC: 0
Message 46297 - Posted: 15 Sep 2007, 18:59:53 UTC - in response to Message 46287.  

Ok...
....

My Dell's CPU performance from BOINC is:
1482 floating point MIPS
2721 integer MIPS


My Mac's CPU (and there are 2 cores) performance is:
1621 floating point MIPS
4613 integer MIPS

I was really expecting to see at least a 100% improvement in both numbers above. However, I do not know the layman's-terms difference between "integer MIPS" and "floating point MIPS"...I always thought floating point was the one to gauge a CPU by...


-Eric


But are those performance numbers for the Mac (multi-core) actually per core? If so, then you do "see" over 100% improvement!

At least for my dual-core AMD, BOINC says something like:

15/09/2007 12:47:14||Running CPU benchmarks
15/09/2007 12:47:46||Benchmark results:
15/09/2007 12:47:46|| Number of CPUs: 2
15/09/2007 12:47:46|| 2232 floating point MIPS (Whetstone) per CPU
15/09/2007 12:47:46|| 4119 integer MIPS (Dhrystone) per CPU
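Since BOINC prints those MIPS figures per CPU, the whole-box total is just the per-CPU number times the core count. A trivial Python sketch using the log above:

```python
# BOINC benchmarks are reported per CPU; multiply by the CPU count
# for a whole-machine figure.
cpus = 2
fp_per_cpu, int_per_cpu = 2232, 4119   # from the log above

fp_total = cpus * fp_per_cpu
int_total = cpus * int_per_cpu
print(f"Machine total: {fp_total} FP MIPS (Whetstone), "
      f"{int_total} integer MIPS (Dhrystone)")
```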


-- Lundi --
ID: 46297
Profile ejuel
Joined: 8 Feb 07
Posts: 78
Credit: 4,447,069
RAC: 0
Message 46306 - Posted: 15 Sep 2007, 21:18:57 UTC - in response to Message 46297.  

Right to both of you. :) On the one hand, yes, I see the 100% improvement because I have 2 cores. But if you compare one core of the Core 2 Duo to the one core from 3+ years ago, they produce (as far as I can see) similar results.

So what that tells me is that although we are putting more cores on each chip, which is great as I mentioned before, core-vs.-core speeds over the past 3+ years really haven't improved for the average home user.

Yes, I am very much aware of the gigahertz (or megahertz) wars and that pure clock speed is not the best measure. My Core 2 Duo is 2.00 GHz while the old P4 is 2.8 GHz.

What I would love to see in a few years is the average $1000 PC with 8-32 cores. How does that help me? Well, for projects like this one, you get 8-32 computers for $1000. For everyday computing, in theory each application would have a dedicated CPU (RAM too?) so that Windows/the OS doesn't hog all the resources with system calls. Audio/video applications could have dedicated CPUs, which would vastly improve performance. Just like having 10 people clean out your garage at once rather than 1 person.

:)

There have been some announcements recently about crazy technology, 5-15 years away, regarding super-powerful chips...I'm sure many on this board read slashdot.org. I just don't recall all the technologies or which are feasible to bring to the average home user. AMD recently released their quad-core chip if I remember correctly. Apple has had an 8-core machine since late 2006, I believe.

Just as BOINC projects prove that millions of average computers are better than a few supercomputers, so will come the day when your PC has 8-32 cores, essentially making it far faster than any single- or dual-core machine. Imagine the processing power of BOINC projects a few years from now, when every machine has 2-4 cores. I hope the projects are ready for more results in less time. :)

-Eric
ID: 46306
Profile ejuel
Joined: 8 Feb 07
Posts: 78
Credit: 4,447,069
RAC: 0
Message 46313 - Posted: 16 Sep 2007, 0:05:57 UTC - in response to Message 46297.  



At least for my dual-core AMD, BOINC says something like:

15/09/2007 12:47:14||Running CPU benchmarks
15/09/2007 12:47:46||Benchmark results:
15/09/2007 12:47:46|| Number of CPUs: 2
15/09/2007 12:47:46|| 2232 floating point MIPS (Whetstone) per CPU
15/09/2007 12:47:46|| 4119 integer MIPS (Dhrystone) per CPU


-- Lundi --


Isn't it interesting that your floating point is so much better than either of my 2 chips, yet your integer is not as high as my Core 2 Duo? Again, I'm not an expert in benchmarking, but your AMD, compared to my Core 2 Duo, is much faster in one area and actually slower in another.


Yes, my Mac mini's numbers are per core...I just didn't cut and paste the whole log.

-Eric
ID: 46313
Natronomonas
Joined: 11 Jan 06
Posts: 38
Credit: 536,978
RAC: 0
Message 46350 - Posted: 16 Sep 2007, 12:30:09 UTC - in response to Message 46313.  

Remember also that you're comparing a desktop CPU to a laptop CPU.

The fact is that they've fitted two cores, each with the same or greater processing power than the P4, into a thermal envelope probably a third of the P4's.

i.e., you're getting roughly 6x the compute power per watt. This is where most of the recent gains have been made.

If you got a desktop CPU with the same thermal envelope, you'd find greater gains, plus the added benefit of faster desktop RAM.
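To put a rough number on the per-watt point, here is a back-of-the-envelope Python sketch; the TDP figures are my assumptions for illustration, not official measurements:

```python
# Back-of-the-envelope performance-per-watt comparison.
# TDP values are assumptions for illustration -- check your exact SKUs.
p4_tdp = 68.4     # W, a typical 2.8 GHz Pentium 4 (Northwood) TDP
t7200_tdp = 34.0  # W, Core 2 Duo T7200 TDP (covers both cores)

p4_perf = 1.0       # normalize the P4's throughput to 1 unit
mac_perf = 2 * 1.0  # two cores, each roughly P4-class or better

ratio = (mac_perf / t7200_tdp) / (p4_perf / p4_tdp)
print(f"~{ratio:.1f}x the compute per watt")
# ~4x with these TDPs alone; credit the Core 2's higher per-clock
# throughput as well and you approach the 6x figure above.
```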
Crunching Rosetta as a member of the Whirlpool BOINC Teams
ID: 46350
Profile Jmarks
Joined: 16 Jul 07
Posts: 132
Credit: 98,025
RAC: 0
Message 46359 - Posted: 16 Sep 2007, 13:24:32 UTC

Natronomonas is right: the T series is a laptop CPU meant to save power, which is why the numbers are not huge. It is doing all that work and more at about 10% of the energy of your old PC.
Jmarks
ID: 46359
DJStarfox
Joined: 19 Jul 07
Posts: 145
Credit: 1,230,305
RAC: 0
Message 46369 - Posted: 16 Sep 2007, 15:46:42 UTC
Last modified: 16 Sep 2007, 15:49:11 UTC

I thought the AMD Opteron HE and EE processors would catch on rapidly, but I don't see them for sale very often anymore. The HEs expel HALF the thermal energy of the full-power versions of the chips but cost more. Running BOINC 22/7, I'd say the 2 HE processors I have save me $25/month in electricity over the full-power ones.

Edit: My stats are here (32-bit BOINC 5.8.16 on 64-bit Linux system):
Benchmark results:
Number of CPUs: 2
1866 floating point MIPS (Whetstone) per CPU
3138 integer MIPS (Dhrystone) per CPU
ID: 46369
zombie67 [MM]
Joined: 11 Feb 06
Posts: 316
Credit: 6,410,102
RAC: 0
Message 46378 - Posted: 16 Sep 2007, 18:21:39 UTC - in response to Message 46287.  

My Dell's CPU performance from BOINC is:
1482 floating point MIPS
2721 integer MIPS


My Mac's CPU (and there are 2 cores) performance is:
1621 floating point MIPS
4613 integer MIPS


BOINC benchmarks are pretty meaningless and can be very misleading, especially when the OS is different (just ask the Linux users...).

That Core 2 chip should be able to provide 2-4 times the RAC of the P4, depending on the project you run.

Reno, NV
Team: SETI.USA
ID: 46378
Paul
Joined: 29 Oct 05
Posts: 193
Credit: 65,662,169
RAC: 5,021
Message 46396 - Posted: 16 Sep 2007, 23:02:41 UTC

In the olden days, we had single-threaded OSes and applications. That meant every instruction had to be executed in a specific order. In that single, sequential environment, the raw performance of the CPU was very important. With multi-threaded OSes and applications, all the rules change.

Multiple cores allow multiple threads to be run simultaneously. My PC has about 700 threads running. With multiple cores, these threads are assigned by the OS to the core that has the least utilization. Concurrent thread processing greatly enhances the performance of background apps.
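A minimal sketch of the idea (a hypothetical Python example; the `crunch` function is just a stand-in for a CPU-bound work unit):

```python
# Independent CPU-bound tasks can run simultaneously on separate cores
# instead of time-slicing on one. ProcessPoolExecutor uses one OS
# process per worker, so the OS can schedule them on different cores.
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    """Stand-in for a CPU-bound work unit: sum of squares below n."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [200_000] * 4  # four independent work units
    # On a multi-core machine the OS spreads the worker processes across
    # cores; on a single core they simply time-slice.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(crunch, jobs))
    print(f"{len(results)} work units finished")
```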

More cores should help performance with newer OSes so you should see some improvement.

What is the RAM situation with your Mac mini? Could you be memory-bound, not CPU-bound?

BTW - Intel added the SSE, SSE2, and SSE3 instruction sets because Intel CPUs typically had poor FPU performance. If you use an optimized BOINC client that takes advantage of these instruction sets, you will see a dramatic improvement in your benchmarks, and it should also be evident in RAC.

Let us know what you discover.

Thx!

Paul

ID: 46396
zombie67 [MM]
Joined: 11 Feb 06
Posts: 316
Credit: 6,410,102
RAC: 0
Message 46405 - Posted: 17 Sep 2007, 4:27:12 UTC

To be clear, optimized BOINC *clients* make no difference, except with credits claimed. They crunch no faster. Optimized project *applications* do crunch faster, using things like SSE*.
Reno, NV
Team: SETI.USA
ID: 46405
Mod.Sense
Volunteer moderator
Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 46429 - Posted: 17 Sep 2007, 13:21:50 UTC - in response to Message 46405.  

To be clear, optimized BOINC *clients* make no difference, except with credits claimed. They crunch no faster. Optimized project *applications* do crunch faster, using things like SSE*.


Yep, Paul's correct that your benchmarks may change, but the rest of his comments do not apply to Rosetta. They are probably true on other projects, though. Rosetta is not presently optimized for specific CPU capabilities like SSE.
Rosetta Moderator: Mod.Sense
ID: 46429
Paul
Joined: 29 Oct 05
Posts: 193
Credit: 65,662,169
RAC: 5,021
Message 46435 - Posted: 17 Sep 2007, 14:08:41 UTC - in response to Message 46429.  

To be clear, optimized BOINC *clients* make no difference, except with credits claimed. They crunch no faster. Optimized project *applications* do crunch faster, using things like SSE*.


Yep, Paul's correct that your benchmarks may change, but the rest of his comments do not apply to Rosetta. They are probably true on other projects, though. Rosetta is not presently optimized for specific CPU capabilities like SSE.


I am sure you have discussed this before, so don't blast me on this one...

Why do people offer optimized BOINC clients? If the client is optimized for the specific chip, not just the OS, wouldn't it run faster?

With the new credit system, I would assume the optimized client is of no advantage.

If we need to move this to a different thread, I understand.

Thx!

Paul

ID: 46435
Paul
Joined: 29 Oct 05
Posts: 193
Credit: 65,662,169
RAC: 5,021
Message 46436 - Posted: 17 Sep 2007, 14:23:55 UTC - in response to Message 46435.  

To be clear, optimized BOINC *clients* make no difference, except with credits claimed. They crunch no faster. Optimized project *applications* do crunch faster, using things like SSE*.


Yep, Paul's correct that your benchmarks may change, but the rest of his comments do not apply to Rosetta. They are probably true on other projects, though. Rosetta is not presently optimized for specific CPU capabilities like SSE.


I am sure you have discussed this before, so don't blast me on this one...

Why do people offer optimized BOINC clients? If the client is optimized for the specific chip, not just the OS, wouldn't it run faster?

With the new credit system, I would assume the optimized client is of no advantage.

If we need to move this to a different thread, I understand.

thx


I just did a search on this topic and it looks like this is a very heated debate. Forget the questions.

Thx!

Paul

ID: 46436
Profile ejuel
Joined: 8 Feb 07
Posts: 78
Credit: 4,447,069
RAC: 0
Message 46437 - Posted: 17 Sep 2007, 14:42:20 UTC - in response to Message 46350.  

Remember also that you're comparing a desktop CPU to a laptop CPU.

The fact is that they've fitted two cores, each with the same or greater processing power than the P4, into a thermal envelope probably a third of the P4's.

i.e., you're getting roughly 6x the compute power per watt. This is where most of the recent gains have been made.

If you got a desktop CPU with the same thermal envelope, you'd find greater gains, plus the added benefit of faster desktop RAM.



I didn't know the Mac mini was using a laptop chip. It makes sense, but it's kinda disappointing. I can't keep up with all the Intel logos/brand names these days...all purposely confusing so you think you're getting a great chip, when even just 10 months ago Celerons were still being put in desktops. Yuck.
ID: 46437
Profile ejuel
Joined: 8 Feb 07
Posts: 78
Credit: 4,447,069
RAC: 0
Message 46438 - Posted: 17 Sep 2007, 14:52:06 UTC - in response to Message 46435.  
Last modified: 17 Sep 2007, 14:54:28 UTC

To be clear, optimized BOINC *clients* make no difference, except with credits claimed. They crunch no faster. Optimized project *applications* do crunch faster, using things like SSE*.


Yep, Paul's correct that your benchmarks may change, but the rest of his comments do not apply to Rosetta. They are probably true on other projects, though. Rosetta is not presently optimized for specific CPU capabilities like SSE.


I am sure you have discussed this before, so don't blast me on this one...

Why do people offer optimized BOINC clients? If the client is optimized for the specific chip, not just the OS, wouldn't it run faster?

With the new credit system, I would assume the optimized client is of no advantage.

If we need to move this to a different thread, I understand.

thx



I don't see the point of an undertaking like that for a non-profit, volunteer project. My reasons (in no particular order):

1) How much CPU performance can you gain with special clients? 10%? Whoop-dee-doo. New people sign up to BOINC every day, so what would take someone 6 months of developing a specialized app to squeeze out 10% could be matched in a few weeks simply by the addition of more users doing more work.
2) Who's going to spend all their time developing/updating/debugging each and every client? And donate all that time for free?
3) Wouldn't it be better to spend those hundreds (if not thousands) of man-hours on something else? Maybe spreading the word about the project, building a better website, or working on a better universal client?


-Eric

p.s. Speaking of adding users, I'd like to know why this project doesn't have more users. This project has fewer than 160,000 users and barely 400,000 machines. I know everyone has their own choice, but out of the 300 million people in the U.S. alone, that's a pitiful turnout for Rosetta. Probably time for a new thread unless someone has already started one.
ID: 46438
Profile Jmarks
Joined: 16 Jul 07
Posts: 132
Credit: 98,025
RAC: 0
Message 46439 - Posted: 17 Sep 2007, 15:20:39 UTC - in response to Message 46437.  

Remember also that you're comparing a desktop CPU to a laptop CPU.

The fact is that they've fitted two cores, each with the same or greater processing power than the P4, into a thermal envelope probably a third of the P4's.

i.e., you're getting roughly 6x the compute power per watt. This is where most of the recent gains have been made.

If you got a desktop CPU with the same thermal envelope, you'd find greater gains, plus the added benefit of faster desktop RAM.



I didn't know the Mac mini was using a laptop chip. It makes sense, but it's kinda disappointing. I can't keep up with all the Intel logos/brand names these days...all purposely confusing so you think you're getting a great chip, when even just 10 months ago Celerons were still being put in desktops. Yuck.


I think you should blame Apple for this, not Intel. They are the ones who designed and marketed it.

P.S. I have the same CPU in my laptop, and it is 2 times faster than my old 600-watt Pentium 2.8 desktop at CS2 with Photoshop, MS Office Premium 2003, etc.
Jmarks
ID: 46439
zombie67 [MM]
Joined: 11 Feb 06
Posts: 316
Credit: 6,410,102
RAC: 0
Message 46449 - Posted: 17 Sep 2007, 16:33:44 UTC - in response to Message 46437.  
Last modified: 17 Sep 2007, 16:41:57 UTC

I didn't know the Mac mini was using a laptop chip. It makes sense, but it's kinda disappointing. I can't keep up with all the Intel logos/brand names these days...all purposely confusing so you think you're getting a great chip, when even just 10 months ago Celerons were still being put in desktops. Yuck.

Don't let the "laptop" designation fool you. It really means low power, not low performance. The chip used in the iMac is also the laptop version, and I got a RAC of well over 1k with it on SETI.

FWIW, the mac mini came with a PPC (G4), a Core Solo, a Core Duo, or a Core 2 Duo. Which chip does yours have?
Reno, NV
Team: SETI.USA
ID: 46449
zombie67 [MM]
Joined: 11 Feb 06
Posts: 316
Credit: 6,410,102
RAC: 0
Message 46450 - Posted: 17 Sep 2007, 16:41:30 UTC - in response to Message 46435.  

Why do people offer optimized BOINC clients? If the client is optimized for the specific chip, not just the OS, wouldn't it run faster?

With the new credit system, I would assume the optimized client is of no advantage.


BOINC is open source. Anyone can compile their own BOINC *client* any way they like. For Rosetta, there is no point in doing so, as the credit method here makes over-claiming clients moot.

As for optimized science *applications*, yes, they can be *much* faster using chip abilities such as SSE/2/3/4. Sometimes more than twice as fast. Since the application is not open source for Rosetta, it requires the project to offer such optimized applications.
Reno, NV
Team: SETI.USA
ID: 46450
Profile Jmarks
Joined: 16 Jul 07
Posts: 132
Credit: 98,025
RAC: 0
Message 46452 - Posted: 17 Sep 2007, 16:55:07 UTC - in response to Message 46449.  

I didn't know the Mac mini was using a laptop chip. It makes sense, but it's kinda disappointing. I can't keep up with all the Intel logos/brand names these days...all purposely confusing so you think you're getting a great chip, when even just 10 months ago Celerons were still being put in desktops. Yuck.

Don't let the "laptop" designation fool you. It really means low power, not low performance. The chip used in the iMac is also the laptop version, and I got a RAC of well over 1k with it on SETI.

FWIW, the mac mini came with a PPC (G4), a Core Solo, a Core Duo, or a Core 2 Duo. Which chip does yours have?


He said it was an Intel(R) Core(TM)2 CPU T7200 @ 2.00GHz.
Jmarks
ID: 46452




©2024 University of Washington
https://www.bakerlab.org