Intel’s Confusing Messaging: Is Comet Lake Better Than Ice Lake?
by Dr. Ian Cutress on January 16, 2020 9:00 AM EST
Posted in: CPUs, Intel, Laptops, Trade Shows, Notebooks, Ice Lake, Comet Lake, CES 2020
This year at CES 2020, Intel held its usual pre-keynote workshop for select members of the press. Around 75 of us, spread across a couple of sessions, were there to hear Intel's latest messaging and a preview of the announcements to be made at the keynote. This isn't unusual: it gives the company a chance to lay down a marker of where it thinks its strengths are and where it thinks the market is heading, and it gives us a hint of what might be coming from a product hardware perspective. The key messages on Intel's agenda this year were Project Athena, accelerated workloads, and Tiger Lake.
We’ve covered Tiger Lake in a previous article, as it shapes up to be the successor to Ice Lake later in the year. Intel’s Project Athena is also a known quantity: a set of specifications that Intel wants laptop manufacturers to follow in order to build what it sees as the future of mobile computing. The new element in the discussion is actually something I’ve been pushing for a while: accelerated computing. With Intel now putting AVX-512 into its consumer processors, along with a stronger GPU and features like the Gaussian Neural Accelerator, identifying which software actually uses these accelerators is quite hard, as there is no official list. Intel took the time to give us a number of examples.
In this case, we’re seeing AI enhancements in the Adobe suite, CyberLink, Blender, XSplit, and a few others. Eight of these are CPU/AVX-512 enhanced, six are GPU enhanced, and one runs via the GNA. For a technology like AVX-512 to have only eight enhanced consumer applications several years after its first launch (Skylake-X was launched in May 2017) isn’t actually that great, but at least Intel is now telling us where we can find them, aside from specific compute benchmarks (3DMark Physics, y-cruncher).
As always with these presentations, part of the company’s aim is to showcase how it beats the competition. These are often cherry-picked benchmarks that highlight the key points. However, as has been noted of late, Intel has been focused on ‘real world performance’ benchmarks, and is trying to shun what it calls ‘unrepresentative tests’, like Cinebench, or synthetic tests. As part of this showcase, Intel was quick to point out that its laptop offerings provide more performance and better features than AMD’s Ryzen Mobile 3000 series.
It’s worth noting that when Intel or AMD shows benchmark numbers, it’s best for the press not to pay too much attention: these are often cherry-picked results, and first-party benchmarks are no substitute for an independent test in a review. We take them with a pile of salt, if we bother to listen to them in the first place.
This is one slide that caused a lot of discussion after the event, on social media rather than among the press. In it, Intel shows two comparable systems: a Ryzen 7 3750H with an RTX 2060 against a Core i7-9750H with the same GPU at the same speed. Both CPUs target the same market, and with the same discrete GPU, Intel puts itself ahead in the gaming tests.
Intel also added the ‘best’ gaming system on the market today, at the maximum price, to offer a comparison point showing that there is currently no AMD system on the market with the ‘best’ graphics. The graph was meant to demonstrate that AMD can’t play in this high-end space, because OEMs won’t pair its CPUs with the best graphics. With these results, the press agreed that AMD doesn’t compete in this space with Ryzen Mobile 3000, and the benchmark comparison was somewhat moot in that regard. The discussion on social media turned to whether Intel was being genuine in comparing AMD’s best with Intel’s best, despite the significant price difference in the CPUs and the RTX 2060 versus RTX 2080 pairing. To be honest, I agreed with Intel here: it wasn’t a graph designed to show like for like, but rather how much performance is still on the table when money is no object. The graph became obsolete very quickly anyway, given that AMD announced its new Ryzen Mobile 4000 CPUs the next day.
However, this isn’t the slide I want to talk about today. There was a pair of slides from Intel that left me rather confused, and I was surprised that no one else seemed to pick up on them.
For the 15 W processors destined for thin and light laptops, Intel showed two sets of data. I’ll give the raw graphs here.
First was AMD’s Ryzen 7 3700U against Intel’s Comet Lake based Core i7-10710U:
The next was AMD’s Ryzen 7 3700U against Intel’s Ice Lake based Core i7-1065G7:
All three CPUs are the best 15 W parts that AMD and Intel have in the ultra-mobile notebook market at the time of writing.
Despite Intel’s push for ‘real world’ benchmarks, the tests start with PCMark10 (a synthetic designed to ‘emulate’ real-world use, mostly Microsoft Office) and WebXPRT 3 (a synthetic web test from Principled Technologies, a company known for producing paid-for Intel performance whitepapers). Then we have a series of direct Microsoft Office throughput metrics, and finally some Intel GPU-enhanced AI tests provided by Topaz Labs.
The curious thing for me wasn’t so much the comparison between AMD and Intel. We know that Ryzen Mobile 3000 was still behind Intel's latest generation of processors in a lot of tasks, as evidenced by our Microsoft Surface Laptop 3 review, where we tested Intel 10th Gen vs AMD Ryzen Mobile 3000 in the same form factor. There's going to be more parity when we can test against the new Ryzen Mobile 4000 CPUs later this year.
The curious thing for me is that Intel had provided its own comparison data for both Comet Lake and Ice Lake, but didn't do a direct comparison between the two. Both of these CPUs fall under Intel’s ‘10th Gen Core’ branding, but Comet Lake is built on a 14++ manufacturing process with six cores and Gen 9 graphics, while Ice Lake is built on a 10nm manufacturing process with four cores, higher IPC, and Gen 11 graphics. Both CPUs are set for 15 W, so you might expect both to end up in similar designs; however, Intel has positioned Ice Lake as its cost-premium processor due to its higher IPC, while Comet Lake addresses the bulk of the market. Comparing the two probably isn't in Intel's best interest.
We’ve tested Ice Lake, both in Intel’s Software Development System and in the Microsoft Surface Laptop 3, and it has a sizeable per-clock performance advantage over Intel’s 14++ hardware. The only downside is that it does not clock as high, meaning that Ice Lake and Comet Lake will be fighting for performance dominance, although Ice Lake is expected to win most of the tests: in its original Ice Lake disclosure, Intel cited a small jump over the previous generation of 15 W processors.
Comet Lake isn't in that graph; it is the successor to Whiskey Lake, built on the same manufacturing process but tweaked for more performance under the 10th Gen Core banner. It comes in 4-core and 6-core variants, with the latter offering more cores than Whiskey Lake ever did.
As a given, Ice Lake wins in graphics against Comet Lake, by virtue of having twice the execution units. That's not in dispute here; the CPU results, however, are.
From Intel's data, I wanted to put Comet and Ice into one graph. With both sets side by side, we can see why Intel’s own slides did not put them together: Ice Lake loses most of the benchmarks to Comet Lake.
The first five benchmarks are PCMark 10, which Intel uses to ‘represent real world benchmarks’. Comet Lake wins the first four, and the browser test is essentially equal, tipping to Ice Lake by less than 1%. WebXPRT, another browser test, is again almost equal, but also tips to Ice Lake. The Microsoft Office real-world tests split 50:50, with each CPU taking one PowerPoint test and one Word test. The Photoshop test is a dead heat, and the cherry-picked Topaz Labs AI tests aren’t particularly ‘real world’ in Intel’s defined sense, since this is a niche software package and Intel has worked with the vendor to enable GPU acceleration.
If we take out the GPU tests, for those keeping track, Comet Lake wins in a 6-4-1 result. If we consider anything below a 1% difference a tie, it's more like 6-3-2. (A quick sanity check of this tally follows the table below.)
Intel's Data: Comet Lake vs Ice Lake

| Benchmark | AMD Ryzen 7 3700U | Intel Comet Lake Core i7-10710U | Intel Ice Lake Core i7-1065G7 |
|---|---|---|---|
| PCMark10 Overall | 1.00x | 1.33x | 1.26x |
| PCMark10 Word | 1.00x | 1.36x | 1.23x |
| PCMark10 Excel | 1.00x | 1.61x | 1.47x |
| PCMark10 PowerPoint | 1.00x | 1.16x | 1.13x |
| PCMark10 Edge | 1.00x | 1.24x | 1.25x |
| WebXPRT3 | 1.00x | 1.48x | 1.53x |
| PowerPoint PDF Export | 1.00x | 1.83x | 1.33x |
| Word Convert to PDF | 1.00x | 1.68x | 1.63x |
| Word Mail Merge Error Check | 1.00x | 1.54x | 1.79x |
| PowerPoint Export to 1080p Video | 1.00x | 1.44x | 1.61x |
| PS Elements CC Colorize Photo | 1.00x | 2.00x | 2.00x |
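For those who want to check the arithmetic, here is a minimal Python sketch of the tally above, using the normalized scores from Intel's slides as reproduced in the table. The choice of a 1% relative threshold for calling something a tie is my own assumption, not Intel's.

```python
# Tally Comet Lake vs Ice Lake wins from Intel's normalized scores
# (Ryzen 7 3700U = 1.00x baseline, dropped here since we only compare
# the two Intel parts against each other).
scores = {
    "PCMark10 Overall":                 (1.33, 1.26),
    "PCMark10 Word":                    (1.36, 1.23),
    "PCMark10 Excel":                   (1.61, 1.47),
    "PCMark10 PowerPoint":              (1.16, 1.13),
    "PCMark10 Edge":                    (1.24, 1.25),
    "WebXPRT3":                         (1.48, 1.53),
    "PowerPoint PDF Export":            (1.83, 1.33),
    "Word Convert to PDF":              (1.68, 1.63),
    "Word Mail Merge Error Check":      (1.54, 1.79),
    "PowerPoint Export to 1080p Video": (1.44, 1.61),
    "PS Elements CC Colorize Photo":    (2.00, 2.00),
}

def tally(tie_threshold=0.0):
    comet = ice = ties = 0
    for test, (cml, icl) in scores.items():
        # Treat results within the relative threshold as a tie
        if abs(cml - icl) / max(cml, icl) <= tie_threshold:
            ties += 1
        elif cml > icl:
            comet += 1
        else:
            ice += 1
    return comet, ice, ties

print("Strict:  Comet-Ice-Tie = %d-%d-%d" % tally())       # 6-4-1
print("1%% ties: Comet-Ice-Tie = %d-%d-%d" % tally(0.01))   # 6-3-2
```

Running it reproduces the 6-4-1 split, and 6-3-2 once the sub-1% PCMark Edge result is treated as a tie.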
Ryan and I spent some time discussing these results. Some of these tests rely heavily on turbo, such as the PCMark tests: the Comet Lake Core i7-10710U can hit 4.7 GHz on the latest variant of Skylake, while the Ice Lake Core i7-1065G7, despite its IPC advantage, can only reach 3.9 GHz. This means that in a lot of bursty workloads (which many business workloads are), Comet Lake wins, and we see that play out.
For the Microsoft Office workloads, it depends on whether the workload is IPC sensitive or thread sensitive. Comet Lake can spin up 12 threads at 3.9 GHz or higher, while Ice Lake can only do 8 threads at 3.5 GHz or higher. The PowerPoint PDF Export is a crushing win for Comet Lake, over 30%, whereas the Mail Merge Error Check and the PowerPoint Export to 1080p Video are 10% or more in favor of Ice. What I want to know is: which presenters are exporting PowerPoint presentations to 1080p video? Is that seriously a common use case? I’d argue that exporting slides to PDF is >100x the more frequent workload here, which is why Comet Lake’s +30% performance gain over Ice is more important than Ice’s +10% win in exporting a presentation to video. Not only that, but PCMark's own Office subtests put Comet ahead.
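To illustrate the clocks-versus-IPC trade-off, here is a hedged back-of-envelope sketch. The ~18% IPC uplift is Intel's own headline claim for Sunny Cove over Skylake, the thread counts and clocks are the figures quoted above, and treating throughput as threads × frequency × IPC is a gross simplification (it ignores SMT scaling, turbo duration, and the shared 15 W budget), so treat the output as directional only.

```python
# Rough throughput model: threads x frequency x relative IPC.
# Illustrative only; real chips do not scale linearly with SMT
# threads, and both parts share a 15 W power budget.
SUNNY_COVE_IPC = 1.18   # Intel's claimed ~18% IPC uplift over Skylake (approx.)
SKYLAKE_IPC = 1.00

def rough_throughput(threads, ghz, ipc):
    return threads * ghz * ipc

# Bursty, lightly-threaded work (PCMark-style): peak clocks dominate
comet_1t = rough_throughput(1, 4.7, SKYLAKE_IPC)       # i7-10710U, 1T turbo
ice_1t   = rough_throughput(1, 3.9, SUNNY_COVE_IPC)    # i7-1065G7, 1T turbo
print(f"1T: Comet {comet_1t:.2f} vs Ice {ice_1t:.2f}") # ~4.70 vs ~4.60

# Fully threaded work: 6C/12T vs 4C/8T at the quoted all-thread clocks
comet_nt = rough_throughput(12, 3.9, SKYLAKE_IPC)
ice_nt   = rough_throughput(8, 3.5, SUNNY_COVE_IPC)
print(f"nT: Comet {comet_nt:.1f} vs Ice {ice_nt:.1f}") # ~46.8 vs ~33.0
```

Even with the IPC uplift baked in, the extra clocks and threads keep the 14++ part ahead in this crude model, which is consistent with the benchmark split above.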
With all this in mind, what does it mean? Intel seems unwilling to put Comet Lake and Ice Lake in the same graph because Comet Lake scores some serious wins in ‘real world benchmarks’, and it isn’t until Intel introduces the accelerated workloads built upon Ice Lake’s GPU or AVX-512 unit, or games, or connectivity, that Ice Lake can get a true win. But those are specific scenarios for specific individuals, which goes against the ethos of Intel’s march on ‘real world performance’.
Ultimately Intel is in its own catch-22. It needs to show off the fact that it has processors with accelerator hardware that can smash the competition (and previous generations), but at the same time, because those features aren’t widespread, it’s trying to combine that with its ‘performance in the real world’ messaging and asking reviewers to target the more popular workloads, even if Intel isn’t doing that itself in its own performance comparisons. The easiest way for Intel to approach this is to drop the notion of ‘real-world performance’ as the ultimate goal, because with it the company is shooting itself in the foot. It can’t both push reviewers to focus on real-world performance and highlight its accelerators for niche applications. Sure, going after real-world performance is a laudable goal, but Intel has to understand that it needs to market its data appropriately. At the minute it’s coming across as a big jumbled mess that doesn’t do anyone any favors - least of all Intel.
Comments
0ldman79 - Thursday, January 16, 2020
The optimizations are definitely a bonus but if they only exist in 6 apps then what good are they? I mean how long was NVENC out there before hardware acceleration was a thing?
0ldman79 - Thursday, January 16, 2020
I didn't quite phrase that like I wanted. NVENC was released with Kepler.
It was just a couple of years ago that NVENC was added to Handbrake and Staxrip, before that it was added to a few commercial/freeware apps with mixed results.
It's only been a quality option for a short while now. Initially the only software that supported NVENC had little to no quality options aside from bitrate.
levizx - Friday, January 17, 2020
NVENC is useless in almost all use cases other than converting video for your phone and realtime recoding while casting. Casting wasn't even a thing back then and streaming already took over.
m53 - Thursday, January 16, 2020
“I'm seeing this trend a lot in recent reviews, where people focus on CPU aspects to magnify Intel's problems to generate that sensational headline. I get it .. you are vying for an audience.”
Good point. I am seeing similar practice on many other sites as well. Catchy headlines and colorful opinions instead of presenting information or test results as is. I guess it pleases the fanboys and generates traffic.
Spunjji - Friday, January 17, 2020
You see similar things on other sites? Great. It's not happening here. The headline is "Intel's confusing messaging - is Comet Lake better than Ice Lake". The article provides analysis of Intel's own publicly available figures in an attempt to answer that. What, precisely, would you prefer?
Spunjji - Friday, January 17, 2020
You know the trend I see? Obtuse comments that don't relate to the article in question but instead pick on some vague theoretical "trend" visible only to the commenter, which generate equally bland "won't somebody think of the children" style responses that don't actually move any discussion in any particular direction. You're agreeing with a guy who's complaining that an article about CPU performance talks about the performance of CPUs. It doesn't get much more inane than this.
m53 - Saturday, January 18, 2020
“I guess it pleases the fanboys and generates traffic.” Your 20+ comments on a “marketing messaging” related opinion piece have proven my point.
Spunjji - Monday, January 20, 2020
All you've "proven" is that these threads are being stacked with the kind of comments that piss me (personally) off, and that I've inexplicably taken it upon myself to respond to most of them. None of that demonstrates anything about Anandtech or the reasoning behind their editorial decisions, which was your claim. Your inane replies continue to add to the body of evidence in favour of my point, namely that the threads are being astroturfed by gormless armchair critics and shills with nothing to add.
ywyak - Thursday, January 16, 2020
> Ryan and I spent some time discussing these results.
Ryan Shrout or the lovely Editor of Anandtech, Ryan Smith? In any case, could you clarify in the article for future reference?
Ian Cutress - Thursday, January 16, 2020
Heh yes, perhaps I should clarify. It was Ryan Smith.