A First Look at Futuremark’s New 3DMark Time Spy Extreme: DX12 Benchmark Now in 4K
by Nate Oh on October 4, 2017 10:00 AM EST
Posted in: GPUs, Futuremark, 3DMark, Benchmarks, DirectX 12
This week, Futuremark unveiled Time Spy Extreme, a 4K version of the DX12 gaming benchmark released in July 2016. Slotting into the 3DMark suite, Time Spy Extreme is available under the Advanced and Professional Editions on Windows, and focuses on DX12 functionality on contemporary high-end graphics cards as well as high core count processors. The original Time Spy was already more intensive than Fire Strike Ultra, and Time Spy Extreme brings that graphical punishment to 4K.
Where the original Time Spy scaled poorly beyond 10 threads, Time Spy Extreme’s CPU test was redesigned for CPUs with 8 or more cores, and it can additionally utilize the AVX2 and AVX-512 instruction sets. While the benchmark does not require a 4K monitor, the usual video memory demands still apply: Time Spy Extreme has a 4GB VRAM minimum, a step up from Fire Strike Ultra’s 3GB.
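For readers curious what “utilizing AVX2” looks like in practice, the sketch below shows the general pattern: detect SIMD support at runtime and dispatch a wide vector path when it is available. This is purely our own illustration under assumed names (integrate, integrate_avx2), not Futuremark’s simulation code.

```cpp
// Illustrative only: runtime dispatch to an AVX2 code path, the kind of thing a
// CPU physics workload can do to exploit wider SIMD units. Not Futuremark's code.
// Builds with GCC/Clang on x86-64; the target attribute lets the AVX2 body be
// compiled without enabling AVX2 for the whole translation unit.
#include <immintrin.h>
#include <cstddef>

__attribute__((target("avx2")))
static void integrate_avx2(float* pos, const float* vel, float dt, std::size_t n)
{
    const __m256 vdt = _mm256_set1_ps(dt);
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {                        // 8 floats per 256-bit register
        __m256 p = _mm256_loadu_ps(pos + i);
        __m256 v = _mm256_loadu_ps(vel + i);
        p = _mm256_add_ps(p, _mm256_mul_ps(v, vdt));    // pos += vel * dt
        _mm256_storeu_ps(pos + i, p);
    }
    for (; i < n; ++i)                                  // scalar tail
        pos[i] += vel[i] * dt;
}

static void integrate_scalar(float* pos, const float* vel, float dt, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

void integrate(float* pos, const float* vel, float dt, std::size_t n)
{
    // Take the wide SIMD path only when the CPU actually reports support for it.
    if (__builtin_cpu_supports("avx2"))
        integrate_avx2(pos, vel, dt, n);
    else
        integrate_scalar(pos, vel, dt, n);
}
```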
Otherwise, the sequence remains the same: the Time Spy explores a museum filled with artifacts and terraria from Futuremark’s other benchmarks, past and present. We covered the technical aspects when Time Spy was first released, and the underlying details are likewise unchanged. Ultimately, the Time Spy and Time Spy Extreme engine was built for DX12 from the ground up, and it incorporates the marquee DX12 features: asynchronous compute, explicit multi-adapter, and multi-threading. Like the original, Extreme was developed with input from Futuremark’s Benchmark Development Program, a group that includes AMD, Intel, and NVIDIA.
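As background on the first of those features: in DX12, asynchronous compute is exposed by creating a separate compute-type command queue alongside the usual direct (graphics) queue, letting the driver and GPU overlap compute work with rendering. The following is a minimal sketch of the API calls involved, assuming a default adapter; it is not Futuremark’s code.

```cpp
// Minimal illustration of creating separate graphics and compute queues in D3D12,
// the mechanism behind "asynchronous compute". Sketch only, not Futuremark's code.
// Windows, link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

bool CreateQueues(ComPtr<ID3D12Device>& device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Create a device on the default adapter (error handling trimmed for brevity).
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return false;

    // The "direct" queue accepts graphics, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue))))
        return false;

    // A second, compute-only queue; work submitted here may execute concurrently
    // with the graphics queue if the GPU and driver can overlap the two.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return SUCCEEDED(device->CreateCommandQueue(&computeDesc,
                                                IID_PPV_ARGS(&computeQueue)));
}
```

Synchronization between the two queues is handled with fences; whether a given card actually gains anything depends on how well its hardware and driver overlap the two workloads, which is exactly what the async on/off comparison later in this article probes.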
Time Spy Extreme will be released to the public on October 11. For the time being, an advance release has been made available to the press, and we’ve taken a quick look at the benchmark with a selection of modern cards.
Graphics Benchmarking Results
To preface, Time Spy Extreme’s CPU test is a pure simulation, and the benchmark measures its average simulation time per frame. In other words, that score/measurement does not depend at all on the graphical rendering work. With that in mind, we are only reporting graphics subscores and framerates. In general, Time Spy and Time Spy Extreme benchmark scores are not comparable to one another.
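To make the distinction concrete, the CPU metric is a mean per-frame time while the graphics metric is a framerate; a trivial sketch of the two (our own illustration, not 3DMark’s scoring code) is below.

```cpp
// Hedged illustration of the distinction drawn above. Not 3DMark's actual scoring.
#include <cstddef>
#include <numeric>
#include <vector>

// CPU test style metric: arithmetic mean of per-frame simulation times, in ms.
double average_sim_time_ms(const std::vector<double>& frame_sim_times_ms)
{
    if (frame_sim_times_ms.empty()) return 0.0;
    return std::accumulate(frame_sim_times_ms.begin(), frame_sim_times_ms.end(), 0.0)
           / frame_sim_times_ms.size();
}

// Graphics test style metric: frames rendered divided by elapsed wall-clock time.
double average_fps(std::size_t frames_rendered, double elapsed_seconds)
{
    return elapsed_seconds > 0.0 ? frames_rendered / elapsed_seconds : 0.0;
}
```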
Overall, on our 8-core Skylake-X GPU test bench, we looked at 9 cards with async compute both enabled and disabled: the Titan Xp, GTX 1080 Ti, GTX 1080, GTX 1070, GTX 980 Ti, RX Vega 64, RX Vega 56, RX 580, and R9 Fury X. We also checked the effect of enabling an 11.6GB HBCC memory segment on the Vega cards. NVIDIA cards were tested on the 385.69 drivers and AMD cards on Radeon Software Crimson ReLive Edition 17.9.3. For the RX Vega cards, the default power profile (primary BIOS and 'Balanced' power profile) was used.
The results are largely unsurprising; as we have noted historically and in our RX Vega review, AMD's graphics performance benefits more from DX12 environments than from DX11. And despite having only the bare minimum 4GB of VRAM, the Fury X is able to hold its own. The RX 580 was never intended for 4K gaming, and is expectedly unsuitable here.
The divergent performance between the two graphics subtests reflects their particular workloads. As in the original Time Spy, Graphics Test 2 heavily features ray-marched volume illumination and uses just under 3 times as many tessellation patches as Graphics Test 1.
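For context, ray-marched volume illumination steps a ray through a participating medium in small increments, accumulating in-scattered light while attenuating it by the density sampled so far. The sketch below is a heavily simplified, CPU-side illustration of the idea under assumed stand-ins (density, light_at); the benchmark itself does this per pixel in GPU shaders, and this is not Futuremark’s code.

```cpp
// Simplified single-ray illustration of ray-marched volume illumination: march in
// fixed steps, accumulate scattered light, attenuate by accumulated density.
// Our own sketch; density() and light_at() are hypothetical stand-ins for the
// benchmark's actual volume data and light model.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Hypothetical stand-in volume: a soft spherical blob of fog around the origin.
static float density(Vec3 p)
{
    float r2 = p.x * p.x + p.y * p.y + p.z * p.z;
    return std::max(0.0f, 1.0f - r2);          // denser toward the centre
}

// Hypothetical stand-in lighting: uniform ambient light.
static float light_at(Vec3) { return 1.0f; }

float march_volume(Vec3 origin, Vec3 dir, float max_dist, int steps)
{
    const float step = max_dist / steps;
    float transmittance = 1.0f;   // fraction of light still reaching the camera
    float radiance = 0.0f;        // accumulated in-scattered light

    for (int i = 0; i < steps; ++i) {
        Vec3 p = add(origin, scale(dir, (i + 0.5f) * step));
        float d = density(p);
        if (d <= 0.0f) continue;

        radiance += transmittance * light_at(p) * d * step;
        transmittance *= std::exp(-d * step);  // Beer-Lambert attenuation
        if (transmittance < 0.01f) break;      // early out once nearly opaque
    }
    return radiance;
}
```

The cost scales with the number of steps per ray and rays per frame, which is why a subtest built around this technique stresses shader throughput so heavily at 4K.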
Ultimately, Time Spy Extreme proves itself to be quite punishing, and if the benchmark were a game, only the 1080 Ti and Titan Xp would provide marginally playable framerates.
With both Vega and Pascal, enabling asynchronous compute in Time Spy Extreme results in improved performance, and more so for the AMD cards. While not shown on the graph, the RX 580 and Fury X also benefit, while the 980 Ti regresses ever-so-slightly.
We also tested both RX Vega cards with an 11.6GB HBCC memory segment enabled. The scores differed by less than 2% compared to the default Async On scores, in line with previous reports of HBCC providing minimal benefits in games.
For more information on the subtests, the 3DMark technical guide has been updated with the specifics of Time Spy Extreme.
Availability and Purchasing
As mentioned earlier, Time Spy Extreme will be available to the public next Wednesday (10/11/17). It will be a free update for 3DMark Advanced and Professional Edition licenses purchased after July 14, 2016, as well as for owners of older copies of 3DMark who have already purchased the Time Spy upgrade. For copies purchased before that date, Time Spy Extreme will come with the purchase of the Time Spy upgrade.
Comments
Fergy - Friday, October 6, 2017
Matrox has never had a competitive GPU. Just like 3Dlabs and Bitboys. I hope Apple/Qualcomm/ARM make a desktop GPU.

extide - Wednesday, October 4, 2017
Anyone remember when FutureMark was still MadOnion? Heh, I always loved the music in 3DMark 2000.

BurntMyBacon - Thursday, October 5, 2017
I liked the old name better. Just ran 3DMark 2000 for nostalgia's sake. I had forgotten what the music sounded like. I remember when 3DMark 2000 represented the pinnacle of graphics technology.

Communism - Thursday, October 5, 2017
If you don't know the subject matter, then don't comment.

Communism - Thursday, October 5, 2017
RIP proper functioning of the comment system; this was a reply to one of the posts.

EugenM - Friday, January 19, 2018
Time Spy doesn't support true async compute. Its single-threaded style of async compute is clearly tailored to account for Nvidia's hardware limitations, leaving massive processing power idle on the AMD RX Vega 64. Had this benchmark had proper async compute, the results would have been a deja vu of the old days, when benchmarks over-tessellated everything to bog down AMD's dedicated hardware tessellator and let Nvidia's GPU-processed tessellation shine, while everything else in the graphics quality department was just about average. In this test, instead of being fair and implementing DX12 properly and fully, Futuremark implemented Nvidia's incomplete version of DX12.