Microstutter: A Quantitative Investigation

Ihatethedukes

New Member
Hi all, I'm putting together a thread that discusses microstuttering and, I think, approaches the concept from a much better angle than has been taken before.

Introduction
The current view floating around the internet is that multi-GPU cards are susceptible to what is called microstutter. The prevailing mechanism put forth by people who believe in multi-GPU microstutter is that frames output from each GPU in AFR (alternate frame rendering) mode are not evenly spaced, leaving relatively large gaps between rendered frames and giving the illusion of a lower FPS than is actually being output.

[Figure: rage3d.com's AFR frame-timing diagram]


As you can see from rage3d.com's figure, the proposed mechanism is two GPUs rendering in parallel, finishing their frames at nearly the same time and displaying them close together, after which you're left looking at the last frame much longer than you would be if the two frames were rendered and displayed at regular intervals. It is this 'pairing of outputs' (PoPs) that supposedly leads to the visible 'choppiness' of microstuttering gameplay.
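To make the claimed mechanism concrete, here's a toy simulation in Python of two AFR GPUs finishing their frames almost in lockstep; every number in it is invented purely for illustration:

Code:
# Toy PoPs simulation: two GPUs in AFR each take ~33.3 ms per frame but
# start almost together, so their outputs land in pairs. All numbers
# are made up for illustration.
gpu_frame_time = 33.3   # ms per frame on each GPU
offset = 3.0            # ms between the two GPUs' starts (nearly in sync)

presents = sorted([i * gpu_frame_time for i in range(5)] +
                  [i * gpu_frame_time + offset for i in range(5)])
gaps = [round(b - a, 1) for a, b in zip(presents, presents[1:])]
print(gaps)  # alternates ~3.0 and ~30.3 ms instead of a steady ~16.7 ms

Instead of a steady frame every ~16.7ms, you get a short gap followed by a long one, which is exactly the pattern the PoPs explanation predicts.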

There are a few articles around that attempt to approach the issue of ustutter:
http://www.rage3d.com/reviews/video/ati4870x2cf/index.php?p=1
and
http://translate.google.com/transla...page_1/fromTicket_/key_//tm.htm&sl=auto&tl=en

While both works are admirable, neither really tells a good story about ustutter. The older rage3d article is easily the better of the two; while its attempt at quantifying ustutter by binning frame times is a start, it falls short of its goal because frame-time bins are FPS dependent, meaning the higher the FPS, the more frames will fall into the faster bins. The more recent donanimhaber article lacks a very basic control: a single GPU running the same benchmark. I took the liberty of running a single 5870 in the same benchmark for comparison.
Single 5870
[Chart: single 5870 frame times, first ~30 frames]

Their 5970
[Chart: donanimhaber's 5970 frame times, Unigine 1920x1200]


Remember, I did not cherry-pick these results; I just took the first 30 frames or so to illustrate that a 5870 has similar rendering irregularity. Here's the whole run.

[Chart: single 5870 frame times, full run]


This result alone makes me doubt the validity of the PoPs mechanism for ustutter. So, I'm developing A QUANTITATIVE METHOD for testing whether the PoPs mechanism holds.

METHODS:

I am defining microstutter as inconsistent time intervals between rendered frames, nothing more, nothing less. This can exist on both single- and multi-GPU setups (the mechanism is in dispute more than the existence). I propose that we set the limit for microstutter at a >50% difference from the moving average delay. This 50% number was chosen because it allows for 'noise', the natural differences in difficulty between rendering one frame and the next, so that those don't show up as microstutter, while still flagging the uneven spacing between PoP frames.

A moving average averages the frame in question with the ones surrounding it (let's say +/-30 frames). This gives us a local FPS average, one that isn't affected by a higher FPS a minute later in the benchmark, to use as the 'theoretical best case' to compare the real delay to. Then we compare the actual frames one by one to their moving average. I will consider any variation over 50% of the value of the moving average to be microstutter. EXAMPLE:

Let's take a worst-case scenario: a system that alternately produces frames 5ms and 30ms apart. The +/-30-frame moving average is then (5 + 30)/2 = 17.5ms. We then compare the ACTUAL frame times to that average as follows:

(30 - 17.5)/17.5 = 0.71 (+71%)
(5 - 17.5)/17.5 = -0.71 (-71%)


We then count the number of times we see a >50% difference. If we score >50% as 1 and <50% as 0, we can graph the scores and see where there are a lot of peaks at 1 ('1-peaks').

Since 100% of the delays in this example are >50%, it would be RIDDLED with microstutter.
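For anyone who wants to reproduce the scoring, here's a minimal sketch in Python; the function name, the +/-30 window, and feeding it per-frame render times in milliseconds (e.g. pulled from a FRAPS frametimes log) are my own choices rather than any fixed tool:

Code:
# Minimal sketch of the scoring described above. frame_times_ms is a
# list of per-frame render times in milliseconds; window and threshold
# defaults match the method outlined in this post.

def microstutter_scores(frame_times_ms, window=30, threshold=0.5):
    """Score each frame 1 if it deviates from its local moving average
    by more than `threshold` (50% by default), else 0."""
    scores = []
    for i, ft in enumerate(frame_times_ms):
        local = frame_times_ms[max(0, i - window): i + window + 1]
        local_avg = sum(local) / len(local)
        scores.append(1 if abs(ft - local_avg) / local_avg > threshold else 0)
    return scores

# The worst-case example above: frame times alternating 5 ms / 30 ms.
scores = microstutter_scores([5.0, 30.0] * 60)
print(sum(scores) / len(scores))  # 1.0 -> 100% of frames flagged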

An issue with this is that we still have to decide what constitutes 'real' microstutter.

I'll reiterate that the mechanism by which this inconsistency comes about is in dispute. People insist that it arises in multi-GPU setups. Because of this, we can simply compare the number of 1-peaks from a single card to the number a dual-GPU card creates and see whether it's elevated in dual-GPU setups. The advantage of this approach is that it's quantitative, FPS-blind, and doesn't rely on subjective analysis.

We need identical benchmark data from a 5870 and a 5970 (meaning same time/settings/benchmark/place in benchmark).
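As a sketch of how that comparison could run (the CSV file names below are placeholders, each assumed to hold one frame time in milliseconds per line from identical runs):

Code:
# Hedged comparison sketch; file names are placeholders for logs from
# identical benchmark runs, one frame time in milliseconds per line.

def avg_score(path, window=30, threshold=0.5):
    with open(path) as f:
        times = [float(line) for line in f if line.strip()]
    flagged = 0
    for i, ft in enumerate(times):
        local = times[max(0, i - window): i + window + 1]
        local_avg = sum(local) / len(local)
        if abs(ft - local_avg) / local_avg > threshold:
            flagged += 1
    return flagged / len(times)

for label, path in [("5870", "5870_frametimes.csv"),
                    ("5970", "5970_frametimes.csv")]:
    print(f"{label} - average score per frame: {avg_score(path):.3f}")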

Results

The results are interesting to say the least.
5870 - Average score per frame: 0.438

[Chart: 5870 per-frame microstutter scores]


5970 - Average score per frame: 0.307
[Chart: 5970 per-frame microstutter scores]


Analysis
The single-GPU 5870 actually has MORE ustutter than the dual-GPU 5970. Roughly 44% of the frames rendered on the single GPU deviated from the local average render time by more than 70%! The 5970, on the other hand, had only ~31% of its frames deviate by more than 70% of the average render time. To address the possibility that the 5970 has fewer but larger variances while the 5870 has more, smaller ones, the threshold for ustutter was raised to a 90% variance; the 5970 then had a ~7% occurrence while the 5870 had ~11%. This suggests that the 5870 has both more and larger ustutter events than the dual-GPU 5970.

This data refutes the myth that multi-GPU systems create ustutter.
 
blech! too many words :D

i looked into this when i went crossfire, and tbh, the micro stutututtering i noticed wasn't much worse than the tearing that goes on when the video frame rate is higher than what the monitor's rated for; maybe it was just that, tearing due to a high frame rate. the only game where this was really an issue was GRID; i imagine any other fast-paced game would have the same problem.

also, i think the microstututtering was more of a problem with the early multi-gpu setups and most of the issues have been resolved through optimized drivers.

do you think ustuttering would be more or less pronounced with higher rated monitors: 120Hz, 240Hz ??
 
do you think ustuttering would be more or less pronounced with higher rated monitors: 120Hz, 240Hz ??

that's why you just buy a good single gpu instead of multiple cards :D

actually, the results showed that microstuttering is really a non-issue. however, ghosting and image tearing are still present in fast-moving, high-res rendering. higher refresh rates in newer lcd monitors aim to reduce these effects, but i'm sure that would introduce some new problems if your video card now has to keep up with the monitor's frame rate.
 
Microstutter is the new ADD

Sorry to dredge up an old topic. My interest in microstutter is more academic than as a gamer, but since nobody else seems to care, I would like to know if someone has a semi-decent solution for it.
Here are some of my observations/notes on the issue:
1. Microstutter is present in single-graphics-card rigs as well as CrossFire rigs (as the OP noted) and can be defined like this: for a 60Hz refresh-rate display mode, a frame should be written out to the display device every 16.6ms. On average over one second you will get 60 frames, but closer examination of the inter-frame periods gives you values anywhere between 4 and 24ms. This irregular timing is called microstutter.
2. However, when most people run their tests, they use the best possible quality settings of their favorite games. Hence there is always a shred of doubt that the graphics card simply was not able to render in time, rather than this being a systemic problem.
3. So, I created a very simple OpenGL example (from one of NeHe's tutorials) that renders a solid-color rotating triangle without anything remotely fancy. Microstutter is still present even in that example (a sketch in that spirit follows this list).
4. I brought out an oscilloscope and hooked it up to the VGA v-sync pin of the output being sent to the monitor; the microstutter is nicely evident, as confirmed by software like FRAPS.
5. It is not a VGA-only problem; I also checked on a DVI-capable projector which breaks out the v-sync (for external strobing) and found the same issue.
6. This issue is definitely not a side effect of the "Sync to Vertical Retrace (Vsync)" issue, which results in tearing and can be resolved (in a limited sense) with entirely different actions.
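Abe's code isn't posted here, so the following is only a hedged reconstruction of steps 1 and 3 in Python with PyOpenGL/GLUT: a NeHe-style solid rotating triangle with the inter-frame intervals logged in software (the window title, geometry, and reporting interval are my own choices):

Code:
# Hedged reconstruction, not the original code: a NeHe-style solid
# rotating triangle via PyOpenGL/GLUT, logging inter-frame intervals
# (ideally ~16.6 ms every frame at 60 Hz).
import time
from OpenGL.GL import (glClear, glLoadIdentity, glRotatef, glBegin, glEnd,
                       glColor3f, glVertex3f, GL_COLOR_BUFFER_BIT,
                       GL_TRIANGLES)
from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutCreateWindow,
                         glutDisplayFunc, glutIdleFunc, glutPostRedisplay,
                         glutSwapBuffers, glutMainLoop, GLUT_DOUBLE, GLUT_RGB)

angle = 0.0
last = time.perf_counter()
intervals = []  # inter-frame periods in milliseconds

def draw():
    global angle, last
    glClear(GL_COLOR_BUFFER_BIT)
    glLoadIdentity()
    glRotatef(angle, 0.0, 0.0, 1.0)    # spin about the z axis
    glBegin(GL_TRIANGLES)              # one solid-color triangle
    glColor3f(1.0, 0.0, 0.0)
    glVertex3f(0.0, 0.6, 0.0)
    glVertex3f(-0.6, -0.6, 0.0)
    glVertex3f(0.6, -0.6, 0.0)
    glEnd()
    glutSwapBuffers()
    now = time.perf_counter()
    intervals.append((now - last) * 1000.0)
    last = now
    angle += 1.0
    if len(intervals) % 600 == 0:      # report every ~10 s at 60 Hz
        print(f"min {min(intervals):.1f} ms / max {max(intervals):.1f} ms")

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
glutCreateWindow(b"microstutter test")
glutDisplayFunc(draw)
glutIdleFunc(lambda: glutPostRedisplay())  # redraw continuously
glutMainLoop()

Even with a scene this trivial, the min/max spread of the logged intervals shows the same irregular timing the oscilloscope picks up on the v-sync pin.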

Noted solutions so far: (individual results will vary, I could only test/confirm a couple)
1. Disabling the LAN connection seems to reduce the stutter but doesn't eliminate it. (Verified: in-house discovery)
2. Some BIOSes allow you to set the PCI latency to a higher value than the default 32 cycles. (Verified: side effects unknown) (Overclock.net)
3. Disable core parking. (not verified)
4. Use a graphics card which has a genlock/framelock capability. Requires you to buy a Quadro FX + G-sync card (or ATI equivalent) (total damage close to £1600) (not verified)

If anyone else has any other solutions/observations on this, please do let me know.

Regards,
Abe
 