Sampling and Light Transport · Lesson 1 of 1 · 15 min · beginner

How many samples should I use in Blender?

How many samples should you use in Blender Cycles? A practical guide to what samples do, why 4x samples only halves noise, how adaptive sampling and Noise Threshold change the answer, and when denoising beats brute force.

What you will learn

  • Understand what samples actually measure in Cycles.
  • Know why simply raising Max Samples has diminishing returns.
  • Use Noise Threshold, denoising, and a few key settings more deliberately.

Prerequisites

  • Basic familiarity with Cycles renders in Blender.
  • Helpful but optional: the earlier Light Paths lesson in this series.

If you have ever typed in 1024, 2048, or 4096 samples mostly because “higher must mean better,” you are in extremely good company.

Samples are one of the most misunderstood settings in Blender because they feel like a quality slider, but they are really a noise-control budget. They absolutely matter, but not in the simple linear way most people first imagine.

This lesson is about Cycles. Eevee has its own sampling behavior, so treat the numbers and heuristics here as Cycles guidance, not a universal rule for every Blender renderer.

Wrong samples, wasted setup

Pick the wrong sample count and you either render for six hours instead of one, or you ship an animation where every shadow flickers. Everything else you tuned (denoiser, light bounces, compositing) depends on this number landing in the right range.

What samples actually do

Cycles is a path tracer. It estimates light by tracing many possible light paths through each pixel and averaging the result. More samples = a more stable estimate.

A reasonable guess: doubling the work should roughly halve the noise. Try that prediction against the widget below. Drag from 16 samples to 64 (4x the work). Then from 256 to 1024 (also 4x the work). If the relationship between samples and noise is linear, both jumps should look the same amount cleaner.

Samples

Samples are repeated light estimates per pixel, not a sacred quality number.

Cycles keeps shooting more light paths through a pixel and averages the result. More samples reduce random noise, but the payoff slows down fast: doubling samples drops noise to about 71% of before, while render time doubles.

Pick the scene difficulty

Noise left

50% of the noise you would still see at 16 samples. Noise falls in proportion to 1 / sqrt(samples), not in a straight line.

Relative time

1.0x the work of a 64-sample render. Samples are expensive because every extra pass still costs render time.

What the next doubling buys

Noise drops to about 71% of before for 2x the render time. The cost grows linearly while the noise reduction lags behind on the 1 / sqrt(N) curve.

If you predicted “the same amount cleaner” you should have noticed something felt off. The 16-to-64 jump is dramatic. The 256-to-1024 jump is much quieter, even though both cost the same multiplier in render time. Each extra round gives you less visible improvement than the one before.

Why? Flip a coin 100 times and you’ll typically be about 5 flips off 50/50. Flip 10,000 times and you’ll typically be about 50 flips off 5,000/5,000. The raw error count grew 10x, but the sample size grew 100x, so the percentage error shrank 10x. That ratio is the whole curve: error counts grow like sqrt(N) while sample size grows like N, so the percentage error falls as 1 / sqrt(N). A hundred times the work, ten times the accuracy.

That math has a name: Monte Carlo sampling, the technique of estimating an answer by averaging lots of independent random guesses. Cycles is doing exactly that for every pixel. Random error falls with 1 / sqrt(samples), not in a straight line.
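If you would rather see that coin-flip arithmetic happen than take it on faith, here is a small self-contained Python check. This is not Blender code; the flip counts and trial count are arbitrary choices for illustration.

```python
import random
import statistics

def typical_error(n_flips, trials=2000, seed=0):
    """Mean absolute deviation of the heads count from the expected n/2."""
    rng = random.Random(seed)
    return statistics.mean(
        abs(bin(rng.getrandbits(n_flips)).count("1") - n_flips / 2)
        for _ in range(trials)
    )

e_small = typical_error(100)    # typically ~4-5 flips off 50
e_big = typical_error(10_000)   # typically ~40-50 flips off 5,000
print(e_small, e_big)
print((e_small / 100) / (e_big / 10_000))  # relative error shrank ~10x
```

The absolute error grows by about 10x while the relative error shrinks by about 10x: exactly the sqrt(N)-versus-N gap described above.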

This is the key mental shift:

  • Samples are not “detail points.”
  • Samples are not “resolution.”
  • Samples are repeated attempts to reduce statistical uncertainty.

When a render looks grainy, the question is not only “should I raise the number?” It is also “am I spending those samples intelligently?”

Knowledge check

You have a render at `1024` samples. A friend says: 'Render it again at `1024` and average the two. That's effectively `2048` samples, so the noise should be halved.' Is your friend right?


No. Averaging two independent 1024-sample renders is mathematically the same as one render at 2048 samples (the friend is right about that). But going from 1024 to 2048 is only a 2x increase, which by sqrt(N) drops noise to about 71% of before, not 50%. To halve the noise, you need 4x the samples: a single render at 4096, or four averaged renders of 1024 each.
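You can test the friend's claim with a toy pixel. In this sketch the "light paths" are plain uniform random numbers standing in for real path traces; only the 1 / sqrt(N) behavior carries over, not anything renderer-specific.

```python
import random
import statistics

def render(samples, rng):
    """One noisy pixel estimate: the mean of `samples` random 'light paths'."""
    return sum(rng.random() for _ in range(samples)) / samples

def noise(samples, trials=800, seed=1):
    """Pixel noise: std-dev of the estimate across many repeated renders."""
    rng = random.Random(seed)
    return statistics.stdev(render(samples, rng) for _ in range(trials))

n_1024, n_2048, n_4096 = noise(1024), noise(2048), noise(4096)
print(n_2048 / n_1024)  # ~0.71: averaging two 1024-sample renders gets you here
print(n_4096 / n_1024)  # ~0.50: halving the noise took 4x the samples
```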

Why brute-force samples get expensive fast

Render time scales linearly with sample count. Noise reduction does not. So imagine a render that takes 2 minutes at 128 samples:

  • Bumping to 512 (4x): 8 minute render, noise drops to 50% of before.
  • Bumping to 2048 (16x): 32 minute render, noise drops to 25% of before.
  • Bumping to 8192 (64x): 128 minute render, noise drops to 12.5% of before.

Each step costs 4x more time for the same fractional improvement. Past a point, the picture is barely changing while the render queue is. Reserve brute force for the parts of the image that genuinely need it.
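Those bullets are pure arithmetic: render time scales with the sample multiplier while noise follows 1 / sqrt of it. A quick Python check of the numbers above:

```python
import math

base_samples, base_minutes = 128, 2.0

for mult in (4, 16, 64):
    samples = base_samples * mult
    minutes = base_minutes * mult      # render time scales linearly with samples
    noise_pct = 100 / math.sqrt(mult)  # noise falls as 1 / sqrt(N)
    print(f"{samples:5d} samples -> {minutes:6.0f} min, noise at {noise_pct:4.1f}% of before")
```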

Noise Threshold changes the real question

If Noise Threshold is enabled, Cycles switches to adaptive sampling: every so often, it estimates how noisy each pixel still is and compares that estimate to the threshold you set. Pixels under the threshold stop. Pixels over keep going. The threshold is the per-pixel version of “how confident am I in this average?”

That changes the job of Max Samples:

  • Without adaptive sampling, Max Samples is close to “every pixel does this much work.”
  • With adaptive sampling, Max Samples becomes a ceiling that only the hardest pixels actually hit.
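As a mental model only, per-pixel adaptive sampling looks roughly like the loop below. This is the shape of the idea, not Cycles' actual scheduler; the threshold, check interval, and the two toy pixels are illustrative assumptions.

```python
import random
import statistics

def adaptive_pixel(sample_fn, noise_threshold, max_samples,
                   min_samples=32, check_every=16):
    """Toy per-pixel adaptive loop: keep sampling until the estimated noise
    (std error of the running mean) drops below the threshold, or the
    ceiling is hit."""
    values = []
    while len(values) < max_samples:
        values.append(sample_fn())
        n = len(values)
        if n >= min_samples and n % check_every == 0:
            noise_estimate = statistics.stdev(values) / n ** 0.5
            if noise_estimate < noise_threshold:
                break  # this pixel is "clean enough"
    return sum(values) / len(values), len(values)

rng = random.Random(0)
easy = lambda: 0.5 + 0.01 * (rng.random() - 0.5)  # flat wall: tiny variance
hard = lambda: rng.random() ** 8                  # glossy highlight: huge variance

_, easy_n = adaptive_pixel(easy, noise_threshold=0.001, max_samples=1024)
_, hard_n = adaptive_pixel(hard, noise_threshold=0.001, max_samples=1024)
print(easy_n, hard_n)  # the easy pixel stops almost immediately; the hard one hits the ceiling
```

The flat-wall pixel stops at the first check while the high-variance pixel burns the whole ceiling: that spread is exactly what the heatmap below visualizes.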

The widget below shows a simulated render with adaptive sampling on. The heatmap on the right is hidden behind a “Show me” button on first load, because the prediction is the point. Look at the scene on the left and commit to an answer first. Which region do you think will burn the most samples: the bright window, the glossy sphere, the soft floor, or the shadow corners under it? Pick one, then click reveal.

Adaptive sampling

Noise Threshold changes the real question from “how high?” to “how clean is clean enough?”

Adaptive sampling lets easy pixels stop early while difficult pixels keep working. The max sample count becomes a ceiling, not a promise that every pixel will spend that much time.

What the render looks like

Each pixel terminates when its noise estimate drops below threshold. The image you see is what survives once every pixel has stopped (or hit the ceiling).

Where Cycles spends samples

Look at the scene on the left. Predict which regions burned the most samples before clicking. Bright window? Glossy sphere? Soft floor? Shadow corners?

Average pixel

About 47 samples. This is the number that really matters for total render time once adaptive sampling is enabled.

Quiet background

About 16 samples before it is considered “good enough.”

Subject detail

About 34 samples where texture, shading changes, and edges need more confidence.

Hardest highlight

About 148 samples where glossy contrast and bright light make Monte Carlo noise stubborn.

If your prediction matched the heatmap, you have the model. If you were surprised, the surprise itself is the lesson: adaptive sampling spends budget where the variance actually lives, not where the geometry looks detailed or the lighting looks dramatic.

The Blender manual also gives a useful numeric anchor here:

  • Typical Noise Threshold values are roughly 0.1 to 0.001.
  • Lower values mean cleaner renders, but longer render times.

A better habit

Instead of asking only “how many samples?”, ask “what max ceiling gives adaptive sampling enough room, and what threshold tells it when to stop?”

Denoising is not the same thing as more samples

The Blender docs put it bluntly: denoising gives you a less noisy image without requiring more samples. The widget below shows three panels: noisy input, denoised result (gated), and a ground-truth reference at very high samples. The sphere has a fine stripe pattern.

Before you click anything, set samples to 8, leave the pass mode at Color + Albedo + Normal (the strongest), and commit to a yes/no prediction: will the denoiser bring those stripes back? Then click “Show me what survives” and check. After you have an answer, move samples to 128 and try the same prediction.

Denoising

Denoising lets you stop earlier, but it is not the same thing as true sample detail.

Blender’s docs are pretty direct here: denoising reduces noise without requiring more samples, and extra passes like Albedo or Albedo + Normal generally help preserve detail. The trade is that aggressive denoising can smear information the renderer never really resolved.

Denoiser input passes

Noisy input

What the renderer actually produced at the current sample count. This is what the denoiser has to work with.

Denoised result

Compare the noisy input on the left to the ground truth on the right. The sphere has fine stripes. At the current sample count and pass mode, will the denoiser bring those stripes back? Commit, then click below.

Ground truth (very high samples)

What the scene actually looks like once noise is no longer the issue. Compare the denoised result to this. Anything missing here was never going to survive denoising at the chosen sample count.

Color only

The denoiser mostly sees grainy color. It can clean noise fast, but it is more likely to smear edges and tiny shading changes.

Add Albedo

Albedo helps the denoiser understand material color boundaries, which usually preserves surfaces better at low sample counts.

Add Normal too

Normal guidance helps the denoiser respect shape changes and usually gives the most detail-preserving result when the renderer supports it.

What you should have seen: at 128 samples the denoiser preserves the stripes across all three pass modes. At 8 samples no pass mode brings them back. Pass guidance (Albedo, Normal) helps the denoiser respect edges and material boundaries, but it cannot reconstruct detail the renderer never sampled cleanly. The stripe survives only if the noisy input had enough signal for the denoiser to lock onto in the first place.

Powerful, but limited:

  • Denoising can remove grain.
  • Denoising can make low-sample renders usable much sooner.
  • Denoising cannot invent shading detail the renderer never really resolved.

The manual also points out something many beginners miss:

  • Using at least Albedo is recommended because color-only denoising can blur details.
  • Albedo + Normal usually gives the denoiser the best structure to work with.

Denoising helps you stop earlier. It does not make sampling irrelevant.

The settings that matter most

All six live under Render Properties -> Sampling. Light Tree is one more click in: expand the Lights subsection or you will not see it.

One thing that bites people early: Viewport -> Max Samples and Render -> Max Samples are completely separate controls. Keep viewport samples low enough that you can iterate without waiting. The render side is where final-quality budget actually lives.
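If you script your render setup, the same controls are exposed on `bpy.context.scene.cycles`. Treat the snippet below as a hedged sketch: these RNA property names match recent Blender 4.x builds as far as I know, but they do drift between releases, so verify them in your version's Python console.

```python
import bpy

cycles = bpy.context.scene.cycles

cycles.preview_samples = 64           # Viewport -> Max Samples (keep low for iteration)
cycles.samples = 1024                 # Render -> Max Samples (the final ceiling)
cycles.use_adaptive_sampling = True   # turns on Noise Threshold
cycles.adaptive_threshold = 0.01      # Noise Threshold
cycles.adaptive_min_samples = 0       # 0 = automatic floor
cycles.use_denoising = True           # render-time denoise
cycles.denoiser = 'OPENIMAGEDENOISE'  # or 'OPTIX' on NVIDIA hardware
cycles.time_limit = 0.0               # seconds; 0 disables the time cap
cycles.use_light_tree = True          # on by default in recent Blender
```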

What each control is actually doing:

Max Samples

Question it answers: What is the most work any pixel is allowed to do?

It is the final ceiling, not automatically the ideal number for every pixel. A higher max gives difficult regions more headroom, but it does not mean the whole frame deserves that much work.

Once adaptive sampling is on, it is usually safer to give Max Samples generous headroom and let Noise Threshold decide when most pixels can stop. A ceiling that is too low can choke difficult regions before they ever clean up.

Noise Threshold

Question it answers: How clean does a pixel need to become before Cycles can stop?

This is the most important sampling control once you start using adaptive sampling seriously. Lower thresholds mean stricter standards and longer renders.

The estimator is also estimated

How does Cycles know per-pixel noise without the ground truth? It samples it. The threshold check is itself a Monte Carlo estimate: Cycles measures noise from a sample of samples and compares that estimate to your threshold. The same 1/sqrt(N) math from earlier applies recursively. With too few samples, the noise estimate itself is noisy and the check can fire too early. That is what Min Samples exists to prevent. Default 0 means automatic: Cycles picks a sensible floor from your Noise Threshold. Leave it at 0. Override only if you have seen visibly noisy regions slip through the convergence check on YOUR specific scene.
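A small simulation shows the failure mode. In this made-up setup the pixel's true standard error at 16 samples is 0.05, above a 0.04 threshold, yet the estimate computed from those same 16 samples dips below the threshold a noticeable fraction of the time; all numbers here are illustrative, not Cycles internals.

```python
import random
import statistics

rng = random.Random(0)
TRUE_STD = 0.2    # true per-sample noise of this hypothetical pixel
threshold = 0.04  # the pixel's true std error at 16 samples is 0.2/4 = 0.05

def estimated_noise(n):
    """Noise estimate computed from the samples themselves."""
    xs = [rng.gauss(0.0, TRUE_STD) for _ in range(n)]
    return statistics.stdev(xs) / n ** 0.5

trials = 2000
early_stops = sum(estimated_noise(16) < threshold for _ in range(trials))
print(early_stops / trials)  # a noticeable fraction would wrongly stop at 16 samples
```

With more samples per check, the estimate tightens and those false "clean enough" verdicts disappear: that is the job Min Samples (or its automatic floor) is doing.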

Denoise

Question it answers: Should Blender clean up remaining noise with a denoiser rather than by sampling alone?

This is often the practical speed win, especially for previews, stills, and many final renders. The two names you will usually care about are OpenImageDenoise, which is Blender’s general-purpose option, and OptiX, which is the NVIDIA-specific alternative. But treat denoising as a cleanup stage, not as proof that the sampling underneath no longer matters.

Blender also exposes separate viewport and render denoise controls. That is why it makes sense to denoise aggressively for fast lookdev, while being more deliberate about the final render settings.

Time Limit

Question it answers: How much time am I willing to spend before Blender stops, even if the sample ceiling was not reached?

If you really mean “give me the best image possible in this budget,” this is the most literal budget control in the whole panel.

Light Tree

Question it answers: Can Cycles sample the scene’s lights more intelligently?

Without Light Tree, every ray picks a light at random with equal probability. In a scene with 50 lights where one carries most of the illumination, only 1 in 50 rays lands on the light that actually matters. With Light Tree on, Cycles weights light-picks by distance and intensity so the lights that actually contribute get sampled more often. Same ray budget, less noise.

On by default in recent Blender.
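The 50-light story is easy to simulate. The sketch below compares uniform light picking against intensity-weighted picking, a crude stand-in for the Light Tree's distance-and-intensity weighting; the scene numbers are invented for illustration.

```python
import random
import statistics

rng = random.Random(0)
# 50 lights: one bright key light plus 49 dim fillers (made-up scene).
intensities = [100.0] + [0.1] * 49
total = sum(intensities)
pick_probs = [w / total for w in intensities]  # light-tree-style weighting

def contribution(i):
    """Light i's contribution to the pixel, with some shading randomness."""
    return intensities[i] * rng.uniform(0.5, 1.0)

def estimate(weighted, rays=64):
    """Monte Carlo lighting estimate, picking one light per ray."""
    acc = 0.0
    for _ in range(rays):
        if weighted:
            i = rng.choices(range(50), weights=pick_probs)[0]
            acc += contribution(i) / pick_probs[i]  # divide by pick probability
        else:
            i = rng.randrange(50)
            acc += contribution(i) * 50  # uniform pick: 1 / (1/50)
    return acc / rays

uniform_noise = statistics.stdev(estimate(False) for _ in range(400))
weighted_noise = statistics.stdev(estimate(True) for _ in range(400))
print(uniform_noise, weighted_noise)  # same ray budget, far lower noise when weighted
```

Both estimators converge to the same answer; the weighted one just wastes far fewer rays on lights that barely contribute.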

Symptom tells you which lever to pull

This is the lookup you’ll come back to. The shape of the noise tells you which control is the cheap fix; brute-forcing samples is rarely the answer.

Symptom you see → likely cause → first lever to reach for:

  • Even grain across the whole frame → generic Monte Carlo variance → tighten Noise Threshold, or add denoising.
  • Noise concentrated in glossy reflections or bright highlights → adaptive sampling stopped those pixels too soon → lower the threshold; leave Max Samples generous enough that hard pixels can keep working.
  • Stubborn grain in dark, indirectly lit corners → light has to bounce many times to reach those pixels, so each path has high variance → investigate Max Bounces: sometimes raising it helps light reach the corners; sometimes lowering it cuts off the noisiest deep-bounce paths.
  • Isolated bright firefly pixels in an otherwise clean image → one light path dominating the pixel average → Indirect Light Clamping (Render Properties → Light Paths → Clamping).
  • Pixels keep hitting the Max Samples ceiling and render time is climbing → the ceiling is too low for the variance in the scene → raise the ceiling, accept residual noise, or change the lighting to reduce sampling difficulty.

Starting points that actually work

These are not universal laws. They are practical starting points for ordinary Cycles scenes, not guarantees for extreme caustics, tiny emitters, or unusually pathological fireflies.

Lookdev / preview work

  • Start around 32 to 64 viewport samples.
  • Use viewport denoising.
  • Optimize for decision-making speed, not perfect cleanliness.

Final stills

  • Start around 512 to 1024 max samples in the Render subsection so adaptive sampling has room.
  • Start with a moderate Noise Threshold like 0.01.
  • Lower the threshold only if obvious stubborn noise remains where it matters.
  • As a rough mental map, 0.1 is preview territory, 0.01 is a common final still starting point, and 0.005 or 0.001 are for shots that truly justify longer renders.

Final animation

Animation reveals failure modes that a single still hides. A well-exposed frame can mask flicker, shimmer on shading edges, and boiling (denoiser-induced unstable detail that shifts across frames). Start around 256 to 512 max samples with denoising on, then test a short range before committing to a full render.

Why static noise reads as texture

The human eye reads stationary patterns as surface detail and moving patterns as noise. With a fixed seed, the same noise pattern hits the same pixels every frame, so the grain looks welded to the geometry like micro-texture. Animating the seed shifts the pattern each frame, which flips it back to reading as noise (where you can actually see it).

Enable Animated Seed at Render Properties -> Sampling -> Render -> Advanced -> Seed (it’s a clock icon next to the seed value). With it on, flicker and shimmer become visible while they are still fixable. A frozen seed can occasionally help certain temporal denoisers stay stable; for most artists, animated wins.

If render-time denoising still flickers, render with denoise data passes (Albedo + Normal) but without render-time denoising, then run the compositor’s Denoise node across the sequence. The compositor denoiser has access to neighboring frames and produces a more stable result than per-frame denoising alone.

Diagnose your own noisy scene

7-10 min

Open any Cycles scene you have on hand. The BMW and Classroom demo files both work well; the default cube is fine if that is all you have.

  1. Render at default settings. Note the render time. Look at the result carefully: where is the noise worst? Likely candidates are glossy reflections, indirect light in corners, sharp shadow transitions, or fine geometry like hair.
  2. Diagnose before you touch anything. Based on the symptom-to-lever table above, pick the one change you think will clean up that specific region cheapest. Tighten Noise Threshold, add denoise, or, if the symptom doesn’t match a sampling fix at all, look at clamping or Max Bounces in the Light Paths panel. (Raising Max Samples is usually the brute-force last resort, not the first move.) Write down your reasoning.
  3. Make that one change and re-render. Note the new render time.
  4. Compare. Did the noisy region improve? Was the time cost what you expected? If your change didn’t work, that tells you something about where the noise is actually coming from.

You’re testing your diagnosis, not chasing a perfect render. A wrong guess teaches you more than a lucky right one.

What to remember

The best sample count is not the highest number you can tolerate. It is the point where the remaining noise is no longer worth the extra time, given the difficulty of the shot and what adaptive sampling and denoising can cover.

Samples are a time-vs-uncertainty trade, not a quality dial. The sqrt(N) curve means each doubling buys you less than the last. Start there, then calibrate Noise Threshold and denoising until the remaining noise no longer justifies another doubling.

Knowledge check

Your current setup: `Max Samples` `1024`, `Noise Threshold` `0.001`, denoise on. Render takes 9 minutes per frame. The ceiling alert in the adaptive sampling widget above says `18%` of pixels are hitting the cap. Director wants the same image 30% faster. Which lever do you reach for first, and what does it tell you about the second?


Raise Noise Threshold first (for example, to 0.005). Both symptoms point at the same fix: 0.001 is unnecessarily strict. Easy pixels are running long AND hard pixels are exhausting the ceiling without converging. Loosening the threshold makes both classes terminate sooner; the denoiser absorbs the modest residual.

Then check the ceiling alert. If it’s no longer firing, you’re done. The second move is “do not touch Max Samples.” If hard regions still hit the ceiling, raising Max Samples is the second lever, but that costs time the director did not have. Order matters because the threshold change can make the second move unnecessary, which is exactly what the budget asks for.

All 6 lessons in Rendering Fundamentals →


© Renderjuice 2026 All rights reserved.