LightLab: Google Pushes Cinematic Lighting Control to the Limit with Diffusion Models

Google's LightLab uses diffusion models for cinematic lighting control in images. Adjust intensity, color & add virtual lights—all from a single photo.

Meng Li
May 17, 2025


"AI Disruption" Publication 6400 Subscriptions 20% Discount Offer Link.


Recently, Google introduced a project called LightLab, which enables precise control over light and shadow in images.

From a single image, it gives users fine-grained parametric control over light sources: adjusting the intensity and color of visible light sources, scaling the intensity of ambient light, and inserting virtual light sources into the scene.
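
The post names these controls but not an interface; as a rough illustration, the Python sketch below packs them into a small parameter set. The `LightSourceEdit` and `LightingEdit` classes and every field name are hypothetical, not LightLab's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class LightSourceEdit:
    """Hypothetical edit applied to one light source detected in the image."""
    source_id: int                   # index of the visible light source
    intensity: float = 1.0           # relative multiplier; 0.0 turns it off
    color: tuple = (1.0, 1.0, 1.0)   # target RGB tint in linear color space

@dataclass
class LightingEdit:
    """Hypothetical full parametric edit for a single input image."""
    source_edits: list = field(default_factory=list)    # per-light changes
    ambient_intensity: float = 1.0                      # global ambient scale
    virtual_lights: list = field(default_factory=list)  # lights to insert
```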

In image or film creation, light is the soul, determining the focus, depth, color, and even the mood of a scene.

In films, for instance, well-crafted lighting can subtly shape a character’s emotions, enhance the story’s atmosphere, guide the audience’s attention, and even reveal a character’s inner world.


However, whether in traditional photographic post-processing or adjustments after digital rendering, precisely controlling the direction, color, and intensity of light and shadow has always been a time-consuming, labor-intensive, and experience-dependent challenge.

Existing lighting-editing techniques either require multiple photos of the same scene (so they cannot work from a single image) or, while capable of producing edits, offer no precise control over the specific change (e.g., an exact brightness or color adjustment).

Google’s research team fine-tuned a diffusion model on a specially constructed dataset, enabling it to learn how to precisely control lighting in images.
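
The post doesn't spell out the training objective, but fine-tuning a diffusion model for a conditional edit like this commonly follows the standard denoising recipe. Below is a minimal PyTorch sketch assuming epsilon-prediction and channel-concatenated conditioning; `model`, the batch layout, and the lighting-parameter vector are illustrative stand-ins, not LightLab's published code.

```python
import torch
import torch.nn.functional as F

def training_step(model, batch, alphas_cumprod):
    """One fine-tuning step: denoise the relit target image, conditioned on
    the source image and the parametric lighting edit (all hypothetical)."""
    source, target, light_params = batch    # (B,C,H,W), (B,C,H,W), (B,P)
    b = target.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=target.device)

    # Standard DDPM forward process: blend the target with Gaussian noise.
    noise = torch.randn_like(target)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noisy = a.sqrt() * target + (1 - a).sqrt() * noise

    # Condition on the source image (concatenated along channels) plus the
    # edit parameters, so the model learns "same scene, new lighting"
    # rather than unconstrained image synthesis.
    x_in = torch.cat([noisy, source], dim=1)
    pred_noise = model(x_in, t, light_params)

    # Epsilon-prediction loss, the usual objective for diffusion fine-tuning.
    return F.mse_loss(pred_noise, noise)
```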


To build this training dataset, the team combined two sources: a small set of real-world photo pairs with controlled lighting variations and a large set of synthetically rendered images generated using a physically based renderer.
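
The balance between the two sources isn't given in the excerpt, so the sketch below shows one plausible mixing scheme: the small real set anchors photorealism while the large synthetic set supplies dense coverage of lighting variation. The `real_fraction` knob is an illustrative assumption, not a reported ratio.

```python
import random

def sample_training_pair(real_pairs, synthetic_pairs, real_fraction=0.1):
    """Draw one (source, relit) training pair from the mixed dataset.
    real_fraction is a hypothetical sampling weight for the real subset."""
    pool = real_pairs if random.random() < real_fraction else synthetic_pairs
    return random.choice(pool)
```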
