0.7  AI Interior Renders


Stable Diffusion


In this project, I give a short introduction to what Stable Diffusion is and how I use it in hotel interior design. Then I explain how the shapes generated by Stable Diffusion can be controlled. Finally, I show a toy project example of how the technology can be used to create room design ideas.

Stable Diffusion is an open-source AI model that generates images from text prompts using a process called latent diffusion. It begins with random noise and gradually refines it into a clear image through iterative denoising steps, guided by neural networks and text-image alignment models (like CLIP). The model works efficiently by operating in a compressed latent space rather than pixel by pixel, enabling faster, more memory-efficient image generation. Stable Diffusion is widely used in creative industries for art, design, and visual storytelling.
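The iterative denoising idea can be illustrated with a small toy sketch. This is not the actual latent diffusion model: the "target" array below stands in for the clean result that, in the real model, is implied by a learned U-Net noise predictor rather than known in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" signal standing in for the final image; in the real
# model this is never given explicitly but emerges from the denoiser.
target = np.linspace(0.0, 1.0, 64)

# Start from pure Gaussian noise, as Stable Diffusion does in latent space.
x = rng.normal(size=64)

def denoise_step(x, target, strength=0.1):
    """One toy denoising step: nudge the sample toward the target.
    A real diffusion step instead subtracts noise predicted by a network."""
    return x + strength * (target - x)

errors = []
for _ in range(50):
    x = denoise_step(x, target)
    errors.append(np.abs(x - target).mean())
```

After enough steps the sample converges toward the target, mirroring how repeated denoising turns noise into a coherent image.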

ControlNet for Interior Design


In this project, I combined multiple interior design datasets:
IHG Furniture bases (2397 images)
IHG Existing Interior Style Images (893 images)

In total, a dataset of 3,290 images was created. The authors of the ControlNet paper suggest using at least 50k images; however, I had a hard time finding more interior design images of sufficient quality.

The dataset for ControlNet training consists of triplets: the original image, a condition image (lines, edges, or masks), and a text prompt. For each original image, I created a condition image using the Hough transform. Initially, I experimented with a Canny edge detector, but the resulting condition images had too many contours, and the model had a hard time producing new designs that matched so many unstructured edges. The Hough transform outputs only the straight lines in the image, which are well suited to mapping interior layouts.