VLMaterial: Procedural Material Generation with Large Vision-Language Models

[Teaser figure]
Beichen Li1 Rundi Wu2 Armando Solar-Lezama1 Changxi Zheng2 Liang Shi1 Bernd Bickel3,4 Wojciech Matusik1
1MIT CSAIL 2Columbia University 3ETH Zurich 4Google Research

Abstract

Procedural materials, represented as functional node graphs, are ubiquitous in computer graphics for photorealistic material appearance design. They allow users to perform intuitive and precise editing to achieve desired visual appearances. However, creating a procedural material given an input image requires professional knowledge and significant effort. In this work, we leverage the ability to convert procedural materials into standard Python programs and fine-tune a large pre-trained vision-language model (VLM) to generate such programs from input images. To enable effective fine-tuning, we also contribute an open-source procedural material dataset and propose to perform program-level augmentation by prompting another pre-trained large language model (LLM). Through extensive evaluation, we show that our method outperforms previous methods on both synthetic and real-world examples.
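To make the pipeline described above more concrete, below is a minimal sketch of the image-to-program inference step. It assumes the fine-tuned VLM is reachable through a hypothetical query_vlm helper; the prompt wording, the placeholder program it returns, and the example_material.png input are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch of querying a fine-tuned VLM for a procedural material
    # program, then executing the returned Python code to rebuild the material.
    # All names and the prompt text are illustrative assumptions.
    import base64


    def encode_image(path: str) -> str:
        """Read an image file and return a base64 string to attach to a VLM prompt."""
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode("utf-8")


    def query_vlm(image_b64: str, instruction: str) -> str:
        """Hypothetical wrapper around the fine-tuned VLM's generation API.

        Returns a trivial placeholder program here; in practice this would send
        the image and instruction to the model and return its generated code.
        """
        return "material = {'type': 'uniform_color', 'color': (0.5, 0.5, 0.5)}\n"


    def generate_material_program(image_path: str) -> str:
        """Ask the VLM to write a Python program reproducing the pictured material."""
        instruction = (
            "Write a Python program that reproduces the procedural material "
            "shown in this image as a functional node graph."
        )
        return query_vlm(encode_image(image_path), instruction)


    if __name__ == "__main__":
        program = generate_material_program("example_material.png")
        namespace = {}
        exec(program, namespace)  # rebuild the material graph from the generated code
        print(namespace.get("material"))

One appeal of the program-as-text representation suggested by the abstract is that a generated material can be validated simply by executing the returned code and rendering the result for comparison against the input image.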

Citation

@misc{li2025vlmaterialproceduralmaterialgeneration,
    title={VLMaterial: Procedural Material Generation with Large Vision-Language Models}, 
    author={Beichen Li and Rundi Wu and Armando Solar-Lezama and Changxi Zheng and Liang Shi and Bernd Bickel and Wojciech Matusik},
    year={2025},
    eprint={2501.18623},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2501.18623},
}

Acknowledgements

This work was partially funded by an unrestricted gift from Google.