A platform for research: civil engineering, architecture and urbanism
A learning-based framework for generating 3D building models from 2D images
Our goal is to develop a tool to assist architects in generating 3D models of buildings. Unlike existing manual computer-aided design (CAD) tools, which require significant time and expertise to create 3D models, this tool enables architects to generate such models efficiently. To develop this tool, we propose a learning-based framework for generating 3D models of buildings from 2D images. Given an arbitrary image of a building, we generate a 3D model that architects can easily modify to produce the final model. We adopt a parametric representation of 3D building models to facilitate accurate rendering and editing. Our framework consists of two main components: 1) a facade detection and frontalization module that detects the primary facade of a building and removes the camera projection to generate a frontal view of the facade, and 2) a 2D-to-3D conversion module that estimates the 3D parameters of the facade in order to generate a 3D model of it. We use a simulation tool to generate 3D building models and use these as training samples for our model. These simulated samples significantly reduce the number of expensive human-annotated samples needed, since this task requires expert architects to annotate building images. To evaluate our approach, we test on real building images annotated by expert architects.
Roy, Anirban (author) / Kim, Sujeong (author) / Yin, Min (author) / Yeh, Eric (author) / Nakabayashi, Takuma (author) / Campbell, Matt (author) / Keough, Ian (author) / Tsuji, Yoshito (author)
2022-05-01
1206022 byte
Conference paper
Electronic Resource
English
Text2BIM: Generating Building Models Using a Large Language Model-based Multi-Agent Framework
ArXiv | 2024
A Framework For Generating And Evolving Building Designs
Online Contents | 2005
A Framework for Generating and Evolving Building Designs
SAGE Publications | 2005