Project developed within the scope of the MaCAD Thesis 2023-24 at IAAC.
Description: Pix2Daylight aims to revolutionize daylight autonomy prediction in architectural design by developing a Pix2Pix machine learning model that predicts daylight autonomy, with the location supplied by the user as an input variable. The project is motivated by the need to improve both the speed and the accuracy of daylight analysis.
Problem statement: Daylight autonomy analysis is part of the building codes in many countries around the world, yet it is a slow process because of the ray-tracing simulations it relies on. At the same time, it is an important part of the early design stages, where designs are iterated frequently.
Idea: Quick daylight autonomy analysis in Revit, via Rhino.Inside, responsive to changes in the model
Solution: A Pix2Pix model is trained to provide daylight autonomy analysis in a very short time; it is applicable to any location with an EPW file and responsive to quick design iterations in Revit.
Beneficiaries: The target users of "Pix2Daylight" are companies that mainly use Revit for their projects and also work with Rhino.Inside.
Our project aims to revolutionize daylight prediction in architectural design by developing a Pix2Pix machine learning model to predict daylight autonomy. Motivated by the need for an ML model that is applicable to any location in the world, given an EPW file, our focus is on providing immediate, actionable feedback. By integrating this model directly into Revit, architects can receive real-time predictions on daylight compliance, facilitating quicker and more informed design decisions. This approach not only enhances the design process but also helps ensure that buildings meet health, well-being, and regulatory standards.
Alternatively, clone the repo:

`git clone https://github.com/iaac-macad/Pix2Daylight.git`
To use the project, first create a Python environment on your computer and install the dependencies with `pip install -r requirements.txt`, then follow these steps:
- Step 1: go to datapreprocessing/image_encoding.py. You can input any room geometry with the required data in the file.
- Step 2: after cloning the repo, open it in VS Code.
- Step 3: based on which encoding method you would like to proceed with, go to encoding1.ipynb or encoding2.ipynb in the "datapreprocessing" folder. Set the train number, and run the script.
- Step 4: open train_save_test_model.py and set the hyperparameters you would like to train with (a minimal sketch of such a setup is shown after this list).
- Step 5: in the terminal, type `python .\train_save_test_model.py`.
- Step 6: after the training is complete, check the folder with your train number for the model, predictions and metrics. If you would like to visualize the loss graph, go to tensorboard_vis.ipynb and plot the graphs for the generator, discriminator and total losses.
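The variable names below are not taken from the repository; they are a minimal sketch, assuming a TensorFlow implementation of Pix2Pix (as the logs/fit TensorBoard workflow suggests), of the kind of hyperparameter and logging setup Step 4 refers to:

```python
# Illustrative sketch only: names and defaults are assumptions,
# not the exact variables used in train_save_test_model.py.
import tensorflow as tf

TRAIN_NUMBER = 1      # must match the train number set in the encoding notebook
EPOCHS = 100
BATCH_SIZE = 1
LAMBDA = 100          # weight of the L1 term in the Pix2Pix generator loss
LEARNING_RATE = 2e-4

# Adam with beta_1 = 0.5 is the standard Pix2Pix choice.
generator_optimizer = tf.keras.optimizers.Adam(LEARNING_RATE, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(LEARNING_RATE, beta_1=0.5)

# TensorBoard event files are written under logs/fit, the folder referenced
# later for visualising the generator, discriminator and total losses.
log_dir = f"logs/fit/train_{TRAIN_NUMBER}"
summary_writer = tf.summary.create_file_writer(log_dir)
```

With a setup like this, the loss curves can later be inspected with `tensorboard --logdir logs/fit`.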
To train on Google Colab with the dataset stored on Google Drive, follow these steps instead:
- Step 1: go to datapreprocessing/image_encoding.py. You can input any room geometry with the required data in the file.
- Step 2: after cloning the repo, open it in VS Code.
- Step 3: based on which encoding method you would like to proceed with, go to encoding1.ipynb or encoding2.ipynb in the "datapreprocessing" folder. Set the train number, and run the script.
- Step 4: go to image_combining_tarfile.ipynb in the cloned repo, and run it with the train number you have set.
- Step 5: the previous step creates an archive.tar.gz file in the folder of your new train number. First, copy the folder in the link above to your Drive and create a new folder with your train number inside it. Then create a folder named "dataset" and copy the archive.tar.gz file into this directory on your Drive. After mounting your Drive, open "train_save_test_model.py" in Drive and change the hyperparameters as well as the train number (a sketch of the packing and extraction involved here is shown after this list).
- Step 6: run the whole script to start the training.
- Step 7: to visualize the loss graphs, download the v2 event file of your training run from logs/fit and copy it to your local repository.
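The folder layout and paths below are assumptions, not the exact ones used by image_combining_tarfile.ipynb; this is a minimal sketch of packing the dataset into archive.tar.gz locally and extracting it on Colab after mounting Drive:

```python
# Hedged sketch; the exact paths and folder names used by
# image_combining_tarfile.ipynb are assumptions.
import tarfile

TRAIN_NUMBER = 1
dataset_dir = f"{TRAIN_NUMBER}/dataset"          # combined input/target images for this train number
archive_path = f"{TRAIN_NUMBER}/archive.tar.gz"

# Pack the dataset locally before copying it to Drive.
with tarfile.open(archive_path, "w:gz") as tar:
    tar.add(dataset_dir, arcname="dataset")

# On Colab, after mounting Drive:
#   from google.colab import drive
#   drive.mount("/content/drive")
# extract the archive next to the training script before running it.
drive_archive = f"/content/drive/MyDrive/{TRAIN_NUMBER}/dataset/archive.tar.gz"
with tarfile.open(drive_archive, "r:gz") as tar:
    tar.extractall(path=".")
```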
While working on the project, the following challenges were encountered:
- Excessive system and GPU RAM consumption: most local GPUs are insufficient for the training, and the code needs improvement to reduce its memory footprint. For now, we therefore suggest using Google Colab Pro.
- Model deployment on a server: since we could not achieve this with Google Cloud, we deploy the model locally while running our app.
- We learned too late that Google Cloud Functions are CPU-oriented; our GPU-accelerated model would benefit from a service like Vertex AI. Deploying the model there and sending direct web requests would simplify our user interface (a sketch of such a request is shown after this list).
- For our user interface, we have used only native components in our Grasshopper scripts. Porting them to Python and uploading them to a GitHub repo would make it possible to offer them as a pyRevit extension.
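As a rough illustration of the "direct web requests" idea, the sketch below assumes a hypothetical HTTP prediction endpoint; the URL, payload keys and file name are placeholders, not an existing API:

```python
# Hypothetical sketch of a direct web request to a deployed model.
# Endpoint URL, payload format and response contents are placeholders.
import base64
import requests

# Encoded room image produced by the preprocessing step (file name is illustrative).
with open("room_encoding.png", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("utf-8")}

# Replace the URL with the actual deployed endpoint.
response = requests.post("https://<your-endpoint>/predict", json=payload, timeout=60)
response.raise_for_status()
prediction = response.json()  # expected to contain the predicted daylight autonomy map
```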
Distributed under the MIT License. See LICENSE.txt for more information.
Dawid Drożdż - @daviddrozdz - e-mail - LinkedIn
Hande Karataş - @hande-karatas - e-mail - LinkedIn
* [Best README template](https://github.com/othneildrew/Best-README-Template)