Concordia researchers have developed a new technique that can help create high-quality, accurate 3D models of large-scale landscapes – essentially, digital replicas of the real world.
While more work is required before the researchers achieve their goal, they recently outlined their automated method in the Nature journal Scientific Reports.
Geometry, structure and appearance
The framework reconstructs the geometry, structure and appearance of an area using highly detailed images taken by aircraft typically flying higher than 30,000 feet.
These large-scale aerial images – usually more than 200 megapixels each – are then processed to produce precise 3D models of cityscapes, landscapes or mixed areas, capturing appearance right down to the structures' colours.
The framework, called HybridFlow, was developed by Charalambos Poullis, an associate professor of computer science and software engineering at the Gina Cody School of Engineering and Computer Science, and PhD student Qiao Chen.
"This digital twin can be used in typical applications to navigate and explore different areas, as well as virtual tourism, games, films and so on," said Poullis.
"More importantly, there are very impactful applications that can simulate processes in a secure and digital way. So, it can be used by stakeholders and authorities to simulate 'what-if' scenarios in cases of flooding or other natural disasters. This allows us to make informed decisions and evaluate various risk-mitigating factors."
No need for deep learning
Current reconstruction methods rely on finding visual similarities between images to build 3D models. However, because the images are so large, issues such as occlusion and repetition can adversely affect a model's accuracy.
Traditional 3D modelling techniques rely on identifying key points in one image, matching them to points in another image and then propagating those matches across a specific area.
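As a rough illustration of that traditional step – not the researchers' code, and with the descriptor format and ratio threshold chosen purely as illustrative assumptions – key points are typically paired by comparing their descriptors and keeping only unambiguous matches (the classic "ratio test"):

```python
# Illustrative sketch of traditional key-point matching (not HybridFlow).
# Each key point is represented by a descriptor vector; a match is kept
# only when the closest candidate is clearly better than the runner-up.
from math import dist

def brute_force_match(desc_a, desc_b, ratio=0.75):
    """Return (i, j) pairs where descriptor i in image A clearly
    prefers descriptor j in image B over the second-best candidate."""
    matches = []
    for i, da in enumerate(desc_a):
        # Rank all of B's descriptors by distance to this one.
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(da, desc_b[j]))
        if len(ranked) < 2:
            continue
        best, second = ranked[0], ranked[1]
        # Keep the match only if it is unambiguous (ratio test).
        if dist(da, desc_b[best]) < ratio * dist(da, desc_b[second]):
            matches.append((i, best))
    return matches

# A clear match is kept; an ambiguous one (two near-equal candidates) is not.
print(brute_force_match([(0.0, 0.0)], [(0.1, 0.0), (5.0, 5.0)]))  # [(0, 0)]
print(brute_force_match([(3.0, 3.0)], [(3.1, 3.0), (2.9, 3.0)]))  # []
```

Because every descriptor in one image is compared against every descriptor in the other, the cost grows quickly with image size – one reason 200-megapixel aerial images strain this approach.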
With HybridFlow, the images are first clustered into perceptually similar segments, and matching then proceeds at the pixel level. For instance, an image segment showing blue sky will be matched with another segment showing the same, just as a cluster showing a densely built-up area will be matched with a cluster showing a similar pattern, based on pixel-level analysis.
This makes the model more robust: points are easier to track across images, and the time needed to triangulate them is reduced, resulting in an accurate reproduction.
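The coarse-to-fine idea can be sketched as follows. This is a minimal illustration, not the published HybridFlow implementation; the greedy pairing, the descriptor representation (here, a mean-colour triple per segment) and all function names are assumptions made for the example. Segments are paired first, and point matching is then restricted to paired segments, which shrinks the search space:

```python
# Illustrative coarse-to-fine matching sketch (not the authors' code).
# Stage 1: pair segments whose perceptual descriptors are closest.
# Stage 2: match points only within perceptually paired segments.
from math import dist

def match_segments(segs_a, segs_b):
    """Greedily pair segments (e.g. mean-colour descriptors) across images."""
    pairs, used = [], set()
    for ia, da in enumerate(segs_a):
        best = min(
            (ib for ib in range(len(segs_b)) if ib not in used),
            key=lambda ib: dist(da, segs_b[ib]),
            default=None,
        )
        if best is not None:
            used.add(best)
            pairs.append((ia, best))
    return pairs

def match_points(points_a, points_b, seg_pairs, seg_of_a, seg_of_b):
    """Match each point to its nearest neighbour, but only inside
    segments that were already paired at the coarse stage."""
    matches = []
    for ia, ib in seg_pairs:
        cand_a = [p for p, s in zip(points_a, seg_of_a) if s == ia]
        cand_b = [p for p, s in zip(points_b, seg_of_b) if s == ib]
        for pa in cand_a:
            if cand_b:
                matches.append((pa, min(cand_b, key=lambda pb: dist(pa, pb))))
    return matches

# Two toy images, each with a "sky" and a "built-up" segment.
segs_a = [(200, 220, 255), (120, 110, 100)]
segs_b = [(118, 112, 99), (201, 219, 250)]
pairs = match_segments(segs_a, segs_b)
print(pairs)  # [(0, 1), (1, 0)] – sky pairs with sky, buildings with buildings

points_a, seg_of_a = [(0, 0), (10, 10)], [0, 1]
points_b, seg_of_b = [(1, 1), (11, 9)], [1, 0]
print(match_points(points_a, points_b, pairs, seg_of_a, seg_of_b))
```

Restricting the fine search to matched segments is what makes repeated structures less confusing: a window in one building cluster is never compared against windows in an unrelated part of the image.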
Data-driven method
"It also eliminates the need for any deep learning technique, which would require a lot of training and resources," said Poullis. "This is a data-driven method that can handle an arbitrarily large image set."
He added that the data is saved on disk rather than in memory, which optimises the data pipeline. With a remote computer doing the processing, he noted, an average-sized model of an urban area can be created in less than 30 minutes.
Poullis said that he had already worked with officials in the flood-prone city of Terrebonne, just northeast of Montreal, modelling the city and simulating floods to help plan and evaluate mitigation measures.
"They know they cannot prevent the flooding, but we can provide them with tools to make informed decisions," he said. "We allow them to change the environment by introducing barriers such as sandbags, and then we run simulations to see how the floodwater flow is affected."