Dronument: Drone-based documentation of historical monuments

Documenting the condition of historical sites, heritage buildings, and other artistic objects requires a specific approach. In the past, outdoor images have been captured using kites, balloons, or even planes; these approaches can be expensive and imprecise while failing to provide sufficiently detailed images. Monitoring the condition of a building and its artifacts in higher detail has traditionally required the construction of large scaffolding.

Unmanned Aerial Vehicles (UAVs) have begun to be used for inspection and documentation in recent years. A UAV platform can supply the same documentation and inspection techniques as those provided by experts, but in locations unreachable by people without large and expensive scaffolding installations, or even in locations which have never been documented before, during an initial survey. The use of UAVs significantly shortens the duration and reduces the cost of restoration works, while offering data acquisition from previously impossible angles and unreachable locations.

The goal of the Dronument project is to create an aerial platform enabling interior and exterior documentation of heritage sites. This includes the development of software for data acquisition and processing, as well as a system for data distribution within the heritage sector. Typical documentation interests in interiors include artistic elements, such as paintings, altars, statues, mosaics, frescoes, stained glass, pillars, or pipe organs, and non-artistic elements, such as changes in the building structure (e.g., cracks in walls).

The proposed system is designed for deployment in historical monuments, such as ancient or modern, war-damaged, dilapidated, or restored cathedrals, chapels, churches, mausoleums, castles, and temples, with dimensions ranging from small chapels up to large cathedrals. The deployment of robots in these operational environments is a challenging task due to the absence of global navigation satellite system (GNSS) signals, adverse lighting conditions, and numerous other challenges. UAVs are required to move close to walls, columns, balconies, and other structures while autonomously navigating among inspection points. An aerial system handling all these challenges must also provide exceptional robustness, which we propose to achieve through a precise model-based control approach, reliable real-time state estimation, a high level of sensor & actuator redundancy, and feasible mission planning & navigation.
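
The mission-planning element above can be illustrated with a minimal sketch, assuming the documentation viewpoints are already given as 3-D coordinates in the map frame: a greedy nearest-neighbour ordering that a planner could use as an initial visiting sequence. The function names and the purely Euclidean distance metric are illustrative assumptions, not the project's actual planner.

```python
import math

def greedy_inspection_order(start, inspection_points):
    """Order inspection points by repeatedly flying to the nearest unvisited one.

    start: (x, y, z) take-off or current MAV position in the map frame.
    inspection_points: list of (x, y, z) documentation viewpoints.
    Returns the points in the order they would be visited.
    """
    remaining = list(inspection_points)
    order = []
    current = start
    while remaining:
        # Pick the closest remaining viewpoint (Euclidean distance in 3-D).
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

# Example: three viewpoints near a vault, starting from the nave floor.
print(greedy_inspection_order((0, 0, 1), [(5, 2, 12), (1, 0, 10), (6, 3, 14)]))
```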

Architecture

The system, designed for documentation in dark areas of large historical monuments, uses a unique and reliable aerial platform with a multi-modal, lightweight sensory setup to acquire data in human-restricted areas with adverse lighting conditions, especially in areas high above the ground. The introduced localization method relies on an easy-to-obtain 3-D point cloud of a historical building and copes with the lack of visible light by fusing active laser-based sensors. The approach does not rely on any external localization or a pre-installed motion-capture system. This enables fast deployment in the interiors of investigated structures, while remaining computationally undemanding enough to process data online and onboard a micro aerial vehicle (MAV) equipped with ordinary processing resources.
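
To make the localization idea concrete, the following is a minimal sketch of registering an onboard lidar scan against a pre-acquired 3-D point cloud of the building to obtain the MAV pose, assuming an Open3D-style ICP workflow. The file names, voxel size, and initial-guess handling are illustrative placeholders, not the project's actual implementation.

```python
import numpy as np
import open3d as o3d

# Prior 3-D map of the monument and a live lidar scan.
# The file names are placeholders for illustration only.
building_map = o3d.io.read_point_cloud("monument_map.pcd")
lidar_scan = o3d.io.read_point_cloud("onboard_scan.pcd")

# Downsample both clouds so the registration is fast enough for onboard use.
voxel = 0.2  # metres; a rough choice for a large interior
map_ds = building_map.voxel_down_sample(voxel)
scan_ds = lidar_scan.voxel_down_sample(voxel)

# Initial guess, e.g., the pose predicted from the previous estimate and IMU integration.
T_init = np.eye(4)

# Point-to-point ICP: align the scan to the map and read the MAV pose from the result.
result = o3d.pipelines.registration.registration_icp(
    scan_ds, map_ds, max_correspondence_distance=2 * voxel, init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

T_map_lidar = result.transformation  # 4x4 pose of the lidar (and thus the MAV) in the map frame
position = T_map_lidar[:3, 3]
print("Estimated position in map frame:", position)
```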

The reliability of the system stems from tests carried out in real-world historical buildings over several years of a research project in close cooperation with the National Heritage Institute of the Czech Republic. The system utilizes a unique hardware and software aerial platform designed in close consultation with restorers and conservationists, drawing on experience from deployments in numerous individual historical buildings. The project presents and shares the experience of what we believe to be the most comprehensive effort in the field of autonomous documentation of historical monuments by an aerial system.

Our platform provides a robust, light-independent localization system for interiors of historical buildings, relying on a 3-D lidar as its primary sensor. The 3-D localization offers precise full 6-degrees-of-freedom pose estimation, providing fast and robust state estimation integrated into the feedback loop of the MAV position controller. Based on a quantitative analysis evaluated against aerial ground-truth data, our approach maintains a root mean square error (RMSE) below 0.23 m. The presented drift-free system yields greater accuracy than map-based localization methods for autonomous cars, and accuracy comparable to a drift-prone method employing a 3-D scanner on a ground vehicle.
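
For reference, the reported figure can be evaluated along the lines of the following minimal sketch, which computes the position RMSE between an estimated trajectory and time-aligned ground truth; the trajectories below are synthetic placeholders, not the project's measured data.

```python
import numpy as np

def position_rmse(estimate, ground_truth):
    """Root mean square error of 3-D positions.

    estimate, ground_truth: (N, 3) arrays of time-aligned positions in metres.
    """
    err = np.linalg.norm(estimate - ground_truth, axis=1)  # per-sample Euclidean error
    return float(np.sqrt(np.mean(err ** 2)))

# Synthetic example: a noisy estimate of a short straight-line trajectory.
gt = np.stack([np.linspace(0, 10, 100), np.zeros(100), np.full(100, 5.0)], axis=1)
est = gt + np.random.normal(scale=0.1, size=gt.shape)
print(f"RMSE: {position_rmse(est, gt):.3f} m")
```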

The “Dronument” platform can be deployed in three modes (manual, semi-autonomous, and fully autonomous), as allowed by the heritage institute and/or the superintendent of the structure. These modes are specified as follows (a minimal configuration sketch follows the list):

  • manual – a human operator controls all aspects of the flight using a remote-control transmitter, while the MAV is autonomously localized in the environment to associate the gathered data with the 3-D map,
  • semi-autonomous – a human operator commands the flight, while onboard systems control the sensory data acquisition and provide control feedback with respect to obstacles in a 3-D neighborhood,
  • autonomous – a human only specifies objects-of-interest (OoI) for documentation, and the system handles each stage of the entire mission – takeoff, stabilization & control, localization, navigation, trajectory optimization, data acquisition, and landing.
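
Purely to illustrate how these modes translate into onboard responsibilities, the sketch below encodes them as a small configuration table. The subsystem names and the exact split of responsibilities are a hypothetical reading of the list above, not the platform's real software interface.

```python
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"
    SEMI_AUTONOMOUS = "semi-autonomous"
    AUTONOMOUS = "autonomous"

# Which responsibilities are handled onboard in each mode (hypothetical breakdown).
ONBOARD_RESPONSIBILITIES = {
    Mode.MANUAL: {"localization", "data_tagging"},
    Mode.SEMI_AUTONOMOUS: {"localization", "data_acquisition", "obstacle_feedback"},
    Mode.AUTONOMOUS: {"localization", "data_acquisition", "obstacle_feedback",
                      "takeoff", "navigation", "trajectory_optimization", "landing"},
}

def is_onboard(mode: Mode, task: str) -> bool:
    """Return True if the given task is handled by the onboard system in this mode."""
    return task in ONBOARD_RESPONSIBILITIES[mode]

print(is_onboard(Mode.SEMI_AUTONOMOUS, "navigation"))  # False: the operator commands the flight
```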

In addition to the deployment of a single MAV, the system is prepared for use in cooperative multi-MAV scenarios, as required by some documentation tasks. Typical non-invasive documentation consists of a multi-spectral survey to obtain specific information valuable for various restoration purposes. For example, the use of different spectra contributes to more precise dating of paintings, as the glow of particular pigment combinations is unique to a certain period. Examples of single (S) and cooperative (C) tasks are:

  • Direct lighting (S) – lighting of the scene with an onboard light whose illumination axis is collinear with the optical axis of the camera.
  • Reflectance Transformation Imaging – a photographic technique for capturing the shape of a surface and the color of an object by combining photographs of the object taken by a semi-static camera on one MAV under varying illumination provided by a different MAV [21].
  • Three-point & strong-side lighting – filming techniques in which one to three sources of light are placed in different locations relative to the optical axis of the camera. A Model Predictive Control (MPC) approach is proposed for controlling a formation of MAVs with respect to these lighting techniques during cooperative aerial deployment in this task (see the sketch after this list).
  • Radiography & UV screening – a method for viewing the internal structure of an object (e.g., a statue) by exposing it to X-ray or UV radiation (with the emission source onboard the first MAV), which is captured behind or in front of the object by a detector carried by the second MAV.
  • 3-D reconstruction – a method for aggregating the shape and the appearance of an object by combining laser and/or vision-based information into a 3-D model.
  • Photogrammetry – a method for extracting measurements of real objects from photographs.
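
Regarding the MPC-based formation control mentioned for the lighting tasks, below is a minimal sketch assuming simplified single-integrator dynamics and a fixed desired offset of a lighting MAV from the camera MAV. It uses cvxpy for the finite-horizon optimization and is a toy illustration of the principle, not the controller used on the platform; all numerical values are made up.

```python
import cvxpy as cp
import numpy as np

# Toy MPC: keep a lighting MAV at a desired offset from the camera MAV.
# Single-integrator dynamics x[k+1] = x[k] + dt * u[k]; all quantities are illustrative.
dt, horizon = 0.2, 15
v_max = 1.5                                  # m/s velocity limit of the lighting MAV
camera_pos = np.array([4.0, 2.0, 10.0])      # current camera-MAV position (map frame)
desired_offset = np.array([-1.5, 1.5, 0.5])  # key-light position relative to the camera
light_pos0 = np.array([0.0, 0.0, 8.0])       # current lighting-MAV position

x = cp.Variable((horizon + 1, 3))            # predicted positions
u = cp.Variable((horizon, 3))                # commanded velocities

target = camera_pos + desired_offset
cost = 0
constraints = [x[0] == light_pos0]
for k in range(horizon):
    # Penalize distance to the lighting target and control effort.
    cost += cp.sum_squares(x[k + 1] - target) + 0.1 * cp.sum_squares(u[k])
    constraints += [x[k + 1] == x[k] + dt * u[k],
                    cp.norm(u[k], "inf") <= v_max]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("First velocity command:", u.value[0])  # only the first input is applied, then re-planned
```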

Use Cases

The aerial platform has been deployed at several historical sites in the Czech Republic and Poland in order to document interior artifacts and structures, as well as exterior facades and roofs.

Conclusion

The proposed solution is intended to be a tool for historians and restorers working in large historical buildings, such as churches and cathedrals, providing access to areas that are difficult for humans to reach. At these sites, it is impossible to keep large scaffolding in place for extended periods of time due to regular services, even though such long-term access is necessary for studying the progress of restoration works; moreover, some parts of the churches requiring inspection have not been reached by people in decades. To provide the same documentation and inspection techniques that are used by experts in lower, more easily accessible parts of the buildings, we employ a formation of autonomous UAVs in which one of the robots is equipped with a visual sensor and the others with sources of light, providing the required flexibility for lighting control.

The system provides local 3-D position and attitude without access to GNSS and, thanks to its laser-inertial sensory setup, copes with adverse lighting conditions. This makes it feasible for deployment in indoor areas high above the ground, as is characteristic of historical monuments. The presented analysis shows the localization system to be a reliable and robust source of state information with sufficient precision, which enabled its deployment in the feedback loop of the position control system.
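
As an illustration of the laser-inertial fusion principle, the following is a minimal sketch of a one-dimensional Kalman filter that predicts with IMU acceleration and corrects with lidar-derived position fixes. The noise values and the one-axis simplification are assumptions for illustration only, not the estimator flown on the platform.

```python
import numpy as np

# 1-D constant-velocity Kalman filter: state [position, velocity].
dt = 0.01                                  # IMU prediction period (100 Hz, illustrative)
F = np.array([[1, dt], [0, 1]])            # state transition
B = np.array([[0.5 * dt**2], [dt]])        # effect of measured acceleration
H = np.array([[1.0, 0.0]])                 # lidar localization measures position only
Q = 1e-4 * np.eye(2)                       # process noise (assumed)
R = np.array([[0.05**2]])                  # lidar position noise (assumed)

x = np.zeros((2, 1))                       # state estimate
P = np.eye(2)                              # estimate covariance

def predict(acc):
    """Propagate the state with an IMU acceleration sample."""
    global x, P
    x = F @ x + B * acc
    P = F @ P @ F.T + Q

def correct(z):
    """Fuse a lidar-derived position measurement."""
    global x, P
    y = np.array([[z]]) - H @ x            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Example: 10 IMU prediction steps of 0.2 m/s^2, then one lidar fix at 0.02 m.
for _ in range(10):
    predict(0.2)
correct(0.02)
print("position, velocity:", x.ravel())
```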

For further information about the lighting techniques, see: https://ieeexplore.ieee.org/document/8977312

Photo gallery