Programmable shape formation is a process in which a number of atomic units arrange themselves into a specified configuration. The long-term vision is to create programmable matter, where a user will be able to create physical objects on the fly from a so-called 'bucket of stuff', ideally in a manner that is reconfigurable.
If we look at shape formation in nature and in conventional human manufacturing, we notice two broad classes of processes. The first is additive, where the shape is formed by gradually adding material. The second is subtractive: here, the shape is formed by removing material from an initial bulk. Prior to this project, most work on programmable shape formation had been additive, and is usually referred to as self-assembly.
A second distinction that can be made in programmable shape formation concerns whether the units are passive or active in terms of locomotion. In the former case, they are agitated by an external source such as shaking, and only decide when to attach to or detach from one another. In the latter case, the units are additionally able to move by themselves and require no external agitation.
In this project, we present a process for shape formation through active self-disassembly. The work was carried out on the Kilobot platform using 725 robots. The robots are arranged into a grid, but are not informed of their position in the grid, with the exception of three seed robots that define a coordinate system. In the first phase of the algorithm, the robots localize themselves with respect to the seed by a process of iterative trilateration. Upon localizing, each robot can decide whether or not it is part of a specified shape. Additionally, the robots know the location of an external light source, and use this information to decide whether it is easier for them to leave by moving towards or away from the light source. Once the robots agree that all of them have localized, they move on to the second phase of the algorithm, the self-disassembly itself. The robots that are not part of the shape move towards or away from the light source in layers, using random motion whenever necessary to avoid colliding with the shape.
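To illustrate the localization phase, the following is a minimal sketch of one trilateration step in Python. It is not the on-robot implementation: it assumes a robot has measured distances to at least three already-localized neighbors, and it solves the resulting circle equations in one least-squares step (by subtracting the first equation from the others to linearize the system), whereas the physical robots refine their estimates iteratively from noisy distance measurements.

```python
import numpy as np

def trilaterate(neighbor_positions, distances):
    """Estimate this robot's (x, y) position from localized neighbors.

    neighbor_positions: list of (x, y) tuples of already-localized robots
    distances: measured distances to those neighbors (same order)

    Each neighbor i gives a circle equation
        (x - xi)^2 + (y - yi)^2 = di^2.
    Subtracting the first equation from the others cancels the
    quadratic terms, leaving a linear least-squares problem A p = b.
    """
    (x0, y0), d0 = neighbor_positions[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(neighbor_positions[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p  # estimated (x, y)
```

With noise-free distances to three non-collinear neighbors, the estimate is exact; with noisy measurements, more neighbors simply over-determine the system and the least-squares solution averages out some of the error.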
We compared the self-disassembly process to a self-assembly process that had previously been implemented on the Kilobot platform.
Self-disassembly can form shapes an order of magnitude faster, because it exploits parallelism much more effectively. Additionally, it is much more robust to single-robot failures or mistakes. The price paid is that (i) self-disassembly is 'wasteful' and can only form smaller shapes given a fixed number of robots, and (ii) the class of shapes that can be formed is somewhat more restricted (e.g., shapes with interior holes).
The gradient, or hop-count, algorithm is inspired by natural phenomena such as the morphogen gradients present in multi-cellular development. It has several applications in multi-agent systems and sensor networks, serving as a basis for self-organized coordinate system formation and for finding shortest paths for message passing. It is a simple and well-understood algorithm in theory. However, we show here that in practice, it is highly sensitive to specific rare errors that emerge at larger scales. We implement it on a system of 1000 physical agents (Kilobot robots) that communicate asynchronously via a noisy wireless channel. We observe that spontaneous, short-lasting rare errors made by a single agent (e.g., due to message corruption) propagate spatially and temporally, causing cascades that severely hinder the algorithm's functionality. We develop a mathematical model for temporal error propagation and validate it with experiments on 100 agents. This work shows how multi-agent algorithms that are believed to be simple and robust from a theoretical standpoint may be highly challenging to implement on physical systems. Systematically understanding and quantifying their current limitations is a first step toward improving their robustness for implementation.
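For concreteness, the hop-count rule can be sketched as follows. This is a synchronous, idealized simulation (names and the graph representation are illustrative): seed agents hold value 0, and every other agent repeatedly sets its value to one plus the minimum value among its neighbors. The physical Kilobots apply the same rule asynchronously over a noisy channel, which is precisely where the rare-error cascades described above arise.

```python
def run_gradient(adjacency, seeds, max_iters=1000):
    """Synchronous hop-count gradient on an undirected graph.

    adjacency: dict mapping each node to a list of its neighbors
    seeds: set of seed nodes, which hold value 0
    Returns a dict of converged hop counts (distance to nearest seed).
    """
    values = {n: 0 if n in seeds else float('inf') for n in adjacency}
    for _ in range(max_iters):
        new = {}
        for n, nbrs in adjacency.items():
            if n in seeds:
                new[n] = 0  # seeds anchor the gradient at zero
            else:
                # hop-count rule: 1 + minimum value heard from neighbors
                new[n] = min((values[m] for m in nbrs),
                             default=float('inf')) + 1
        if new == values:  # fixed point reached
            break
        values = new
    return values
```

In this noise-free setting the algorithm converges to the true hop distances in at most diameter-many rounds. A single corrupted message, in contrast, can inject a spuriously low value that neighbors then adopt and re-broadcast, which is the spatial and temporal propagation mechanism studied in this work.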