Programmable shape formation is a process in which a collection of atomic units arranges itself into a specified configuration. The long-term vision is programmable matter, where a user will be able to create physical objects on the fly from a so-called 'bucket of stuff', ideally in a manner that is reconfigurable.
If we look at shape formation in nature and in conventional human manufacturing, we notice two broad classes of processes. The first is additive, where the shape is formed by gradually adding material. The second is subtractive: here, the shape is formed by removing material from an initial bulk. Prior to this project, most work in programmable shape formation had been additive, and is usually referred to as self-assembly.
A second distinction that can be made in programmable shape formation concerns whether the units are passive or active in terms of locomotion. In the former case, they are agitated by an external source such as shaking, and only decide when to attach to or detach from one another. In the latter case, the units are additionally able to move by themselves and require no external agitation.
In this project, we present a process for shape formation through active self-disassembly. The work was carried out on the Kilobot platform using 725 robots. The robots are arranged into a grid, but are not informed of their position in it, with the exception of three seed robots that define a coordinate system. In the first phase of the algorithm, the robots localize themselves with respect to the seed robots through multilateration. Upon localizing, each robot decides whether or not it is part of the desired shape. The robots know the location (in the shared coordinate system) of an external light source, and use this information to decide whether it is easier for them to leave the formation by moving towards or away from the light source. Once all robots have localized, they move on to the second phase of the algorithm: the robots that are not part of the shape move towards or away from the light source in layers, using random motion whenever necessary to avoid colliding with the shape.
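The localization step above can be sketched as planar trilateration: a robot that has measured its distances to the three seed robots (whose positions it knows) can solve for its own position by subtracting the circle equations pairwise, which leaves a 2x2 linear system. This is a minimal, noise-free sketch; the seed positions and units here are illustrative assumptions, not the Kilobots' actual calibration.

```python
import math

def trilaterate(anchors, dists):
    """Estimate (x, y) from distances to three anchors with known positions.

    Subtracting the circle equations |p - anchor_i|^2 = d_i^2 pairwise cancels
    the quadratic terms, leaving a 2x2 linear system (exact for noise-free input).
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # nonzero as long as the anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical seed positions; the true robot position is (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
print(trilaterate(anchors, dists))  # ≈ (3.0, 4.0)
```

With noisy distance measurements (as on real robots), the same linear system would typically be solved in a least-squares sense over repeated measurements rather than exactly.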
We were able to compare the self-disassembly algorithm to a self-assembly algorithm previously implemented on the Kilobot platform. Self-disassembly can form shapes an order of magnitude faster, because it exploits parallelism, whereas in self-assembly a single slow robot can cause congestion. Additionally, self-disassembly can handle noisy motion, while self-assembly requires relatively precise motion, because an error made by a single robot can propagate throughout the shape and become magnified. The main drawbacks of self-disassembly relative to self-assembly are that (i) it is 'wasteful', in the sense that the largest shape that can be formed with a given number of robots is smaller, and (ii) the class of shapes that can be formed is somewhat more restricted (e.g., shapes with interior holes are not possible).
The gradient, or hop count, algorithm is inspired by natural phenomena such as the morphogen gradients present in multi-cellular development. It has several applications in multi-agent systems and sensor networks, serving as a basis for self-organized coordinate system formation and for finding shortest paths for message passing. It is a simple and well-understood algorithm in theory. However, we show here that in practice, it is highly sensitive to specific rare errors that emerge at larger scales. We implement it on a system of 1000 physical agents (Kilobot robots) that communicate asynchronously via a noisy wireless channel. We observe that spontaneous, short-lived rare errors made by a single agent (e.g., due to message corruption) propagate spatially and temporally, causing cascades that severely hinder the algorithm's functionality. We develop a mathematical model for temporal error propagation and validate it with experiments on 100 agents. This work shows how multi-agent algorithms that are believed, on theoretical grounds, to be simple and robust may be highly challenging to implement on physical systems. Systematically understanding and quantifying their current limitations is a first step toward improving their robustness for implementation.
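The ideal, error-free behavior of the hop-count algorithm can be sketched as follows: the seed holds gradient value 0, and every other agent takes 1 plus the minimum value among its communication neighbors. On the robots this emerges from asynchronous local message passing; the sketch below computes the same fixed point centrally via breadth-first search, under an assumed unit-disk communication model with a hypothetical radius.

```python
from collections import deque

def hop_count_gradient(positions, seed, comm_radius):
    """Compute each agent's hop count from the seed.

    Rule: seed = 0; every other agent = 1 + min over neighbors within
    comm_radius. BFS from the seed yields exactly this fixed point.
    """
    def neighbors(p):
        px, py = p
        return [q for q in positions
                if q != p and (q[0] - px)**2 + (q[1] - py)**2 <= comm_radius**2]

    grad = {seed: 0}
    frontier = deque([seed])
    while frontier:
        p = frontier.popleft()
        for q in neighbors(p):
            if q not in grad:          # first visit gives the minimal hop count
                grad[q] = grad[p] + 1
                frontier.append(q)
    return grad

# A 1-D chain of agents spaced 1 unit apart, communication radius 1.5:
positions = [(float(i), 0.0) for i in range(5)]
g = hop_count_gradient(positions, (0.0, 0.0), 1.5)
print([g[p] for p in positions])  # [0, 1, 2, 3, 4]
```

The fragility studied in this work lies precisely in the gap between this idealized computation and the physical system: a single corrupted message can momentarily lower one agent's value, and that spurious minimum then spreads through neighbors applying the same 1 + min rule.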