Introduction

Welcome and thank you for your interest in my work. I’ll try to explain here what I do and what led me to it.

Aura Labs is an effort to use the latest achievements of information technology and electronics to create interactive multimedia spaces called Auras: regions of physical space that react to their environment, including the people inside them. The space is populated with small devices that gather data from sensors and also influence the space through various actuators that create light, movement or sound. Environmental changes and people’s actions are picked up by the sensors and fed into algorithms running on the Controller, the “brain” of the system. The algorithms’ output is used to alter the space through the actuators. This allows the creation of spaces that are in constant dialog with the people inside them, as the two react to each other in a closed loop. The project’s primary focus is on natural outdoor environments, since they are potential data sources themselves and can take part in the conversation. The outdoor scenario covers all aspects of the indoor one, and more.
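
To give a rough idea of how the closed loop works, here is a minimal sketch of a Controller loop in Python. The transport, port numbers, message format and field names are my own placeholders for this illustration, not the protocol actually used in the project:

    # Hypothetical sketch of the Controller's closed loop (not the actual Aura Labs protocol).
    # Assumes nodes send JSON sensor readings over UDP and accept JSON actuator commands.
    import json
    import socket

    CONTROLLER_PORT = 9000   # assumed port for incoming sensor readings
    NODE_PORT = 9001         # assumed port on which nodes listen for commands

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", CONTROLLER_PORT))

    def decide(reading):
        """Toy algorithm: map an ambient-light reading (0..1023) to LED brightness (0..255)."""
        return {"actuator": "led", "brightness": max(0, 255 - reading["value"] // 4)}

    while True:
        data, (node_ip, _) = sock.recvfrom(1024)   # a sensor reading arrives from a node
        reading = json.loads(data)                 # e.g. {"sensor": "light", "value": 512}
        command = decide(reading)                  # run the algorithm on the Controller
        sock.sendto(json.dumps(command).encode(), (node_ip, NODE_PORT))  # drive the actuator

In a real installation the “algorithm” is of course richer than this one-liner, but the shape of the loop stays the same: sense, process centrally, actuate.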

Aura Labs’ goal is to develop the necessary know-how, best practices, technologies, and hardware and software tools for creating Auras, as well as to build actual installations that support the development of the project.

Project Status

The system is in a proof-of-concept state. The open-source AuraNode (sub-)project, which is the basis for the WiFi-enabled devices that handle sensors and actuators, has had quite a few successful smaller deployments and is running in several commercial environments. On the Controller side, several tools have been tested and used to process sensor data. So far, the largest installation contains 12 nodes.

Supported sensors and actuators

The Node can read the following sensors and physical properties:

The following actuators can be used to alter the space:

The Node

The Node is a generic device that provides a bidirectional link between the physical world and the Controller, the “brain” of the system. Each device’s configuration specifies which sensors and actuators are attached to it. The sensors’ values are read and transmitted over WiFi to the Controller, where they are processed. The result is sent back to the nodes, which render it using the actuators. Because processing is centralized, there is no restriction on the source of the readings or the destination of the result: sensor data from any node can propagate through the Controller to any actuator in the entire system.
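
To make the “any sensor to any actuator” idea more concrete, here is a small illustrative sketch in Python. The node names, sensor types, configuration fields and the send_command helper are all invented for this example; they do not reflect the project’s real configuration format:

    # Hypothetical node configurations and routing table (invented names, for illustration only).
    node_config = {
        "node-01": {"sensors": ["light", "temperature"], "actuators": ["led_strip"]},
        "node-02": {"sensors": ["motion"],               "actuators": ["servo", "speaker"]},
    }

    # Because all readings pass through the Controller, a sensor on one node
    # can drive an actuator on a completely different node.
    routes = [
        {"from": ("node-02", "motion"), "to": ("node-01", "led_strip")},
        {"from": ("node-01", "light"),  "to": ("node-02", "servo")},
    ]

    def send_command(node_id, actuator, value):
        """Stand-in for the WiFi transport; a real system would send a packet to the node."""
        print(f"-> {node_id}/{actuator} = {value}")

    def handle(node_id, sensor, value):
        """Forward a reading to every actuator routed from this sensor."""
        for route in routes:
            if route["from"] == (node_id, sensor):
                target_node, actuator = route["to"]
                send_command(target_node, actuator, value)

    handle("node-02", "motion", 1)   # motion on node-02 lights the LED strip on node-01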