This project was developed as part of my research at the University of Tokyo, DLX – Design Lab.
The project was presented at:
- 2017 – U.Tokyo IIS – Treasure Hunting Exhibition
- 2017 – ARS Electronica
- 2018 – U.Tokyo IIS – Potentialities exhibition, The National Art Center, Tokyo
- 2019 – U.Tokyo IIS – Open Campus
As technology rapidly evolves and the boundary between the physical and digital worlds blurs, interfaces and control systems need to evolve accordingly. At present, this change is gradual: we find ourselves constantly learning new actions, adapting our behaviours to them until they eventually become habits.
What if objects could learn from individual human behavioural patterns and understand a user’s true intentions? What if we could use this to create intuitive, invisible, and premonitory interfaces controlled by our instincts?
Transparent Intent explores the future of the interface, envisioning a future in which objects can be controlled subconsciously. Using computer vision technology developed by the Y. Sato Lab, we designed a set of interfaces that demonstrate this evolution. The first step removes the tangible interface: the object is controlled through an action. The next step removes the need for direct contact with the object: by controlling it with a gaze and a gesture, we begin to control objects with our intent alone. The final stage explores a scenario in which objects are clairvoyant: they control themselves based on human behaviour. Here, the object detects the sensations you are feeling and adjusts itself accordingly.
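To make the second stage concrete — an object that responds only when gaze and gesture agree — here is a minimal toy sketch of that gating logic. All names, frame counts, and thresholds are hypothetical illustrations, not the software used in the exhibited pieces:

```python
# Hypothetical sketch: an object acts only when the user's gaze has rested
# on it long enough AND a confirming gesture arrives within a short window.
from dataclasses import dataclass

GAZE_DWELL_FRAMES = 15   # gaze must rest on the object this many frames
GESTURE_WINDOW = 30      # a gesture must then follow within this many frames

@dataclass
class IntentGate:
    dwell: int = 0       # consecutive frames of gaze on the object
    armed_for: int = 0   # frames remaining in which a gesture will trigger

    def update(self, gaze_on_object: bool, gesture_seen: bool) -> bool:
        """Feed one frame of observations; return True when the object should act."""
        self.dwell = self.dwell + 1 if gaze_on_object else 0
        if self.dwell >= GAZE_DWELL_FRAMES:
            self.armed_for = GESTURE_WINDOW   # gaze confirmed: arm the gate
        triggered = gesture_seen and self.armed_for > 0
        if self.armed_for > 0:
            self.armed_for -= 1
        if triggered:                          # reset after firing
            self.dwell = 0
            self.armed_for = 0
        return triggered

gate = IntentGate()
# Simulate 15 frames of gaze on the object, a short pause, then a gesture.
events = [(True, False)] * 15 + [(False, False)] * 3 + [(False, True)]
fired = [gate.update(g, h) for g, h in events]
print(fired[-1])  # the gesture inside the armed window triggers the object
```

The design choice here is the time window: requiring the gesture to follow a sustained gaze filters out accidental gestures, which is the practical difference between "control by intent" and ordinary gesture control.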
Pushed to the future, where advanced computer vision would allow objects to react to your subconscious behaviour, technology could effectively become invisible and simply form an extension of the human mind.
How we used Y. Sato Lab's research
One of Y. Sato Lab's main bodies of research is a computer vision technology in which an ordinary camera system recognizes the actions and behaviours of the people it captures. Once the computer has identified these actions and behaviours, it can go further and infer the person's intention by analyzing gaze, facial direction, and depth. We used this research as a basis to speculate that computer vision could eventually give us access to subconscious human behaviour, and to design new, invisible, premonitory interfaces around it. Imagine an object or environment that could guess your intention by reading your subconscious behaviour and adapt to you accordingly.
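One simple way to picture intent inference of this kind is to score each candidate object by how well the gaze direction and head orientation agree with the direction of that object. The sketch below is a hypothetical, unit-free illustration of that idea — the weights, vectors, and object names are invented for the example, not the lab's actual method:

```python
# Hypothetical sketch: rank candidate objects by weighted agreement between
# gaze direction, head orientation, and the direction of each object.
import math

def cosine(a, b):
    """Cosine similarity between two 2D direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def intent_scores(gaze_dir, head_dir, objects, w_gaze=0.7, w_head=0.3):
    """Score each object; gaze is weighted more heavily than head pose."""
    return {
        name: w_gaze * cosine(gaze_dir, d) + w_head * cosine(head_dir, d)
        for name, d in objects.items()
    }

# Two candidate objects, as directions from the user's viewpoint.
objects = {"lamp": (1.0, 0.0), "fan": (0.0, 1.0)}
scores = intent_scores(gaze_dir=(0.9, 0.1), head_dir=(0.7, 0.3), objects=objects)
print(max(scores, key=scores.get))  # prints: lamp
```

Weighting gaze above head pose reflects the intuition from the research description above: the eyes tend to reveal intent earlier and more precisely than the head does.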
Inspired by the questions that arose during this project, we decided to explore a hypothesis: can computer vision effectively monitor and comprehend our subconscious?
Expanding our collaboration with Y. Sato Lab, we are now exploring applications of computer vision techniques in the fascinating world of pupillometry. We shared some of our early experiments in a paper presented at the 35th Annual Meeting of the Japanese Cognitive Science Society in Osaka (2018).
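As a flavour of what pupillometry pipelines involve, here is a generic preprocessing sketch — subtractive baseline correction of a pupil-diameter trace, a standard first step before comparing pupil responses across trials. This is a textbook illustration, not the analysis from the paper:

```python
# Generic pupillometry sketch (not the paper's analysis): subtract the mean
# of the pre-stimulus baseline from every sample of a pupil-diameter trace.
def baseline_correct(trace, baseline_samples):
    """Return the trace with the mean pre-stimulus baseline subtracted.
    Values are rounded for display only."""
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    return [round(d - baseline, 3) for d in trace]

# Simulated trace in millimetres: flat baseline, then dilation to a stimulus.
trace = [3.0, 3.0, 3.0, 3.2, 3.5, 3.6]
corrected = baseline_correct(trace, baseline_samples=3)
print(corrected)  # → [0.0, 0.0, 0.0, 0.2, 0.5, 0.6]
```

Baseline correction matters because absolute pupil size varies widely between people and lighting conditions; what carries cognitive signal is the change relative to each trial's own baseline.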