At its core, this grant-funded project was a way of exploring emerging technologies and their applications to synthesizing research on complex issues. It was a response to the limitations of existing two-dimensional visual structures and frameworks for inquiry. The contexts we research and participate in as designers are complex, and when we try to represent what we’ve found and what it all means in two dimensions, we often must compromise on communicating that complexity. How can we identify real needs and opportunities for intervention if we aren’t looking at the most honest depiction of a context? Additionally, the visual language we use to communicate is, to some degree, reliant on our disciplinary dialect; that is, different disciplines likely have different ways of visually communicating discipline-specific information. The promise of this project, then, was that by constructing system representations in real time, in three dimensions, immersive at human scale, we might experience these constructions in ways akin to how we process information from the physical world around us. This way of experiencing, or inhabiting, information might simultaneously force and allow us to engage more deeply with the complexity of our research focus, while also overcoming differences in disciplinary dialect by leveraging our shared associations with physical space.
See the full report here.