about the mv lab

background

The movement visualization (mv) lab is a research environment for studying the rendering of figure movement in cinematic space. Our institutional home is the University of Illinois at Urbana-Champaign. Methodologically, historical poetics and neoformalist analysis guide our understanding of film form, while Laban/Bartenieff Movement Studies guides our study of human movement. The movement taxonomy at the core of Laban Movement Analysis is well suited to movement description, segmentation, and analysis, given its capacity to capture anatomical, quantitative, and qualitative aspects of human movement. By combining the formal rigor of these disciplines, we can generate a corpus of examples that broadens our understanding of how camera and figure movement interact.

The motion-capture mv tool allows a moving agent to interact in real time with a skeleton visualization of their figure movement, while a second agent manipulates the frame’s relationship to the virtual skeleton, mimicking the craft of the moving camera. These data can subsequently be recorded and stored in a database for further analysis and post-capture manipulation. The result is a digital tool for moving image analysis that generates and studies the array of relationships between camera and body, with the aim of better understanding the stylistic and narrative choices available to filmmakers in the staging of figure and camera movement.
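
To make the recorded data concrete, here is a minimal sketch of what a single captured sample might look like when flattened for storage; the joint names, field layout, and `to_row` helper are illustrative assumptions rather than the tool’s actual schema.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class CaptureFrame:
    """One time-stamped sample pairing the mover's skeleton with the virtual camera (hypothetical layout)."""
    timestamp: float            # seconds since the start of the take
    joints: Dict[str, Vec3]     # joint name -> (x, y, z) in capture-space units
    camera_position: Vec3       # virtual camera location in the same space
    camera_rotation: Vec3       # (pan, tilt, roll) in degrees

def to_row(frame: CaptureFrame) -> dict:
    """Flatten a frame into a dict suitable for a CSV or database table."""
    row = {"t": frame.timestamp}
    for name, (x, y, z) in frame.joints.items():
        row.update({f"{name}_x": x, f"{name}_y": y, f"{name}_z": z})
    cx, cy, cz = frame.camera_position
    pan, tilt, roll = frame.camera_rotation
    row.update({"cam_x": cx, "cam_y": cy, "cam_z": cz,
                "cam_pan": pan, "cam_tilt": tilt, "cam_roll": roll})
    return row
```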

The digital environment, unconstrained by the laws of physics, permits nearly infinite ways to move the body and frame. This allows us to explore the most intimate of relationships between figure and frame by linking the virtual camera’s movement directly to the movement of the virtual body and amplifying that information through additional visual and sonic feedback. In contrast, by applying knowledge about industrial practices and technological availability, we can constrain the virtual camera’s range of motion to more accurately study what choices were available to filmmakers in their historical context. As a result, the mv lab unites creative practice, formal analysis, and digital tools to enrich our understanding of cinematic movement.
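
As one illustration of linking the frame to the figure, the sketch below locks a hypothetical virtual camera to a single tracked joint at a fixed offset, so that frame movement is driven directly by figure movement; the joint name and offset values are assumptions for illustration, not settings from the mv tool.

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def follow_joint(joints: Dict[str, Vec3],
                 target: str = "spine_top",
                 offset: Vec3 = (0.0, 0.25, -2.5)) -> Vec3:
    """Place the virtual camera at a fixed offset from the chosen joint,
    so the camera travels wherever the figure travels."""
    x, y, z = joints[target]
    dx, dy, dz = offset
    return (x + dx, y + dy, z + dz)

# Example: as the mover's upper spine moves, the camera moves with it.
print(follow_joint({"spine_top": (1.0, 1.5, 0.0)}))  # (1.0, 1.75, -2.5)
```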

Higher-resolution renders and the ability to capture a wider range of movements are key ongoing goals, as are accessibility, privacy, and data protection. We continue to ask ourselves how we can develop this tool and the intellectual focus of the mv lab to be valuable to and usable by a diversity of bodies and in a wider range of research applications. In seeking to study a variety of movement signatures, we also strive to protect the privacy and data of our contributors by adhering closely to Institutional Review Board guidelines and by engaging in discussions of data privacy in the fields of media studies, Digital Humanities, and machine learning.

Our work and the research it builds upon have received generous support from the College of Media, the Campus Research Board, and the National Center for Supercomputing Applications at the University of Illinois, as well as from the Social Sciences and Humanities Research Council of Canada and the National Science Foundation.

bibliography

Alemi, Omid, Philippe Pasquier, and Chris Shaw. “Mova: Interactive Movement Analytics Platform.” ACM International Conference Proceeding Series, 2014. 37–42. Web.

Bordwell, David. Figures Traced in Light: On Cinematic Staging. Berkeley: U of California P, 2005. Print.

Cox, Donna. “Metaphoric Mappings: The Art of Visualization.” Aesthetic Computing, ed. Paul A. Fishwick. Cambridge: MIT P, 2006. 89–114. Print.

Hoyt, Eric, and Charles R. Acland, eds. The Arclight Guidebook to Media History and the Digital Humanities. Sussex: REFRAME, 2014. Web.

Keating, Patrick. The Dynamic Frame: Camera Movement in Classical Hollywood. New York: Columbia UP, 2019. Print.

Laban, Rudolf von, and Lisa Ullmann. Choreutics. Alton: Dance Books, 2011. Print.

Thompson, Kristin. Breaking the Glass Armor: Neoformalist Film Analysis. Princeton: Princeton UP, 1988. Print.

the mv tool

The mv tool has two modes that serve complementary research purposes.

The interactive mode creates an embodied research space for both rigorous and playful experimentation with figure and camera movement. Here the tool is operated by two agents: one moving in a Kinect-driven motion-capture space (the moving agent) and one operating the software and, when applicable, manipulating camera settings in the digital environment (the computer/camera agent). A screen and sound system relaying live, interactive visualizations and sonifications of an abstracted body “skeleton” allows the participants to see (and hear) movement pathways. Through a set of calibration movement sequences, the computer agent can visualize the moving agent’s movement in interaction with various movement paradigms drawn from Laban Movement Analysis’ taxonomy of human movement and Rudolf Laban’s categorizations of where the body is moving in relation to its environment (Space). Real-time interaction with analytical concepts from Laban/Bartenieff Movement Studies (LBMS) and cinematographic frameworks is a central goal of this research, allowing participants a better understanding of their own movement patterns and of the theoretical infrastructures shaping this exploratory space.
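
A minimal sketch of the kind of real-time feedback this mode can provide appears below: a tracked joint’s height is mapped to Laban-inspired levels (low, middle, high). The joint name, axis convention, and thresholds are illustrative assumptions, not values used by the tool.

```python
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def level_of(joints: Dict[str, Vec3], joint: str = "right_hand",
             low_cutoff: float = 0.9, high_cutoff: float = 1.5) -> str:
    """Map a joint's vertical position (metres above the floor) to a movement level."""
    height = joints[joint][1]   # assume the y axis points up
    if height < low_cutoff:
        return "low"
    if height > high_cutoff:
        return "high"
    return "middle"

# Example: a hand held at 1.7 m above the floor reads as "high".
print(level_of({"right_hand": (0.3, 1.7, 0.1)}))  # "high"
```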

The capture mode allows users to record physical mover and virtual camera data, with the aim of progressively building a movement visualization database and relevant metadata. We are particularly interested in the tool’s ability to address historical research questions; beyond experimentation with movement frameworks from Laban Movement Analysis, we seek to re-create examples of choreographed figure and camera movement in cinema in order to isolate those stylistic elements from other characteristics of film form, in particular the complexities of mise-en-scène and sound. The virtual camera’s range of motion in the mv tool can therefore be adapted and limited to better study historical cinematographic constraints, for example by restricting the types of camera movement to the technology available in a particular period. The accessibility and multifunctionality of the mv tool allow it to serve many academic purposes, from developing kinesthetic intelligence to building computer vision protocols for moving image analysis.
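
The sketch below illustrates one way such historical constraints could be encoded as a simple lookup of permitted camera move types per period profile; the profile names and move vocabularies are assumptions for illustration only.

```python
# Permitted camera move types per (hypothetical) historical profile.
PERIOD_MOVES = {
    "silent_studio": {"pan", "tilt"},
    "classical_dolly": {"pan", "tilt", "track", "crane"},
    "handheld_era": {"pan", "tilt", "track", "crane", "handheld"},
}

def allowed(move_type: str, period: str) -> bool:
    """Return True if the requested camera move is available under the chosen profile."""
    return move_type in PERIOD_MOVES.get(period, set())

# Example: a handheld move is rejected under the classical dolly profile.
print(allowed("handheld", "classical_dolly"))  # False
```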

people

Jenny Oyallon-Koloski | co-director

Jenny is an Assistant Professor of Media and Cinema Studies at the University of Illinois at Urbana-Champaign and a Certified Movement Analyst (CMA) through the Laban/Bartenieff Institute for Movement Studies. Her written and videographic research on the musical genre, movement on screen, and Jacques Demy’s films has been published in Studies in French Cinema, Post Script, [in]Transition: The Journal of Videographic Film and Moving Image Studies, Screenworks, and Digital Humanities Quarterly. Her current book project explores the storytelling power of figure movement and dance in film musicals.

Michael J. Junokas | co-director

Mike has a PhD in Informatics from the University of Illinois. His research develops innovative, multi-platform systems that gather, interpret, process, and control signals in live artistic performance. Through these systems, he creates non-linear methods of technological exploration that provoke artistic introspection and aesthetic reflection. He worked with Dr. Robb Lindgren on ELASTIC3S, an NSF Cyberlearning research project conducted by an interdisciplinary team of learning scientists and computer scientists at the University of Illinois at Urbana-Champaign. The goal of the project is to explore ways that body movement can be used to enhance the learning of “big ideas” in science. For the project, he has focused on developing gesture-recognition algorithms trained using one-shot learning. His research from this project has been published in ICLS, NARST, and JCAL, and has been used to inaugurate the Illinois Digital Ecologies and Learning Laboratory. Mike has also worked with Dr. Guy Garnett on the MovingStories project, an SSHRC-funded interdisciplinary, collaborative institutional research initiative for the design of digital tools for movement, meaning, and interaction. His research from this project has been published in ACM C&C, ICMC, and ISEA. Mike’s artistic and musical work has been exhibited at a variety of venues, including McGill’s Transplanted Roots: Percussion Research Symposium, Illinois Wesleyan’s New Music Series, Illinois State’s New Sound Series, the School of the Art Institute’s Sullivan Galleries, and Experimental Sound Studio’s Outer Ear Series.

Kayt MacMaster | research assistant

Kayt is a dance artist, performance maker, and writer. She has spent the last decade living and working in New York City. Presently, she is an MFA candidate in Dance at the University of Illinois at Urbana-Champaign.

Sarah Marks Mininsohn | research assistant

Sarah is a choreographer, writer, and performance artist. She was most recently based in Philadelphia, where she created and presented collaborative performances. She is currently an MFA candidate in Dance at the University of Illinois at Urbana-Champaign.

Robbie Sieczkowski | research assistant

Robbie is a junior honors undergraduate at UIUC studying Media and Cinema Studies with a minor in Informatics and two certificates in progress (media production and technology entrepreneurship), and he has a deep passion for virtual reality and other entertainment media production. He serves as Events Chair of the campus VR Club, and he aspires to work at a virtual reality, video game, or film studio in the near future and to one day run his own studio.

Dora Valkanova | research assistant

Dora is a Lecturer in the Department of Media and Cinema Studies at the University of Illinois. Her main research foci are the history of U.S. Cold War cinema, contemporary U.S. independent cinema, and film festivals.