
Semantic Animation and Crowd Docs



D2.2-Extract.pdf (Adobe PDF - 270Kb)

Simon Woeginger
IKINEMA

D2.2 Report high level control APIs and methods [M3]
The overall aim of this document is to perform a preliminary investigation into an employable, high-level methodology that can be leveraged to achieve the goals laid out in work package 6 of the SAUCE project. This is primarily with respect to generating high-fidelity animation within IKinema's domain of expertise, namely procedurally generated animation through the use of Inverse Kinematics (hereafter referred to as IK). To achieve these goals, the current state of the art in procedural animation must be investigated, and the relevant work and motivations of related partners within the SAUCE group must be established.
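To make the term concrete, the following is a minimal, purely illustrative sketch of procedural animation through IK, using a cyclic coordinate descent (CCD) solver on a 2D joint chain. It is not IKinema's solver; all names and parameters are invented for illustration.

```python
# Illustrative sketch only: a minimal 2D CCD (cyclic coordinate descent) IK solver.
# Not IKinema's method; it exists purely to show what "procedurally generated
# animation through IK" means at the smallest possible scale.
import math

def ccd_ik(joint_positions, target, iterations=20, tolerance=1e-3):
    """Move the end effector of a 2D joint chain towards `target`."""
    joints = [list(p) for p in joint_positions]
    for _ in range(iterations):
        # Work backwards from the joint closest to the end effector.
        for i in range(len(joints) - 2, -1, -1):
            end, pivot = joints[-1], joints[i]
            # Angle between (pivot -> end effector) and (pivot -> target).
            a1 = math.atan2(end[1] - pivot[1], end[0] - pivot[0])
            a2 = math.atan2(target[1] - pivot[1], target[0] - pivot[0])
            rot = a2 - a1
            cos_r, sin_r = math.cos(rot), math.sin(rot)
            # Rotate every joint after the pivot around the pivot.
            for j in range(i + 1, len(joints)):
                dx, dy = joints[j][0] - pivot[0], joints[j][1] - pivot[1]
                joints[j][0] = pivot[0] + dx * cos_r - dy * sin_r
                joints[j][1] = pivot[1] + dx * sin_r + dy * cos_r
        if math.dist(joints[-1], target) < tolerance:
            break
    return joints

# Example: a three-bone arm reaching for a point.
arm = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(ccd_ik(arm, target=(1.5, 1.5)))
```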


D6.1.pdf (Adobe PDF - 803Kb)

Simon Woeginger
IKINEMA

D6.1 Animation pipeline and demo of new runtime rig [M6]
This document provides details of deliverable D6.1, titled “Animation pipeline and demo of the new runtime rig”, within work package (WP) 6 of the SAUCE project. Section 2 gives a brief introduction to a typical IKinema workflow for procedurally generating animation using Inverse Kinematics, providing background context for the reader; it presents an overview of a typical pipeline and details the important components and user input required at each step. Section 3 introduces deliverable 6.1 (the subject of this document) within the context of WP6, in particular WP6 task 2 (WP6T2), describing the work set out in the WP6 tasks and how deliverable 6.1 fits into the bigger picture. Section 4 discusses the pipeline and the planned demonstration in more detail, including the user interface for the pipeline with key elements highlighted and described. Section 5 compares the results of the pipeline development with the criteria set out in the deliverable description of the SAUCE project document.


D6.2-Extract.pdf (Adobe PDF - 557Kb)

Volker Helzle, Jonas Trottnow, Simon Spielmann
Filmakademie Baden-Württemberg

D6.2 Report on toolkit for Virtual Production [M9]
This deliverable is part of work package 6 “Semantic Animation Production”, which is dedicated to real-time control systems for authoring animated content using Smart Assets, automatically synthesizing new scenes from existing ones, and integrating Smart Assets into Virtual Production scenarios with editable cameras and lights. The deliverable lays the groundwork for exploring the use of Smart Assets in Virtual Production scenarios, starting with an overview and evaluation of potential systems. The evaluation suggests a toolset which will serve as the basis for the developments in D6.4 “Virtual Production prototype toolkit”.
This prototype will ideally access results from the deliverables D6.3 “Working framework to handle relationship contexts between scene and people”, D6.5 “Animation graph traversal optimisation” and will be applied in work package 8 “Experimental Production, Evaluation and Innovation Assessment”.


D6.3.pdf (Adobe PDF - 1.68Mb)

Aljosa Smolic
Trinity College Dublin

D6.3 Working framework to handle relationship contexts between scene and people [M18]
A description of a framework for creating environmental assets with semantic understanding incorporated, which utility-based AI agents use to populate a scene with emergent and expected individual and crowd behaviours. This framework can then be used to allow asset re-use in semantically similar environments.
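As a purely illustrative sketch of what utility-based agents acting on semantically tagged assets can look like, the following example scores assets against an agent's needs; the tag names, needs and scoring are hypothetical and much simpler than the framework described in the deliverable.

```python
# Minimal sketch of utility-based action selection over semantically tagged assets.
# Tags, scoring and the affordance mapping are invented for illustration.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    tags: set          # semantic labels, e.g. {"seat", "shade"}
    position: tuple

@dataclass
class Agent:
    needs: dict        # e.g. {"rest": 0.8}, higher value = more urgent

# Which semantic tag satisfies which need (hypothetical mapping).
AFFORDANCES = {"rest": "seat", "social": "meeting_point", "shelter": "shade"}

def utility(agent, asset):
    """Score an asset by how well its tags satisfy the agent's current needs."""
    return sum(urgency for need, urgency in agent.needs.items()
               if AFFORDANCES.get(need) in asset.tags)

def choose_target(agent, scene):
    """Pick the asset with the highest utility; a zero score means no target."""
    best = max(scene, key=lambda a: utility(agent, a), default=None)
    return best if best and utility(agent, best) > 0 else None

scene = [Asset("bench", {"seat"}, (2, 5)), Asset("tree", {"shade"}, (8, 1))]
agent = Agent(needs={"rest": 0.8, "shelter": 0.3})
print(choose_target(agent, scene).name)   # -> "bench"
```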


D6.6.pdf (Adobe PDF - 751Kb)

Josep Blat, David Moreno, Javi Agenjo, Hermann Plass
Universitat Pompeu Fabra

D6.6 Motion Stylization Implementation [M24]
This report accompanies the public demonstrator of the Motion Stylization Implementation carried out by UPF-GTI, presenting the overall methodology and approach, and details of the implementation towards achieving virtual character identity through motion stylization. A guide to the use of the demonstrator, a standalone tool called Medusa, is included as an annex to this report.
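The following is a very small, hypothetical sketch of one common stylization idea, layering a per-joint style offset onto a neutral base motion; it is not the Medusa method, which is described in the report itself.

```python
# Hypothetical sketch of motion stylization as layering a "style" offset onto a
# neutral base motion per joint, per frame. Not the method used by Medusa.
import numpy as np

def stylize(base, style, neutral, weight=1.0):
    """base, style, neutral: arrays of shape (frames, joints, 3) of joint angles.
    The style delta (style - neutral) is scaled and added onto the base motion."""
    return base + weight * (style - neutral)

frames, joints = 60, 22
base = np.zeros((frames, joints, 3))          # stand-in for a plain walk cycle
neutral = np.zeros((frames, joints, 3))
style = np.random.default_rng(0).normal(0, 0.1, (frames, joints, 3))  # e.g. a "tired" walk
print(stylize(base, style, neutral, weight=0.5).shape)
```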


D5.4.pdf (Adobe PDF - 1.63Mb)

Mungo Pay
DNEG

D5.4 Tools for editing mo-cap data [M24]
The use of simulation as a tool to create crowd sequences has been an industry standard for the past two decades; however, there are limitations to this approach when applied to the fast-paced nature of a VFX production schedule. This report contains information about the approaches taken to increase the reusability of animation assets, so as to increase the quality of crowd production shots whilst reducing the artist time required to do so.
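As a hedged illustration of the kind of reuse edit such tools enable (not DNEG's actual tooling), the sketch below makes a mo-cap clip loop cleanly by cross-fading its ending back into its start.

```python
# Illustrative sketch only: one simple reuse edit is making a mo-cap clip loop
# cleanly so it can be repeated indefinitely on background crowd characters.
import numpy as np

def make_loop(clip, blend_frames=10):
    """clip: array (frames, channels). Cross-fade the last `blend_frames` frames
    towards the first ones so frame[-1] lines up with frame[0] when the clip repeats."""
    looped = clip.copy()
    t = np.linspace(0.0, 1.0, blend_frames)[:, None]   # blend weights 0..1
    looped[-blend_frames:] = (1 - t) * clip[-blend_frames:] + t * clip[:blend_frames]
    return looped

clip = np.random.default_rng(1).normal(size=(120, 66))  # fake mo-cap channels
print(np.abs(make_loop(clip)[-1] - clip[0]).max())       # last frame matches the start pose
```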


D5.5.pdf (Adobe PDF - 3.51Mb)

Mungo Pay
DNEG

D5.5 Tools for splicing together animation clips [M24]
When creating crowd shots for VFX in films, the reusability of animations becomes a consideration when bidding on shows, as the cost of capturing bespoke animation data can be significant. To ameliorate this cost, we present a suite of tools that allow crowd artists to create new animation from existing clips in an automated manner, utilising a motion graph implementation augmented with a constraint-based trajectory editing toolkit.
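The sketch below illustrates the motion-graph idea in its simplest form, not DNEG's implementation: clips become nodes, an edge is added where the last pose of one clip is close to the first pose of another, and new sequences are spliced by walking the graph. Thresholds and data are invented.

```python
# Minimal motion-graph sketch: splice existing clips into new sequences by
# following edges between clips whose boundary poses are similar.
import numpy as np

def build_motion_graph(clips, threshold=0.5):
    """clips: list of arrays (frames, channels). Returns adjacency dict i -> [j, ...]."""
    graph = {i: [] for i in range(len(clips))}
    for i, a in enumerate(clips):
        for j, b in enumerate(clips):
            if i != j and np.linalg.norm(a[-1] - b[0]) < threshold:
                graph[i].append(j)
    return graph

def splice(clips, graph, start, length, rng):
    """Random walk over the graph, concatenating clips into a new sequence."""
    order, node = [start], start
    while len(order) < length and graph[node]:
        node = int(rng.choice(graph[node]))
        order.append(node)
    return np.concatenate([clips[i] for i in order])

rng = np.random.default_rng(0)
clips = [rng.normal(0, 0.1, (30, 10)) for _ in range(5)]
graph = build_motion_graph(clips, threshold=2.0)
print(splice(clips, graph, start=0, length=4, rng=rng).shape)
```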


D5.9.pdf (Adobe PDF - 2.26Mb)

Mungo Pay, Ewan Rice, David Reeves, Pisut Wisessing
DNEG

D5.9 Tools for synthesizing animation without a rig [M30]
Animation re-use in the VFX industry is critical when attempting to reduce the costs associated with asset production. The creation of bespoke animations is handled by either the Animation or Motion Capture departments, but relying on them entirely to produce all required animation is impractical, especially when minor edits to previously created animation would suffice. This deliverable investigates two use cases where simple animation editing is desirable without the full rig being available: footstep cleanup and terrain adaptation for crowd scenes, and pre-roll generation for CFX tasks.
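As an illustration of what rig-free editing on baked joint data can look like, the sketch below snaps a foot joint to a terrain height field on contact frames; the terrain function and thresholds are invented and this is not the deliverable's actual implementation.

```python
# Hypothetical sketch of rig-free terrain adaptation: when a foot is judged to be
# in contact, its world-space position is snapped to the terrain height.
import numpy as np

def terrain_height(x, z):
    return 0.1 * np.sin(x) * np.cos(z)        # stand-in for a real height field

def adapt_foot(foot_positions, contact_threshold=0.05):
    """foot_positions: array (frames, 3) of world-space foot joint positions."""
    fixed = foot_positions.copy()
    for f, (x, y, z) in enumerate(foot_positions):
        ground = terrain_height(x, z)
        if y - ground < contact_threshold:     # treat as a contact frame
            fixed[f, 1] = ground               # plant the foot on the terrain
    return fixed

foot = np.column_stack([np.linspace(0, 3, 60),
                        np.abs(np.sin(np.linspace(0, 6, 60))) * 0.2,
                        np.zeros(60)])
print(adapt_foot(foot)[:3])
```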


D6.4.pdf (Adobe PDF - 3.60Mb)

Jonas Trottnow, Simon Spielmann, Volker Helzle
Filmakademie Baden-Württemberg

D6.4 Virtual Production prototype toolkit [M30]
With this deliverable a demonstrator for the usage of smart assets in a virtual production toolkit is introduced. An exemplary semantic descriptor developed by DRZ provides labels for a 3D scene. Based on the labels, the scene can be set up for use in the Virtual Production Editing Tools (VPET) developed by FA.
In addition, VPET has been extended to support character animations that can be context and scene aware. An API and protocol have been developed to transfer an animatable character to the VPET clients. A user can then direct the character by defining a new position and walking path for it through the tablet frontend. The new position is sent to an arbitrary animation engine and solved there into a bone animation. An implementation of the protocol for the MEDUSA animation engine has been developed by UPF. FA has also implemented a demo animation solver using the Unity-based scene host as an animation engine. Furthermore, this deliverable introduces the possibility of importing smart assets and scenes directly into VPET without first importing them into applications like Unity or Katana. This direct importer reads USD (Universal Scene Description) files and provides them to the clients directly.
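Purely to illustrate the kind of message such a character-direction protocol might carry, the sketch below serialises a walk request from a client to an animation engine; the real VPET protocol, its field names and its transport are defined by the project, not here.

```python
# Hypothetical sketch of a character-direction message; not the actual VPET protocol.
import json
from dataclasses import dataclass, asdict

@dataclass
class WalkRequest:
    character_id: str                 # which animatable character to direct
    target_position: tuple            # new world-space position chosen on the tablet
    waypoints: list                   # optional walking path leading to the target

def encode(request):
    """Serialise a request so a client (e.g. a tablet) can send it to an animation engine."""
    return json.dumps(asdict(request)).encode("utf-8")

def decode(payload):
    """Reconstruct the request on the engine side, where it is solved to a bone animation."""
    data = json.loads(payload.decode("utf-8"))
    return WalkRequest(**data)

msg = WalkRequest("hero_01", (4.0, 0.0, -2.5), [(1.0, 0.0, 0.0), (2.5, 0.0, -1.0)])
print(decode(encode(msg)))
```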


D6.5.pdf (Adobe PDF - 423Kb)

Simon Woeginger, Hermann Plass, Josep Blat
Universitat Pompeu Fabra

D6.5 Graph Traversal Optimisation [M30]
This deliverable is part of the reporting required for WP6 and details the work carried out within WP6 Task 3 - Time, space and world-awareness approach for animation synthesis. This internal report accompanies the demonstrator of the work, which is publicly available on GitHub.
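As a hedged illustration of graph traversal in animation synthesis (not the optimisation work of the deliverable itself), the sketch below runs a shortest-path search over an invented clip-transition graph to find the cheapest sequence of clips reaching a goal state.

```python
# Illustrative sketch only: Dijkstra search over a clip-transition graph, finding
# the cheapest sequence of clips that reaches a goal state. Costs are invented.
import heapq

def cheapest_path(graph, start, goal):
    """graph: dict node -> list of (neighbour, transition_cost)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

clip_graph = {
    "idle": [("walk", 1.0), ("turn_left", 1.5)],
    "walk": [("run", 2.0), ("idle", 1.0)],
    "turn_left": [("walk", 1.0)],
    "run": [("jump", 3.0)],
}
print(cheapest_path(clip_graph, "idle", "jump"))   # -> (6.0, ['idle', 'walk', 'run', 'jump'])
```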


D6.8.pdf (Adobe PDF - 4.62Mb)

David Smyth, Amar Arslaan, Susheel Nath, Pisut Wisessing
Trinity College Dublin

D6.8 Crowd scene synthesis and metrics for quality evaluation [M30]
We provide a description of a prototype framework which can be used to rapidly generate a crowd simulation. The framework relies on semantic data fed into AI modules. We discuss how behaviour is separated from physical attributes (meshes and animation), which promotes re-use.
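The separation of behaviour from physical attributes can be sketched as follows; the class and asset names are hypothetical and only illustrate the idea of resolving meshes and animation clips independently of the behavioural logic.

```python
# Minimal sketch of separating behaviour from physical attributes: the behaviour
# driving an agent knows nothing about the mesh or clips it ends up wearing, so
# the same behaviour can be reused across different visual assets.
from dataclasses import dataclass

@dataclass
class VisualAsset:
    mesh: str
    animations: dict       # behaviour state -> animation clip name

class WanderBehaviour:
    def update(self, agent, dt):
        agent.state = "walk"            # purely logical decision, no mesh knowledge

@dataclass
class CrowdAgent:
    behaviour: object
    visual: VisualAsset
    state: str = "idle"

    def tick(self, dt):
        self.behaviour.update(self, dt)
        return self.visual.animations[self.state]   # clip resolved only at playback

pedestrian = VisualAsset("business_man.fbx", {"idle": "idle_01", "walk": "walk_loop"})
agent = CrowdAgent(WanderBehaviour(), pedestrian)
print(agent.tick(1 / 30))               # -> "walk_loop"
```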