Since version 1.9.5, Gameface supports a new CSS property: `coh-composition-id`. This property completely detaches the element from the default rendering flow and lets the user handle the drawing of the element. The feature is intended for achieving "real 3D" effects with the UI by taking the transformation data that the SDK provides and composing elements at any world position and orientation, not just on the UI plane.
For the effect to work, you must set the `transform-style: preserve-3d;` style on the element's parent. If that style is not specified, the transform will be flattened to a 2D plane and you won't get the desired effect.

A custom compositor can be set using the `cohtml::View::SetCustomSceneCompositor` API, which will then receive callbacks when the composition needs to be drawn. You can also specify arbitrary user data that will be passed back through the interface callbacks, as in the sketch below.
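A minimal sketch of what this might look like. The method signatures are assumptions inferred from the callbacks described later in this article; consult the `renoir::ISublayerCompositor` declaration in your SDK headers for the authoritative interface (it may declare additional callbacks):

```cpp
// Sketch of a custom compositor; signatures are illustrative, not authoritative.
class MyWorldCompositor : public renoir::ISublayerCompositor
{
public:
    void OnDrawSubLayer(const DrawData& data) override
    {
        // Invoked when the composition needs to be drawn; the DrawData
        // fields are described later in this article.
    }

    void OnCompositionRemoved(unsigned viewId, const char* compositionId) override
    {
        // Invoked when the composition is no longer needed.
    }
};

// Attach the compositor to a view (view is a cohtml::View*). The user data
// pointer (myUserData is a placeholder) is passed back through the callbacks.
static MyWorldCompositor s_Compositor;
view->SetCustomSceneCompositor(&s_Compositor, &myUserData);
```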
By default, all `renoir::ISublayerCompositor` callbacks are invoked on the Render Thread. In some cases it might be required to have them called on the Main Thread, for example if you want to compose the sublayer with the Material API of an engine that requires all calls to be made from the Main Thread.
You can configure which thread calls the `renoir::ISublayerCompositor` callbacks during initialization of the SDK. To do that, you need to initialize the CoHTML SDK with the following parameters (see the sketch after this list):

- `Library::Initialize`: set `LibraryParams::UseDedicatedLayoutThread` to `false`
- `System::CreateView`: set `ViewSettings::ExecuteLayoutOnDedicatedThread` to `false` and `ViewSettings::ExecuteCommandProcessingWithLayout` to `true`
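For illustration, a hedged sketch of that initialization. The parameter and field names come from the list above; `licenseKey` and `system` are placeholders, and the exact `Initialize`/`CreateView` signatures may differ between SDK versions:

```cpp
// Sketch: forcing the ISublayerCompositor callbacks onto the Main Thread.
cohtml::LibraryParams libraryParams;
libraryParams.UseDedicatedLayoutThread = false;
cohtml::Library::Initialize(licenseKey, libraryParams); // licenseKey: your license string

cohtml::ViewSettings viewSettings;
viewSettings.ExecuteLayoutOnDedicatedThread = false;
viewSettings.ExecuteCommandProcessingWithLayout = true;
cohtml::View* view = system->CreateView(viewSettings); // system: your cohtml::System
```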
The data available in the `OnDrawSubLayer` callback is the `renoir::ISublayerCompositor::DrawData` structure:

- `unsigned ViewId`: The Id of the view which is drawing the composition.
- `const char* SubLayerCompositionId`: The composition ID (the value of `coh-composition-id`).
- `void* CustomSceneMetadata`: The custom data set with `cohtml::View::SetCustomSceneCompositor`.
- `Texture2DObject Texture`: The backend texture ID to be sampled from.
- `Texture2D TextureInfo`: Texture info for the sampled image (e.g. dimensions, format, etc.).
- `float2 UVScale`: The UV scaling needed to sample from the correct location.
- `float2 UVOffset`: The UV offset needed to sample from the correct location. Basically, the sampled point should be `input.Additional.xy * UVScale.xy + UVOffset.xy`.
- `float4x4 FinalTransform`: The transformation of the quad to be drawn.
- `Rectangle Untransformed2DTargetRect`: The vertex positions of the quad to be drawn.
- `ColorMixingModes MixingMode`: The color mixing mode with the background.

Note that if you apply the `mix-blend-mode` CSS property to your composited element, the SDK will have no way of applying it. That's because a composited element is entirely under the client's control: the SDK supplies only the input data and leaves it at that. This means that you'll have to apply any custom blending in your pipeline, as it will not work automatically.

When the composition is no longer needed (e.g. the element is deleted from the HTML), you'll receive the `OnCompositionRemoved` callback with the composition ID and the ID of the view which contains the composition. You can use that callback for cleaning up any resources allocated for the specified composition, as in the sketch below.
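As an illustration, a sketch of such cleanup keyed by the composition ID, continuing the hypothetical `MyWorldCompositor` from earlier. `EngineQuadResources` and its `Draw` method are stand-ins for whatever your engine allocates per composited element:

```cpp
#include <map>
#include <string>

// Hypothetical per-composition engine resources (mesh, material, ...).
std::map<std::string, EngineQuadResources> g_CompositionResources;

void MyWorldCompositor::OnDrawSubLayer(const DrawData& data)
{
    // Lazily create the resources the first time a composition is drawn.
    auto& resources = g_CompositionResources[data.SubLayerCompositionId];
    resources.Draw(data); // hypothetical: issues the actual quad draw
}

void MyWorldCompositor::OnCompositionRemoved(unsigned viewId, const char* compositionId)
{
    // The element is gone from the DOM -- release everything tied to it.
    g_CompositionResources.erase(compositionId);
}
```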
For composited elements, the `transform` CSS property is meant to be the main influence on their position in the 3D world. However, under some circumstances, elements with `coh-composition-id` may end up outside of the viewport due to layout. In these cases, the elements won't be rendered and no call to `OnDrawSubLayer` will be made. If such an element, which is outside of the screen and has never been rendered, is removed from the DOM, the `OnCompositionRemoved` callback will still be executed. The rendering library will also produce debug logs informing the user that the textures for these elements won't be deleted, as they were never created in the first place. This is not necessarily an error, but it is something to keep in mind, as it may be unexpected that these elements end up outside of the screen.

A common scenario for using the compositor is to achieve "real 3D" UI. The feature allows you to specify 3D transformations in CSS and then use the information that the SDK passes to the compositor interface to render the element in your 3D world using the transformation specified in the CSS.
Usually, you'd create a single rectangle geometry stored on the GPU and reuse it for all composited surfaces, only changing the vertex transformation. It is important to transform the geometry exactly into the `Untransformed2DTargetRect` positions, otherwise applying the `FinalTransform` matrix may have unexpected results.
Let's take, for example, an engine that uses the center of an object as its transform origin (as opposed to HTML/CSS, where the transform origin is top-left by default). If the rectangle geometry on the GPU has bounds `MeshBounds`, then the following list of transformations has to be applied:
- `T1 = Scale(Untransformed2DTargetRect.Size / MeshBounds)`
- `T2 = Translate(Untransformed2DTargetRect.Position + Untransformed2DTargetRect.Size / 2)`
- `T3 = FinalTransform`
- `T4 = Translate(-ViewSize / 2)`
- `T5 = Scale(ParentBounds / ViewSize)`
Finally, all of these must be combined. `M1 = T1 * T2 * T3 * T4` is a simple matrix multiplication. `Result = M1 * T5`, however, must be computed as a transform multiplication in order to apply the scaling along the original axes of the element. In general, if a matrix is decomposed into a transform with components Q, S, T, where Q = quaternion, S = scale, and T = translation, the result of multiplying `M1 * T5` would be:

`Result.Q = T5.Q * M1.Q`
`Result.S = M1.S * T5.S`
`Result.T = T5.Q * (T5.S * M1.T) + T5.T`
Recompose that into a matrix and that's what the final transform is.
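Putting the recipe together, here is a hedged sketch. `Vec2`/`Mat4`/`Transform` and the helpers (`Scale`, `Translate`, `ToMat4`, `MakeScale`, `Decompose`, `Recompose`, the `Rect*` accessors) are stand-ins for your engine's math library, not SDK API:

```cpp
// Sketch: building the world transform for a composited quad. Everything
// except the DrawData fields is a placeholder for your engine's math types.
Mat4 BuildWorldTransform(const renoir::ISublayerCompositor::DrawData& data,
                         Vec2 meshBounds,    // extents of the shared GPU rectangle
                         Vec2 viewSize,      // pixel size of the cohtml::View
                         Vec2 parentBounds)  // world-space size of the UI plane
{
    const Vec2 rectPos  = RectPosition(data.Untransformed2DTargetRect); // assumed accessor
    const Vec2 rectSize = RectSize(data.Untransformed2DTargetRect);     // assumed accessor

    const Mat4 t1 = Scale(rectSize / meshBounds);
    const Mat4 t2 = Translate(rectPos + rectSize / 2.0f);
    const Mat4 t3 = ToMat4(data.FinalTransform); // convert the DrawData float4x4
    const Mat4 t4 = Translate(-viewSize / 2.0f);
    const Mat4 m1 = t1 * t2 * t3 * t4; // plain matrix multiplication

    // T5 must be applied as a transform (Q/S/T) multiplication so the
    // scaling happens along the element's original axes.
    const Transform t5 = MakeScale(parentBounds / viewSize);
    const Transform a  = Decompose(m1); // split into quaternion, scale, translation

    Transform result;
    result.Q = t5.Q * a.Q;                 // combined rotation
    result.S = a.S * t5.S;                 // component-wise scale
    result.T = t5.Q * (t5.S * a.T) + t5.T; // scale, rotate, then offset the translation
    return Recompose(result);              // back to a matrix for rendering
}
```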