Latent layers importance #6

@tals

Hey, great analysis! :)
I've learned a lot from reading it.

Just a quick comment: the W matrix produced by the mapping network contains a single vector w_v tiled across the layers (so w[0] = w[1] = ... = w[n_layers - 1]).
The layer-wise affine transformations happen in the synthesis network.

The notebook, however, operates on this tiled w, which is why you saw the surprising behavior (all the layers are identical).
I imagine that running it again over the transformed Ws would show something quite different.
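
To illustrate the point (this is a minimal sketch, not the repo's code), here is what the tiling implies, assuming an 18-layer, 512-dimensional W; the per-layer affine transforms are stood in for by random matrices:

```python
import numpy as np

n_layers, w_dim = 18, 512  # assumed StyleGAN-like shapes
rng = np.random.default_rng(0)

# Single vector from the mapping network, tiled across layers:
w = rng.standard_normal(w_dim).astype(np.float32)
W_tiled = np.tile(w, (n_layers, 1))  # shape: (n_layers, w_dim)

# Per-layer analysis on W_tiled is degenerate -- every row is identical:
assert all(np.array_equal(W_tiled[0], W_tiled[i]) for i in range(n_layers))

# Layer-wise differences only appear after the synthesis network's
# per-layer affine transforms (random stand-in matrices A here):
A = rng.standard_normal((n_layers, w_dim, w_dim)).astype(np.float32)
styles = np.einsum("lij,lj->li", A, W_tiled)  # rows now differ per layer
assert not np.allclose(styles[0], styles[1])
```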

P.S.: This bug also affects the results of the non-linear model.
