{
    "byline": null,
    "dir": null,
    "excerpt": "Recent contextual word embeddings (e.g. ELMo) have been shown to be much better than \u201cstatic\u201d embeddings (where there\u2019s a one-to-one mapping from token to representation). This paper is exciting because the authors were able to create a multilingual embedding space using contextual word embeddings.",
    "length": 4266,
    "siteName": null,
    "title": "Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing"
}