
Author Topic: Deep Learning for NLP - Part 5  (Read 202 times)


Offline mitsumi

  • Sub-Administrator
  • ****
  • Posts: 129146
  • Karma: +0/-0
Deep Learning for NLP - Part 5
« on: 13 August 2021, 14:31 »
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz
Language: English | Size: 1.56 GB | Duration: 3h 31m

What you'll learn
Deep Learning for Natural Language Processing
Efficient Transformer Models: Star Transformers, Sparse Transformers, Reformer, Longformer, Linformer, Synthesizer
Efficient Transformer Models: ETC (Extended Transformer Construction), Big Bird, Linear Attention Transformer, Performer, Sparse Sinkhorn Transformer, Routing Transformers
Efficient Transformer benchmark: Long Range Arena
Comparison of various efficient Transformer methods
DL for NLP
Requirements
Basics of machine learning
Basic understanding of Transformer based models and word embeddings
Description
This course is part of the "Deep Learning for NLP" series. In this course, I will talk about various design schemes for efficient Transformer models. These techniques will come in very handy for academic as well as industry participants. For industry use cases, Transformer models have been shown to reach very high accuracy across many NLP tasks, but their quadratic memory and computational complexity makes them very difficult to ship. Thus, this course, which focuses on methods to make Transformers efficient, is critical for anyone who wants to ship Transformer models as part of their products.

Time and activation memory in Transformers grow quadratically with the sequence length. This is because in every layer, every attention head computes a transformed representation for every position by "paying attention" to tokens at every other position. Quadratic complexity means that, in practice, the maximum input size is rather limited, so we cannot extract semantic representations of long documents by passing them whole to a Transformer. Hence, in this module we will talk about methods to address this challenge.
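
To make the scaling concrete, here is a minimal NumPy sketch (my illustration, not course material) of the attention score matrix for a single head: it has one entry per (query, key) pair, so doubling the sequence length quadruples its memory.

[code]
# Minimal sketch: the self-attention score matrix is n x n, hence quadratic.
import numpy as np

def score_matrix_size(n, d=64):
    """Number of floats in the n x n score matrix for one attention head."""
    q = np.random.randn(n, d)
    k = np.random.randn(n, d)
    scores = q @ k.T / np.sqrt(d)  # shape (n, n): quadratic in n
    return scores.size

for n in (512, 1024, 2048, 4096):
    print(n, score_matrix_size(n))  # 262144, 1048576, 4194304, 16777216
[/code]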

The course consists of two main sections. Across them, I will cover efficient Transformer models, an efficient-Transformer benchmark, and a comparison of the various efficient Transformer methods.

In the first section, I will talk about methods like Star Transformers, Sparse Transformers, Reformer, Longformer, Linformer and Synthesizer.
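
As a taste of these techniques, here is a rough NumPy sketch of the sliding-window idea used by Longformer-style models (my own illustration, not the authors' code): each position attends only to w neighbours on each side, so the cost drops from O(n^2) to O(n*w).

[code]
# Sliding-window attention sketch: each query sees at most 2*w + 1 keys.
import numpy as np

def sliding_window_attention(q, k, v, w=2):
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)  # at most 2*w + 1 scores
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]
    return out

x = np.random.randn(8, 4)
print(sliding_window_attention(x, x, x, w=2).shape)  # (8, 4)
[/code]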

In the second section, I will talk about methods like ETC (Extended Transformer Construction), Big Bird, the Linear Attention Transformer, Performer, the Sparse Sinkhorn Transformer and Routing Transformers. Long Range Arena is a recent benchmark for evaluating models on long-sequence tasks with respect to accuracy, memory usage and inference time. We will discuss Long Range Arena in detail and finally wrap up with a philosophical categorization of the various efficient Transformer methods.
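
For contrast with the sparse-pattern approaches above, here is a simplified sketch of the kernel trick behind linear-attention methods such as Performer (it uses a simple positive feature map, not Performer's actual FAVOR+ random features): rewriting softmax(Q K^T) V as phi(Q) (phi(K)^T V) lets the (d x d) key-value summary be computed first, making the cost linear in sequence length.

[code]
# Linear-attention sketch: no n x n matrix is ever materialized.
import numpy as np

def feature_map(x):
    return np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, always positive

def linear_attention(q, k, v):
    qf, kf = feature_map(q), feature_map(k)  # shapes (n, d)
    kv = kf.T @ v                            # (d, d) key-value summary
    z = qf @ kf.sum(axis=0)                  # per-query normalizer, shape (n,)
    return (qf @ kv) / z[:, None]

x = np.random.randn(16, 8)
print(linear_attention(x, x, x).shape)  # (16, 8)
[/code]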

For each method, we will discuss the specific optimization scheme, the architecture, and the results obtained for pretraining as well as downstream tasks.

Who this course is for:
Beginners in deep learning
Python developers interested in data science concepts
Masters or PhD students who wish to learn deep learning concepts quickly
Folks wanting to ship their products across regions and languages (internationalization of their learning/predictive/generative models)

Download link:
Only visible to registered users who have replied to the topic.

Links are Interchangeable - No Password - Single Extraction