Author Topic: Deep Learning for NLP - Part 5  (Read 100 times)


Offline mitsumi

  • Sub-Administrator
  • ****
  • Posts: 121842
  • Karma: +0/-0
Deep Learning for NLP - Part 5
« on: 13 August 2021, 14:31 »
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz
Language: English | Size: 1.56 GB | Duration: 3h 31m

What you'll learn
Deep Learning for Natural Language Processing
Efficient Transformer Models: Star Transformers, Sparse Transformers, Reformer, Longformer, Linformer, Synthesizer
Efficient Transformer Models: ETC (Extended Transformer Construction), Big bird, Linear attention Transformer, Performer, Sparse Sinkhorn Transformer, Routing transformers
Efficient Transformer benchmark: Long Range Arena
Comparison of various efficient Transformer methods
DL for NLP
Requirements
Basics of machine learning
Basic understanding of Transformer based models and word embeddings
Description
This course is part of the "Deep Learning for NLP" series. In this course, I will talk about various design schemes for efficient Transformer models. These techniques will come in very handy for both academic and industry participants. Transformer models have been shown to achieve very high accuracy across many NLP tasks, but their quadratic memory and computational complexity make them very difficult to ship. Thus, this course, which focuses on methods to make Transformers efficient, is critical for anyone who wants to ship Transformer models as part of their products.

Time and activation memory in Transformers grow quadratically with the sequence length. This is because, in every layer, every attention head computes a transformed representation for every position by "paying attention" to tokens at every other position. In practice, this quadratic complexity severely limits the maximum input length, so we cannot extract semantic representations for long documents simply by passing them to a Transformer. Hence, in this module we will talk about methods to address this challenge.
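To make the quadratic cost concrete, here is a minimal NumPy sketch of single-head scaled dot-product attention (my own illustration, not material from the course). The intermediate score matrix has shape (seq_len, seq_len), which is exactly the term that grows quadratically with input length.

import numpy as np

def full_self_attention(Q, K, V):
    # Q, K, V: (seq_len, d_model). The scores array below is
    # (seq_len, seq_len), so time and memory grow quadratically in seq_len.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over positions
    return weights @ V                                 # (seq_len, d_model)

seq_len, d_model = 4096, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
print(full_self_attention(Q, K, V).shape)              # (4096, 64)
print(seq_len * seq_len * 8 / 1e6, "MB for a single float64 attention matrix")

At 4096 tokens a single head already needs a 4096 x 4096 score matrix, and doubling the sequence length quadruples it.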

The course consists of two main sections. Across them, I will cover efficient Transformer models, an efficient-Transformer benchmark, and a comparison of the various efficient Transformer methods.

In the first section, I will talk about methods like Star Transformers, Sparse Transformers, Reformer, Longformer, Linformer and Synthesizer.
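As a taste of how this family cuts the cost, below is a hedged NumPy sketch of the Linformer idea (my own simplification, not the course's code): project the keys and values along the sequence axis down to a fixed length k with learned matrices E and F, so the attention map is (seq_len, k) instead of (seq_len, seq_len).

import numpy as np

def linformer_attention(Q, K, V, E, F):
    # Q, K, V: (seq_len, d_model); E, F: (k, seq_len) learned projections.
    d = Q.shape[-1]
    K_proj = E @ K                                     # (k, d_model)
    V_proj = F @ V                                     # (k, d_model)
    scores = Q @ K_proj.T / np.sqrt(d)                 # (seq_len, k), linear in seq_len
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V_proj                            # (seq_len, d_model)

seq_len, d_model, k = 4096, 64, 256
rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
E = rng.standard_normal((k, seq_len)) / np.sqrt(seq_len)
F = rng.standard_normal((k, seq_len)) / np.sqrt(seq_len)
print(linformer_attention(Q, K, V, E, F).shape)        # (4096, 64)

Longformer and Sparse Transformers instead restrict which positions each token may attend to (local windows plus a few global tokens), and Reformer buckets similar queries and keys with locality-sensitive hashing; all of them avoid materializing the full quadratic attention matrix.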

In the second section, I will talk about methods like ETC (Extended Transformer Construction), Big Bird, Linear attention Transformer, Performer, Sparse Sinkhorn Transformer and Routing Transformers. Long Range Arena is a recent benchmark for evaluating models on long-sequence tasks with respect to accuracy, memory usage and inference time. We will discuss Long Range Arena in detail and finally wrap up with a high-level categorization of the various efficient Transformer methods.
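For this second family, a common trick is kernelized (linear) attention. The sketch below is again my own illustration, assuming the elu(x) + 1 feature map used by the linear attention Transformer (Performer instead uses random features that approximate the softmax kernel); it exploits associativity so the (seq_len, seq_len) matrix is never formed.

import numpy as np

def elu_plus_one(x):
    # phi(x) = elu(x) + 1, a positive feature map
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    # Q, K, V: (seq_len, d_model). Cost is O(seq_len * d_model^2):
    # K and V are first summarized into a (d_model, d_model) matrix.
    Qf, Kf = elu_plus_one(Q), elu_plus_one(K)          # (seq_len, d_model)
    KV = Kf.T @ V                                      # (d_model, d_model)
    Z = Qf @ Kf.sum(axis=0)                            # (seq_len,) normalizer
    return (Qf @ KV) / Z[:, None]                      # (seq_len, d_model)

seq_len, d_model = 4096, 64
rng = np.random.default_rng(2)
Q, K, V = (rng.standard_normal((seq_len, d_model)) for _ in range(3))
print(linear_attention(Q, K, V).shape)                 # (4096, 64)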

For each method, we will discuss its specific optimization scheme, its architecture, and the results obtained for pretraining as well as downstream tasks.

Who this course is for:
Beginners in deep learning
Python developers interested in data science concepts
Masters or PhD students who wish to learn deep learning concepts quickly
Folks wanting to ship their products across regions and languages (internationalization of their learning/predictive/generative models)

Download link:
Only visible to registered members who have replied to the topic.

Links are Interchangeable - No Password - Single Extraction