* Cantinho Satkeys

Author Topic: How LLMs Understand & Generate Human Language  (Read 5 times)


mitsumi

  • Global Moderator
  • Posts: 116446
  • Karma: +0/-0
How LLMs Understand & Generate Human Language
« on: 02 October 2024, 11:03 »



Released: 9/2024
Duration: 1h 54m | MP4, 1280x720, 30 fps | AAC, 48 kHz, 2 ch | 372 MB
Genre: eLearning | Language: English


Your introduction to how generative large language models work.
Overview
Generative language models, such as ChatGPT and Microsoft Bing, are becoming a daily tool for many of us, yet they remain black boxes to most. How does ChatGPT know which word to output next? How does it understand the meaning of the text you prompt it with? Everyone, from those who have never interacted with a chatbot to those who do so regularly, can benefit from a basic understanding of how these language models function. This course answers some of your fundamental questions about how generative AI works.
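The "which word next?" question comes down to the model scoring every candidate word and turning those scores into probabilities. A toy sketch of that selection step, with made-up words and scores rather than output from any real model:

```python
import math
import random

def softmax(logits):
    """Turn raw model scores (logits) into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next words and scores after "The cat sat on the".
candidates = ["mat", "roof", "keyboard", "moon"]
logits = [4.0, 2.5, 1.0, -1.0]

probs = softmax(logits)

# Greedy decoding always picks the single most likely word...
greedy = candidates[probs.index(max(probs))]

# ...while sampling draws from the distribution, which is why the same
# prompt can produce different completions on different runs.
sampled = random.choices(candidates, weights=probs, k=1)[0]

print(greedy)  # "mat" has the highest score, so greedy decoding picks it
```

Real models do this over a vocabulary of tens of thousands of tokens, but the mechanism is the same.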
In this course, you learn about word embeddings: not only how they are used in these models, but also how they can be leveraged to parse large amounts of textual information using concepts such as vector storage and retrieval-augmented generation. It is important to understand how these models work so you know both what they are capable of and where their limitations lie.
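As a rough illustration of the retrieval side of retrieval-augmented generation, here is a minimal sketch using tiny, hand-made embedding vectors; real embeddings come from a trained model and have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A tiny "vector store": passages mapped to hand-made 3-d embeddings.
store = {
    "The cat chased the mouse.": [0.9, 0.2, 0.0],
    "A kitten played with yarn.": [0.8, 0.3, 0.1],
    "Stock prices fell sharply.": [0.0, 0.2, 0.9],
}

# Pretend this is the embedding of a user question about cats.
query_vec = [0.85, 0.2, 0.05]

# RAG step 1: rank stored passages by similarity to the query embedding.
ranked = sorted(store,
                key=lambda p: cosine_similarity(query_vec, store[p]),
                reverse=True)
best_passage = ranked[0]
# RAG step 2 (not shown): prepend best_passage to the prompt sent to the LLM.
```

Because nearby vectors represent similar meanings, the cat-related passages rank above the finance one, even though no keywords are compared directly.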
About the Instructor
Kate Harwood is part of the Research and Development team at the New York Times, researching the integration of state-of-the-art large language models into the Times' reporting and products. She also teaches introductory AI courses through The Coding School. She has an MS in computer science from Columbia University. Her primary focus is on natural language processing and ethical AI.
Learn How To
  • Understand how human language is translated into the math that models understand
  • Understand how generative language models choose what words to output
  • Understand why some prompting strategies and tasks with LLMs work better than others
  • Understand what word embeddings are and how they are used to power LLMs
  • Understand what vector storage/retrieval-augmented generation is and why it is important
  • Critically examine the results you get from large language models
Who Should Take This Course
Anyone who
  • Is interested in demystifying generative language models
  • Wants to be able to talk about these models with peers in an informed way
  • Wants to unveil some of the mystery inside LLMs' black boxes but does not have the time to dive deep into hands-on learning
  • Has a potential use case for ChatGPT or other text-based generative AI or embedding storage methods in their work

Download link

rapidgator.net:
Quote
https://rapidgator.net/file/60edeb41bd8ebbc0e6de2f196a7174e9/nhnrk.How.LLMs.Understand..Generate.Human.Language.rar.html

ddownload.com:
Quote
https://ddownload.com/unl9lj1svwdb/nhnrk.How.LLMs.Understand..Generate.Human.Language.rar