Author Topic: Cloud Hadoop: Scaling Apache Spark  (Read 406 times)

0 Members and 1 Guest are viewing this topic.

Offline mitsumi

  • Sub-Administrator
  • ****
  • Posts: 129146
  • Karma: +0/-0
Cloud Hadoop: Scaling Apache Spark
« on: April 04, 2020, 10:28 »

Cloud Hadoop: Scaling Apache Spark
.MP4, AVC, 1280x720, 30 fps | English, AAC, 2 Ch | 3h 13m | 477 MB
Instructor: Lynn Langit

Apache Hadoop and Spark make it possible to generate genuine business insights from big data. The Amazon cloud is a natural home for this powerful toolset, providing a variety of services for running large-scale data-processing workflows. Learn to implement your own Apache Hadoop and Spark workflows on AWS in this course with big data architect Lynn Langit. Explore deployment options for production-scaled jobs using virtual machines with EC2, managed Spark clusters with EMR, or containers with EKS. Learn how to configure and manage Hadoop clusters and Spark jobs with Databricks, and use Python or the programming language of your choice to import data and execute jobs. Plus, learn how to use Spark libraries for machine learning, genomics, and streaming. Each lesson helps you understand which deployment option is best for your workload.
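
A minimal sketch of what "use Python to import data and execute jobs" can look like in PySpark; the bucket path, column names, and app name below are hypothetical illustrations, not material from the course.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On EMR, Databricks, or EKS the session attaches to the cluster's Spark
# master; the same code runs on one machine with .master("local[*]").
spark = SparkSession.builder.appName("sales-demo").getOrCreate()

# Import data: read a (hypothetical) CSV into a distributed DataFrame.
sales = spark.read.csv("s3://my-bucket/sales.csv", header=True, inferSchema=True)

# Transformations are lazy: nothing executes on the cluster yet.
totals = (sales.filter(F.col("amount") > 0)
               .groupBy("region")
               .agg(F.sum("amount").alias("total")))

# Cache the result so repeated actions reuse it instead of recomputing.
totals.cache()

# Actions trigger the actual Spark job.
totals.show()
print(totals.count(), "regions")
```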

Topics include:

File systems for Hadoop and Spark
Working with Databricks
Loading data into tables
Setting up Hadoop and Spark clusters on the cloud
Running Spark jobs
Importing and exporting Python notebooks
Executing Spark jobs in Databricks using Python and Scala
Importing data into Spark clusters
Coding and executing Spark transformations and actions
Data caching
Spark libraries: Spark SQL, SparkR, Spark ML, and more
Spark streaming (see the sketch after this list)
Scaling Spark with AWS and GCP
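
As a rough illustration of the "Spark streaming" topic, here is a hedged Structured Streaming sketch; the event schema, input path, and one-minute window are assumptions made for the example, not course material.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Treat JSON files arriving in a (hypothetical) directory as an unbounded
# streaming source; file-based streams require an explicit schema.
events = (spark.readStream
               .schema("user STRING, action STRING, ts TIMESTAMP")
               .json("s3://my-bucket/events/"))

# Count each action type per one-minute event-time window.
counts = events.groupBy(F.window("ts", "1 minute"), "action").count()

# Print the full updated result table whenever new data arrives.
query = (counts.writeStream
               .outputMode("complete")
               .format("console")
               .start())
query.awaitTermination()
```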
 

Download link:
Only visible to registered users who have replied to the topic.

Links are Interchangeable - No Password - Single Extraction