Topic author: PySpark for Data Science - Intermediate  (Read 170 times)


Offline mitsumi

  • Sub-Administrator
  • ****
  • Messages: 129146
  • Karma: +0/-0
PySpark for Data Science - Intermediate
« on: 21 July 2021, 01:54 »
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English + .srt | Duration: 16 lectures (2 hours, 9 mins) | Size: 1.19 GB
You will learn how to use Spark with Python (PySpark) to perform data analysis.

What you'll learn

This module of the PySpark tutorials explains intermediate concepts: the use of SparkSession in later Spark versions and of SparkConf and SparkContext in earlier ones. It will also help you understand how the Spark environment is set up, covers broadcast variables and accumulators, and introduces other optimization techniques such as parallelism, Tungsten, and the Catalyst optimizer.
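As a quick illustration of the points above, here is a minimal sketch (not taken from the course materials) of both entry points plus a broadcast variable and an accumulator; the app name, master URL, and sample data are illustrative assumptions.

Code:
from pyspark.sql import SparkSession
from pyspark import SparkConf, SparkContext

# Spark 2.x and later: SparkSession is the unified entry point.
spark = SparkSession.builder.appName("IntermediateDemo").master("local[*]").getOrCreate()
sc = spark.sparkContext

# Spark 1.x style, shown for comparison (do not create a second context
# in the same process):
# conf = SparkConf().setAppName("IntermediateDemo").setMaster("local[*]")
# sc = SparkContext(conf=conf)

# Broadcast: ship a read-only lookup table to every executor once.
lookup = sc.broadcast({"a": 1, "b": 2})

# Accumulator: executors add to it, only the driver reads the total.
bad_records = sc.accumulator(0)

def score(key):
    if key not in lookup.value:
        bad_records.add(1)
        return 0
    return lookup.value[key]

total = sc.parallelize(["a", "b", "c", "a"]).map(score).sum()
print(total, bad_records.value)  # 4 and 1 for this sample data
spark.stop()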

Requirements

The only real prerequisite for these PySpark tutorials is solid hands-on experience in a language such as Java, Python, or Scala. A development background and a sound grasp of big data fundamentals and the Hadoop ecosystem also help, since the Spark API builds on the big data Hadoop stack. Familiarity with real-time streaming, with how big data pipelines work, and with analytics and the evaluation of machine learning model predictions is useful as well.

Description

This module of the PySpark tutorials explains intermediate concepts: the use of SparkSession in later Spark versions and of SparkConf and SparkContext in earlier ones. It will also help you understand how the Spark-related environment is set up, covers broadcast variables and accumulators, and introduces other optimization techniques such as parallelism, Tungsten, and the Catalyst optimizer. You will also be taught about compression codecs such as Snappy and zlib. We will also cover big data ecosystem concepts such as HDFS and block storage, the main Spark components (Spark Core, MLlib, GraphX, SparkR, Spark Streaming, Spark SQL, etc.), and the basics of the Python language as used with Apache Spark, which together make up PySpark. We will learn the following in this course (a short code sketch of this workflow follows the list):

Regression
  • Linear Regression
  • Output Column
  • Test Data
  • Prediction
  • Generalized Linear Regression
  • Forest Regression

Classification
  • Binomial Logistic Regression
  • Multinomial Logistic Regression
  • Decision Tree
  • Random Forest

Clustering
  • K-Means Model
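As promised above, here is a minimal, hedged sketch of the workflow behind these topics using pyspark.ml; the column names, sample values, and the /tmp/predictions output path are illustrative assumptions, and classification models such as LogisticRegression follow the same fit/transform pattern.

Code:
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("MLlibSketch").getOrCreate()

# Toy training data: two feature columns and a label column.
train = spark.createDataFrame(
    [(1.0, 1.5, 3.0), (2.0, 3.7, 6.1), (3.0, 5.2, 8.9), (4.0, 8.4, 12.2)],
    ["x1", "x2", "label"],
)

# Assemble the raw columns into the single vector column MLlib expects.
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train_data = assembler.transform(train)

# Regression: fit on the training data, then predict on unseen test data;
# the model adds an output column named "prediction".
lr_model = LinearRegression(featuresCol="features", labelCol="label").fit(train_data)
test_data = assembler.transform(spark.createDataFrame([(5.0, 9.8)], ["x1", "x2"]))
predictions = lr_model.transform(test_data)
predictions.select("features", "prediction").show()

# Write the predictions with Snappy compression (one of the codecs mentioned above).
predictions.write.mode("overwrite").parquet("/tmp/predictions", compression="snappy")

# Clustering: K-Means needs only the features column, no label.
kmeans_model = KMeans(k=2, seed=1, featuresCol="features").fit(train_data)
kmeans_model.transform(train_data).select("features", "prediction").show()

spark.stop()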

PySpark is a big data solution for real-time streaming with the Python programming language, and it provides an efficient way to carry out all kinds of calculations and computations. It is also highly interoperable: PySpark can be managed alongside the other technologies and components of an entire data pipeline, whereas earlier big data and Hadoop techniques were limited to batch processing.
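As a hedged illustration of the streaming claim (assuming a local socket source on port 9999, which is purely illustrative), the same DataFrame API used for batch work applies to an unbounded stream:

Code:
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("StreamingSketch").getOrCreate()

# Read an unbounded stream of text lines; each micro-batch is processed in memory.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Ordinary DataFrame operations apply unchanged to streaming data.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Print the running word counts to the console as data arrives.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()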

PySpark is an open-source project whose user-facing code is written in Python and is used mainly for data-intensive and machine learning workloads. It has become popular in industry, and PySpark is increasingly used alongside, or instead of, Spark components written in Java or Scala. One point worth noting is that PySpark exposes DataFrames rather than typed Datasets, since the typed Dataset API is available only in Scala and Java. Practitioners need tools that are reliable and fast when streaming real-time data. Earlier tools such as MapReduce applied the map and reduce concepts: mappers emit key-value pairs, the pairs are shuffled and sorted, and reducers combine them into a single result, which provided a way of doing parallel computation. PySpark instead relies on in-memory techniques that avoid writing intermediate results to disk, providing a general-purpose and faster computation engine.
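Below is a small sketch (not from the course) of the map/shuffle/reduce pattern just described, executed in memory by Spark; the sample lines are made up, and cache() keeps the counts in memory for reuse across actions instead of writing them to disk.

Code:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCountSketch").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize(["big data with spark", "spark runs in memory"])
counts = (lines.flatMap(lambda line: line.split())   # "map" phase
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b)      # shuffle + "reduce"
               .cache())                             # keep the result in memory

print(counts.collect())   # first action computes and caches the counts
print(counts.count())     # second action reuses the cached result

spark.stop()
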
Who this course is for:

The target audience for these PySpark tutorials includes developers, analysts, software programmers, consultants, data engineers, data scientists, data analysts, software engineers, big data programmers, and Hadoop developers. It also includes students and entrepreneurs who want to build something of their own in the big data space.
Screenshots


Download link:
Only visible to registered users who have replied to the topic.

Links are Interchangeable - No Password - Single Extraction