
Author Topic: Computer Vision In Python! Face Detection & Image Processing (10/2020)  (Read 78 times)


Posted by mitsumi (Global Moderator)

Computer Vision In Python! Face Detection & Image Processing
Video: .mp4 (1280x720, 30 fps) | Audio: AAC, 44100 Hz, 2ch | Size: 4.62 GB
Genre: eLearning Video | Duration: 59 lectures (10 hours, 42 mins) | Language: English
Learn Computer Vision With OpenCV In Python! Master Python By Implementing Face Recognition & Image Processing In Python

What you'll learn

    Use OpenCV to work with image files
    Understand the fundamentals of computer vision and image processing
    Use Python and OpenCV to draw shapes on images and videos
    Get started with image manipulation in OpenCV, including smoothing, blurring, thresholding, and morphological operations
    Learn OpenCV image manipulation fundamentals using Python, with a Python basics refresher session
    Open and stream video with Python and OpenCV
    Detect objects, including corner, edge, and grid detection, with OpenCV and Python
    Create face detection software using Haar cascade classifiers (see the sketch after this list)
    Build a toolbox of the most powerful computer vision models
    Understand the theory behind computer vision
    Create powerful computer vision applications
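
As a taste of the face-detection material, here is a minimal sketch using OpenCV's bundled Haar cascade. It is an illustration under stated assumptions, not course material: the input filename and the detection parameters are placeholders.

    # Minimal face-detection sketch with OpenCV's bundled Haar cascade.
    # Assumes opencv-python is installed and "photo.jpg" (placeholder) exists.
    import cv2

    # Load the frontal-face Haar cascade shipped with OpenCV
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    img = cv2.imread("photo.jpg")                    # BGR image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # Haar cascades work on grayscale

    # Detect faces; scaleFactor and minNeighbors are common starting values
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Draw a green rectangle around each detected face and save the result
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("faces.jpg", img)
    print(f"Detected {len(faces)} face(s)")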

Requirements

    Basic Python programming knowledge

Description

Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems.

Distinctions

The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques used and developed in these fields are similar, which can be interpreted as meaning there is really only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences and companies to present or market themselves as belonging specifically to one of these fields, and hence various characterizations which distinguish each of the fields from the others have been presented.

Computer graphics produces image data from 3D models; computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, e.g., as explored in augmented reality.

The following characterizations appear relevant but should not be taken as universally accepted:

    Image processing and image analysis tend to focus on 2D images: how to transform one image into another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither requires assumptions about nor produces interpretations of the image content. (A minimal OpenCV sketch of such operations appears after this list.)

    Computer vision includes 3D analysis from 2D images. This analyzes the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image.

    Machine vision is the process of applying a range of technologies & methods to provide imaging-based automatic inspection, process control and robot guidance in industrial applications. Machine vision tends to focus on applications, mainly in manufacturing, e.g., vision-based robots and systems for vision-based inspection, measurement, or picking (such as bin picking). This implies that image sensor technologies and control theory often are integrated with the processing of image data to control a robot and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that the external conditions such as lighting can be and are often more controlled in machine vision than they are in general computer vision, which can enable the use of different algorithms.

    There is also a field called imaging which primarily focuses on the process of producing images, but sometimes also deals with processing and analysis of images. For example, medical imaging includes substantial work on the analysis of image data in medical applications.

    Finally, pattern recognition is a field which uses various methods to extract information from signals in general, mainly based on statistical approaches and artificial neural networks. A significant part of this field is devoted to applying these methods to image data.
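
To make the first characterization above concrete, here is a minimal sketch of typical 2D image-processing operations in OpenCV. Filenames and parameter values are placeholders, not taken from the course.

    # Illustrative 2D operations: pixel-wise, local, and geometrical.
    import cv2

    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # "input.jpg" is a placeholder

    # Pixel-wise operation: contrast enhancement via histogram equalization
    equalized = cv2.equalizeHist(img)

    # Local operations: noise removal followed by edge extraction
    denoised = cv2.GaussianBlur(equalized, (5, 5), 0)
    edges = cv2.Canny(denoised, threshold1=100, threshold2=200)

    # Geometrical transformation: rotate the image 30 degrees about its center
    h, w = img.shape
    rotation = cv2.getRotationMatrix2D(center=(w / 2, h / 2), angle=30, scale=1.0)
    rotated = cv2.warpAffine(img, rotation, (w, h))

    cv2.imwrite("edges.jpg", edges)
    cv2.imwrite("rotated.jpg", rotated)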

Applications

Applications range from tasks such as industrial machine vision systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis which is used in many fields. Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications. In many computer-vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for:

    Automatic inspection, e.g., in manufacturing applications;

    Assisting humans in identification tasks, e.g., a species identification system;

    Controlling processes, e.g., an industrial robot;

    Detecting events, e.g., for visual surveillance or people counting (for instance, in the restaurant industry);

    Interaction, e.g., as the input to a device for computer-human interaction;

    Modeling objects or environments, e.g., medical image analysis or topographical modeling;

    Navigation, e.g., by an autonomous vehicle or mobile robot; and

    Organizing information, e.g., for indexing databases of images and image sequences.

Medicine

One of the most prominent application fields is medical computer vision, or medical image processing, characterized by the extraction of information from image data to diagnose a patient. Examples include the detection of tumors, arteriosclerosis or other malignant changes, and measurements of organ dimensions, blood flow, etc. It also supports medical research by providing new information, e.g., about the structure of the brain or the quality of medical treatments. Applications of computer vision in the medical area also include the enhancement of images interpreted by humans (ultrasonic or X-ray images, for example) to reduce the influence of noise.

Machine Vision

A second application area of computer vision is in industry, sometimes called machine vision, where information is extracted for the purpose of supporting a manufacturing process. One example is quality control, where parts or final products are automatically inspected in order to find defects. Another example is measurement of the position and orientation of parts to be picked up by a robot arm. Machine vision is also heavily used in agricultural processes to remove undesirable foodstuffs from bulk material, a process called optical sorting.

Military

Military applications are probably one of the largest areas for computer vision. The obvious examples are detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene which can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability.

Autonomous vehicles

One of the newer application areas is autonomous vehicles, which include submersibles, land-based vehicles (small robots with wheels, cars or trucks), aerial vehicles, and unmanned aerial vehicles (UAVs). The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, e.g., for knowing where they are, for producing a map of the environment (SLAM), and for detecting obstacles. Computer vision can also be used for detecting certain task-specific events, e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for autonomous driving of cars, but this technology has still not reached a level where it can be put on the market. There are ample examples of military autonomous vehicles, ranging from advanced missiles to UAVs for reconnaissance missions or missile guidance. Space exploration is already being carried out with autonomous vehicles using computer vision, e.g., NASA's Curiosity and CNSA's Yutu-2 rover.

Tactile Feedback

Materials such as rubber and silicone are being used to create sensors that allow for applications such as detecting micro undulations and calibrating robotic hands. Rubber can be used to create a mold that is placed over a finger; inside this mold are multiple strain gauges. The finger mold and sensors can then be placed on top of a small sheet of rubber containing an array of rubber pins. A user can then wear the finger mold and trace a surface. A computer reads the data from the strain gauges and measures whether one or more of the pins is being pushed upward. If a pin is being pushed upward, the computer recognizes this as an imperfection in the surface. This sort of technology is useful for obtaining accurate data about the imperfections of a very large surface. Another variation of this finger-mold sensor is a sensor that contains a camera suspended in silicone. The silicone forms a dome around the outside of the camera, and embedded in the silicone are equally spaced point markers. These cameras can then be placed on devices such as robotic hands in order to allow the computer to receive highly accurate tactile data.

Other application areas include:

    Support of visual effects creation for cinema and broadcast, e.g., camera tracking (matchmoving).

    Surveillance.

    Driver drowsiness detection.

    Tracking and counting organisms in the biological sciences.

[Reference: Wikipedia]

Who this course is for:

    Anyone interested in Data Science, Computer Vision or Artificial Intelligence
    Beginners who want to start with Python computer vision using OpenCV
    Anyone interested in building computer vision applications
    Anyone who wants to level up in programming this year
    Anyone who wants to advance their career in Python programming

Download link:
Only visible to registered users with a reply to the topic.

Links are Interchangeable - No Password - Single Extraction