Analyze Images

 







Introduction

Computer vision is the field of artificial intelligence (AI) that deals with visual perception: software that interprets visual input, typically from images or video feeds. Azure AI Vision is a set of services that supports common computer vision scenarios.

Provision an Azure AI Vision Resource

The Azure AI Vision service is designed to help you extract information from images. It provides functionality that you can use for:

Description and tag generation - determining an appropriate caption for an image, and identifying relevant "tags" that can be used as keywords to indicate its subject.

Object detection - detecting the presence and location of specific objects within the image.

People detection - detecting the presence, location, and features of people in the image.

Image metadata, color, and type analysis - determining the format and size of an image, its dominant color palette, and whether it contains clip art.

Category identification - identifying an appropriate categorization for the image, and if it contains any known landmarks.

Background removal - detecting the background in an image and outputting the image either with a transparent background or as a greyscale alpha matte image.

Moderation rating - determining whether the image includes any adult or violent content.

Optical character recognition - reading text in the image.

Smart thumbnail generation - identifying the main region of interest in the image to create a smaller "thumbnail" version.

You can provision Azure AI Vision as a single-service resource, or you can use the Azure AI Vision API in a multi-service Azure AI Services resource.
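For example, a single-service resource can be provisioned with the Azure CLI. This is a sketch only; the resource group, resource name, and location below are placeholder values you would replace with your own:

```shell
# Provision a single-service Azure AI Vision (Computer Vision) resource.
# "my-rg", "my-vision-resource", and "eastus" are placeholder values.
az cognitiveservices account create \
  --name my-vision-resource \
  --resource-group my-rg \
  --kind ComputerVision \
  --sku F0 \
  --location eastus \
  --yes
```

Use `--kind CognitiveServices` instead if you want a multi-service Azure AI Services resource that includes the Azure AI Vision API.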

Analyze an Image

To analyze an image, use the Analyze Image REST method or the equivalent method in the SDK for your preferred programming language. You specify the visual features you want included in the analysis (and, if you request categories, whether to include details of celebrities or landmarks). The method returns a JSON document containing the requested information.
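To make the request shape concrete, the sketch below builds the URL, headers, and body for an Analyze Image (v3.2) REST call using only the standard library. The endpoint, key, and image URL are placeholder assumptions, and no request is actually sent:

```python
from urllib.parse import urlencode

# Placeholder endpoint and key -- replace with your resource's values.
ENDPOINT = "https://my-vision-resource.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def build_analyze_request(visual_features, details=None):
    """Build the URL, headers, and body for an Analyze Image (v3.2) call.

    visual_features: a list such as ["Description", "Tags", "Objects"].
    details: an optional list such as ["Celebrities", "Landmarks"],
             meaningful when "Categories" is among the requested features.
    """
    params = {"visualFeatures": ",".join(visual_features)}
    if details:
        params["details"] = ",".join(details)
    url = f"{ENDPOINT}/vision/v3.2/analyze?{urlencode(params)}"
    headers = {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    }
    body = {"url": "https://example.com/image.jpg"}  # image to analyze, by URL
    return url, headers, body

url, headers, body = build_analyze_request(["Categories", "Description"],
                                           details=["Landmarks"])
print(url)
```

In a real application you would POST `body` (as JSON) to `url` with those headers, using `requests` or the HTTP client of your choice.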

The visual features you specify determine what information the response contains. Most features return a confidence score (for example, tags and captions), and features that locate something in the image (such as objects or people) also return a bounding box.
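The sketch below parses a response of this shape. The JSON here is illustrative sample data modeled on an Analyze Image (v3.2) result, with made-up values, not output from a real call:

```python
import json

# Illustrative sample shaped like an Analyze Image (v3.2) response;
# the caption, tags, and rectangle values are made up for this example.
SAMPLE_RESPONSE = json.loads("""
{
  "description": {
    "captions": [{"text": "a person walking a dog", "confidence": 0.92}]
  },
  "tags": [
    {"name": "outdoor", "confidence": 0.99},
    {"name": "dog", "confidence": 0.95}
  ],
  "objects": [
    {"rectangle": {"x": 10, "y": 20, "w": 150, "h": 300},
     "object": "person", "confidence": 0.81}
  ]
}
""")

def summarize(result):
    """Extract the top caption, tag names, and object bounding boxes."""
    caption = result["description"]["captions"][0]
    tags = [t["name"] for t in result["tags"]]
    boxes = [(o["object"], o["rectangle"]) for o in result["objects"]]
    return caption, tags, boxes

caption, tags, boxes = summarize(SAMPLE_RESPONSE)
print(f'Caption: "{caption["text"]}" ({caption["confidence"]:.0%})')
print("Tags:", ", ".join(tags))
```

Note how the caption and tags each carry a confidence score, while the detected object also carries a rectangle giving its location in the image.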

Conclusion

In this post, we covered the basics of image analysis with Azure AI Vision: provisioning a resource and using the Analyze Image method to extract captions, tags, objects, and other visual features from an image.




























