
Andrej Tozon's blog



In the Attic



 



Microsoft Cognitive Services - Computer Vision

Wed, 19 Jul 2017 01:44:00 +0100

Similar to the Face API, the Computer Vision API deals with image recognition, though on a somewhat wider scale. The Computer Vision Cognitive Service can recognize different things in a photo and tries to describe what's going on: with a full sentence describing the whole photo, with a list of tags describing the objects and living things in it, or, similar to the Face API, by detecting faces. It can even do basic text recognition (printed or handwritten).

Create a Computer Vision service resource on Azure

To start experimenting with the Computer Vision API, you first have to add the service on the Azure dashboard. The steps are almost identical to what I've described in my Face API blog post, so I'm not going to repeat them all; the only thing worth a mention is the pricing. There are currently two tiers: the free tier (F0) allows 20 API calls per minute and 5,000 calls per month, while the standard tier (S1) offers up to 10 calls per second. Check the official pricing page here. Hit the Create button and wait for the service to be created and deployed (it should take under a minute). You get a new pair of keys to access the service; the keys are, again, available through the Resource Management -> Keys section.

Trying it out

To try out the service yourself, you can either use the official documentation page with its ready-to-test API console, or you can download the C# SDK from NuGet (source code, with samples for UWP, Android and iOS (Swift), is also available). Also, the source code used in this article is available from my Cognitive Services playground app repository. For this blog post, I'll be using the aforementioned C# SDK.
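Once you have a key, constructing the SDK client is a one-liner. A minimal sketch, assuming the Microsoft.ProjectOxford.Vision NuGet package; the key string is a placeholder:

```csharp
using Microsoft.ProjectOxford.Vision;

// Use one of the two keys from the Resource Management -> Keys section.
var visionClient = new VisionServiceClient("your-subscription-key");
```

All the calls shown below go through this single client instance.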
When using the SDK, the most universal Computer Vision API call is AnalyzeImageAsync:

var result = await visionClient.AnalyzeImageAsync(stream,
    new[] { VisualFeature.Description, VisualFeature.Categories, VisualFeature.Faces, VisualFeature.Tags });
var detectedFaces = result?.Faces;
var tags = result?.Tags;
var description = result?.Description?.Captions?.FirstOrDefault()?.Text;
var categories = result?.Categories;

Depending on the visualFeatures parameter, AnalyzeImageAsync can return one or more types of information (some of them are also available separately by calling other methods):

- Description: one or more sentences describing the content of the image in plain English,
- Faces: a list of detected faces; unlike the Face API, the Vision API returns age and gender for each face,
- Tags: a list of tags related to the image content,
- ImageType: whether the image is a clip art or a line drawing,
- Color: the dominant colors and whether it's a black and white image,
- Adult: whether the image contains adult content (with confidence scores),
- Categories: one or more categories from a set of 86 two-level concepts, according to the service's category taxonomy.

The details parameter lets you specify domain-specific models you want to test against. Currently, two models are supported: landmarks and celebrities. You can call the ListModelsAsync method to get all supported models, along with the categories they belong to.

Another fun feature of the Vision API is recognizing text in an image, either printed or handwritten:

var result = await visionClient.RecognizeTextAsync(stream);
Region = result?.Regions?.FirstOrDefault();
Words = Region?.Lines?.FirstOrDefault()?.Words;

The RecognizeTextAsync method returns a list of regions where printed text was detected, along with the general text angle and orientation of the image. Each region can contain multiple lines of (presumably related) text, and each line object contains a list of detected words.
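For the domain-specific models, the SDK exposes a dedicated call alongside ListModelsAsync. A hedged sketch; the method name AnalyzeImageInDomainAsync is taken from the SDK, while stream and the model choice are placeholders:

```csharp
// List the supported domain-specific models (currently landmarks and celebrities).
var models = await visionClient.ListModelsAsync();

// Pick the celebrities model and run the image against it.
var celebrities = models.Models.First(m => m.Name == "celebrities");
var domainResult = await visionClient.AnalyzeImageInDomainAsync(stream, celebrities);

// The Result property carries a loosely-typed JSON payload specific to the chosen model.
Console.WriteLine(domainResult.Result);
```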
Region, Line and Word objects also return coordinates pointing to the region within the image where that piece of information was detected. Also worth noting: RecognizeTextAsync takes additional parameters: language – the language to detect in the image (the default is “unk” – unknown), and detectOrientation – whether to detect the image orientation based on the orientation of the detected text (the default is true). Source code and sample app for this blog post is a[...]
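The snippet above only grabs the first region's first line; to reconstruct all recognized text you can walk the full Region -> Line -> Word hierarchy. A sketch, assuming the same visionClient and stream as before:

```csharp
var ocr = await visionClient.RecognizeTextAsync(stream);

foreach (var region in ocr.Regions)
{
    foreach (var line in region.Lines)
    {
        // Each line holds a list of words; join them back into readable text.
        Console.WriteLine(string.Join(" ", line.Words.Select(w => w.Text)));
    }
}
```

Printing one line per Line object roughly preserves the layout the service detected.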



Microsoft Cognitive Services - playground app

Fri, 14 Jul 2017 23:17:00 +0100

I've just published my Cognitive Services sample app to GitHub. Currently it's limited to the Face API service, but I'll work on expanding it to cover other services as well.

The Microsoft Cognitive Services Playground app aims to support:

- managing person groups,
- managing persons,
- associating faces with persons,
- training person groups,
- detecting faces on photos,
- identifying faces.

Basic tutorial

1. Download/clone the solution, open it in Visual Studio 2017 and run it.
2. Enter the key in the Face API Key text box. If you don't already have a Face API access key, read this blog post on how to get one.
3. Click the Apply button.
4. If the key is correct, you will be asked to persist the key for future use. Click Yes if you want it stored in the application's local data folder – it will be read back every time the application starts (note: the key is stored in plain text, not encrypted).
5. Click the Add group button.
6. Enter the group name and click Add.
7. Select the newly created group and start adding persons.
8. Click the Add person button.
9. Enter the person's name and click Add. The person will be added to the selected group.
10. Repeat steps 8 and 9 to add more persons to the same group.
11. Click the Open image button and pick an image with one or more faces in it.
12. The photo should be displayed, and if any faces were detected, they should appear framed in rectangles. If not, try a different photo.
13. Select a person from the list and click on the rectangle around the face that belongs to that person. A context menu should appear.
14. Select the Add this face to selected person option. The face is now associated with the selected person.
15. Repeat steps 13 and 14 for different photos and different persons. Try associating multiple faces with every single person.
16. Click the Train group button. The training status should appear. Wait for the status to change to Succeeded. Your group is trained!
17. Open a new photo, preferably one you haven't used for training before, but featuring a face that belongs to one of the persons in the group. Ensure the face is detected (a rectangle is drawn around it).
18. Click on the rectangle and select Identify this face.
19. With any luck (and the power of AI), the rectangle will get the proper name tag. A previously unknown face has just had a name attached to it!
20. Enjoy experimenting with different photos and different faces ;)
21. Revisit my older blog posts on the subject (here and here). [...]
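The UI steps above map fairly directly onto Face API SDK calls. A condensed sketch of the same flow; the method names come from the Microsoft.ProjectOxford.Face NuGet package, while the group id, names, key and file paths are placeholders:

```csharp
using Microsoft.ProjectOxford.Face;

var faceClient = new FaceServiceClient("your-face-api-key");

// Steps 5-9: create a group and add a person to it.
await faceClient.CreatePersonGroupAsync("friends", "Friends");
var person = await faceClient.CreatePersonAsync("friends", "Andrej");

// Steps 11-14: register one of the person's faces from a photo.
using (var stream = File.OpenRead("photo1.jpg"))
{
    await faceClient.AddPersonFaceAsync("friends", person.PersonId, stream);
}

// Step 16: train the group (poll GetPersonGroupTrainingStatusAsync for Succeeded).
await faceClient.TrainPersonGroupAsync("friends");

// Steps 17-19: detect faces on a new photo and identify them against the group.
using (var stream = File.OpenRead("photo2.jpg"))
{
    var faces = await faceClient.DetectAsync(stream);
    var results = await faceClient.IdentifyAsync("friends", faces.Select(f => f.FaceId).ToArray());
}
```

Each identify result carries candidate person ids with confidence scores, which is what the app uses to attach the name tag to the rectangle.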