Do you know Azure Cognitive Services?
Azure Cognitive Services is a set of APIs (Vision, Speech, Language, Knowledge, Search) that make your applications more intelligent.
Today we will create a console application that uses the Vision API named Computer Vision to analyze a karting image. As a base, we will create a project with the .NET Core Console App template in Visual Studio.
Before starting, please check the following documentation to obtain a Computer Vision API subscription key:
Creation
The first thing you will need to do is add the following NuGet package to your project: Microsoft.ProjectOxford.Vision.DotNetCore
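If you prefer the command line to the NuGet Package Manager, the same package can also be added with the dotnet CLI from the project folder (shown here only as an alternative; the latest available version will be restored):

dotnet add package Microsoft.ProjectOxford.Vision.DotNetCore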
Let's now declare the AnalyzeImage method. This method will call the AnalyzeImage method of the Computer Vision API:
...
using Microsoft.ProjectOxford.Vision;
using System.Threading.Tasks;
...

namespace VisionRacing.AnalyzeKartingImage
{
    class Program
    {
        ...

        private static async Task AnalyzeImage(string apiKey, string apiUrl, string imageUrl)
        {
            // Create the Computer Vision client with the subscription key and API URL.
            var vsc = new VisionServiceClient(apiKey, apiUrl);

            // We want the image description and its tags.
            var visualFeatures = new VisualFeature[] { VisualFeature.Description, VisualFeature.Tags };

            var analysisResult = await vsc.AnalyzeImageAsync(imageUrl, visualFeatures);

            ShowAnalysisResult(analysisResult);
        }

        ...
    }
}
We provide three parameters: the Computer Vision API subscription key, the API URL, and the URL of the image to analyze.
As you can see, the code to call the AnalyzeImage method is pretty simple. Once our VisionServiceClient is created, we call AnalyzeImageAsync, providing the image URL and the VisualFeatures we want analyzed. In our case we use Description to get a description of the image and Tags to get the tags associated with the image. Other visual features available are ImageType, Color, Faces, and Categories, as the sketch below illustrates.
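As a quick illustration, requesting those other visual features works the same way; you only extend the VisualFeature array passed to AnalyzeImageAsync. The following is a sketch and not part of the original sample: the AnalyzeAllFeatures name is just an example, and AnalysisResult comes from Microsoft.ProjectOxford.Vision.Contract.

// Sketch only: requesting every visual feature mentioned above in a single call.
// Assumes the same apiKey, apiUrl and imageUrl values as the AnalyzeImage method.
private static async Task<AnalysisResult> AnalyzeAllFeatures(string apiKey, string apiUrl, string imageUrl)
{
    var vsc = new VisionServiceClient(apiKey, apiUrl);

    var allFeatures = new VisualFeature[]
    {
        VisualFeature.Description, VisualFeature.Tags, VisualFeature.Categories,
        VisualFeature.Color, VisualFeature.Faces, VisualFeature.ImageType
    };

    return await vsc.AnalyzeImageAsync(imageUrl, allFeatures);
}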
Once the image has been analyzed by the API, we display the analysis result via the following method:
...
using Microsoft.ProjectOxford.Vision.Contract;
using System;
using System.Linq;
...

namespace VisionRacing.AnalyzeKartingImage
{
    class Program
    {
        ...

        private static void ShowAnalysisResult(AnalysisResult result)
        {
            Console.ForegroundColor = ConsoleColor.White;
            Log("Image analysis result");
            Console.WriteLine();

            if (result.Description != null)
            {
                Console.ForegroundColor = ConsoleColor.Green;
                Log("1. Image description");
                Console.ForegroundColor = ConsoleColor.Gray;

                if (result.Description.Captions.Any())
                {
                    foreach (var caption in result.Description.Captions)
                    {
                        Log($" Caption: {caption.Text} (Confidence {caption.Confidence.ToString("P0")})");
                    }
                }
                else
                {
                    Log(" No image caption");
                }

                Console.WriteLine();

                if (result.Description.Tags.Any())
                {
                    Log($" Tags: {string.Join(", ", result.Description.Tags)}");
                    Console.WriteLine();
                }
            }

            if (result.Tags != null && result.Tags.Any())
            {
                Console.ForegroundColor = ConsoleColor.Green;
                Log("2. Image tags");

                foreach (var tag in result.Tags)
                {
                    Console.ForegroundColor = ConsoleColor.Gray;
                    Log($" Name: {tag.Name} (Confidence {tag.Confidence.ToString("P0")}{(string.IsNullOrEmpty(tag.Hint) ? string.Empty : $" | Hint: {tag.Hint}")})");
                }

                Console.WriteLine();
            }
        }

        private static void Log(string message)
        {
            Console.WriteLine(message);
        }
    }
}
Here we go through the results to properly display the Description of the image and the Tags.
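If you only need the single most likely caption rather than the full listing, a small helper like the one below can pick the caption with the highest confidence. This is a sketch rather than part of the original sample (the GetBestCaption name is hypothetical); it uses the same AnalysisResult type and relies on System.Linq, which is already imported in this file.

// Sketch: return the text of the most confident caption, or null if the image has none.
private static string GetBestCaption(AnalysisResult result)
{
    var bestCaption = result?.Description?.Captions?
        .OrderByDescending(c => c.Confidence)
        .FirstOrDefault();

    return bestCaption?.Text;
}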
And finally, here is the main entry point of the console application:
...
using System;
...

namespace VisionRacing.AnalyzeKartingImage
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                var apiKey = "Your Cognitive Services Vision API Key.";
                var apiUrl = "Cognitive Services Vision API URL.";
                var kartingImageUrl = "https://github.com/vivienchevallier/Article-AzureCognitive.Vision-Racing/raw/master/Images/Karting/Karting%20(9).jpg";

                AnalyzeImage(apiKey, apiUrl, kartingImageUrl).Wait();
            }
            catch (Exception ex)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(ex.Message);
                Console.WriteLine();
            }

            Console.ForegroundColor = ConsoleColor.White;
            Console.WriteLine();
            Console.WriteLine("Press any key to exit...");
            Console.ReadLine();
        }

        ...
    }
}
You will need to provide your Computer Vision API subscription key (apiKey) and your Computer Vision API URL (apiUrl).
As you can see, we call the AnalyzeImage method created previously.
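As a side note, since the project targets .NET Core 2.0, you could also await AnalyzeImage directly instead of blocking with .Wait(), provided the project's language version is set to C# 7.1 or later. This is only an alternative sketch, not how the sample itself is written:

// Alternative sketch: an async Main (C# 7.1+) awaiting AnalyzeImage directly.
// Requires "using System.Threading.Tasks;" in this file.
static async Task Main(string[] args)
{
    var apiKey = "Your Cognitive Services Vision API Key.";
    var apiUrl = "Cognitive Services Vision API URL.";
    var kartingImageUrl = "https://github.com/vivienchevallier/Article-AzureCognitive.Vision-Racing/raw/master/Images/Karting/Karting%20(9).jpg";

    await AnalyzeImage(apiKey, apiUrl, kartingImageUrl);
}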
Example of use
The console application is now ready to run, let's execute it:
Image analysis result
1. Image description
Caption: a person riding a motorcycle down the road (Confidence 71 %)
Tags: grass, road, outdoor, riding, racing, small, red, sitting, motorcycle, wearing, man, driving, track, runway, street, traffic, white, plane, people
2. Image tags
Name: grass (Confidence 100 %)
Name: road (Confidence 100 %)
Name: outdoor (Confidence 100 %)
Name: racing (Confidence 82 %)
Name: turn (Confidence 22 %)
Press any key to exit...
If everything goes well, you should see the same kind of logs as above.
To go further
Now here is the karting image we analyzed with the Computer Vision API. If we check the image tags, they are pretty accurate. However, the caption "a person riding a motorcycle down the road" is not accurate; I know, because it is me racing my kart!
So, what's the problem here? The analysis result is not what we expected, as I'm obviously not riding a motorcycle. I chose this specific image on purpose, because the Computer Vision API may not return accurate results depending on the kind of images you're analyzing. In our case it's an auto racing sport, karting, and I guess for now Computer Vision hasn't been trained enough on this kind of image. So, what can we do in that case?
Well, in my next article about Azure Cognitive Services, we will discover how the Custom Vision Service can help by letting us build our own vision service!
Summary
We have seen how to analyze an image with the Azure Cognitive Services Computer Vision API in a .NET Core console application.
You can get the project source code here:
Browse the GitHub repository
(Note that the project uses .NET Core 2.0)
Please feel free to comment or contact me if you have any questions about this article.