Development

A Quick Swim in Data Lake

Using cognitive services and Data Lake Analytics


As a developer, I never cared much about Data Lake. While I’ve always sort of been responsible for the full stack, including the database stuff, I tended to shy away from anything that had too much to do with data. Recently, though, I’ve been working with some customers who have interesting needs when it comes to their data.

You see, in recent years, something has happened to our data – all of it is potentially interesting. We may not know it yet, but all data has potential value. The amount of data is also growing massively. Seven (!) years ago, we had already reached the milestone of producing more content every two days than we, as a civilisation, had produced up to that point in history. There are some interesting reads on this, if you’re curious.

One thing that caught my attention especially, being an amateur photographer, is that Data Lake Analytics lets you run diverse workloads, such as ETL, Machine Learning, Cognitive and others. Now, that had me thinking…

The Food Map

I’m a coeliac. I was diagnosed as having a gluten allergy – “and not the hipster kind” was basically what my doctor said, followed immediately by: “but you’re allowed to have steak, wine, whiskey and coffee, so you’ll be alright”. Good advice… 😊 Now, back to the problem at hand. Since the diagnosis, I’ve had to start changing my diet. It usually goes OK, as most places cater for the allergy, but there have been some massive fails – for example, I am never eating a Domino’s pizza ever again. That was a horrible waste of three days of my life. Since those issues, I’ve started keeping a food diary. Like a hipster, I’ve taken a photograph of almost every meal I’ve eaten, especially when eating out. To be fair, though, it started because some of the food I was having was amazingly delicious (thanks Stuart, for the sushi in Seattle).

Like a customer I was working with, I now had a big collection of photos. Granted, theirs runs to several petabytes while mine still fits on my iPhone, but the situation is otherwise almost the same. And, in the Big Data mentality, all data has potential. So, what could I do with this?

So, I decided to put all the photos in a Data Lake Store (the storage powering the Azure Data Lake offering) and write a simple U-SQL query that looks at each photo and, using computer vision (Cognitive Services), figures out whether it’s a picture of food. If it is, it extracts the GPS location where it was taken.

Step 1: Set up the ADLA/ADLS accounts

First things first, I had to create an Azure Data Lake Analytics (ADLA) instance in my subscription. That was the easiest part.

Next up is enabling the Sample Scripts, as per the documentation. Of course, the documentation doesn’t mention anywhere how to get the reference assemblies, which meant lots of Googling. At one point, someone wrote that they are added when you add the Sample Scripts and then click on the More tab and select Install U-SQL Extensions.

This results in some interesting DLLs getting installed in our store. I had a conversation with one of the (awesome) Cognitive Services PMs a while ago – these have the same capabilities as the full-on APIs we offer, just with different access (obviously, they aren’t accessed via REST APIs).

For my experiment to work, the thing I actually needed was the Image Tagging algorithm. Sadly, when I tried to run the sample in the documentation, it failed with an internal exception. To make sure the environment itself worked, I resorted to the other example, which uses the Face Detection DLLs. I knew those worked – the script to test this is here.
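For reference, the face-detection test script looks roughly like this – paths and column names follow the documented sample, so treat the details as approximate:

```usql
REFERENCE ASSEMBLY ImageCommon;
REFERENCE ASSEMBLY FaceSdk;

// Extract every JPEG in the folder as raw bytes.
@images =
    EXTRACT FileName string, ImgData byte[]
    FROM @"/images/{FileName}.jpg"
    USING new Cognition.Vision.ImageExtractor();

// Run the face detector over each image.
@faces =
    PROCESS @images
    PRODUCE FileName,
            NumFaces int,
            FaceAge string,
            FaceGender string
    READONLY FileName
    USING new Cognition.Vision.FaceDetector();

OUTPUT @faces
    TO "/output/faces.csv"
    USING Outputters.Csv();
```

If this runs, the extensions are registered correctly and the environment is sound.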

Step 2: Implementing the Custom Image Tagger

Now that I had the environment set up, and I knew the Cognitive stuff worked – just not for my specific requirement – I figured I might as well try to do it differently. So I did what any developer would do: downloaded the DLLs and had a peek inside them using dotPeek. I then wrote my own custom implementation of the IProcessor interface that is used to extend U-SQL. U-SQL, by the way, is the language you (can) use to interact with ADLA.

I looked at the implementation provided and decided I’d do it a bit differently. This is the core of the implementation:
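A sketch of what the processor looks like – the class and column names here are illustrative, and ImageTagger stands in for the wrapper around the tagging DLLs (the full implementation is in the repository):

```csharp
// Custom U-SQL processor, built against Microsoft.Analytics.Interfaces.
public class CustomImageTagger : IProcessor
{
    private readonly ImageTagger tagger = new ImageTagger();

    public override IRow Process(IRow input, IUpdatableRow output)
    {
        // Pass the file name through unchanged.
        output.Set<string>("FileName", input.Get<string>("FileName"));

        // Grab the raw image bytes and ask the tagger for its tags.
        byte[] imageData = input.Get<byte[]>("ImgData");
        output.Set<string>("Tags", tagger.ProduceTags(imageData));

        return output.AsReadOnly();
    }
}
```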

The Process method gets each row that ADLA is processing, along with an output that is an IUpdatableRow. All we do in there is get the byte array of the image and call the ProduceTags method on the “Tagger”. Have a look at the full implementation here.

To implement this, I’ve opted for a custom Class Library (For U-SQL Application) in my Visual Studio solution. To use it, though, you have to register it first. That involves a few steps:

  1. Right click on the project, and select Register Assembly...
  2. Make sure you select the right account; this got me the most times…
  3. I opted to also deploy Managed Dependencies (specifically ImageRecognitionWrapper ), directly from the existing set in the cloud. The path will look something like this adl://oxford1adls.azuredatalakestore.net/usqlext/assembly/cognition/vision/tagging/ImageRecognitionWrapper.dll.
  4. Remember to tick Replace Assembly if it already exists. You will forget it, and it will fail.

Now then, time for our first run.


How did it do? Let’s take a look at one of the photos:

Sashimi in Seattle

That’s some mighty good sashimi I had with a colleague in Seattle. The output produced by ADLA for this photo is, formatted for readability:

Tag        Confidence (0-1)
food       0.9370058
table      0.906905
plate      0.8964964
square     0.6789738
piece      0.6518051
indoor     0.6433577
dish       0.9999959
sashimi    0.961922

It’s worth noting that I’ve added this line of code to the tagger implementation:
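Reconstructed roughly, it looks something like this – the variable and property names are assumptions, not the exact code:

```csharp
// Only emit tags the algorithm is fairly confident about
// (Threshold defaults to 0.5).
var confidentTags = allTags.Where(tag => tag.Confidence > Threshold);
```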

I’ve set the default threshold to 0.5, which means I only get the tags the algorithm is fairly confident in. That works for me. Another interesting thing is the confidence level for sashimi. That’s just amazing.

Step 3: Filtering the ones with food

Now that we have our computer vision sorted, we need to find only the photos that have food in them. I used the U-SQL extensibility for this again and implemented a simple C# code-behind function:
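A minimal sketch of what that predicate might look like – the namespace and the assumption that tags arrive as a semicolon-separated string are mine, not necessarily what the repository uses:

```csharp
namespace FoodMap
{
    public static class Helpers
    {
        // True if any of the tags is exactly "food", case-insensitively.
        public static bool HasFood(string tags)
        {
            if (string.IsNullOrEmpty(tags)) return false;

            return tags.Split(';')
                       .Any(t => t.Trim().Equals("food", StringComparison.OrdinalIgnoreCase));
        }
    }
}
```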

I know it’s basic, but it’s fine for our POC. I can call this from my U-SQL statement, like so:
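Something along these lines – the rowset and function names here are illustrative:

```usql
// Keep only the rows whose tags include "food".
@foodPhotos =
    SELECT FileName, Tags
    FROM @tagged
    WHERE FoodMap.Helpers.HasFood(Tags);
```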

The above statement comes from a bit further down the road, but it is essentially a SQL WHERE clause. The difference is that it doesn’t compare anything; instead, it calls out to the C# function above (which returns a boolean). How cool is that?

Step 4: GPS location extraction

I now had the photos analysed, and I was confident I could get the ones with food. So, next step: the location. These days, most photos have GPS coordinates embedded in their EXIF metadata. From working with another customer, I already knew where to look and how to get to it. It involved copying some code from a U-SQL sample repository, which gave me some fun capabilities for dealing with the images, including a pointer on how to deal with metadata. However, as it turns out, the GPS coordinates are stored a bit differently – they are spread across multiple properties (long/lat as you’d expect, but each of them also has a “reference” field, which basically describes the orientation, from what I gather). Through the magic of Stack Overflow, I was able to get to a working implementation of this:
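The gist of it, assuming System.Drawing for reading the EXIF property items (IDs 0x0002/0x0001 for latitude and its reference, 0x0004/0x0003 for longitude) – a sketch along the lines of the usual Stack Overflow answer, not the exact code from the repository:

```csharp
using System;
using System.Drawing;
using System.Linq;
using System.Text;

public static class GpsExtractor
{
    // EXIF property IDs for the GPS block.
    private const int LatitudeRef = 0x0001;  // "N" or "S"
    private const int Latitude = 0x0002;     // degrees/minutes/seconds rationals
    private const int LongitudeRef = 0x0003; // "E" or "W"
    private const int Longitude = 0x0004;

    public static double? GetLatitude(Image image) =>
        GetCoordinate(image, Latitude, LatitudeRef, negativeRef: "S");

    public static double? GetLongitude(Image image) =>
        GetCoordinate(image, Longitude, LongitudeRef, negativeRef: "W");

    private static double? GetCoordinate(Image image, int valueId, int refId, string negativeRef)
    {
        if (!image.PropertyIdList.Contains(valueId) || !image.PropertyIdList.Contains(refId))
            return null;

        // The value holds three unsigned rationals: degrees, minutes, seconds.
        byte[] value = image.GetPropertyItem(valueId).Value;
        double degrees = ToDouble(value, 0);
        double minutes = ToDouble(value, 8);
        double seconds = ToDouble(value, 16);
        double coordinate = degrees + minutes / 60d + seconds / 3600d;

        // The reference field gives the sign (S and W are negative).
        string reference = Encoding.ASCII.GetString(image.GetPropertyItem(refId).Value).TrimEnd('\0');
        return reference == negativeRef ? -coordinate : coordinate;
    }

    // Each rational is two 32-bit unsigned integers: numerator, denominator.
    private static double ToDouble(byte[] bytes, int offset)
    {
        uint numerator = BitConverter.ToUInt32(bytes, offset);
        uint denominator = BitConverter.ToUInt32(bytes, offset + 4);
        return denominator == 0 ? 0 : numerator / (double)denominator;
    }
}
```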

To use this, I needed to go back to the U-SQL script:
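The overall shape of the script ends up roughly like this – assembly, class, and path names are assumptions standing in for the ones in the repository:

```usql
REFERENCE ASSEMBLY [FoodMap];

@photos =
    EXTRACT FileName string, ImgData byte[]
    FROM @"/photos/{FileName}.jpg"
    USING new Cognition.Vision.ImageExtractor();

// Branch 1: pull the GPS coordinates out of the EXIF metadata.
@locations =
    PROCESS @photos
    PRODUCE FileName, Latitude double?, Longitude double?
    READONLY FileName
    USING new FoodMap.GpsProcessor();

// Branch 2: tag each image with the custom tagger.
@tags =
    PROCESS @photos
    PRODUCE FileName, Tags string
    READONLY FileName
    USING new FoodMap.CustomImageTagger();

// Join the branches and keep only the photos tagged as food.
@result =
    SELECT l.FileName, l.Latitude, l.Longitude, t.Tags
    FROM @locations AS l
         INNER JOIN @tags AS t ON l.FileName == t.FileName
    WHERE FoodMap.Helpers.HasFood(t.Tags);

OUTPUT @result
    TO "/output/foodmap.csv"
    USING Outputters.Csv();
```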

The U-SQL statement from a paragraph higher now comes into better perspective. We basically split the processing into two branches – one gets the GPS location, the other the tags – then we join the two results together and keep only the ones we know have food as a tag. The result gets output into a CSV file – no particular reason, just to keep it simple.

Running this now gives us the following result (subset):

Now we’re getting somewhere. The coordinates in there are correct, and correspond to Umi, in Seattle. Omnomnom.

Step 5: Displaying the results on a map

All that is left is to display the results on a map. I wanted to keep it simple and just display a dot for each location I’ve eaten at. PowerBI seemed the obvious choice for that. There’s an easy way to get data from ADLS, but you need to use the desktop version of PowerBI.

You then provide the full URL to the CSV file that the script outputs, and do a bit of trickery. Running this for a subset of my photos produces the following map:

Conclusion

I’ve published all the code for this on GitHub. You should have enough between this post and that repository to get started. As for me, my next step is to upload a lot more of my photos and continue improving the map. I’ll also consider outputting the results into something else, though for now, that’s not really a problem. The one thing I would love, though, is to actually display the photos. Guess that’ll be the next step.


Anže Vodovnik is a proud father, enthusiastic guitarist and passionate software developer. He enjoys presenting at conferences sharing his experience of over 15 years of creating software. He was briefly a Microsoft MVP for Azure before forfeiting the title when he joined Microsoft UK, where he’s now working hand-in-hand with customers to help them develop and use solutions based on the Microsoft Azure platform.
