PixBlog 8 - Land of endless panoramas !! (Alaska)

Some hues of color [Turnagain Arm, Seward Highway]

Ice, Ice baby !! [Harding Icefield/Exit Glacier]

Crash and Burn [Holgate Glacier]

For the love of fall colors [Glenn Highway]

Cracks & Racks [Matanuska Glacier]

Ice in front of us and supposedly underneath us [Matanuska Glacier]

No savage souls around !! [Savage River, Denali National Park]

Please Mr. Fog, let us see the snow caps [Sable Pass, Denali National Park]

Just another view too good to ignore [Toklat River, Denali National Park]

Bcoz some mountains have big bottoms [Mt. Denali (center), Denali National Park]

PixBlog 7 - Get Lost (in Montana) !!

No effects, just some raindrops !! [Roosevelt Arch]


Looking Wyoming, Walking Montana [Location - Gardiner, Montana]


Literally get lost in Montana [Somewhere in Montana]


Different hues of glacial water [Many Glacier Road]


Imminent loss of glaciers and pertinent loss of beauty [Swiftcurrent Lake]


Grinnell Glacier in the center pouring streams into the lake [Grinnell Lake]


The Horn chillin' in the midst of ice and fire [Going-to-the-Sun Road]


Lush meadow, Hidden Lake and over-powering mountains [Logan Pass]

Going indeed to the sun [Going-to-the-Sun Road]

Bear choosing the next branch to ingest [Going-to-the-Sun Road]

Most Often Substring / Most Common Substring


Came across an interesting question on HackerRank, which goes as follows -
We have a string of length N. Can you figure out the number of occurrences of the most frequent substring in this string? We are only interested in substrings of length K to L, and in each substring the number of distinct characters must not exceed M. The string contains only lower-case letters (a-z).
Constraints:
2 <= N <= 100000
2 <= K <= L <= 26, L < N
2 <= M <= 26

Sample Input :-
5        // value of N
2 4 26   // values of K, L, M
abcde
Sample Output :-
1
(Every substring of "abcde" is unique, so the most frequent substring occurs exactly once.)
The solution to the above problem is:
- Insert every substring of length K to L into a trie.
- Each TrieNode carries a frequency counter, which is incremented whenever an already-present substring is inserted again.
- While inserting, count the distinct characters of the substring and skip any substring with more than M distinct characters. In that case, insertIntoTrie returns -1 as the frequency, signalling that the substring was invalid and not inserted into the trie.
- While inserting, keep track of the current maximum frequency and return it at the end as the answer.

Implementation in Java is as below :-
import java.util.*;

public class MostOftenSubstring {
    TrieNode root;
    Set<Character> distinctCharacters = new HashSet<>();
 
    public int getHighestFrequencyOfSubstring(int n, int k, int l, int m, String s) {
        if (s == null || n == 0) {
            return 0;
        }
        if (k < 0 || l < 0 || m <= 0) {
            return 0;
        }
        int maxFrequency = 1;
        root = new TrieNode();
        // k and l slide along with i (below), so [i, j) always has a length
        // between the original K and L
        for (int i = 0; i < n; ++i) {
            for (int j = k; j <= l && j <= n; ++j) {
                String substring = s.substring(i, j);
                // insertIntoTrie returns -1 if the substring has more than m
                // distinct characters; extending it can only add more, so stop
                int currentFrequency = insertIntoTrie(substring, m);
                if (currentFrequency == -1)
                    break;
                maxFrequency = Math.max(currentFrequency, maxFrequency);
            }
            k += 1;
            l += 1;
        }
        return maxFrequency;
    }
     
     public int insertIntoTrie(String word, int m) {
         distinctCharacters.clear();
         TrieNode parent = root;
         for(char c : word.toCharArray()) {
             distinctCharacters.add(c);
             if (distinctCharacters.size() > m) {
                 return -1;
             }
             int i = c - 'a';
             if(parent.next[i] == null) 
                 parent.next[i] = new TrieNode();
             parent = parent.next[i];
         }
         if (parent.word != null) {
            parent.frequency += 1;
         } else {
             parent.word = word;
             parent.frequency = 1;
         }
         return parent.frequency;
     }

     class TrieNode {
         TrieNode[] next = new TrieNode[26];
         String word;
         int frequency = 0;
     }
}
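Not part of the original post, but a tiny hypothetical driver (in a separate file, with the class above on the classpath) that runs the sample input:

public class MostOftenSubstringDemo {
    public static void main(String[] args) {
        String s = "abcde";
        // N = 5, K = 2, L = 4, M = 26; every substring of "abcde" is unique
        int answer = new MostOftenSubstring()
                .getHighestFrequencyOfSubstring(s.length(), 2, 4, 26, s);
        System.out.println(answer); // prints 1
    }
}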

Time complexity is O(N(L-K)) trie insertions; accounting for the cost of each insertion (walking at most L characters), the total is O(N(L-K)L).

PixBlog 6 - The land of 14ers [Colorado]

In the midst of the DeCaLiBron - Mt. Democrat to the right, Mt. Bross to the far left [Alma, Colorado]

Moment of contemplation - left (Mt. Democrat) or right (Mt. Cameron) [Cameron-Democrat saddle]

Heavy legs and tired lungs - Summit of Mt. Democrat (14,154 feet !!)

Football field at 14k feet !! - Summit of Mt. Cameron

A water-filled kite on the ground - Looking down at Kite Lake Trailhead [Alma, Colorado]

Summit Lake below Mt. Evans

Looking down Mt. Evans Road and Scenic Byway [Summit of Mt. Evans]

The scramble awaits - Looking up Mt. Bierstadt summit [Clear Creek, Colorado]

Sun-lit sawtooth ridge - Looking down from summit of Mt. Bierstadt

Of fleeting lakes and perennial ridges - Summit of Mt. Bierstadt

Scary Sawtooth ridge from the side 

For some lines are drawn pretty fine - The Continental Divide

Road to Aspen ain't that way !! [Independence Pass, Colorado]

El famous Maroon Bells - [Maroon Bells, Colorado]

Beautiful Breck - [Main St, Breckenridge, Colorado]


Spark GraphX Example with RDF data - Modelling IMDb data to a graph

Had some time to deep dive into GraphX, and since it is only supported in Scala, it was a good learning experience. Coming from experimenting with graph databases, GraphX felt like an easy platform, one that spares you from worrying about sharding/partitioning your data. The GraphX paper and Bob DuCharme's blog were good starters for playing with GraphX.
So I managed to get all Indian movie data for 10 different languages (Hindi, Marathi, Punjabi, Telugu, Tamil, Bhojpuri, Kannada, Gujarati, Bengali, Malayalam), along with the actors and directors associated with the films.
Next came the part of building the relationships between the above entities. GraphX identifies each vertex in the graph by a VertexId, which is a Long (64-bit). So I went about creating one for every entity and substituting them into the movie file, for easier parsing later when creating the edges (relationships). The part that makes GraphX super easy and super fun is how it partitions the graph: vertex data travels to whichever node holds an edge that references it, but the VertexId of that vertex stays the same on every node. The essential syntax to consider is -

For a vertex -> RDD[(VertexId, VertexObject)]
In my case, the vertex RDD is -> RDD[(VertexId, (String, String))]

For an edge -> Edge(srcVertexId, dstVertexId, "<edgeValueString>")
You can also use a URI or other data types as the edge value (for example, a weight to consult when choosing a destination based on the edge value).

These two, when combined in this line, create the graph object for us -
val graph = Graph(entities,relationships)
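Purely as an illustration (this tiny dataset and the object name TinyGraphExample are mine, not from the movie files), the two RDDs and the Graph call fit together like this:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD

object TinyGraphExample extends App {
  val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("TinyGraphExample"))

  // One actor and one movie, each with a pre-assigned 64-bit VertexId
  val entities: RDD[(VertexId, (String, String))] = sc.parallelize(Seq(
    (1L, ("shah rukh khan", "actor")),
    (2L, ("dilwale dulhania le jayenge", "movie"))))

  // A pair of directed edges between the actor and the movie
  val relationships: RDD[Edge[String]] = sc.parallelize(Seq(
    Edge(1L, 2L, "inFilm"),
    Edge(2L, 1L, "actor")))

  val graph = Graph(entities, relationships)
  graph.triplets.foreach(t => println(s"${t.srcAttr._1} -[${t.attr}]-> ${t.dstAttr._1}"))
  sc.stop()
}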

[Figure: Graph depiction of the data]

So I ended up connecting a movie to all its actors and back, to its director and back, and to the language the movie was in and back. This produced some half a million directed edges for my graph (each movie-entity relationship contributes two edges, one in each direction). I also added literal properties to the movie (which are not nodes), such as imdbRating and potentialAction (the IMDb URL for that movie). Literal properties could alternatively be added to the vertex object directly, if that object were a class with the properties as fields. The code to construct and query the graph is below. The main program could be replaced by a Scalatra web server and used for the same purpose, but I was more focused on the graph functionality.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD
import scala.io.Source
import org.json4s._
import org.json4s.jackson.JsonMethods._
import scala.collection.mutable.ListBuffer

object MoviePropertyGraph {
  val baseURI = "http://kb.rootchutney.com/propertygraph#"

  class InitGraph(val entities:RDD[(VertexId, (String, String))], val relationships:RDD[Edge[String]],
                  val literalAttributes:RDD[(Long,(String,Any))]){
    val graph = Graph(entities,relationships)

    def getPropertyTriangles(): Unit ={
          // Output object property triples
       graph.triplets.foreach( t => println(
         s"<$baseURI${t.srcAttr._1}> <$baseURI${t.attr}> <$baseURI${t.dstAttr._1}> ."
       ))
    }

    def getLiteralPropertyTriples(): Unit = {
      entities.foreach(t => println(
        s"""<$baseURI${t._2._1}> <${baseURI}role> \"${t._2._2}\" ."""
      ))
    }

    def getConnectedComponents(): Unit = {
      val connectedComponents = graph.connectedComponents()
      graph.vertices.leftJoin(connectedComponents.vertices) {
        case(id, entity, component) => s"${entity._1} is in component ${component.get}"
      }.foreach{ case(id, str) => println(str) }
    }

    def getMoviesOfActor(): Unit = {
      println("Enter name of actor:")
      val actor = Console.readLine()
      graph.triplets.filter(triplet => (triplet.srcAttr._1.equals(actor)) && triplet.attr.equals("inFilm"))
        .foreach(t => println(t.dstAttr._1))
    }

    def getMoviesOfDirector(): Unit = {
      println("Enter name of director:")
      val director = Console.readLine()
      graph.triplets.filter(triplet => (triplet.srcAttr._1.equals(director)) && triplet.attr.equals("movie"))
        .foreach(t => println(t.dstAttr._1))
    }

    def getMoviesInLanguage(): Unit = {
      println("Enter language:")
      val lang = Console.readLine()
      graph.triplets.filter(triplet => (triplet.dstAttr._1.equals(lang)) && triplet.attr.equals("inLanguage"))
        .foreach(t => println(t.srcAttr._1))
    }

    def getTopMoviesBeginningWith(): Unit = {
      println("Enter prefix/starting tokens:")
      val prefix = Console.readLine()
      graph.vertices.filter(vertex => vertex._2._1.startsWith(prefix) && vertex._2._2.equals("movie"))
        .distinct()
        .leftOuterJoin(literalAttributes.filter(literalTriple => literalTriple._2._1.equals("imdbRating")))
        .filter(_._2._2.isDefined) // skip movies that have no imdbRating literal
        .sortBy(element => element._2._2.get._2.asInstanceOf[Double], false)
        .foreach(t => println(t._2._1 + " " + t._2._2.get._2.asInstanceOf[Double]))
    }

    def getActorsInMovie(): Unit = {
      println("Enter movie:")
      val movie = Console.readLine()
      graph.triplets.filter(triplet => (triplet.srcAttr._1.equals(movie)) && triplet.attr.equals("actor"))
        .foreach(t => println(t.dstAttr._1))
    }
  }

  def main(args: Array[String]) {
    val sc = new SparkContext("local", "MoviePropertyGraph", "127.0.0.1")
    val movieFile = ""
    val actorFile = ""
    val directorFile = ""
    val languageFile = ""

    val movies = readJsonsFile(movieFile, "movie", sc)
    val actors = readCsvFile(actorFile, "actor", sc)
    val directors = readCsvFile(directorFile, "director", sc)
    val languages = readCsvFile(languageFile, "language", sc)

    val entities = movies.union(actors).union(directors).union(languages)
    val relationships: RDD[Edge[String]] = readJsonsFileForEdges(movieFile, sc)
    val literalAttributes = readJsonsFileForLiteralProperties(movieFile, sc)
    val graph = new InitGraph(entities, relationships, literalAttributes)
// Some supported functionality in graphx
//    graph.getPropertyTriangles()
//    graph.getLiteralPropertyTriples()
//    graph.getConnectedComponents()

    var input = 0
    do {
      println("1. Movies of actor \n2. Movies of director \n3. Movies in language \n4. Top Movies beginning with " +
        "\n5. Actors in movie \n6. Exit\n")
      input = Console.readInt()
      input match {
        case 1 => graph.getMoviesOfActor()
        case 2 => graph.getMoviesOfDirector()
        case 3 => graph.getMoviesInLanguage()
        case 4 => graph.getTopMoviesBeginningWith()
        case 5 => graph.getActorsInMovie()
        case 6 => println("bye !!!")
        case something => println("Unexpected case: " + something.toString)
      }
    } while (input != 6);
    sc.stop
  }

  def readJsonsFile(filename: String, entityType: String, sparkContext:SparkContext):  RDD[(VertexId, (String, String))] = {
    try {
     val entityBuffer = new ListBuffer[(Long,(String,String))]
      for (line <- Source.fromFile(filename).getLines()) {
        val jsonDocument = parse(line)
        // movieId is stored as a JSON string, e.g. "0112870"
        val movieId = (jsonDocument \ "movieId" \\ classOf[JString]).head.toLong
        val name = (jsonDocument \ "name" \\ classOf[JString]).head
        entityBuffer.append((movieId, (name, entityType)))
      }
      return sparkContext.parallelize(entityBuffer)
    } catch {
      case ex: Exception => println("Exception in reading file:" + filename + ex.getLocalizedMessage)
        return sparkContext.emptyRDD
    }
  }

  def readCsvFile(filename: String, entityType: String, sparkContext:SparkContext): RDD[(VertexId, (String, String))] = {
    try {
      val entityBuffer = new ListBuffer[(Long,(String,String))]
      // each line looks like: {'shah rukh khan': 75571848106, 'subhash ghai': 69540999294}
      for (line <- Source.fromFile(filename).getLines()) {
        entityBuffer.appendAll(line.substring(0, line.length - 1).split(",")
          .map(_.split(":"))
          .map { case Array(k, v) => (k.substring(2, k.length - 1), v.substring(1, v.length).toLong) }
          .map { case (k, v) => (v, (k, entityType)) })
      }
      return sparkContext.parallelize(entityBuffer)
    } catch {
      case ex: Exception => println("Exception in reading file:" + filename + ex.getLocalizedMessage)
        return sparkContext.emptyRDD
    }
  }

  def readJsonsFileForEdges(filename: String, sparkContext:SparkContext): RDD[Edge[String]] = {
    try {
      val relationshipBuffer = new ListBuffer[Edge[String]]
      for (line <- Source.fromFile(filename).getLines()) {
        val jsonDocument = parse(line)
        val movieId = (jsonDocument \ "movieId" \\ classOf[JString]).head.toLong
        val director = (jsonDocument \ "director" \\ classOf[JInt]).head.toLong
        val language = (jsonDocument \ "inLanguage" \\ classOf[JInt]).head.toLong
        // two directed edges between the movie and every connected entity
        relationshipBuffer.append(Edge(director, movieId, "movie"))
        relationshipBuffer.append(Edge(movieId, director, "director"))
        relationshipBuffer.append(Edge(movieId, language, "inLanguage"))
        relationshipBuffer.append(Edge(language, movieId, "language"))
        for (actor <- jsonDocument \ "actors" \\ classOf[JInt]) {
          relationshipBuffer.append(Edge(actor.toLong, movieId, "inFilm"))
          relationshipBuffer.append(Edge(movieId, actor.toLong, "actor"))
        }
      }
      val relationshipRDD = sparkContext.parallelize(relationshipBuffer)
      println("Number of edges: " + relationshipRDD.count())
      return relationshipRDD
    } catch {
      case ex: Exception => println("Exception in reading file:" + filename + ex.getLocalizedMessage)
        return sparkContext.emptyRDD
    }
  }

  def readJsonsFileForLiteralProperties(filename: String, sparkContext:SparkContext): RDD[(Long,(String,Any))] = {
    try {
      val literalsBuffer = new ListBuffer[(Long,(String,Any))]
      for (line <- Source.fromFile(filename).getLines()) {
        val jsonDocument = parse(line)
        val movieId = (jsonDocument \ "movieId" \\ classOf[JString]).head.toLong
        val imdbRating = (jsonDocument \ "imdbRating" \\ classOf[JDouble]).head
        val potentialAction = (jsonDocument \ "potentialAction" \\ classOf[JString]).head
        literalsBuffer.append((movieId, ("imdbRating", imdbRating)))
        literalsBuffer.append((movieId, ("potentialAction", potentialAction)))
      }
      return sparkContext.parallelize(literalsBuffer)
    } catch {
      case ex: Exception => println("Exception in reading file:" + filename + ex.getLocalizedMessage)
        return sparkContext.emptyRDD
    }
  }
}


The function getTopMoviesBeginningWith is the coolest of them all, as it sorts the entities by a literal property (imdbRating) joined into the same RDD and provides a lame autosuggest for movies starting with a prefix.
My actor, director and language files were CSVs containing name and VertexId values that I had pre-generated. For example,
'shah rukh khan': 75571848106
'subhash ghai': 69540999294
'hindi': 10036785003, 'telugu': 61331709614
My movie file was JSON, one line per movie, containing all this connected data. For example -
{"inLanguage": 10036785003,
  "director": 52308324411,
  "movieId": "0112870",
  "potentialAction": "http://www.imdb.com/title/tt0112870/",
  "name": "dilwale dulhania le jayenge",
  "imdbRating": 8.3,
 "actors": [75571848106, 19194912859, 79050811553, 37409465222, 44878963029, 10439885041, 85959578654, 31779460606, 39613049947, 29336046116, 64948799477, 71910110134, 1911950262, 19092081589, 77693233734]
}
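To make the JSON handling concrete, here is a small hypothetical json4s snippet (the same extraction style the reader functions above use) pulling fields out of such a line:

import org.json4s._
import org.json4s.jackson.JsonMethods._

object ParseMovieLine extends App {
  val line = """{"inLanguage": 10036785003, "movieId": "0112870", "imdbRating": 8.3, "actors": [75571848106, 19194912859]}"""
  val json = parse(line)
  // movieId is a JSON string, so extract JString values; ids are JSON numbers (JInt)
  val movieId = (json \ "movieId" \\ classOf[JString]).head.toLong
  val actors = json \ "actors" \\ classOf[JInt]
  val rating = (json \ "imdbRating" \\ classOf[JDouble]).head
  println(s"movie $movieId rated $rating with actors $actors")
}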

If you happen to need these files, do let me know and I can email them. Last but not least, I found building my project with SBT much easier than without it, as it gives a Gradle-like way to build projects.
Here is the sample sbt file that I used to build the project -

name := "PropertyGraph"

version := "1.0"

scalaVersion := "2.10.4"

connectInput in run := true

// additional libraries
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.5.0" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.5.0",
  "org.apache.spark" %% "spark-hive" % "1.5.0",
  "org.apache.spark" %% "spark-streaming" % "1.5.0",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.5.0",
  "org.apache.spark" %% "spark-streaming-flume" % "1.5.0",
  "org.apache.spark" %% "spark-mllib" % "1.5.0",
  "org.apache.spark" %% "spark-graphx" % "1.5.0",
  "org.apache.commons" %% "commons-lang3" % "3.0",
  "org.eclipse.jetty"  %% "jetty-client" % "8.1.14.v20131031",
  "org.json4s" %% "json4s-native" % "{latestVersion}",
  "org.json4s" %% "json4s-jackson" % "{latestVersion}"
)

The experimentation did not give me real-time computation over the graph (unlike the Neo4j or Virtuoso graph databases), as Spark processing takes time on a single node, and I safely assume that increasing the number of nodes will increase the time further due to distribution and fetching overhead. I hope this continues to get better with future versions of Spark (1.6 is soon to be out !!).

PixBlog 5 - The Pacific Northwest [Northern Oregon & Olympic National Park]

Cannon Beach [Haystack Rock]

Canada across the strait !! [Port Angeles]

Another one of those roads to perdition [Olympic National Park]

Turn up that heat [Heart O' the Hills road, Olympic National Park] 

Tunnel in a tunnel [Heart O' the Hills road, Olympic National Park]  

Ridgy-Ridge [Hurricane Ridge, Olympic National Park]

[Heart O' the Hills road, Olympic National Park]   

 [Elwha Rainforest, Olympic National Park]  

 [Lake Crescent, Olympic National Park]

Lost trees up-top [Rialto beach]

All possible permutations of tokens in a line, given each token has two (n) possible values

So I came across an interesting problem, essentially a modification of permutation problems I had seen before. I first wrote the brute-force solution, which was not good enough, and had to revisit the problem to optimize it.
Here is my best attempt at describing the problem -
Each token in a line is sent through an encryption module, which gives out a transformed version of the token. The encryption module has some bugs, so not every word always gets encrypted/transformed. Find all possible representations of the same line, covering every permutation of whether each token was successfully transformed or not.

I hope I did not make the problem statement too confusing, but to make things a bit clearer, here is an example :-
Line -> "W1 W2"
"W1" transforms to a transformed word called "TW1"
"W2" transforms to a transformed word called "TW2"

Hence, considering that each word may or may not get transformed, here are all the possible permutations of the above line.
Permutation1 -> "W1 W2"
Permutation2 -> "TW1 W2"
Permutation3 -> "W1 TW2"
Permutation4 -> "TW1 TW2"

Similarly, if the line was "W1 W2 W3", with the respective transforms being "TW1", "TW2" and "TW3", then the possible permutations are :-
"W1 W2 W3"
"TW1 W2 W3"
"TW1 W2 TW3"
"TW1 TW2 W3"
"W1 W2 TW3"
"W1 TW2 W3"
"W1 TW2 TW3"
"TW1 TW2 TW3"

Solution 1 (Brute force) :-
This was the first approach I took to get things done.
Create a hash set or tree set of results and go over each possible combination around the token at hand. To top it off, I made the solution worse by making it recursive. For example, take the first token of a three-token line, compute all possible combinations of the remaining two tokens, and append them to W1 and to TW1. Then move on to W2, compute all permutations of (W1, TW1) and (W3, TW3), and attach them around W2 and TW2.
As can be seen, this solution is tedious and wasteful: we end up recalculating many already-computed permutations, and there needs to be a way to avoid that.
The worst case is when many tokens surround the token at hand, and one has to recursively calculate the combinations on both sides of the token and then permute them with the given token. So one creates three subproblems :-
i) Calculate the permutations before the token (P1)
ii) Calculate the permutations after the token (P2)
iii) Combine the permutations of P1, the token, and P2

I would rather not rewrite code for such a lame approach, though the time complexity is O(N x maximum number of permutations before a token x maximum number of permutations after a token), where N is the number of tokens, which basically amounts to O(N! (N-1)! ... 1!).
Enough said about how bad it was.
Unfortunately, this method only worked up to about 10 or 11 tokens on my laptop, and my laptop gave up after that.

Solution 2 (Use a tree/graph approach) :-
This solution was conceived to avoid recalculating a path already taken. Once all possible paths through the graph have been walked, one can be assured that all possible permutations have been produced. The way to think about this solution is to build a directed graph with two start points, W1 and TW1, and then do a depth-first search towards the last token and its transform (in this case W3 and TW3). I think of this as a problem with two sources (W1 and TW1) and two sinks (W3 and TW3).
Here is a diagram to explain the same.
[Figure: Graph approach]

The interesting part is that with this approach, I ended up using a little more space than I normally would, to keep tabs on the nodes I had already visited.
This approach has O(V x E) time complexity, which is O(N x 2N), essentially O(N^2), with O(2N) space to maintain the visited nodes (though writing out every one of the 2^N permutations still costs extra on top of the traversal itself).
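The post skips code for this approach; purely for illustration, here is a minimal sketch of the layered-graph idea (my own simplification: since the graph is a layered DAG walked forward only, plain path enumeration needs no visited set):

import java.util.*;

public class GraphPermutations {
    // Layer j of the DAG holds words.get(j) and its transform; every node
    // links to both nodes of the next layer, so each source-to-sink path
    // is one permutation of the line.
    static void dfs(List<String> words, Map<String, String> transforms,
                    int layer, StringBuilder path, List<String> out) {
        if (layer == words.size()) {
            out.add(path.toString().trim());
            return;
        }
        int mark = path.length();
        path.append(words.get(layer)).append(' ');
        dfs(words, transforms, layer + 1, path, out);
        path.setLength(mark);
        path.append(transforms.get(words.get(layer))).append(' ');
        dfs(words, transforms, layer + 1, path, out);
        path.setLength(mark);
    }

    public static void main(String[] args) {
        Map<String, String> transforms = new HashMap<>();
        transforms.put("W1", "TW1");
        transforms.put("W2", "TW2");
        List<String> out = new ArrayList<>();
        dfs(Arrays.asList("W1", "W2"), transforms, 0, new StringBuilder(), out);
        out.forEach(System.out::println); // W1 W2, W1 TW2, TW1 W2, TW1 TW2
    }
}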

Solution 3 (Using a binary bit-based approach) :-
Thanks to Randy Gobbel for noticing a pattern in the above graph and suggesting a uniquely simple approach that needs no extra space. Randy noted that each position in a resulting permutation has two possible values (W or TW), which is more or less a 0 or a 1. Hence, the permutations for a three-token line amount to -
000, 001, 010, 011, 100, 101, 110, 111
where 0 -> the token itself
      1 -> the token's transform
As a result, this solution boils down to simply iterating over all values from 0 to 2^(number of tokens) - 1. Below is the code for the approach. The method takes a list of tokens and a Map/Dict containing the transform of each token.

public List<String> getPossiblePermutations3(List<String> words, Map<String, String> transformMap) {
    if (words == null || words.isEmpty())
        return null;
    List<String> permutationList = new ArrayList<>();
    // Each value of i is a bit mask: bit j decides whether position j keeps
    // the original token (0) or its transform (1)
    for (int i = 0; i < (1 << words.size()); ++i) {
        StringBuilder permutation = new StringBuilder();
        for (int j = 0; j < words.size(); ++j) {
            if ((i & (1 << j)) == 0)
                permutation.append(words.get(j)).append(" ");
            else
                permutation.append(transformMap.get(words.get(j))).append(" ");
        }
        // Trim the trailing space
        permutationList.add(permutation.toString().trim());
    }
    return permutationList;
}
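A hypothetical driver for the two-token example (assuming the method above lives in a class called, say, TokenPermutations):

import java.util.*;

public class PermutationsDemo {
    public static void main(String[] args) {
        Map<String, String> transformMap = new HashMap<>();
        transformMap.put("W1", "TW1");
        transformMap.put("W2", "TW2");
        List<String> permutations = new TokenPermutations()
                .getPossiblePermutations3(Arrays.asList("W1", "W2"), transformMap);
        // Prints: W1 W2, TW1 W2, W1 TW2, TW1 TW2
        permutations.forEach(System.out::println);
    }
}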
The time complexity for this approach is O(N x 2^N), with no extra space needed beyond the output itself.

Thanks for reading…