Trying to share some code ideas


Month: January 2018

Testing for Unconnected Components in a Graph

Testing for Unconnected Components in an Undirected Graph

With a graph structure, it is possible that parts of the graph are not connected to each other.  Social networks are an example of this: not every user is friends with every other user.

The code will find the total number of connected components, or separate parts, of an undirected graph.

See code below.
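Since the code itself is not included in this excerpt, here is a minimal sketch of how it might look, counting components with an iterative depth-first search over an adjacency list (the function and variable names are my own):

```python
def connected_components(n, edges):
    """Count the connected components of an undirected graph
    with nodes 0..n-1, given as a list of (a, b) edges."""
    # Build an adjacency list from the edge list.
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    visited = set()
    components = 0
    for start in range(n):
        if start in visited:
            continue
        components += 1          # an unvisited node starts a new component
        stack = [start]
        while stack:             # iterative DFS over that component
            v = stack.pop()
            if v in visited:
                continue
            visited.add(v)
            stack.extend(adj[v])
    return components
```

For example, `connected_components(4, [(0, 1), (2, 3)])` returns 2, since nodes 0-1 and 2-3 form two separate parts.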

Finding an Exit from a Maze

Finding an Exit from a Maze using undirected graphs.

We can think of a maze as a rectangular grid of cells with paths between adjacent cells. If we want to find out whether there is a path from a given cell to a given exit from the maze, where the exit is also represented by a cell, we can represent the maze as an undirected graph.

The nodes of the graph are the cells of the maze, and two nodes are connected by an undirected edge if the cells are adjacent and there is no wall between them. To determine whether two cells are connected, we therefore just need to check whether there is a path, a series of edges, between the corresponding nodes.

See code below.
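Again the code is not shown in this excerpt, so here is a sketch of the idea using breadth-first search over the grid; the string-based maze representation and function names are assumptions of mine:

```python
from collections import deque

def has_path(maze, start, exit_cell):
    """Return True if exit_cell is reachable from start.
    maze is a list of equal-length strings where '#' is a wall
    and any other character is an open cell; cells are (row, col)."""
    rows, cols = len(maze), len(maze[0])
    queue = deque([start])
    seen = {start}
    while queue:
        r, c = queue.popleft()
        if (r, c) == exit_cell:
            return True
        # Visit the four adjacent cells (the undirected edges).
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#'
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

Breadth-first search works just as well as depth-first here, since we only care whether the two nodes are in the same connected part of the graph.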

Spark – Streaming Data

Spark – Streaming Data, Capturing and Querying

Today we will look at how to capture streaming data and perform some simple queries as the data arrives. We will use the regular expressions library and the PySpark library.  The streaming data comes from a weather station that transmits different weather measurements at different intervals.  We will need to pick the relevant data out of the stream and output the results.

Let’s get started.
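As a rough sketch of the approach, the snippet below pairs a regular-expression parser with Spark's DStream API. The socket source, port, and the `temperature=` report format are placeholders I have invented, not the actual station feed:

```python
import re

# Hypothetical report format, e.g. "temperature=21.5"; the real
# station feed will use its own field names.
PATTERN = re.compile(r"temperature=(\d+(?:\.\d+)?)")

def parse_temperature(line):
    """Extract the temperature reading from one report line, or None."""
    m = PATTERN.search(line)
    return float(m.group(1)) if m else None

def run_stream():
    """Run on a machine with Spark installed; host and port are placeholders."""
    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="WeatherStream")
    ssc = StreamingContext(sc, 10)  # process the stream in 10-second batches

    lines = ssc.socketTextStream("localhost", 9999)
    # Keep only the lines our pattern matched.
    temps = lines.map(parse_temperature).filter(lambda t: t is not None)
    temps.pprint()  # print each batch's extracted readings

    ssc.start()
    ssc.awaitTermination()
```

Keeping the parsing in a plain function like `parse_temperature` makes it easy to test without a running Spark cluster.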

Spark – SQLContext

Spark – SQLContext

Today we will look at the SQLContext object from the PySpark library and how you can use it to connect to a local database.  In the example below we will:
Connect to a local PostgreSQL database and read the contents into a DataFrame
Run some simple SQL queries
Join two DataFrames together

Let’s get started.
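The three steps above might be sketched roughly as follows; the database name, credentials, and the `users`/`orders` tables and their columns are placeholders of mine, and running it requires Spark with the PostgreSQL JDBC driver on the classpath:

```python
def jdbc_url(host, port, database):
    """Build a PostgreSQL JDBC URL (helper name is my own)."""
    return "jdbc:postgresql://{}:{}/{}".format(host, port, database)

def run_queries():
    """Connect, query, and join; run on a machine with Spark installed."""
    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="PostgresExample")
    sqlContext = SQLContext(sc)

    # 1. Read two tables from a local PostgreSQL database into DataFrames.
    opts = dict(url=jdbc_url("localhost", 5432, "mydb"),
                user="spark", password="secret")
    users = sqlContext.read.format("jdbc").options(dbtable="users", **opts).load()
    orders = sqlContext.read.format("jdbc").options(dbtable="orders", **opts).load()

    # 2. Run a simple SQL query against a registered temporary table.
    users.registerTempTable("users")
    sqlContext.sql("SELECT name FROM users WHERE age > 30").show()

    # 3. Join the two DataFrames on a shared key.
    users.join(orders, users.id == orders.user_id).show()
```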

Hadoop Spark – Hello World, Word Count

Hadoop Spark – Word Count

One of the first things to do in most programming languages is to create a “Hello World!” program.  The equivalent in Spark is a program that reads the contents of a file and counts the number of occurrences of each word.

Below I will show a basic example, so let’s start counting.
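To give the general shape of such a program, here is a sketch with the counting logic first in plain Python and then as the classic Spark pipeline; the input path and function names are placeholders of mine:

```python
def count_words(lines):
    """Pure-Python version of the counting logic, for comparison."""
    counts = {}
    for line in lines:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

def spark_word_count(path):
    """The same computation as a Spark job; requires a Spark install."""
    from pyspark import SparkContext

    sc = SparkContext(appName="WordCount")
    counts = (sc.textFile(path)                       # one record per line
                .flatMap(lambda line: line.split())   # split lines into words
                .map(lambda word: (word, 1))          # pair each word with 1
                .reduceByKey(lambda a, b: a + b))     # sum the 1s per word
    return dict(counts.collect())
```

The flatMap/map/reduceByKey chain is the Spark idiom that replaces the explicit loop in the plain-Python version.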

Read CSV files using Python Panda Dataframes

The great thing about the pandas library for Python is how easily it can manipulate data sources.  We will look at one of the first things you will want to do: read a .csv file.
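The basic call looks like this; an inline string stands in for a file on disk here, and with a real file you would simply pass its path to `pd.read_csv` instead:

```python
import io
import pandas as pd

# A tiny CSV held in memory; with a real file, use pd.read_csv("data.csv").
csv_data = io.StringIO("name,age\nAlice,30\nBob,25\n")
df = pd.read_csv(csv_data)

print(df.shape)          # rows and columns read from the file
print(df["age"].mean())  # columns are immediately usable for analysis
```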
