a blog on spatial research and data visualization in R. by paul bidanset © 2013-15

Creating a TRULY Interactive Map of Craft Breweries in VA Using the leafletR Package (Guest Blog Post by Dr. Keegan Hines)


It’s a good feeling when a great friend who is smarter than you offers to write a blog post, for your blog, that’s better than anything you’ve written so far. Friends, colleagues, people who’ve not yet realized they are at the wrong site: please allow me to introduce to you the awe-inspiring Dr. Keegan Hines. He got his PhD in neuroscience from the University of Texas at Austin in 2014 and is now a data scientist doing some super-secret-James-Bond-machine-learning work for a DoD contractor near D.C. When he is not breathing life into spatially-centered instructional R blogs, he is part of an improv comedy troupe, does some consulting work, and serves a mean campfire omelet. Without further ado…

Microbreweries and Interactive Maps With Leaflet

This is a guest post from Keegan Hines. He’s a neat fella; you can follow him on the internet.

This post is about interactive data visualizations and some powerful new tools that are available to the R community. Moving beyond static data graphics toward interactive experiences allows us to present our audience with much more complex information in an easily digestible way. To empower these interactive graphics, we’re going to utilize tools such as HTML and javascript, technologies that drive the web-based interactive experiences you have every day. But the best part is that we’ll benefit from these technologies without having to learn anything about web development. We’re going to create some amazing things using only R!

As a guiding example, let’s return to a previous blog post where Paul visualized the locations of microbreweries in Virginia. In that post, Paul introduced Plotly, a super cool company that allows you to create and deploy interactive graphics on their web-based service. Here, we’re going to do this all ourselves, with help from a new R package called leaflet. So let’s jump right in.

Here’s some boiler-plate stuff. We need to install the package manually from github and then load it.
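The installation snippet didn’t survive the formatting here. At the time of writing, leaflet wasn’t yet on CRAN, so a sketch of the usual GitHub route (assuming you have devtools) would be:

```r
# install devtools first if you don't have it:
# install.packages("devtools")
devtools::install_github("rstudio/leaflet")
library(leaflet)
```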


So first thing, let’s grab a location that we might want to put on a map. I’ll use a function from the ggmap package.

somePlace <- ggmap::geocode("Washington, DC")

So we have a dataframe (with one row) and lat/lon coordinates for some arbitrary point in Washington, DC. We’re going to use functions from the leaflet package to generate a map around this point.

leaflet(somePlace) %>% addTiles() %>% addMarkers()

Now we have this draggable, zoomable, interactive map with a single line of R!

A little explanation of what we just did. In case it’s unfamiliar, I’ll first point out that we’re using the forward pipe %>% thing. The forward pipe was introduced in the magrittr package and has now been adopted in lots of places. The idea is that we can pass the output of a function as the input to the next function. This allows us to write code that reads left to right and is more aligned with our logic. For example:

sqrt(sum(c(1,2,3)))            # nested and confusing: we have to read it inside-out
c(1,2,3) %>% sum() %>% sqrt()  # sequential and awesome: it reads left to right

So back to leaflet. The first function we use is called leaflet() and this returns a base leaflet object, sort of the starting point for everything we might do. We passed our data frame as an argument to leaflet(), and so any later functions that might require data will look to this data frame.

We then sent the output of leaflet() to another function, addTiles(). This is because the output of leaflet() doesn’t have enough visual information to actually create a map – we haven’t provided enough detail yet about what we want. The function addTiles() updates the leaflet object by providing the visual look and feel through different “tiles”. In fact, there are many different styles of map we could make, just by choosing different tiles. Here are some examples:

leaflet(somePlace) %>% addProviderTiles("Stamen.Watercolor") %>% addMarkers()

leaflet(somePlace) %>% addProviderTiles("Stamen.Toner") %>% addMarkers()

The full list of available tiles is here.

And so the third function in this simple example is addMarkers(). This function’s purpose is pretty obvious and results in the big blue marker thing on the map. What it does is look through the provided data frame for any columns that are similar to “lat” or “lon” and then plots them. And it’ll do so for every row in the data frame, so it’s effortless to put lots of points on a map, as we’ll see below. There are also a few other functions that are similar and plot slightly different things. You might be able to guess what addCircles() or addPolylines() are capable of, but as an example:

leaflet(somePlace) %>%
	addProviderTiles("Stamen.Toner") %>%
	addCircles()

So let’s move on to our more interesting example – the breweries. I’ve scraped a list of microbreweries in Virginia and gotten their names, websites, addresses and so on. Since I also want lat/lon info, I used ggmap::geocode() to estimate those coordinates. The result is a dataframe called ‘breweries’ that has 106 rows and looks like this:

> names(breweries)
[1] "Name"    "Address" "Phone"   "Website" "lat"     "lng"    		

> head(breweries[,c(1,4:6)])
	 Name                         Website                 lat             lng
1  Wolf Hills Brewing Co    www.wolfhillsbrewing.com     36.71231    -81.96560
2  Blue Mountain Brewery www.bluemountainbrewery.com     37.96898    -78.83499
3 Quattro Goomba Brewery       www.goombabrewery.com     38.98597    -77.61748
4   Hops Grill & Brewery          www.hopsonline.com     38.83758    -77.05116
5   Port City Brewing Co     www.portcitybrewing.com     38.80800    -77.10137
6  Loose Shoe Brewing Co    www.looseshoebrewing.com     37.56500    -79.06352

So let’s put em on a map.

 leaflet(breweries) %>% addTiles() %>% addMarkers()

Pretty effortless I’d say! This is great except we don’t know which brewery is which, it’s just anonymous points on a map. We could try to add some text to a map, but remember our goal is to utilize web-based interactivity. So we’re going to take advantage of a click-based popup by inserting the Name column of the data frame.

leaflet(breweries) %>% addTiles() %>% addMarkers(popup=breweries$Name)

And one final trick to make it just a little smoother. We want to add a hyperlink to the website in the popup. Since our data frame has a column for all the websites, we could do this easily in a similar way to what we just did with the Name column. But we can take it a step further. Now I promised you that we don’t need to know any web development stuff in order to make these maps (and we don’t!). But if you happen to have a little side knowledge, you can embed any HTML or javascript that you want. In this case, I’m going to use HTML’s <a> tag for hyperlinks, so that each brewery name actually links out to its website.

popup_style <- paste0("<a href=http://", breweries$Website, " target='_blank'>", breweries$Name, "</a>")

leaflet(breweries) %>% addTiles() %>% addMarkers(popup=popup_style)

Now we can easily zoom around and explore Virginia’s thriving craft-brew industry! You have to admit that’s a pretty awesome thing we were able to create with just a couple lines of R. And the interactivity allows us to encode a lot of information (locations, names, and websites of all the breweries) in a simple experience that any viewer can explore at their own pace. As you might guess, this is just the beginning of what we can do with leaflet, and there’s a great guide at RStudio’s site.

If you’re like me, you’re very excited about incorporating web-based interactivity in your data analyses with R. This general idea of wrapping javascript-based experiences into an easy-to-use R package is something that’s gaining a lot of traction lately. To me, this is one of the most exciting innovations in the R community in the last couple years and is taking off in many exciting directions. If you want to learn more, I’m developing a course for DataSociety entitled “Advanced Visualization With R”. In the course, we’ll explore many of these web-based technologies including Leaflet, rCharts, Shiny and more, so look to sign up later this summer!


Giving a Darn About Statistics: Baseball, Shark Attacks, and Green M&M’s

Trying to hide from statistics is tough. Believe me. I tried.


And I did a pretty decent job keeping the subject at bay until my early twenties. Much like grammar, statistics are everywhere (wait…statistics …is everywhere?).  It’s unavoidable. We see statistics in the news:


in the news…


and, well, it’s really all over the news.

With such an abundance of statistics being thrown around in today’s society, most people must have a firm grasp on their meaning, right? Could it be we don’t have as strong of a grasp on statistics as we thought? Well, this certainly would explain why people still buy lottery tickets (1:175,000,000) yet are too scared of sharks to swim in the ocean (1:11,500,000).


Statistics education seems to lie dormant throughout grade school. Take me for example (pardon me while I make a widespread claim based solely upon my own experience – not very stats-like). Up until my junior year of high school, the only stats lessons I received were from gym class teachers and sports coaches.


And it didn’t get any better. My first real encounter with a statistics course in high school did everything it possibly could to prevent any sort of intuitive, applied, ‘real world’ relevance to the subject.  Distribution curves, t-tests, z-scores, and countless problems about flipping coins and green M&Ms didn’t win me over.




I did the bare minimum, and true story: I got a D in that class. On the last day, I literally got down on my knees and begged my teacher for a C- because I was applying to colleges. She obliged. I should send her some flowers.

Eventually senior college courses happened and my plague-like avoidance of the subject became a natural fascination. Fast forward a few years and I now do it for a living. What made the switch? Thanks to the dedication and passion of many professors, students, authors, and Wikipedia editors, I finally realized that I’d be hard pressed to find something MORE likely to be used ‘in the real world’ – a previous struggle that unfortunately fueled many years of academic apathy for me. This was my road to Damascus.

Statistics makes everything better. It equips us with the power to not only measure what’s going on, but to monitor changes over time, and most importantly, the ability to solve problems that arise.

It’s the driving force of modern medicine.  With clinical research, it helps doctors know what makes us better, and what makes us worse.


The car you ride in made it off the lot because each part fell within a certain range of acceptability. Statistics ensures safety.


Statistical tools can tell us whether or not public health campaigns are working. Do advertisements that highlight how bad tobacco is make people smoke less?



Businesses use statistics to measure customer happiness and to see what they can do better.


Making customers happy means more business.


More business means more people have jobs, and more people earn money to buy more … stuff.


Statistics even makes sports better! Games are more entertaining.  If teams drafted players with low batting averages and low RBIs, baseball would be more boring than it already is (if that’s possible!).


R is an amazing statistical tool to create, execute, visualize … and essentially solve (almost) all of the world’s problems. I’m looking forward to the rest of the year and continuing to hammer out some intuitive exercises with this blog. If you had some rough early encounters with stats or statistical programming, and they’ve left you with an unpleasant taste in your mouth, I urge you to reconsider and revisit this area. Statistics isn’t very hard; it’s just all about finding a teaching approach that makes things click for you, and I am here to *attempt* to provide that using a relevant, intuitive approach. If I had thrown in the towel after my first one (or three) terrible encounters with statistics teachers who may have been a little too dry – a little too abstract in their teachings for my personal learning style – I never would’ve gotten into this field. And man, do I love this field (in case you couldn’t tell from my crudely illustrated Paint renderings). Please sign up for my email list (at the very bottom) and even email me with some things you’d like to see on the site!

Statistics makes everything better (110%).


Good Conference, Good Eats, and Good Hangs with Some Smart, Fun Dudes

Not much of a blog post but as I wrap up this next one, I wanted to share…

Last week I presented my and John Lombard’s most recent paper at the 61st Annual North American Meetings of the Regional Science Association International in Bethesda, Maryland. The particular session on spatial modeling was hosted by the International Geographic Union. It was an amazing conference and I strongly recommend it to anyone in the quantitative geography field.

To top it all off, I actually got to shake hands with Luc Anselin in the hotel bar, then head out for a great dinner with some fine gentlemen. Good times, folks, good times.


Pictured left to right: Eliahu Stern (Ben-Gurion University of the Negev); Graham Cochrane (University of Leeds); Tomaz Dentinho (University of the Azores); Robert Tanton (University of Canberra); Paul Bidanset (Ulster University); Bob Stimson (University of Queensland); John Lombard (Old Dominion University)


Adding Google Drive Times and Distance Coefficients to Regression Models with ggmap and sp

Space, a wise man once said, is the final frontier.

Not the Buzz Aldrin/Buzz Lightyear, Neil deGrasse Tyson kind (but seriously, have you seen Cosmos?). Geographic space. Distances have been finding their way into metrics since the cavemen (probably). GIS seems to make nearly every science way more fun…and accurate!

Most of my research deals with spatial elements of real estate modeling. Unfortunately, “location, location, location” has become a clichéd way to begin any paper or presentation pertaining to spatial real estate methods. For you geographers, it’s like setting the table with Tobler’s first law of geography: a quick fix (I’m not above that), but you’ll get some eye-rolls. But location is important!

One common method of taking location and space into account in real estate valuation models is including distance coefficients (e.g. distance to downtown, distance to center of city). Geographers have the straight-line calculation of distance covered, and R can spit out distances between points in a host of measurement systems (Euclidean, great circle, etc.). A straight-line distance coefficient is a helpful tool when you want to reduce some spatial autocorrelation in a model, but it doesn’t always tell the whole story by itself. (Please note: the purpose of this post is to focus on the tools of R and introduce elements of spatial consideration into modeling. I’m purposefully avoiding any lengthy discussion of spatial econometrics or other spatial modeling techniques, but if you would like to learn more about the sheer awesomeness that is spatial modeling, as well as the pitfalls and pros and cons of each approach, check out Luc Anselin and Stewart Fotheringham for starters.) I also have papers being published this fall and would be more than happy to forward you a copy if you email me. They are:

Bidanset, P. & Lombard, J. (2014). The effect of kernel and bandwidth specification in geographically weighted regression models on the accuracy and uniformity of mass real estate appraisal. Journal of Property Tax Assessment & Administration. 11(3). (copy on file with editor).


Bidanset, P. & Lombard, J. (2014). Evaluating spatial model accuracy in mass real estate appraisal: A comparison of geographically weighted regression (GWR) and the spatial lag model (SLM). Cityscape: A Journal of Policy Development and Research. 16(3). (copy on file with editor).

Straight-line distance coefficients certainly can help account for location, as well as certain distance-based effects on price. Say you are trying to model the negative externalities of a landfill in August: assuming wind is either random or non-existent, straight-line distance from the landfill to house sales could help capture the cost of said stank. Likewise with capturing potential spill-over effects of an airport – the sound of jets will diminish as distance increases, and the path of sound will be more or less a straight line.

But again, certain distance-based elements cannot be accurately represented with this method. You may expect ‘distance to downtown’ to have an inverse relationship with price: the further out you go, the more of a cost is incurred (in time, gas, and overall inconvenience) getting to work and social activities, so demand for these further-out homes decreases, resulting in cheaper homes (pardon the hasty economics). Using straight-line distances to account for commutes in a model presents some problems (aside: there is nary a form of visualization capable of presenting one’s point more professionally than Paint, and as anyone who has ever had the misfortune of being included in a group email chain with me knows, I am a bit of a Paint artist). If a trip between a person’s work and a person’s home followed a straight line, this would be less of a problem (artwork below).

But we all know commuting is more complicated than this. There could be a host of things between you and your place of employment that would make a straight-line distance coefficient an inept method of quantifying this effect on home values … such as a lake:

… or a Sarlacc pit monster:


Some cutting edge real estate valuation modelers are now including a ‘drive time’ variable. DRIVE TIME! How novel is that? This presents a much more accurate way to account for a home’s distance – as a purchaser would see it – from work, shopping, mini-golf, etc. Sure it’s been available in (expensive) ESRI packages for some time, but where is the soul in that? The altruistic R community has yet again risen to the task.

To put some real-life spin on the example above, let’s run through a very basic regression model for modeling house prices.

sample = read.csv("C:/houses.csv", header=TRUE)
model1 <- lm(ln.ImpSalePrice. ~ TLA + TLA.2 + Age + Age.2 + quality + condition, data = sample)

We read in a csv file “houses” that is stored on the C:/ drive and name it “sample”. You can name it anything, even willywonkaschocolatefactory. We’ll name the first model “model1”. The dependent variable, ln.ImpSalePrice.,  is a log form of the sale price. TLA is ‘total living area’ in square feet. Age is, well, age of the house, and quality and condition are dummy variables. The squared variables of TLA and Age are to capture any diminishing marginal returns.

AIC stands for ‘Akaike information criterion’. Hirotugu Akaike coined it in the ’70s, and it’s a goodness-of-fit measurement used to compare models fit to the same sample (the lower the AIC, the better).

AIC(model1)
[1] 36.35485

The AIC of model1 is 36.35.

Now we are going to create some distance variables to add to the model. First we’ll do the straight-line distances. We make a matrix called “origin” consisting of start points, which in this case are the long/lat of each house in our dataset.
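The snippet itself was lost in formatting; a minimal sketch, assuming the houses data frame sample carries lon and lat columns from geocoding:

```r
# matrix of start points, one row per house: longitude first, then latitude
origin <- cbind(sample$lon, sample$lat)
```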


We next create a destination – to where we will be measuring the distance. For this example, I decided to measure the distance to a popular shopping mall downtown (why not?). I obtained the long/lat coordinates for the mall by right clicking on it in Google Maps and clicking “whats here?” (also could’ve geocoded in R).
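The destination code was also lost; the mall’s coordinates can be recovered from the output table below, so a sketch:

```r
# end point (longitude, latitude): the downtown shopping mall from Google Maps
destination <- cbind(-76.288018, 36.848950)
```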


Now we use the  spDistsN1 function to calculate the distance. We denote longlat=TRUE so we can get the value from origin to destination in kilometers. The second line just adds this newly created column of distances to our dataset and names it dist.

km <- spDistsN1(origin, destination, longlat=TRUE)
sample$dist <- km

This command I learned from a script on Github – initially committed by Peter Schmiedeskamp – which alerted me to the fact that R was capable of grabbing drive-times from the Google Maps API.  You can learn a great deal from his/their work so give ’em a follow!
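That command was lost in formatting. As a stand-in sketch, ggmap’s mapdist() hits the same Google Maps API and returns exactly the columns shown below (m, km, miles, seconds, minutes, hours); I’m assuming here that the location and locMall columns hold strings Google can resolve:

```r
library(ggmap)

# query Google for driving distance and time from each house to the mall;
# mode can also be "walking" or "bicycling"
drivetimes <- mapdist(from = as.character(sample$location),
                      to   = as.character(sample$locMall),
                      mode = "driving")
```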



location is the column containing each house’s lat/long coordinates, in the following format (36.841287,-76.218922). locMall is a column in my data set with the lat/long coords of the mall in each row. Just to clarify: each cell in this column had the exact same value, while each cell of “location” was different.  Also something amazing: mode can either be “driving,” “walking,” or “bicycling”!

Now let’s look at the results:

                    from                      to     m     km    miles seconds   minutes      hours
1 (36.901373,-76.219024) (36.848950, -76.288018) 10954 10.954 6.806816     986 16.433333 0.27388889
2 (36.868871,-76.243859) (36.848950, -76.288018)  7279  7.279 4.523171     662 11.033333 0.18388889
3 (36.859805,-76.296122) (36.848950, -76.288018)  2101  2.101 1.305561     301  5.016667 0.08361111
4 (36.938692,-76.264474) (36.848950, -76.288018) 12844 12.844 7.981262     934 15.566667 0.25944444

Amazing, right? And we can add this to our sample and rename it “newsample”:
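The code for that step was lost; assuming the drive-time results sit in a data frame called drivetimes (a name I’m making up here), the step is a one-liner:

```r
# bolt the drive-time columns onto the original sample
newsample <- cbind(sample, drivetimes)
```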


Now let’s add these variables to the model and see what happens.
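The model code was lost in formatting; presumably it was model1 with the straight-line distance added, something like:

```r
# model1's regressors plus straight-line distance (dist), fit on newsample
model2 <- lm(ln.ImpSalePrice. ~ TLA + TLA.2 + Age + Age.2 + quality +
               condition + dist, data = newsample)
AIC(model2)
```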

[1] 36.44782

Gah, well, no significant change. Hmm…let’s try the drive-time variable…
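Again the code is missing; a sketch swapping in the drive time (the minutes column from the Google results) for the straight-line distance:

```r
# replace straight-line distance with Google's drive time in minutes
model3 <- lm(ln.ImpSalePrice. ~ TLA + TLA.2 + Age + Age.2 + quality +
               condition + minutes, data = newsample)
AIC(model3)
```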

[1] 36.10303

Hmm…still no dice. Let’s try them together.
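A sketch of the combined model, with both the straight-line distance and the drive time as regressors:

```r
# both distance measures in one model
model4 <- lm(ln.ImpSalePrice. ~ TLA + TLA.2 + Age + Age.2 + quality +
               condition + dist + minutes, data = newsample)
AIC(model4)
```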

[1] 32.97605

Alright! AIC has been reduced by more than 2, which by a common rule of thumb means the two variables together meaningfully improve the model.

Of course this is a grossly reduced model, and would never be used for actual valuation/appraisal purposes, but it does lay elementary ground work for creating distance-based variables, integrating them, and demonstrating their ability to marginally improve models.

Thanks for reading. So to bring back Cap’n Kirk: I think a frontier more ultimate than space, in the modeling sense, is space-time – not Einstein’s, but rather the ‘spatiotemporal’ kind. That will be for another post!



Creating an Interactive Map of Craft Breweries in VA Using the plotly R Package

Well folks, another new year’s resolution down the drain. I was initially shooting for a post each month for 2014. More projects came. Plates were full. Plates were emptied. More plates were filled again. I think I will just alter my resolution to 12 posts this year. That’s a fair compromise with myself, right? That’s what we Americans do. Needless to say, it will likely be a busy last week of December for me.

I’m taking a short break from the previous series to share a great data visualization platform I stumbled upon called plotly. There is even an R package that allows you to feed data directly to their site for further analysis and manipulation. Blew my mind and I had to share. Anyway, check out their site for some mesmerizing graphics and data visualization capabilities!

This post is based off of a guest blog post by Matt Sundquist of plotly on Corey Chivers’ blog bayesianbiologist. I tweaked the code only slightly to accommodate my data and I added a geocoding section. Other than that, they are the masterminds.

Alright so with the obvious boom of craft breweries here in Virginia (and well, across the country), I thought I’d be well-received doing a post on two of my favorite things: geographic data visualization and booze.

First off, in order to harness the great powers of plotly, you must register at https://plot.ly/ for your own account. Next, we install the package that will allow us to connect from R to our fresh, new plotly account.
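The installation snippet was lost; at the time, the plotly R client lived on GitHub (the ropensci/plotly repository, if memory serves), so something like:

```r
# install devtools first if needed:
# install.packages("devtools")
devtools::install_github("ropensci/plotly")
library(plotly)
```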


After loading the packages, we can log in to our plotly account straight from R by typing in our respective username and API key (to obtain your API key, log in to plot.ly via your web browser, click Profile > Edit Profile and you will see your API key).

p <- plotly(username="bobdole", key="abcbaseonme")

For my data set of craft brewery locations in Virginia, I queried a data set of current brewery licensees in the state from the Virginia Department of Alcoholic Beverage Control website. I then removed the 'big guys' (sorry, this bud is not for you) and aggregated the count of breweries by city/town and saved as a .csv file. Now we read in our data:

data = read.csv("C:/breww.csv", header=TRUE)

Matt's data already had location coordinates. Since mine only has the respective city/state, I need to geocode it so R will understand how to plot locations on the map. For this I am using the ever-faithful ggmap package.

We named the sheet "data" when we read it in, and the column that has the city/state of each brewery is called "City". We can now batch geocode each city. The function geocode() returns an m × 2 data frame, where m is the number of rows of data (cities) and the two columns are the longitude (default column name lon) and latitude (default column name lat) of each respective city. We create two new columns in our data set and set them equal to the two columns of the data frame loc we just created.

loc <- geocode(as.character(data$City))
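The assignment lines were lost in formatting; given geocode()'s default column names, they would be:

```r
# append the geocoded coordinates to our data as two new columns
data$lon <- loc$lon
data$lat <- loc$lat
```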

We call the state outlines using the map() function, take its xy coordinates, and assign this as the first trace for plotting the map.

trace1 <- list(x=map("state")$x,
               y=map("state")$y)

We then create the second trace by extracting the longitude and latitude from our data (assigning as x and y plots, respectively). We specify that the size of the bubbles on the map is based on data$No (i.e. bigger bubble, more breweries), which is the column containing the number of breweries in each respective city.

trace2 <- list(x=data$lon,
               y=data$lat,
               marker=list(size=data$No))

Finally, we combine the two traces and send our data to our plotly profile.

response <- p$plotly(trace1,trace2)
url <- response$url
filename <- response$filename

Like magic, running the last code will open your browser and load your fancy new map in the plot.ly interface, ready for you to zoom, crop, and manipulate to your heart's content!

Map Browser Interface

Static shots can also be exported at very high resolutions from the plotly site:

Craft Breweries in Virginia via R & plotly

Maps like this can mislead, though: they often just reflect population, not a higher propensity to consume craft beer – more people in an area (i.e. Richmond, DC, Virginia Beach) means the capacity and demand for more breweries overall. A ‘craft breweries per capita’ map would arguably tell a more interesting story. Thanks for reading!

Presenting Paper @ URISA and IAAO 18th GIS/CAMA Technologies Conference, Feb 24-27, Jacksonville, Florida

I will be attending as well as presenting an original paper at this conference next week. Come say hello if you’ll be there!

“Learning more about Geographically Weighted Regression: Optimal Spatial Weighting Functions Used in Mass Appraisal of Residential Real Estate” by Paul E. Bidanset and John R. Lombard

Shapefile Polygons Plotted on Google Maps Using ggmap in R – Throw some, throw some STATS on that map…(Part 2)

Well it’s been long enough since my last post. Had a few things on my plate (vacation, holidays, another holiday, some more holidays, and quite a lot of research). March is almost here but the good news is that I have plenty of work stored up to start serving out some intuitive approaches for learning R. Speaking of that…

In the hefty amounts of research I’ve been doing lately, I’ve come across many, MANY R-based blogs and tutorials. There are so many fantastic resources out there. But there are also a few not-so-good ones. Some code examples seem to confuse more than clarify. Scrolling to the bottom of a tutorial for a glance at the comments is usually a good way to gauge whether or not the audience received it well (I’ve also noticed that R-learners are much less negative and troll-like than the majority of those who comment on say, well, every other community-based website in the world). A couple tutorials don’t even include code. Unfortunately, I think oftentimes these bloggers have ulterior motives (showing off their technical/statistical/whatever capacity), consequently flushing a graspable, empowering learning experience down the toilet.

With this site, I’m going to continue attempting to hammer on what is actually transpiring in R, ideally without dragging my feet and stagnating the more advanced users, using an intuitive approach so readers understand not just THAT something happens, but HOW and WHY it happens. This hopefully means they remember, are able to reproduce results, and ultimately grow in their learning. So please, keep the feedback coming (even if it is in troll form)!

Alright enough about me. Let’s pick up from where we left off.

For this post, I am going to show you how to plot, or overlay, the polygons of a shapefile on top of a Google Map. The polygons in this example will be of neighborhoods in the city of Baltimore.

The City of Baltimore is a kind of paragon when it comes to a municipality’s dissemination of public data. Rob Mealy’s amazing blog (on R and maps n’ stuff) first tipped me off to this website. I downloaded the neighborhood shapefiles (Neighborhood 2010.zip) from the site and unzipped the file to my C drive.

Now since we are going to be reading shapefiles into R, we need to install a package that is capable of doing so. There are several, but for this example we are going to use rgdal.
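The install/load snippet was lost in formatting; a minimal sketch:

```r
# rgdal wraps GDAL/OGR and can read shapefiles, among many other formats
install.packages("rgdal")
library(rgdal)
```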


Since I unzipped my shapefile data to my C drive, I am going to tell R it is from THERE I will be working. I set this as my working directory with:
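The line itself was lost in formatting; since the shapefile was unzipped to the C drive, it would simply be:

```r
# all reads/writes this session will default to this folder
setwd("C:/")
```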


From now on during this session, R will automatically use this location to retrieve and save files, unless specifically told to do otherwise.

We read in the shapefile with:

Neighborhoods <- readOGR(".","nhood_2010")

We named the shapefile "Neighborhoods" (by typing the name to the left of <-). The first set of quotations in the command is looking for the location of the data. We already set the working directory to C:, so the dot is telling R "slow your roll; you don't need to look any further". The second set of quotations is looking for the name of the layer, which in this case is "nhood_2010".

Now we need to prepare our object so that it may be portrayed on a map. R doesn't know what to do with it in its current form. A few lines of code will transform this caterpillar into a beautiful, map-able butterfly. First, run:

Neighborhoods <- spTransform(Neighborhoods, CRS("+proj=longlat +datum=WGS84"))

spTransform allows us to convert and transform between different mapping projections and datums. This line of code is telling R to convert our Neighborhoods file to longitude/latitude projection and World Geodetic System 1984 datum - a global coordinate (GPS) system used by Google Maps (the initial object was set to a Lambert Conic Conformal projection and a NAD83 datum, as well as a GRS80 ellipsoid). This last bit of information is useful, but you really don't have to know exactly what it means. Just know that there are a bunch of various coordinate systems that die-hard geography nerds have created (for what I'm sure are good reasons), and all you have to do is smile and remember that we're essentially just converting our coordinates into a friendly format for integrating with Google Maps (I'm sure I'm going to get heat from one of those geography nerds for diluting it in this way).

Now the fortify command (from the ggplot2 package) takes all that wonderful spatial data and converts it into a data frame that R understands how to put onto a map.


Neighborhoods <- fortify(Neighborhoods)
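To see what fortify() actually did, peek at the first few rows. The column names shown are what fortify() typically produces for polygon layers; the exact values depend on your shapefile:

head(Neighborhoods)
# a plain data frame with columns along the lines of:
#   long, lat, order, hole, piece, id, group
# geom_polygon() will lean on long, lat, and group in a moment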

Alright meow, we are going to take the map we previously created, BaltimoreMap, and add polygons outlining the neighborhoods from our shapefile. *Side note: I keep the name of the object the same with each transformation I make. This is a preference. As you are learning, you may wish to name each step differently (e.g. BaltimoreMap1, BaltimoreMap2; Neighborhoods1, Neighborhoods2) so you can go back, inspect each transformation, and pinpoint where things went wrong if you receive an error message along the way.*

And now we run the final code:

BaltimoreMap <- BaltimoreMap + geom_polygon(aes(x=long, y=lat, group=group), fill='grey', size=.2,color='green', data=Neighborhoods, alpha=0)

Shapefile Polygons Plotted on Google Maps Using ggmap in R

There we have it. She looks good! Notice in the last command we specified that our data is Neighborhoods. This is important. When we set x=long and y=lat, this isn't just us declaring that we want to use longitude and latitude for our projection; we are telling R that the coordinates for the x (horizontal) and y (vertical) axes of our plot (map) are stored in the columns of our data (Neighborhoods) called 'long' and 'lat', respectively.

Now play around a bit with the various options for fill, size, color, and alpha (which is the level of transparency from 0 to 1, with the level of opaqueness increasing as you approach 1), as well as the various maptypes and zoom levels from part one. Next session we'll plot some values (more examples below). Thanks for reading!
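As a starting point for that experimenting, here's one hypothetical variation on the same layer - a semi-transparent grey fill with a black outline (same data, different aesthetics):

BaltimoreMap + geom_polygon(aes(x=long, y=lat, group=group), fill='grey', size=.5, color='black', data=Neighborhoods, alpha=0.4)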

Shapefile Polygons Plotted on Google Maps Using ggmap in R

All works on this site (spatioanalytics.com) are subject to copyright (all rights reserved) by Paul Bidanset, 2013-2014. All that is published is my own and does not represent my employers or affiliated institutions.

Throw some, throw some STATS on that map…(Part 1)

R is a very powerful, free (and fun) software package that allows you to do pretty much anything you could ever want. Someone told me there's even code that allows you to order pizza (spoiler alert: you actually cannot order pizza using R :( ). But if you're not hungry, the statistical capabilities are astounding. Some people hate code; their brains shut down and they get sick when they look at it, subsequently falling to the floor in the fetal position. I used to be that guy, but I have since stood up, gained composure, sat back down, and developed a passion for statistical programming. I hope to teach the R language with some intuition in order to keep the faint-of-heart vertical and well.

Alright, so to start this series, I'm going to lay the foundation for a Baltimore, MD real estate analysis and demonstrate some extremely valuable spatial and statistical functions of R. So without too much blabbing, let's jump in…

For those of you completely new to R: R's functionality is extended through packages, which you download individually to perform different tasks. People use R for so many different data-related reasons that bundling all or most of the packages into the base installation would make it HUGE, so each one is housed on servers around the world and can be downloaded on demand. You only need to install a package once; after that, it lives on your machine and you simply load it in each future session.

For the initial map creation, we need to install the following (click Packages -> Install Package(s); holding Ctrl allows you to select multiple packages at a time):
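If you prefer the console to the menus, install.packages() does the same thing. ggmap and RgoogleMaps are the two packages named in this post (the original list may have included a few more; dependencies come along automatically):

install.packages(c("ggmap", "RgoogleMaps"))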


Since these are now installed on our machine, we simply load these packages in each session we use them. Loading just ggmap and RgoogleMaps will automatically load the others we just downloaded. In each session, open a script, and once you've written out your code, highlight it and right-click "Run Line or Selection," or just press Ctrl+R. A quick note: unlike some other programming languages, such as SAS and SQL, R is case sensitive.

To load them, run:
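Based on the two packages named above, the load commands would be:

library(ggmap)           # pulls in ggplot2 and other dependencies
library(RgoogleMaps)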


We will store the map center in an object called CenterOfMap. Anything to the left of "<-" in R is the name, and anything to the right is the specified contents of the object. For the map we're making, the shape of Baltimore behaves pretty well, so we can just type "Baltimore, MD" inside the geocode() command (R is smart, and that's all it takes).

CenterOfMap <- geocode("Baltimore, MD")

Not all areas are as symmetrically well behaved as Baltimore. In those cases, my preferred way to display an area's entirety is to enter the lat/long coordinates of my desired center. For this, I go to Google Maps, find the area I wish to map, right-click on my desired center, click "What's here?", and take the lat/long coordinates that populate in the search bar above. For Baltimore, I'm going to click just north of the harbor.

The code would then look like this:

CenterOfMap <- geocode("39.299768,-76.614929")

Now that we've told R where the center of our map will be, let's make a map! Remember, left of the "<-" will be our name; I'd say naming the map 'BaltimoreMap' will do.

Baltimore <- get_map(c(lon=CenterOfMap$lon, lat=CenterOfMap$lat),zoom = 12, maptype = "terrain", source = "google")
BaltimoreMap <- ggmap(Baltimore)

Alright, to explain what just happened: get_map() is the command that sets the map's parameters and lays down its foundation. I'm going to retype the code with what will hopefully explain it more intuitively.

get_map(c(lon = 'the longitude coordinate of the CenterOfMap object we created; the dollar sign means what follows is part of what comes before it, e.g. ExcelSpreadsheet$ColumnA',
          lat = 'the latitude coordinate of the CenterOfMap object we created'),
        zoom = 'the zoom level of the map display; play around with this and see how it changes moving from, say, 5 to 25',
        maptype = 'we assigned "terrain", but there are others to suit your tastes and preferences; more on this later',
        source = 'we assigned "google", but there are other agents who provide mapping data')

And the grand unveiling of the first map...

Now that is one good lookin' map. Just a few lines of code, too.

I'll show you some other ways to manipulate it. I often set the map to black & white so the contrast (or lack thereof) of the values plotted later is more defined. I prefer Easter bunny/night club/glow-in-the-dark type spectrums, and so I usually plot on the following:

Baltimore <- get_map(c(lon=CenterOfMap$lon, lat=CenterOfMap$lat),zoom = 12, maptype = "toner", source = "stamen")
BaltimoreMap <- ggmap(Baltimore)

We just set the night sky for the meteor shower. Notice that all we did was change maptype from "terrain" to "toner," and source from "google" to "stamen."

A few other examples:

Baltimore <- get_map(c(lon=CenterOfMap$lon, lat=CenterOfMap$lat),zoom = 12,source = "osm")
BaltimoreMap <- ggmap(Baltimore)

This map looks great but it's pretty busy - probably not the best to use if you will be plotting a colorful array of values later.

Here's a fairly standard looking one, similar to Google terrain we covered above.

Baltimore <- get_map(c(lon=CenterOfMap$lon, lat=CenterOfMap$lat),zoom=12)
BaltimoreMap <- ggmap(Baltimore, extent="normal")

And one for the hipsters...

Baltimore <- get_map(c(lon=CenterOfMap$lon, lat=CenterOfMap$lat),zoom = 12, maptype = "watercolor",source = "stamen")
BaltimoreMap <- ggmap(Baltimore)
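If you'd like to keep any of these maps as image files, ggsave() from ggplot2 works on the object ggmap() returns just like any other ggplot (the file name here is, of course, arbitrary):

ggsave("BaltimoreMap.png", BaltimoreMap, width = 8, height = 8)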

George Washington and the cartographers of yesteryear would be doing cartwheels if they could see this now. The upcoming installments in this series will cover:

1) Implementing Shapefiles and GIS Data
2) Plotting Statistics and other Relationship Variables on the Maps
3) Analyzing Real Estate Data and Patterns of Residential Crime and Housing Prices

Thanks for reading this! If you have any problems with the code or questions whatsoever, please shoot me an email (pbidanset[@]gmail.com) or leave a comment below and I'll get back to you as my schedule permits (should be quickly). Cheers.