Geography 168 - Intrm GIS
Emilie Barnett's Geography Lab blog
Wednesday, March 16, 2011
Final Project - California Hazard Map
Introduction:
These past few years it has felt like the Earth is very unhappy with us humans, what with the catastrophic earthquakes in Haiti and Chile last year, just plain bad ones in Italy, Baja and China, and now the devastating earthquake/tsunami crisis in Japan. Fires have become an increasing concern as well, as global warming heats and dries out parts of the world. In 2007 we had the series of wildfires in Southern California that burned thousands of properties, and then the Station Fire in 2009. Last summer hundreds of wildfires broke out across Russia, causing thousands of people to die from smog poisoning.
With all that in mind, I (and, I suspect, many other students) wanted to create a hazard map of California that takes both fire and earthquakes into account. In addition, I wanted to focus in on LA County, my home, and determine the most and least risky places to live. To do this I also needed to incorporate population density into the risk assessment: the greater the density, the greater the likely damage.
As denizens of California, especially Southern California, we are intimately familiar with the risks posed by both earthquakes and fires. Fortunately, thanks to better disaster preparation, building codes, and luck, we have mostly avoided true catastrophe (we took the lessons of the Great Earthquake of 1906 to heart). Take for example the Loma Prieta earthquake of 1989, which at a 6.9 magnitude killed 64 people. Contrast this with the earthquake of similar magnitude that struck Armenia ten months earlier (where there were no earthquake-proof building codes), which had a death toll of over 25,000.
Despite our level of preparedness, the possibility of disaster always hangs over us. For as long as I can remember, my house and schools have been fully stocked with earthquake kits, fire and earthquake drills are routinely mandated, and no matter where I go hiking, there always seems to be a big patch of scorched earth and dead trees left over from a recent fire. The fact is that we live in an area with a dry climate prone to fires that also happens to be situated on top of several fault lines. The question is not if a big earthquake or fire will come, but when.
Methods:
Although the concept of my map was pretty simple, there was quite a bit of spatial analysis and various other steps involved, so I will attempt to be as clear as possible in my methodology.
Part one: Hazard Map for the whole state of California, taking into account risk from Earthquake, fire and population density.
1. Creating a classified hazard map of earthquakes
Data needed: state map plus county boundaries; earthquake point layer
a. Step 1: Input earthquake epicenter coordinates and magnitudes into an Excel doc. My list was of significant earthquakes since 1900, meaning they had a magnitude of 6.5 or higher or caused over $200,000 worth of damage.
b. Step 2: Add X, Y data to state map as a point feature class.
c. Step 3: Convert points to raster data using spatial interpolation based on magnitude. This way the map would appropriately reflect the danger associated with each earthquake depending on distance to the epicenter. I decided to go with the kriging method, because the tutorial we used in class for the interpolation lab says that kriging “assumes that the distance or direction between sample points reflects a spatial correlation that can be used to explain variance in the surface.” This sounds appropriate for earthquakes, as the factors that caused one earthquake are likely to affect the ground near to it.
d. Step 4: Reclassify the new raster. I chose for there to be five classifications, somewhat arbitrarily, because I thought it wasn’t too many or too few to clearly reflect the analysis on the map.
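The reclassification step can be sketched in plain Python (illustrative only; the actual work used the ArcGIS Reclassify tool, and the breakpoints and raster values below are hypothetical):

```python
# Illustrative sketch of "reclassify into five classes": map each raw raster
# value to a class from 1 (low) to 5 (high) using made-up breakpoints.

def reclassify(value, breaks=(2.0, 4.0, 6.0, 8.0)):
    """Return a class 1-5 depending on how many breakpoints the value exceeds."""
    cls = 1
    for b in breaks:
        if value > b:
            cls += 1
    return cls

# A tiny "raster" of interpolated magnitudes (hypothetical values).
raster = [[1.5, 3.2, 7.8],
          [4.1, 6.6, 9.0]]

reclassed = [[reclassify(v) for v in row] for row in raster]
print(reclassed)  # [[1, 2, 4], [3, 4, 5]]
```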
2. Creating a classified fire hazard map
Data needed: fire hazard layer (vector)
a. Step 1: Convert the fire hazard layer, which was a vector shapefile, into raster format, based on the hazard level attribute, using the spatial analyst tool.
b. Step 2: Do a reclassification as in the earthquake layer.
3. Creating a population density map to assess areas of maximum potential damage
Data needed: Census population data
a. Step 1: Convert the shapefile to raster based on population density
b. Reclassify
4. Create two final hazard maps, one displaying the regions according to likelihood of earthquake or fire, and the other incorporating the population data to assess areas at risk of high damage.
a. Step 1: For the earthquake/fire map, do a simple raster addition, adding the two reclassifications together. Label this map: Hazard Risk
b. Step 2: Do the same, but this time add the population density reclassification to the other two. Label this map: Hazard Impact, since population density mostly affects how bad the damage will be.
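The raster addition in these two steps boils down to cell-by-cell sums of the reclassified grids. A minimal plain-Python sketch (not the raster calculator itself), with made-up class values standing in for the real layers:

```python
# Hypothetical reclassified layers, each cell a class score from 1-5.
quake   = [[1, 3], [5, 2]]  # earthquake hazard
fire    = [[2, 4], [1, 5]]  # fire hazard
density = [[1, 2], [4, 3]]  # population density

def raster_add(*layers):
    """Cell-wise sum of equally sized grids."""
    return [[sum(vals) for vals in zip(*rows)] for rows in zip(*layers)]

hazard_risk = raster_add(quake, fire)             # likelihood map
hazard_impact = raster_add(quake, fire, density)  # damage-potential map
print(hazard_risk)    # [[3, 7], [6, 7]]
print(hazard_impact)  # [[4, 9], [10, 10]]
```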
Part two: focus in on LA County and determine the cities both at the highest and lowest risk of a damaging disaster
1. Create LA County layer with damage assessment classification
a. Step 1: I did a simple select by attribute (name = LA County) to highlight the parcel, then extracted it into a new layer.
b. Step 2: Created a spatial analyst mask using the new county layer as my extent, and then did an empty raster calculation on the risk classification map (i.e. I clicked evaluate without any specifications). This created a copy of the map with the county extent.
2. Convert the raster to vector, so that I could extract parcels based on attribute in the next step
a. Step 1: Used the simple raster-to-vector tool in the Spatial Analyst toolbar, with the transformation based on risk level. The vector shapefile it created was somewhat rough looking, but that is to be expected considering it was trying to turn square pixels into amorphous polygons.
3. Create two maps: one showing the areas of LA County with highest risk, and one showing areas with lowest risk.
a. Step 1: Select features by attribute, the attribute being the three highest risk classes.
b. Step 2: export those into a new layer, so I could highlight them with different symbology in final product
c. Step 3-4: do the same with the low-risk parcels
Part three: Assembling it all into a coherent final product:
1. One map showing the two full-state extent hazard and damage classifications
2. One map breaking the classifications down into the three component maps: Earthquake point interpolation, fire hazard and population density
3. One map showing the classification for LA County, and the areas of high and low risk.
Results:
Looking at the hazard map for earthquakes and fires, we can see that most of the areas with high risk values are along the coast and up in Northern California, with the highest being in Ventura County and the Bay Area. This is consistent with what we know about where the fault lines run and where fires tend to occur. The biggest area with practically no risk is in the bottom of LA County. This makes little sense to me, as I know there were three earthquake epicenters in the area, and we get fires; perhaps there was an error in the data? When I look at the map that includes population density, however, the results make more sense, as the highest risk values are clustered around population centers, with the sparsely populated areas getting lower values. The surprising area to me is the line of medium/high and high risk values stretching down the middle of Northern California; there must be a very high risk of fire or earthquake there. To examine further we must look at the maps of each individual risk.
In terms of the frequency of risk levels (i.e., which risk levels are most prevalent), here is a graph showing the distribution. While this chart doesn't give us real-world information, such as which cities these classes cover, it does tell us that the majority of California is at a relatively low risk level. But if you happen to be in the area those high-risk pixels cover, watch out!

The earthquake map shows the areas of highest risk to be just north of Ventura County, the Bay Area, and the Humboldt region in Northern California. The Bay Area and Ventura County results are unsurprising considering they sit on faults. The Humboldt risk value is explained by a series of offshore earthquakes this past century. The areas of lowest risk are certain areas in central California and in extreme Southern California.
The fire map is slightly more interesting, for it reveals the areas of highest risk to be the counties touching the coastline and the ring of mountains around the Central Valley. It is this last part that explains why the spine threading through Northern California has such a high hazard assessment: though it is sparsely populated, it has a high enough fire risk to make it stand out on the map.
Finally, the population density map is pretty clear. We all know that the Bay Area and the LA/San Diego counties have the highest densities, and the areas bordering Nevada and Arizona the lowest.
Moving on to the LA County close-up, one thing jumps out at me immediately: not all the classes are represented; the two highest are absent altogether. This is good news for us LA dwellers. However, the bad news is that west LA (as in Santa Monica) and north LA (as in Simi Valley) are at the most risk of anywhere. This is mainly due to fire hazard, although of course everywhere in LA has a high population density, adding to the score. As for the areas of lowest risk, they appear to be the San Fernando Valley, the Lancaster/Palmdale area, and parts of Long Beach and Orange County. This is unfortunate news for us Westwooders.
Conclusion:
I feel that this was a very worthwhile project altogether. There were, of course, some flaws. For example, I'm not sure whether interpolating historic earthquake points is a totally legitimate way of calculating earthquake risk, but it was all I could think to do. I also really would have liked to include some other important risks we face here in California, such as flooding and landslides, but I just couldn't find the data. Obviously, if I were really in charge of putting together a hazard map for California, I could spend more time looking for quality data, and even spend money to get it. Accuracy aside, however, the project was a good way to reinforce all the Spatial Analyst tools we have learned this quarter, as well as other tools such as querying, projection changing, extraction, and adding X, Y data points. In addition, I spent a long time putting together my three maps using all the display tricks I know, and I had fun doing the project.
Sources:
For California county boundaries:
Los Angeles County GIS Data Portal - http://egis3.lacounty.gov/dataportal/?category_name=boundaries_political
For Earthquake points:
USGS, by way of the CA State Department of Conservation
http://www.conservation.ca.gov/cgs/rghm/quakes/Pages/eq_chron.aspx
For the Fire Hazard Map:
California Department of Forestry and Fire Protection
http://frap.cdf.ca.gov/data/frapgisdata/select.asp?theme=5
For the Population Density Map:
California Department of Forestry and Fire Protection
http://frap.cdf.ca.gov/data/frapgisdata/select.asp?theme=2
For Earthquake information:
“Earthquake hazards and risks”, adapted from lecture notes of Prof. Stephen A. Nelson Tulane University
http://earthsci.org/processes/struct/equake2/EQHazardsRisks.html
Tuesday, March 1, 2011
Spatial Interpolation
This lab was about downloading a point dataset with precipitation information and then creating different raster surface models that visually represent the spread and concentration of the values. Areas that are strongly red, for example, get a lot of rain. There were two datasets: one for this year's season total, and the other for the season normal. The final step was to create two difference maps, one for each method of interpolation.
The first model I chose was inverse distance weighted (IDW), which calculates a surface model using sample points. There were only a couple of minor differences between the maps I got for the season total and normal: an area in the upper right corner that was drier than expected, and an area in the middle right that was wetter than expected. I'm not sure that IDW was the best method to use for this specific lab, because I was only going off of 65 or so points spread over the entirety of LA County. The lab PDF about surface interpolation said that this method works best when there are enough points to create a truly accurate reading off the samples. As you can see, the two maps aren't very detailed.
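The idea behind IDW can be shown in a few lines of plain Python: each estimated cell is a weighted average of the sample points, with weights falling off as one over distance raised to a power. This is only a sketch of the method, not the ArcGIS tool, and the gauge coordinates and values are made up:

```python
# Minimal inverse-distance-weighted (IDW) estimate at a point, from
# (x, y, value) samples. Weights are 1 / d**power.

def idw(x, y, samples, power=2):
    """Estimate a value at (x, y) as an inverse-distance-weighted average."""
    num = den = 0.0
    for px, py, v in samples:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return v  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# Two fake precipitation readings at (0, 0) and (4, 0).
rain_gauges = [(0, 0, 10.0), (4, 0, 20.0)]
print(idw(2, 0, rain_gauges))  # midpoint -> 15.0
print(idw(0, 0, rain_gauges))  # on a gauge -> 10.0
```

With only 65 or so gauges over all of LA County, every estimated cell ends up dominated by a handful of distant points, which is why the resulting surfaces look so smooth and undetailed.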
The second model I used was spline. This one minimizes surface curvature, ending up with a smooth surface that passes through all the input points. I think this worked a little better on these data points, and we ended up with two maps that give a slightly more informative spread than the previous ones. We can easily see which areas received less or more rain than was expected.
For the maps measuring the changes in precipitation between expected and actual, it was a simple matter of using the raster calculator to subtract one from the other. I chose to apply an absolute value function, so that the end result would just show the areas where there was a big change and where there was almost none. This actually worked pretty well with both models, because both show similar areas of big change.
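The difference-map calculation is just a cell-by-cell absolute subtraction; a plain-Python sketch with hypothetical precipitation grids (the real step used the raster calculator):

```python
# Hypothetical "actual" and "normal" precipitation surfaces.
actual = [[12.0, 8.0], [5.0, 20.0]]
normal = [[10.0, 9.5], [5.0, 14.0]]

# Absolute cell-by-cell difference: big numbers mean big change,
# zeros mean the season matched the normal.
change = [[abs(a - n) for a, n in zip(ra, rn)]
          for ra, rn in zip(actual, normal)]
print(change)  # [[2.0, 1.5], [0.0, 6.0]]
```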
Tuesday, February 22, 2011
Lab 6 - Fire Hazard Analysis
In this lab we had to create a fire hazard map for the area where the Station Fire broke out last year. The first step was to download all the necessary data from the internet: a digital elevation model (DEM) of the LA County area, land cover/fuel information, and the Station Fire perimeter. I got the DEM from the USGS Seamless website, the land cover info from the forestry website, and the fire perimeter from our old lab in GIS 7. Once I had all that I could start on my analysis.
First I created a hillshade layer out of the DEM, not really because I needed it for the analysis, but because it makes the final result look better and more visually clear. Then I created a slope model out of the DEM. The slope model is necessary because the steepness of a slope affects how at-risk an area is for fires. The next step was to reclassify the slope categories so that I didn't have a million unique classes confusing the picture. I decided that five was a good number since it matches the number of classes for fuel cover.
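A rough sketch of what a slope model computes, in plain Python rather than the Spatial Analyst Slope tool (the real tool fits a plane to a 3x3 neighborhood; this simplified version just takes the steepest drop to a 4-neighbor). The DEM values and cell size are made up:

```python
CELL = 30.0  # assumed cell size in meters

def slope_percent(dem, r, c):
    """Max elevation change to the 4 neighbors of cell (r, c), as percent slope."""
    here = dem[r][c]
    diffs = [abs(here - dem[r + dr][c + dc])
             for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))]
    return 100.0 * max(diffs) / CELL

# A tiny hypothetical DEM (elevations in meters).
dem = [[100, 100, 100],
       [100, 103, 130],
       [100, 100, 100]]

# Steepest neighbor differs by 27 m over a 30 m cell -> 90% slope.
print(slope_percent(dem, 1, 1))  # 90.0
```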
Next I did a similar procedure on the fuel cover. I reclassified the data according to the standards provided in the tutorial. I looked at the metadata provided along with the land data, which explained what type of land cover each class corresponded to, then referenced the tutorial to see what new classification each should get.
The final step was to do some raster addition to combine the two reclassified layers into one. The final slope/fuel layer is the one that is really important, because by adding up the data value of each pixel (according to what classes they were in the two base layers), you get a final set of class numbers that are easily identifiable as high or low risk. And when you add in the fire perimeter, you can easily see why the station fire broke out where it did: That area is extremely high risk!
Finally, it was just a simple matter of giving everything appropriate symbology and laying it out in an attractive, professional manner.
Monday, February 14, 2011
Lab 5 - Spatial Analysis
Emilie Barnett
This week’s lab focused on teaching us various aspects of spatial analysis, which, as I understand it, is one of the most valuable functionalities of GIS. The project was to find the most suitable areas in a fictional county in Montana to build a new landfill. When you build a landfill you have to take multiple factors into account, such as the slope of the terrain (obviously you can’t build a landfill on a steep slope), soil drainage, distance to streams and rivers, what kind of land it is, and distance to already existing landfills.
Although the information we used in the lab is made up, this is a very real issue, as we can see in the article about the Kettleman City landfill. Kettleman City may be affected by its proximity to California’s largest toxic landfill, with increased rates of birth defects, which is serious.
Anyway, the general idea of the lab was to perform whatever analysis was needed (slope, buffers, etc.), and then reclassify the results into only five classes, with each class assigned a weight. A 1 means that all the land in that area is highly unsuitable, and a 5 means that it is highly suitable. The final analysis was to add up all the individual criteria using their classes and get one last map that shows the same thing (high numbers = good, low numbers = bad) but for all factors involved. Therefore an area with a really high number, like 23 (the highest possible), was over four kilometers from a stream, an appropriate distance from open landfills, on good land for building landfills that is also flat, and has good drainage qualities.
My final map is weighted according to some fake numbers supplied by the tutorial. This means that instead of just adding up each category, you multiply each one by a decimal that is greater than 0 and less than 1, and all the decimals have to add up to one. So, for example, my final calculation looked like this: (([Reclass2 of Coverclass] * .3) + ([reclass of sl_dist] * .3) + ([reclass of slope of elev] * .2) + ([Reclass of soildrain] * .1) + ([Reclass of Stream Buffers] * .1)) * 5, instead of like this: [Reclass2 of Coverclass] + [reclass of sl_dist] + [reclass of slope of elev] + [Reclass of soildrain] + [Reclass of Stream Buffers]. This is just a way to give different categories different weights of importance, because in real life you might care a little more that the site is an appropriate distance from a water source than whether the land is perfectly flat, so that a situation similar to the one in Kettleman City doesn’t occur.
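That weighted calculation can be sketched in plain Python for a single cell; the layer names and weights mirror the tutorial's formula, and the per-cell class scores here are hypothetical:

```python
# Tutorial-supplied weights for each reclassified layer; they must sum to 1.
weights = {
    "coverclass": 0.3,
    "sl_dist":    0.3,
    "slope":      0.2,
    "soildrain":  0.1,
    "streambuf":  0.1,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Hypothetical reclassified scores (1-5) for one cell of each layer.
cell = {"coverclass": 5, "sl_dist": 4, "slope": 3, "soildrain": 5, "streambuf": 2}

# Weighted sum, rescaled by 5 as in the tutorial's formula
# (rounded to sidestep floating-point noise).
score = round(5 * sum(weights[k] * cell[k] for k in weights), 2)
print(score)  # 20.0
```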
In all of the maps I am displaying, the darker the color, the more suitable that area is, so in the final map you can easily tell which areas correspond to the best land.
This lab was pretty cool and interesting, because we are learning how to do analysis that is extremely valuable and applicable to real life situations. This is what people who work in GIS actually have to do for their jobs.
Wednesday, February 2, 2011
Written stuff for post
Analysis part:
I agree somewhat with the city ordinance, because as my map shows, there are still plenty of dispensaries in LA outside the 1000 ft buffer around schools, parks and libraries. I think that people have the right to decide if they want their kids to be constantly exposed to weed. And it is true that weed dispensaries have been popping up all over the place; they are everywhere! For my project, I couldn't even geocode all of them in LA; I had to limit myself to 86!
The issue of weed is a very sensitive one, and very relevant to us here in California, the "weed capital" of the country. I personally don't see anything wrong with weed, and in fact believe that in general it is a lot safer than alcohol. However, there are many who are adamantly opposed to weed, as it is illegal, and think that it attracts criminals, etc. These people should be able to send their kids to the park and to school without worrying they will be overexposed.
However, perhaps 1000 ft is a little extreme? These weed dispensaries are very discreet (there are some in Westwood, and they usually have shades on the windows and don't smell like weed from the outside), and the weed industry is a very profitable one for the state of California. Therefore I would propose limiting the buffer to 500 ft, so as not to force more dispensaries to close.
My map is a closeup of the area where the highest concentration of weed dispensaries are, west/downtown LA.