Field Survey
Exploring Surveying Techniques and Technologies: (3)
1. Community Garden Sampling and Drone Work
Reference Map: Locator map of the community garden project location
Introduction:
This assignment aimed to collect survey data from a community garden in Eau Claire, Wisconsin, while introducing students to a variety of new data collection tools. The tools used included a survey-grade GPS, a thermometer, a pH reader, and a TDR probe for measuring the volumetric water content of soils. A later drone survey of the landscape contributed additional data to the overall collection.
Methods:
For the purposes of this lab, the class met at the garden on April 26th and collaborated as a single, large group to collect the various types of data needed within the community garden. Sub-groups of students were formed so that each group was responsible for a single data collection tool. Groups were encouraged to trade tools with other groups throughout the session so that each sub-team could become familiar with all of the data collection methods being used.
The first data collection tool used in the community garden survey was a dual-frequency, survey-grade GPS, shown in Figure 1, for recording the precise locations where data were collected. Each of the points was marked with an orange flag. The device recorded 30 position readings at each of the first 10 data points, a number that was shortened to 10 readings per point after survey point 10 to speed up the collection process.
Figure 2: pH Measuring Tool
Figure 3: TDR Probe Measuring Water Content in Soils
The next tool was a pH probe, which measured the acidity of the soil at each surveyed point. A sample of soil from each data point was scooped into a container and diluted with water so that the probe could be inserted to measure the pH of the soil in that area. Figure 2 provides an image of the tool. The third tool was the Time-Domain Reflectometry (TDR) probe, which sends electrical pulses into the soil to measure each collection point's volumetric water content as a percentage. A picture of this tool is provided in Figure 3.
The last group used a standard thermometer to measure the temperature of the soil at each of the marked survey points. Given the common use of the tool, a photo is not provided.
The resulting measurements were entered into the survey-grade GPS at each surveyed point, storing the field data in its attribute table. Once all points had been collected with measurements from each of these tools, the data was ready to be viewed and manipulated in ArcGIS. Since the project was a class collaboration, Professor Hupy offered to combine the data into a single spreadsheet for use by the rest of the class.
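For readers curious about what that combination step might look like off the GPS unit, here is a minimal Python sketch (not the actual class workflow) that merges the per-tool readings into one table keyed by survey point number; the file and column names are hypothetical.

```python
# Merge per-tool field readings into one table for ArcGIS.
# File and column names below are hypothetical placeholders.
import pandas as pd

gps = pd.read_csv("gps_points.csv")          # point_id, lat, lon, elevation
ph = pd.read_csv("ph_readings.csv")          # point_id, ph
moisture = pd.read_csv("tdr_readings.csv")   # point_id, vwc_percent
temps = pd.read_csv("soil_temps.csv")        # point_id, temp_c

combined = (
    gps.merge(ph, on="point_id")
       .merge(moisture, on="point_id")
       .merge(temps, on="point_id")
)

# Write a single table that can be added to ArcGIS as XY data.
combined.to_csv("garden_survey_combined.csv", index=False)
```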
-----------
Figure 4: Ground Control Point
Figure 5: Survey Grade GPS marking the location of the GCP
On day two of the project, students prepared the area for a flyover mission using an M600 UAS. Students laid out the drone's mission route by setting up ground control points (GCPs) and collecting their x,y location data for input into the drone's flight plan. A sample ground control point is pictured in Figure 4, and Figure 5 shows its location being recorded with the same survey-grade GPS unit used in part one of the lab.
Figure 6: M600 Drone
With the GCPs laid out and mapped, the drone was ready for its flyover mission, taking pictures and collecting data over the surveyed area. Figure 6 shows the M600 UAS used in flight, priced at around $13,000. Once the flyover was complete, the location points collected at each of the GCPs were imported into ArcGIS to tie the photos taken by the drone into a cohesive mosaic of the surveyed area.
Finally, using the outputs of the tools detailed in part one of the assignment in combination with the final mosaic of the landscape surveyed in part two, a variety of maps could be made to symbolize the significance of the collected data.
Results:
The first of five maps generated from the collected data was a locator map of the Community Garden Project within Eau Claire, shown in the Reference Map in the Introduction. Points where data was collected within the garden are marked in yellow, paired with a basemap at the bottom-most layer and overlaid by the mosaic generated from the flyover to provide an updated image of what the site looked like at the time of data collection. While there may be a few feet of error between the mosaic and the underlying basemap, the two layers appear to align fairly well. Next, a series of four interpolation maps was generated to showcase the distribution of the data collected at the various point locations.
The first data symbolized was the elevation of the site, as shown in Map 1. The site was relatively flat, varying by only about one meter in elevation between any two given points. Elevation tended to decrease slightly to the east, which could suggest that drainage patterns trail to the east as well.
The next set of collected data accounted for the ground temperature of the analyzed soils. Map 2 shows cooler temperatures recorded on the western side of the garden and slightly warmer temperatures on the eastern half. Temperatures ranged between 11.6 and 13.1 degrees Celsius, fluctuating by only 1.5 degrees overall.
Next, moisture content data was interpolated for analysis, resulting in Map 3. As suggested by the elevation map, the percentage of water content is lower to the northwest and higher to the southeast, most likely following the area's larger-scale drainage pattern. Moisture content had the widest value range of all the explored variables, with up to a 13% difference between collected data points!
Finally, Map 4 accounted for the pH levels read by the probe in the soil. In this map, the soil appears more acidic in the northwest and more basic in the southeast. This could have implications for plant growth in these areas, as the optimal pH level for plant growth averages around 6.5. The higher levels found in the southeastern portion of the garden could threaten plants attempting to grow there.
Conclusion:
This project introduced students to a variety of new tools that can be used for data collection out in the field. Upon conclusion of the project, students gained experience working with the following field data collection tools: a survey-grade GPS, a thermometer, a pH reader, and a TDR probe. It also exposed students to the benefits of drone use in the field and showed how the data can later be compiled to reflect various details of a given area.
2. Tree Species Sampling
Introduction:
Figure 1: Survey Stations along Putnam Trail
While surveying with a grid-based system can be useful for mapping smaller plots, technological advancements in GPS have made field surveying an easier, faster, and more accurate way of obtaining and displaying field data at a variety of scales. However, the use of any type of technological equipment comes with the risk of technological failure while the surveyor is out in the field. Still, the job must get done! For this week's lab, students used a variety of tools at several locations along Putnam Trail to survey a series of select trees using the old-school Distance-Azimuth survey method. This method requires measuring the distance and compass bearing between several surveyed points (10 trees) and one pin-point location tied to a latitude and longitude coordinate pair (the data collection station), to be used later for mapping. The surveyed area and stations are pictured in the reference map (Figure 1).
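Before getting into the methods, it may help to see the geometry the Distance-Azimuth method relies on. The short Python sketch below converts a single distance/azimuth reading into east/north offsets from the station; it assumes distances in meters and azimuths in degrees measured clockwise from north, and is only an illustration, not part of the lab procedure.

```python
import math

def offset_from_station(distance_m, azimuth_deg):
    """Convert a distance/azimuth reading into east/north offsets in meters.

    Azimuth is assumed to be measured in degrees clockwise from true north.
    """
    az = math.radians(azimuth_deg)
    east = distance_m * math.sin(az)   # x offset from the station
    north = distance_m * math.cos(az)  # y offset from the station
    return east, north

# Example: a tree 25 m away at an azimuth of 120 degrees
east, north = offset_from_station(25.0, 120.0)
print(f"offset: {east:.1f} m east, {north:.1f} m north")
```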
Methods:
To best familiarize students with a variety of equipment that could be used to obtain distance and azimuth data in the field, students divided into three groups and worked as a team to collect 10 data points from each of the three stations, using the various tools provided as they rotated through.
Figure 2: Image of TruPulse 360
At Station 1, a TruPulse 360 laser rangefinder was used to determine the distance between the surveying pin-point and its selected surrounding data points, as seen in Figure 2. Other necessary tools included a standard compass for measuring the azimuth to each surveyed tree and a basic GPS unit to record the coordinates of the central point, or data collector's position. At this station, the latitude and longitude pair read 44.796 deg. N and -91.5016 deg. W. While collecting the data, students alternated roles operating the TruPulse 360 and compass, measuring the diameter of selected trees, and recording the data until reaching the collection total of 10 points. This method was especially accurate in measuring the distance between the collection point and each tree. Also noteworthy is that the measurement units displayed in the reading scope could easily be changed in the equipment's settings to read in imperial units, metric units (as used here), or degrees. One challenge that arose with this method, however, was the sensitivity of the tool's reading. In some instances, the tool measured the distance to a small branch that intercepted the scope on the way to the intended tree. For this reason, many of the data points collected at this station had to be double- and triple-checked to confirm the distance to the intended target.
Figure 3: Measuring Diameter of Trees at Breast Height
Station 2 was the most time consuming because, aside from the GPS needed to record the station's coordinate point, this station made do without technology entirely. Instead, students used a tape measure and compass to determine distances between the data collection point at 44.79585 deg. N and -91.50033 deg. W and its surrounding trees. For this reason, the group did not travel nearly as far for plotted points, and the second station appears to be the most clustered data collection group of the three methods. While this method can be especially handy in case of equipment failure, it was also the least accurate of the three. Again, students alternated roles holding and leading the tape measure, reading the compass azimuth, measuring tree diameters, and recording the resulting data. Figure 3 shows a group mate taking the circumference of a tree at standard breast height in order to find its diameter (the diameter is then the circumference divided by pi). One complication the group faced in gathering data was the struggle to pick trees far enough away to appear significant when plotted on a map, but not so far that another tree would block the tape measure's route, causing a curve in the tape and skewing the reading.
Survey Station 3 used a range reader and receiver combination to record distances. The data collector held the range reader at 44.795383 deg. N and -91.499388 deg. W while the person measuring tree diameters held the receiving end of the pair. The reader would measure and display the distance between itself and the signal picked up from the receiving device. One challenge with this method was the reader's occasional inability to pick up the signal sent out by the receiver. This usually required only minor positioning adjustments so that the signal could be sent without interruption.
Once all of the data had been collected in the field, it was necessary to format the data into an Excel file and normalize it into a format compatible with ArcMap. A sample snapshot of the final data sheet is displayed below in Figure 4.
Figure 4: Normalized Data Table in Excel
Figure 5: "Bearing Distance to Line" and "Feature Vertices To Points" Commands Location
After creating the table, the routes and distances measured between the three central data collection points and their corresponding trees were imported and plotted onto the map using the Bearing Distance to Line command in ArcToolbox, found under Data Management >> Features as shown in Figure 5. Figure 5 also references the location of the tool used to place points where the surveyed trees stood at the end of each measured distance line. This tool can be found in the same section of ArcToolbox, labeled Feature Vertices To Points. Finally, a topographic image was placed beneath the resulting plotted points for reference. Figure 6 illustrates the results generated after using both tools, inlaid over a basemap.
Figure 6: Tool's Resulting Map Image and Plotted Data
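For those who prefer to script this step, the following arcpy sketch runs the same two geoprocessing tools described above. The workspace path, table name, and field names are assumptions for illustration only, and Feature Vertices To Points requires an Advanced license.

```python
# A minimal arcpy sketch of the two geoprocessing steps described above.
# Paths and field names are hypothetical placeholders.
import arcpy

arcpy.env.workspace = r"C:\temp\putnam_trail.gdb"  # hypothetical workspace
survey_table = "distance_azimuth_table"            # station X/Y, distance, azimuth per tree

# Draw a line from each station out to each surveyed tree.
arcpy.BearingDistanceToLine_management(
    survey_table, "survey_lines",
    "X", "Y",              # station coordinates (decimal degrees)
    "Distance", "METERS",  # measured distance and its units
    "Azimuth", "DEGREES",  # compass bearing and its units
    "GEODESIC")

# Place a point at the far end of each line, i.e., at each tree.
arcpy.FeatureVerticesToPoints_management("survey_lines", "tree_points", "END")
```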
Results:
From the data points and features plotted on the map in the image above, the following map (Map 1) was constructed to further showcase the resulting distribution of each of the three methods and the corresponding trees which were selected in collecting data from each of the central points.
Map 1: Putnam Drive Survey Stations, Methods and Tree Data Points
The first station, using the TruPulse 360, was the most effective of the three for collecting distance data. It was quick, user friendly, and accurate in its measurements, and it could be operated by a single user. It also allowed the surveyor to collect data points without actually having to approach any of the trees. Considering the steep uphill slope of the terrain south of the trail, some of these trees were difficult to reach on foot when necessary, as was the case when measurements were taken at the second two survey stations. Station 3, for instance, also had the convenience of using technology for measuring distances, but the surveyor still needed a second person to hold the receiver at the tree's location. For this reason, most of the data points at the last two survey stations were collected north of the trail to avoid any uphill hikes.
Conclusions:
Learning the Distance-Azimuth surveying method is an important skill to have as a backup in case of equipment or technology failure. Though the results produced are less accurate than those obtained through the use of technology, the method still does a fairly good job of displaying the overall locations of collected data points.
The Distance-Azimuth method can also be applied in the Point-Quarter sampling method used for determining the relative concentration of a species in a given habitat, especially one with a less defined shape, as is the case with Putnam Trail. During this type of sampling, the same relative technique is used to estimate the overall number of individuals of a species within a given area. To perform the sampling, a number of individuals in the area (in this case, trees) are sampled at random from a central point. Their corresponding data is recorded and each tree is assigned an identifying number, just as performed in the lab detailed here. The methods begin to differ from there. In the Point-Quarter surveying method, once points are collected, a compass is used to determine and lay out four individual quadrants. The total sampled number of trees observed is multiplied by four (for the four quadrants) to get the relative density of the area. This number is then multiplied by the total density (calculated from the tree diameters) in order to obtain the absolute density of a species within an area, in this case, the absolute density of each tree species along Putnam Trail.
Overall, this lab equipped students with the knowledge needed to overcome potentially critical situations in the field and still get the job done! Despite living in an age of ever-advancing technology, learning the basics of the trade and the "old-school" methods used to collect location-based data is a handy tool set to have stowed away for the occasional instances in which it just might be needed in the future.
3. Sandbox Terrain Modeling
Model 1: Sandbox Terrain Reference
Introduction:
Sampling is an effective time- and resource-saving tool that is often utilized in the process of data collection. Many times, a study interest may realistically be too large to undertake a thorough and detailed collection of all the existing data relevant to the study. In these instances, sampling allows the data collector to focus on a small-scale representation of the study interest in order to make generalizations about the larger picture as a whole. For geographers, this is a familiar, frequently exercised skill set: identifying key themes and patterns at local scales and analyzing them in relation to themes and patterns found elsewhere, in an overall attempt to make better sense of our world.
The process of sampling can be executed in a variety of ways. The three main types of sampling are random, systematic, and stratified. Somewhat self-explanatory, random sampling involves the spontaneous selection of data, where all available data has an equal chance of being selected. This method is useful because it is the least biased of the three techniques and can be applied to large sample populations. Systematic sampling is done according to a predetermined strategy or system: data is collected at even intervals across the study area. This strategy is useful because it allows for thorough coverage of the area under study. Lastly, stratified sampling is used when the study area is composed of smaller areas of a standard size. Each of these smaller areas is individually a smaller representation of the larger area and should therefore be reflective of it. One example might be blocks within a given neighborhood: the neighborhood is the overarching study area, but each block may be a miniature representation of what the neighborhood looks like as a whole. A small illustrative sketch of the three approaches is shown below.
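As a quick illustration (not part of the lab itself), the short Python sketch below draws a random, a systematic, and a stratified sample from a hypothetical 20 x 20 grid of candidate cells.

```python
# Illustrative comparison of random, systematic, and stratified sampling
# over a hypothetical 20 x 20 grid of candidate cells.
import random

rows, cols = 20, 20
all_cells = [(r, c) for r in range(rows) for c in range(cols)]

# Random sampling: every cell has an equal chance of selection.
random_sample = random.sample(all_cells, 40)

# Systematic sampling: take every k-th cell in a fixed order.
k = 10
systematic_sample = all_cells[::k]

# Stratified sampling: treat each 5 x 5 block as a stratum and draw one
# cell at random from each block.
stratified_sample = []
for r0 in range(0, rows, 5):
    for c0 in range(0, cols, 5):
        block = [(r, c) for r in range(r0, r0 + 5) for c in range(c0, c0 + 5)]
        stratified_sample.append(random.choice(block))

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```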
For this lab, students were placed into groups of three and asked to construct an elevation surface of terrain using a sandbox approximately one square meter in size. The terrain needed to include a ridge, hill, depression, valley, and plain. Students were given tape, string, thumb tacks, and a few meter sticks in order to construct a grid system and survey the terrain to be digitized in ArcMap during the following week's lab. While the framework for this lab is simple, the principle lessons that can be taken away from this assignment are instrumental in understanding proper surveying techniques for accurate data sampling in the field.
Methods
In beginning the project, the group determined that the best method for sampling the sandbox's terrain was systematic sampling. This method uses the intersections of a standardized grid with uniform intervals as points for data collection across the sandbox terrain. The group felt this was the best choice for obtaining good coverage of the sandbox terrain overall, coverage that would clearly outline the terrain features required in its construction (a ridge, hill, depression, valley, and plain).
Figure 1: Group Surveying and Recording Data
Of the two sandboxes located across Roosevelt Street from Phillips Hall, the group chose to construct its terrain in the furthermost sandbox. Once each terrain feature had been built, the group focused on constructing the grid, pinning string to the sandbox's wooden frame at equal intervals measured with a meter stick. The intervals were predetermined by the sandbox frame size: the approximately one square meter sandbox could be split into a 20x20 grid with each square roughly 2x2 inches in size, allowing a total of 400 data points to be collected, a sizable amount that could still be gathered in a reasonable time. One person in the group was responsible for recording the data while the other two alternated between rows, measuring the elevation of the sand at the southwest corner of each grid line intersection.
Figure 2: Taking Measurements from Terrain Grid
In total, the operation took just over an hour and a half to complete. The string line was established as the surface (or sea level) of the terrain model, so most of the data collected was negative in value. Due to the cold weather, the group decided to first transcribe these data points in a notebook and transfer them into an Excel file later on. After transferring the data to Excel, a color scheme was added to the values in the table so that the group could get a glimpse of what the terrain would look like once digitized in ArcMap. The group then also arranged the data in a format that would be useful for transferring these points into ArcMap during the following week's lab.
Data normalization is the process of organizing data into a table, with each column representing one attribute of the data presented. In this case, normalizing the data meant that, for the table produced in Excel, each x-value needed to be entered into one continuous column labeled 'X-Values,' all y-values into a continuous column marked 'Y-Values,' and so on, so that each row was composed of four columns (the OID and the x-, y-, and z-values). Looked at as a whole, each row within the Excel file makes up one coordinate point (this group had 400 coordinate points, and therefore 400 rows of data).
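A minimal Python sketch of that normalization step is shown below, assuming the raw readings were first typed into a small wide-format grid; the values and file names are made up for illustration.

```python
# Reshape a wide grid of elevation readings into the long OID/X/Y/Z format.
import pandas as pd

# Hypothetical wide-format grid (inches above/below the string line).
wide = pd.DataFrame(
    [[-2.0, -1.5, -2.25],
     [-3.0, -2.0, -1.0],
     [ 0.5,  1.0, -0.25]],
    index=[1, 2, 3],    # Y grid coordinates (rows)
    columns=[1, 2, 3],  # X grid coordinates (columns)
)
wide.index.name = "Y"

# Melt the grid into one row per coordinate: OID, X, Y, Z.
normalized = (
    wide.reset_index()
        .melt(id_vars="Y", var_name="X", value_name="Z")
        .sort_values(["Y", "X"])
        .reset_index(drop=True)
)
normalized.insert(0, "OID", normalized.index + 1)

# Save in a format ArcMap can read with the 'add XY data' option.
normalized.to_csv("sandbox_points.csv", index=False)
```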
When done correctly, the X, Y (and Z) coordinates in an Excel file may be uploaded into ArcMap as a grid of data points. From there, students could utilize tools in ArcMap to manipulate the data points and produce various 2D renderings of the terrain, which could then be imported into ArcScene to ultimately generate 3D digital models of the terrain created in the sandbox during the previous lab.
Table 1: Normalizing the Data in Excel
Table 2: Normalized Table
Once the data has been normalized in Excel, the file containing these points is ready to be uploaded into the geodatabase in ArcMap using the 'add XY data' option. After the table has been imported, it can be converted into a point feature class, which reveals on the map the grid-like pattern students previously constructed in their sandboxes. These points can be symbolized to represent the individual values collected at each data point, but this view does not provide a good sense of the gradations between data point values.
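Scripted with arcpy, that import step might look like the sketch below; the CSV path, field names, and geodatabase are assumptions, and the same result can be achieved interactively through the 'add XY data' dialog as described above.

```python
# Turn the normalized table into a point feature class.
# Paths and field names are hypothetical placeholders.
import arcpy

arcpy.env.workspace = r"C:\temp\sandbox.gdb"  # hypothetical geodatabase

# Create a temporary XY event layer from the normalized table.
arcpy.MakeXYEventLayer_management(r"C:\temp\sandbox_points.csv", "X", "Y", "points_lyr")

# Save the event layer as a permanent point feature class.
arcpy.CopyFeatures_management("points_lyr", "sandbox_points")
```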
To obtain the 3D visual, one can use the tools under 3D Analyst in ArcToolbox, where a variety of interpolation methods can be selected to produce the desired surface. The output is stored in the geodatabase in raster format and can be loaded into ArcScene to be viewed in 3D, with the option to rotate it and view it from various angles. For the purposes of this lab, all 3D models were oriented to be viewed in the same direction as the sandbox landscape was while being constructed in the field.
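The interpolation runs can likewise be scripted. The sketch below shows two of the five methods (IDW and Spline) using the 3D Analyst tools; the workspace, feature class name, Z field, and cell size are assumptions, and the remaining methods follow the same pattern with their own tools.

```python
# Interpolate rasters from the sandbox points with 3D Analyst tools.
# Workspace, feature class, field, and cell size are assumed values.
import arcpy

arcpy.env.workspace = r"C:\temp\sandbox.gdb"  # hypothetical geodatabase
arcpy.CheckOutExtension("3D")                 # 3D Analyst license

points = "sandbox_points"  # point feature class created from the normalized table
cell_size = 0.01           # output cell size in the sandbox's local units (assumed)

# Inverse Distance Weighted surface interpolated from the Z (elevation) field.
arcpy.Idw_3d(points, "Z", "idw_surface", cell_size)

# Spline surface, which minimizes overall curvature between sampled points.
arcpy.Spline_3d(points, "Z", "spline_surface", cell_size)

arcpy.CheckInExtension("3D")
```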
Once the models had been generated, the symbology and orientation set, these 3D images could then be exported in 3D format as a .png file to be returned to ArcMap as raster features for mapping and scaling. Since the files re-added to ArcMap were raster features, scale could not be automatically inputted into the maps as is traditionally done. Instead, scale was reflected using the 'drawing' toolbar in ArcMap to illustrate the general size of the actual sandbox being mapped (1 x 1 meter).
Students were to explore 3D analysis using the following five interpolation methods:
1. IDW (Inverse Distance Weighted)-
The IDW interpolation method uses the assumption behind Tobler's First Law of Geography (things that are nearer are more similar to one another than things farther away) to fill in the gaps between the sampled data points. The points collected in the field are weighted with the greatest amount of influence, which diminishes directly with distance from them. The resulting 3D model therefore resembles a somewhat "lumpy" image, with the collected weighted points at the peaks of the lumps, as seen in figure one below (a compact form of this weighting is written out just after this list). This method tends to be useful only if the data being modeled varies minimally between collected points, and it is normally not the "go-to" method in 3D modeling.
2. Natural Neighbors-
Also known as "area-stealing" interpolation, the natural neighbors interpolation method uses the same weighted distribution technique as the previous method, but it generally does a better job of smoothing out the transitions between the weighted data points, generating a less "lumpy" surface, as can be seen in figure two.
3. Kriging-
The kriging interpolation model is constructed using a weighted average of all nearby collected data points to fill in the unsampled locations within the grid according to a specific formula. This method is ideal in many instances since it prioritizes smooth transitions between sampled data points while still providing the best unbiased prediction of the values in between. An example of the sandbox terrain using the kriging interpolation model is pictured in figure three below.
4. Spline-
Like the previous kriging method, spline interpolation uses a mathematical formula that ultimately aims to reduce the overall curvature of the surface within the area sampled. As illustrated in figure four, this again results in a much smoother surface in the transition between collected data points.
5. TIN (Triangular Irregular Networks)-
The last interpolation method, TIN interpolation, creates a series of edges connecting the collected data points to form a network of triangles and reveal a general outline of the sampled surface. In areas where the surface varies more drastically, TIN is able to provide a higher resolution than it does in areas with little variance in values. A drawback of this technique, however, is that it can be costly to build and process, which limits its popularity.
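For reference (and as noted in the first item of this list), the weighting behind IDW can be written compactly in its standard textbook form, not something taken from the lab handout:

$$\hat{z}(s_0) = \frac{\sum_{i=1}^{n} w_i\, z(s_i)}{\sum_{i=1}^{n} w_i}, \qquad w_i = \frac{1}{d(s_0, s_i)^{p}}$$

where $z(s_i)$ are the sampled elevations, $d(s_0, s_i)$ is the distance from the prediction location $s_0$ to sample point $s_i$, and the power $p$ (commonly 2) controls how quickly a sample's influence falls off with distance.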
Results
Overall, the group managed to collect a total of 400 data points within the 20x20 grid. The minimum value in the collection was -8 inches, while the maximum was +4 inches. Since the string line was established as sea level, most of the data points fell below the line at negative values, the most commonly occurring value being -2 inches. Given the cold temperatures and tedious measurement requirements of the lab, this sampling method proved practical and seemingly effective.
The first interpolation method used, IDW, produced the "lumpy" 3D image shown in Figure One, as promised in its description under the methods section. The 400 data points collected during the original surveying can be somewhat seen at the peaks and troughs created by the weighted points. Though it may be beneficial in some instances to see the original data beneath the created surface, this method is neither the most visually appealing nor the most accurate at predicting the unsampled values between data points, and it is therefore not a commonly selected option in 3D mapping.
Model Example 1: IDW Interpolation Model
The natural neighbors interpolation method illustrated in Figure Two generates a more convincing surface than the first method. The weighted influence of the collected data points is still visible, but the transitions between data points form a much smoother surface than the first option provided.
Model Example 2: Natural Neighbors Interpolation
Figure Three shows the kriging interpolation method in use. Here, again, the surface is generally smooth. This option is supposed to be the least biased in generating values for its unsampled points; perhaps this is why it seems to show the least amount of color variance from the higher portions of the surface down to the deepest areas.
Model Example 3: Kriging Interpolation
The fourth figure utilizes the spline interpolation method. As stated in its description, this interpolation type provides the smoothest outcome of all five methods: it limits the curvature of the surface and creates the most uniform result. Some errors can be seen in the image produced, however, in both hill areas of the surveyed display. The etched blending in the sides of the hills may indicate error in measurements performed in the field. In a follow-up survey, these measurements could be taken with more care for accuracy and with a finer rounding increment than the quarter inch this group decided upon.
Model Example 4: Spline Interpolation
The last interpolation method, TIN, is the most distinct of the five method options. The method utilizes edges and resolution to illustrate the surface model. The triangular network pattern that results from these edges connecting the data points can be clearly seen in Figure Five below, and is especially evident in the caldera-looking feature at the lower left corner of the sampled area.
Model Example 5: TIN Interpolation
Some issues did arise during the lab. To begin, freezing temperatures had hardened much of the sand within the sandbox, making it difficult to dig the terrain for surveying and limiting the areas soft enough to be molded. Secondly, as the group went on collecting data points, the string began to slacken in certain areas; it may have been beneficial to double-tack alternating string lines to improve their security. A third improvement could have been made by being more precise in collecting measurements: judging by the final data set, the group did a lot of rounding to quarter-inch marks. A better method would have been to take the measurements in centimeters to promote more accurate readings across the survey sample.
Conclusions
Sampling is an effective tool in spatial settings like those commonly found in the field of geography because it allows larger-scale analysis to be done at smaller scales, conserving both time and resources in the process. This activity relates well to the Public Land Survey System, which also maps land plots into squares, but at a much larger scale. Overall, the survey system utilized was decent given the constraints of the weather, but of course, more data is always better. Perhaps creating more rows and columns within the grid system would have benefited the group, as it would have resulted in the collection of more data points. Also, as noted previously, the group would have done better conducting the data point measurements in centimeters rather than inches for better accuracy.
Interpolation can be used for a multitude of purposes, including mapping rainfall, water tables, chemical concentrations, noise frequencies, and soil-type distribution. There are also many more interpolation methods to choose from in addition to the five previewed for this week's lab, including PointInterp, Trend, and Density.
The last few weeks of lab allowed students to familiarize themselves with the practice of field sampling, data normalization, and 3D modeling with Interpolation.
Sources
http://www.rgs.org/OurWork/Schools/Fieldwork+and+local+learning/Fieldwork+techniques/Sampling+techniques.htm
http://www.spatialanalysisonline.com/HTML/index.html?kriging_interpolation.htm
http://pro.arcgis.com/en/pro-app/help/analysis/geostatistical-analyst/how-inverse-distance-weighted-interpolation-works.htm
http://resources.arcgis.com/en/help/main/10.1/index.html#//005v00000027000000