
Converting address to coordinates using OpenLayers 3?


Is it possible to convert an address to coordinates using OpenLayers 3?

I can do it in a roundabout way by taking the address, searching for it in the planet_osm_points table to get the osm_id, and then looking that osm_id up in the planet_osm_node table. But this is a bad approach. I am pretty sure that OpenLayers 3 must be able to do it, but I can't find anything on the web.


Since you specifically mention OpenLayers 3, this may not be a duplicate. A quick Google search finds a page of OpenLayers 3 API documentation. Within ga.Map is an entry for geocode, which states that geocode(text) will "Geocode using api.geo.admin.ch", text being a string of course.
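If you are not tied to the GeoAdmin build of OpenLayers, another option is to call a geocoding web service directly and hand the returned coordinates to your map. Here is a minimal sketch (my own example, not part of the OpenLayers API) that queries the public Nominatim endpoint with Python's requests package; Nominatim's usage policy asks for an identifying User-Agent header and no more than one request per second:

import requests

def geocode(address):
    """Return (lat, lon) for a free-text address via Nominatim, or None if nothing matches."""
    resp = requests.get(
        "https://nominatim.openstreetmap.org/search",
        params={"q": address, "format": "json", "limit": 1},
        headers={"User-Agent": "address-to-coords-example"},  # required by the usage policy
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()
    if not results:
        return None
    return float(results[0]["lat"]), float(results[0]["lon"])

print(geocode("Bundesplatz 3, Bern"))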


The 1:250,000 topographic baseline data includes:

  • Planimetry
  • Survey control
  • 1:20,000 & 1:50,000 reference grids
  • Contours
  • Wooded areas
  • Reference data

Sheet boundaries follow those of the federal maps, and adjacent sheets may be joined or extended. A non-standard offset transformation was used to convert from NAD27 to NAD83.

Unlike Terrain Resource Information Management (TRIM) products, there was no direct data compilation in the creation of this base map.

File Formats

Digital map files are intended to be used in Geographic Information Systems (GIS). The formats supported are:

  • MicroStation
  • ArcInfo
  • ESRI shapefile (Albers or UTM)
  • DGN (Albers or UTM)
  • MOEP (UTM only)
  • SAIF (UTM only)

The product includes the digital base map, the digital elevation model (DEM), or both.

How to Acquire

The NTS 1:250,000 topographic map data can be ordered from the Base Map Online Store or opened in Google Earth:

NTS 1:250,000 base map datasets can be downloaded from the B.C. Data Catalogue:

Pricing

1:250K digital base map, NTS series (base map & DEM)

1:250K digital base map, NTS series (base map only)

1:250K digital base map, NTS series (DEM only)

Delivery Times

The product will be available to download in the requested format within minutes of placing a Base Map Online Store order. You will receive an email with an FTP link. The order will be purged from the FTP server after seven (7) days.


The costless way to geocoding addresses in Excel – part 3, bulk data geocoding with Nominatim and other geocoding tools

The 3rd part of this extensive article about geocoding is devoted to another free geocoding tool for MS Excel, namely OpenStreetMap Nominatim geocoding, and to the remaining tools that could be worth attention.

The usage of this small tool is very simple, even simpler than Bing Maps API geocoding; however, it requires a bit of knowledge of Excel VBA.
The situation is analogous to the Bing Excel geocoding tool, which comes as a separate file ready for quick work. The file is available here, along with the whole tutorial. Because everything has been explained roughly there, just as in the Bing geocoding case I will go straight to the option in which we can launch this tool in our own workbook. I would just like to point out that the file also includes Google API geocoding, which sits at the top. It is not free, unlike the Nominatim tool underneath it (Pic. 1), which we can use.

Pic. 1 The Geocoding tool for Excel with Google API option on the top and Nominatim option underneath (marked red).

Unlike the Bing geocoding tool, Nominatim is in my opinion not comfortable enough to use externally. However, the tool is a perfect fit for our own worksheet, because it basically boils down to one formula. As before, Excel VBA knowledge is required to get this tool running. I am going to show you shortly how to accomplish it.
Firstly, we must open the Geocode file and launch the VBA console (Pic. 2), from where you will enter the module with functions (Pic. 3).

Pic. 2 Opening VBA Excel in our Nominatim Geocode file.

Pic. 3 Nominatim geocoding VBA function

Because the Geocode file is also tailored for geocoding in Google, you will encounter the Google API geocoding functions at the top. Just scroll down in order to see the Nominatim functions.

The second step is copying this code into our workbook. You don't need to drag or export the whole module, as shown in the previous part of this article. You can just select the Nominatim function, marked red above (Pic. 3), and copy it into your own workbook, creating a new module for it. The image below (Pic. 4) shows all these steps:

Pic. 4 The steps showing copying the Nominatim geocoding VBA function into our own worksheet.

  1. Our own worksheet with examples of locations.
  2. Creating (inserting) a new module in the VBA console.
  3. The new module (default Module1) has been created; you can see it, still empty, with Option Explicit at the top.
  4. Select the whole NominatimGeocode function in your Geocode file and copy it.
  5. Paste this code into a module in your workbook.

Now, theoretically, you can start geocoding in your own worksheet. Theoretically, because in practice it is still impossible unless your code and VBA library are prepared correctly. The main thing you should do here is load the required library in order to avoid the "User-defined type not defined" error.
In your VBA Excel console, select "Tools" from the main toolbar and then choose "References" at the very top. Next, you must find the Microsoft XML, v3.0 library and switch it on (Pic. 5). It will be loaded shortly.

Pic. 5 Adding XML v.3 library to your VBA project in Excel.

You can also add this reference programmatically if you are more skilled with Excel VBA. It is quicker, because you only need to run a macro instead of searching through the library list.
An example VBA macro for attaching this library is below:

Sub XML3library()
    ' Attach the Microsoft XML, v3.0 library to this workbook's VBA project.
    ' Requires "Trust access to the VBA project object model" to be enabled in the Trust Center.
    With ThisWorkbook.VBProject.References
        .AddFromFile "C:\WINDOWS\system32\msxml3.dll" ' the file of the Microsoft XML v3.0 library
    End With
End Sub

If you run this XML3library macro, the reference will be added.

Next, you can carry on with validating your code. It is still not the proper time for geocoding, I am afraid, as you might get an error stating that a "Variable is not defined", as in the image below (Pic. 6).

Pic. 6 The “Variable not defined” error in the VBA Excel code.

It simply means that a Dim statement is missing for one of the variables. We should define one variable in our code:

Function NominatimGeocode(address As String) As String
    Application.Caller.Font.ColorIndex = xlNone
    Dim xDoc As New MSXML2.DOMDocument
    Dim vbErr As Long ' the missing variable: a colour index used to flag failed lookups
    vbErr = 3 ' 3 = red
    xDoc.async = False
    xDoc.Load ("https://nominatim.openstreetmap.org/search?format=xml&q=" + WorksheetFunction.EncodeURL(address))
    If xDoc.parseError.ErrorCode <> 0 Then
        Application.Caller.Font.ColorIndex = vbErr
        NominatimGeocode = xDoc.parseError.Reason
    Else
        xDoc.SetProperty "SelectionLanguage", "XPath"
        Dim Loc As MSXML2.IXMLDOMElement
        Set Loc = xDoc.SelectSingleNode("/searchresults/place")
        If Loc Is Nothing Then
            Application.Caller.Font.ColorIndex = vbErr
            NominatimGeocode = xDoc.XML
        Else
            Application.Caller.Font.ColorIndex = vbOK
            NominatimGeocode = Loc.getAttribute("lat") & "," & Loc.getAttribute("lon")
        End If
    End If
End Function

The red line indicates the element you should add to the code in order to run it properly – simply define the variable that turns out to be missing at the initial stage.

Next, you can finally use the Nominatim geocoding function. In the formula bar or in a cell you can type =NominatimGeocode() and calculate the location as shown below (Pic. 7). From now on, the Nominatim geocoding tool is ready to use in your own workbook.

Pic. 7 The Nominatim function in our Excel workbook enabling us to proceed location geocoding.

2. OTHER EXCEL GEOCODING TOOLS WORTH ATTENTION

Apart from the tools I have described, there are other applications useful for geocoding addresses in MS Excel. One of them is provided by OpenCage Data, where you can register an API key and make 2500 requests per day. It is very useful and much quicker than Nominatim. The geocoding can be done both with Excel VBA and via Google Sheets. Under these links, you can learn more about how to do it.

Pic. 8 OpenCage geocoding example in Google Sheets.

I wish I could have presented it in the 1st part of this article, which was fully dedicated to geocoding with the assistance of Google Sheets. I did not, because here we are dealing with an external application, written in JavaScript and defined as a custom function rather than a built-in plugin. Visit the home website and learn more about how to geocode 2500 addresses per day with OpenCage.

Another way of geocoding in Excel is installing the Mapcite add-in. You can do some geocoding for free, but registration is required. An allowance of 5000 addresses monthly is not big at all, but it is still something when you have run out of other options. Once the plugin is installed, you can see it in the main Excel ribbon (Pic. 9).

Pic. 9 The Mapcite tools in Excel ribbon.

Another tool where we can do geocoding quickly is Maplarge.com. By attaching a .csv file with our addresses we can get the first 100 results for free.

Pic. 10 Maplarge.com geocoding tool.

Finally, I would like to mention LocationIQ.com, where geocoding is also possible and comes with a nice .json response.

In all the parts of this big article fully dedicated to costless geocoding in Excel, I have shown the useful tools that help you to do it. Personally, I think the best one is the Bing Excel geocoding tool, which we can conveniently integrate with our workbook and use to geocode 10k addresses. Creating a new profile in order to obtain a new Bing Maps API key and input it into our workbook won't be a problem at all. If you wish to have no limit on your address geocoding, then Nominatim will be the best, but remember that this tool is quite slow (1 request per second), so you will have to wait. It is also worth considering OpenCage geocoding, which gives you 2500 records daily.

This is geocoding, where coordinates are obtained from addresses. In the near future, I would like to find and explain the best costless reverse-geocoding applications for MS Excel, where the address string is provided on the basis of coordinates.


Esri: (reverse) geocoding with ArcGIS REST API and EXASolution GEOMETRY data type

One way to deal with geocoding is to use a GIS (geographic information system) service like ArcGIS (www.arcgis.com) from Esri.

The following website shows different use cases when such a service might be useful:
https://developers.arcgis.com

In this tutorial, we would like to show you how to use the ArcGIS REST API from Python:
http://resources.arcgis.com/en/help/arcgis-rest-api

In practice, you might face the following challenges:

  • Connect to a REST API from an EXASolution UDF language (Python, Java, R, Lua).
  • If you want to benefit from EXASolution's GEOMETRY data type (e.g. in order to run geospatial functions), you need to think about converting this data to Python strings and back.

This solution uses Python and its package requests (python-requests.org) to connect to the REST API.
As an example, we only demonstrate geocoding and reverse geocoding.
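To make the request/response shape concrete, here is a rough sketch (not the exact UDF code from this solution) that calls the public ArcGIS World Geocoding Service with requests. The endpoint paths and parameter names follow Esri's published REST documentation, but your deployment may require an API key or token, and the EXASolution UDF wrapper is omitted; the returned x/y pair could then be formatted as a WKT string such as 'POINT(x y)' for the GEOMETRY data type.

import requests

GEOCODER = "https://geocode.arcgis.com/arcgis/rest/services/World/GeocodeServer"

def geocode(address):
    """Forward geocoding: address text -> (x, y) of the best candidate (lon/lat, WGS 84)."""
    r = requests.get(
        GEOCODER + "/findAddressCandidates",
        params={"SingleLine": address, "f": "json", "maxLocations": 1},
        timeout=10,
    )
    r.raise_for_status()
    candidates = r.json().get("candidates", [])
    if not candidates:
        return None
    loc = candidates[0]["location"]
    return loc["x"], loc["y"]

def reverse_geocode(x, y):
    """Reverse geocoding: (x, y) -> nearest address string."""
    r = requests.get(
        GEOCODER + "/reverseGeocode",
        params={"location": "{},{}".format(x, y), "f": "json"},
        timeout=10,
    )
    r.raise_for_status()
    return r.json().get("address", {}).get("Match_addr")

x, y = geocode("380 New York St, Redlands, CA")
print(x, y)
print(reverse_geocode(x, y))
print("POINT({} {})".format(x, y))  # WKT form commonly used to feed a GEOMETRY column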


Degrees/Minutes/Seconds (DMS) vs Decimal Degrees (DD)

For positioning, we can find any location on Earth using latitude and longitude coordinates.

And we measure those coordinates using decimal degrees or degrees/minutes/seconds.

While latitude lines range between -90 and +90 degrees, longitude coordinates are between -180 and +180 degrees.

Do you notice how we use degrees for latitude and longitude coordinates? Let's start with a quick review of why we use angular units.

A Review on Geographic Coordinate Systems

In a geographic coordinate system (GCS), we can reference any point on Earth by its longitude and latitude coordinates. Because a GCS uses a sphere to define locations on the Earth, we use angles measured in degrees from the earth’s center to any point on the surface.

The coordinates (0°N, 0°E) are where the equator and prime meridian cross. The funny thing is that if you look at this location on a map, it's all ocean. But because GIS professionals sometimes mistakenly define their projects when adding XY coordinates, (0°N, 0°E) has turned into a fictional location called "null island".

When we move northward along the prime meridian, the longitude value stays fixed at 0°. But the latitude angle and coordinate increases because we move northward.

If we move northwards through an angle of 51.5°, this positions you at the Royal Observatory in Greenwich, England, as pictured below. Actually, this is why the 0° line of longitude is the reference starting point. From the Greenwich Meridian, we can find positions east and west.

Because the prime meridian is the 0° starting point for longitude coordinates, everything is referenced from here.

For example, we can change the angle to 80.4° west. This moves us 80.4°W away from the prime meridian. And just by chance, Pittsburgh is located on this line of longitude at about (40.4°N, 80.4°W).

The equator is 0° latitude where we measure north and south. This means that everything north of the equator has positive latitude values. Whereas, everything south of the equator has negative latitude values.

Alternatively, the Greenwich Meridian (or prime meridian) is a zero line of longitude from which we measure east and west.

Decimal Degrees vs Degrees/Minutes/Seconds

One way to write spherical coordinates (latitudes and longitudes) is using degrees-minutes-seconds (DMS). Minutes and seconds range from 0 to 60. For example, the geographic coordinate expressed in degrees-minutes-seconds for New York City is:

LATITUDE: 40 degrees, 42 minutes, 51 seconds N
LONGITUDE: 74 degrees, 0 minutes, 21 seconds W

But you can also express geographic coordinates in decimal degrees. It’s just another way to represent that same location in a different format. For example, this is New York City in decimal degrees:

LATITUDE: 40.714
LONGITUDE: -74.006

Although you can easily convert coordinates by hand, the FCC has a DMS-Decimal converter tool that can convert between decimal degrees and degrees/minutes/seconds.
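If you prefer to check the arithmetic yourself, the conversion is straightforward: decimal degrees = degrees + minutes/60 + seconds/3600, negated for southern latitudes and western longitudes. A small Python sketch using the New York City values above:

def dms_to_dd(degrees, minutes, seconds, hemisphere="N"):
    """Convert degrees/minutes/seconds to decimal degrees (negative for S and W)."""
    dd = degrees + minutes / 60.0 + seconds / 3600.0
    return -dd if hemisphere in ("S", "W") else dd

def dd_to_dms(dd):
    """Convert decimal degrees back to (degrees, minutes, seconds); the sign is discarded."""
    dd = abs(dd)
    degrees = int(dd)
    minutes = int((dd - degrees) * 60)
    seconds = (dd - degrees - minutes / 60.0) * 3600.0
    return degrees, minutes, seconds

# New York City from the example above
print(dms_to_dd(40, 42, 51, "N"))   # ~40.714
print(dms_to_dd(74, 0, 21, "W"))    # ~-74.006
print(dd_to_dms(40.714))            # ~(40, 42, 50.4)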

Try For Yourself

When you put two coordinates together as a pair (X, Y), you can locate anything on Earth with a geographic coordinate system.

You can express coordinates in primarily two different ways: decimal degrees or degrees-minutes-seconds. But there are even newer, growing ways of addressing the world, such as What3Words.

After your locations are in a GCS, geographers often project them into a Projected Coordinate System (PCS). A PCS such as the State Plane Coordinate System (SPCS) or Universal Transverse Mercator (UTM) is always based on a GCS, which in turn is based on a sphere.


Above the table that you provided in the screenshot is a caption that says what these numbers mean:

I don't blame you for coming here to ask this, because the notation ($\times 10^4$) can be confusing for someone that's not used to it (and I don't think it's taught in school, generally). But what it means is that numbers like 1383, which is the $x$-coordinate (along the $a$ axis of the unit cell) given for the 6th carbon atom, C(6), actually mean 0.1383. The number 1383(3) means (0.1383 +/- 0.0003). Similarly, the $y$ and $z$ coordinates are actually fractional lengths along the $b$ and $c$ axes of the crystal unit cell.

I also don't blame you if you're confused by what these numbers mean, because the table's caption says "atomic positional parameters", which seems (at least to me) like an unconventional way of saying "atomic coordinates". At least a different part of the exact same paper describes this table as listing atomic coordinates, though:

Table 2 also has the "internal coordinates" that you would use if your modeling program requires input in ZMAT format.

In summary: The table in your question corresponds to the fractional coordinates for the atoms with respect to the unit cell (although it also has uncertainties on each coordinate value). You mentioned in the comments that you're used to seeing exact fractions like 1/4 and 3/4, but in this case the molecule is not so simple like textbook lattice structures such as NaCl. So, when they are measured experimentally, they won't correspond to simple fractions and will have an associated uncertainty.

In the above case the fraction would be approximately 1383/10000, but remember that there's an uncertainty, so in fractional form the $x$-coordinate would actually be $\frac{1383 \pm 3}{10000}$.
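As a purely illustrative sketch (not from the paper), this is how such table entries translate into fractional and then Cartesian coordinates; the cell edge lengths below are made up, and a simple orthorhombic cell (all angles 90°) is assumed so that each Cartesian coordinate is just the fraction times the corresponding cell edge. For a non-orthogonal cell you would need the full transformation matrix built from the cell angles.

# Hypothetical cell edges in angstroms; a real structure would take these from the paper.
A, B, C = 10.0, 12.0, 8.0

def from_table(value_e4, uncertainty_e4=0):
    """Convert an entry like 1383(3) (i.e. scaled by 10^4) to (fraction, uncertainty)."""
    return value_e4 / 1e4, uncertainty_e4 / 1e4

x_frac, x_sigma = from_table(1383, 3)       # the C(6) x-coordinate discussed above
print(x_frac, x_sigma)                      # 0.1383 +/- 0.0003 of the a axis
print(x_frac * A, x_sigma * A)              # position and uncertainty along a, in angstroms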


Usage Notes

As DataTri information has been collected using different procedures, this compilation may contain some inherent biases that should be addressed when the data are intended to be used.

Most of the data were obtained from papers published in scientific journals, complemented by records provided by colleagues. Although the data span 24 countries, for some countries, such as Brazil, Mexico and Argentina, the volume of data was higher than for the rest. In the case of Mexico and Brazil, the number of occurrence records per country included in DataTri seems to be influenced mainly by two factors: (i) the number of triatomine species present in each country (both countries have the highest numbers of triatomine species), and (ii) the number of occurrence records published and provided by colleagues (Mexico and Brazil are also the countries with the largest amount of data collected); an explanation for the latter factor goes beyond the goal of this paper. In the case of Argentina, there is also a large number of occurrence records, but this is because the DataTri initiative arose from Argentinian researchers, who made a large contribution of occurrence data. With regard to habitat sampling, we recognize that there is a potential bias in favor of the domiciliar and peridomiciliar habitats, because those are the habitats of major epidemiological importance and the target of vector control campaigns. Additionally, the paucity of sylvatic habitat data also results from the difficulty of sampling in the large variety of sylvatic habitats used by triatomines. Finally, it should also be clarified that date information is not available in 35% of the records; thus, we recommend that any analysis based on this dataset use methods that take such biases into account.

Despite the information biases described above, DataTri constitutes a valuable compilation of American triatomine geographic data that is as complete, updated and integrated as possible. Currently, compared to other public biodiversity databases, DataTri triples the number of triatomine records found in the GBIF database, and the difference is even larger when compared with other public databases such as BISON (https://bison.usgs.gov/#home), iNaturalist (https://www.inaturalist.org/) or museum websites. Thus, DataTri has better data representativeness in terms of the number of species and the number of countries covered, and each record has a location with an accurate geographic coordinate.

Accurate spatial information based upon geographic coordinates also makes it possible to link and complement other databases, such as VectorBase, which provides data on vector genetic information, and another dataset published in Scientific Data 19 that provides data on Trypanosoma cruzi occurrence/prevalence in humans, alternative hosts and triatomines. In addition, as this dataset is hosted in an open and public repository, we hope that it will contribute to fulfilling national and international goals such as promoting the exchange of biological information, increasing and improving the accessibility of such information, providing biological data produced and compiled in several countries, and enhancing knowledge of both the biodiversity and epidemiological data related to Chagas disease.


In this article, I demonstrate the proper computations used when working with conventional surveys that use state plane coordinates. I purposefully selected a traverse in Pennsylvania that climbs about 1100 ft in its 2-mi length. By examining this particular site as an example, the reader can see the effects of elevation on the computations and the traverse closure errors. Finally, I present simpler methods that can be used when elevation differences in the traverse are not as great.

An Example Traverse

Figure 1 depicts the path of a traverse that runs primarily south to north in Zone 3701 of the Pennsylvania State Plane Coordinate System of 1983 (SPCS 83). This traverse follows a road that runs from the intersection of two state highways and terminates near the entrance of a state park. It also climbs a hill that varies in elevation by about 1100 ft from the beginning to its terminus.

The purpose of demonstrating the computations for this traverse is twofold. First, it demonstrates the methods used to properly reduce the observed distances to their equivalent grid lengths. Second, it demonstrates the importance of reducing distances using both the scale factors and elevation factors. The traverse is approximately 2.25 mi long (10,951 ft between endpoints), yielding an average grade of about 9.9% over its entire length.

The observations for this link traverse are shown in Table 1. The approximate elevation (H) at each station was obtained from a map. These approximate values are sufficient to determine an approximate elevation factor for later distance reductions. The geoid height (N) was obtained using approximate coordinates (discussed later) and the National Geodetic Survey (NGS) Geoid 12A model ( www.ngs.noaa.gov/GEOID/GEOID12A/ ). The approximate geodetic height in meters for the stations was derived using Equation (1). The angle at Station 2 was to an azimuth mark, which had a geodetic azimuth of 171°10′21.6″. The horizontal distances were recorded in units of feet.

The SPCS 83 NE coordinates of Station 2 in feet are (413,892.42, 2,366,343.66) and those of Station 12 are (423,389.71, 2,366,191.75). The geodetic azimuth to the last azimuth mark of the traverse is 359°19′09″.

Using the NGS Geodetic Toolkit software ( www.ngs.noaa.gov/cgi-bin/spc_getgp.prl ) and converting the station coordinates to meters, we see that the convergence angle at Station 2 is 0°57′20.13″ and its scale factor is 0.9999592. Inversing the coordinates at Station 12 on the same web page, we find the convergence angle at this station to be 0°57′20.19″ with a scale factor of 0.9999583. From the initial 57′20″ convergence angle, combined with the fact that the terminal stations are 10,951 ft apart, we can see that if no correction to the geodetic azimuths were made, all the station coordinates would be rotated and the final computed station coordinates would disagree with the given coordinates by more than 183 ft at Station 12.

Thus, to start the computations we must first convert our geodetic azimuths to grid azimuths using Equation (2), where Az is the grid azimuth of the line, α the geodetic azimuth of the line, and γ the convergence angle at a station with longitude λ. In a Lambert Conformal Conic mapping zone, γ is derived by Equation (3), where λCM is the positive western longitude of the central meridian, which is 77°45′ W in Zone 3701, and n (also known as sin φ0) is a derived zone constant, which is 0.661539733812. From Equation (2) we see that the grid azimuth to the first azimuth mark is 171°10′21.6″ − 0°57′20.13″, which is 170°13′01.5″. Similarly, the grid azimuth from the last station to its azimuth mark is 359°19′09″ − 0°57′20.19″, or 358°21′48.8″.
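The equations themselves are not reproduced in this excerpt, but as the worked values show, Equation (2) amounts to subtracting the convergence angle from the geodetic azimuth (Az = α − γ), with the small arc-to-chord correction neglected. A short Python sketch, under that reading, reproduces both grid azimuths quoted above:

def dms_to_deg(d, m, s):
    """Degrees/minutes/seconds to decimal degrees."""
    return d + m / 60.0 + s / 3600.0

def deg_to_dms(angle):
    """Decimal degrees to (degrees, minutes, seconds)."""
    d = int(angle)
    m = int((angle - d) * 60)
    s = (angle - d - m / 60.0) * 3600.0
    return d, m, round(s, 1)

def grid_azimuth(geodetic_az, convergence):
    """Equation (2) as described above: grid azimuth = geodetic azimuth - convergence angle."""
    return (geodetic_az - convergence) % 360

# Station 2:  171d10'21.6" - 0d57'20.13" -> about 170d13'01.5"
print(deg_to_dms(grid_azimuth(dms_to_deg(171, 10, 21.6), dms_to_deg(0, 57, 20.13))))
# Station 12: 359d19'09"   - 0d57'20.19" -> about 358d21'48.8"
print(deg_to_dms(grid_azimuth(dms_to_deg(359, 19, 9.0), dms_to_deg(0, 57, 20.19))))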

From the difference in the scale factors at the two endpoints, we see that there are five common digits (0.99995) and one that varies, yielding six significant figures in the scale factors. Because our longest observed distance contains only six digits, this implies that we may be able to get by with a single scale factor to reduce our observed horizontal distances. However, this assumption would be incorrect because the elevation differences of nearly 1100 ft over the entire length of the traverse yield elevation factors that vary significantly in the fifth decimal place.

For example, at the first station with an approximate geodetic height of 1165 ft (355.225 m), we find the elevation factor (EF) is 0.99994425 from Equation (4) where Re is the average radius of the Earth, which is 6,371,000 m. At Station 12 with an approximate geodetic height of 684.761 m, the elevation factor is 0.99989253. Notice that at Station 12 the elevation factor yields a distance precision of only 1:9300 regardless of the scale factor.
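Equation (4) is not shown in this excerpt, but the quoted values are reproduced by the standard reduction EF = Re / (Re + h), where h is the geodetic height (h = H + N per Equation (1), combining the map elevation and the geoid height). A sketch, under that assumption:

R_E = 6371000.0   # average radius of the Earth, in meters

def elevation_factor(h_meters):
    """EF = Re / (Re + h): scales a ground-level distance down to the ellipsoid."""
    return R_E / (R_E + h_meters)

print(elevation_factor(355.225))   # Station 2  -> about 0.99994425
print(elevation_factor(684.761))   # Station 12 -> about 0.99989253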

At this point we are almost ready to perform standard plane traverse computations. We have our starting and ending azimuths in the PA North zone map projection system along with the initial and closing coordinates. However, the distances must still be reduced to the map projection surface.

As shown in Figure 2 , this reduction takes two steps. First, the observed horizontal distance (Lm) must be reduced to its geodetic equivalent (Le). This is done using the average elevation factor for each traverse course. The second reduction involves taking the geodetic length of the line and reducing it to map projection surface (Lg). This reduction is done using a function of the scale factors on each course.

For typical surveying work where the lengths of the lines are relatively short (as in this example), an average scale factor using the two endpoints of the line may be used with Equation (5), where k1 and k2 are the scale factors of the stations at the ends of the line.

However, for longer distances, a weighted average of the scale factors should be based on Equation (6) where kmid is the scale factor at the midpoint of the line. The typical distances observed by surveyors rarely warrants using Equation (6). The deciding factor will be the number of common digits in k1 and k2 as compared to the length of the lines.

For example, if a distance could be observed between Stations 2 and 12 then, as previously mentioned, the scale factors vary in the sixth decimal place (0.9999592 and 0.9999583). Because the overall distance between these two stations is about 10,951.00 ft, which has seven digits, it would be wise to use Equation (6). However, in this example our longest observed horizontal distance is 2230.58 ft, so we can reasonably assume that Equation (5) is sufficient. As can be seen in Table 2, the scale factors at the endpoints of each line never vary before the seventh decimal place, which justifies using Equation (5) instead of Equation (6) because our distances only have six digits at most.

To create Table 2, we need to know approximate elevations at the endpoints of each distance. These were obtained from a map sheet. To obtain the scale factors (k), approximate station coordinates are also needed, as was previously mentioned, to obtain geoid heights at each station. These approximate coordinates are computed using standard traverse computations with the observed horizontal distances and grid azimuths.

These approximate station coordinates are then used to determine the scale factor and geoid height at each station using appropriate software, such as that provided by the NGS. These values are shown in the column headed by k in Table 2. The elevation factor at each station is obtained using Equation (4) and an average radius of the Earth. Because these scale factors are unitless, the distances can have any units. However, it is important to remember when computing Equation (4) to have Re, h, H, and N in the same units, which is why I chose to convert my station elevations to meters.

Once the scale factor (k) and elevation factor (EF) are determined for each station, they are averaged. Another approach is to average the geodetic heights of the endpoints to obtain the average elevation factor directly from Equation (4).

The combined factor (CF) for each course is then computed as Equation (7). This combined factor is multiplied by the respective observed horizontal distance to obtain the equivalent map projection (grid) distance, or Equation (8).
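Putting the pieces together, a per-course reduction can be sketched as follows, assuming Equation (5) is the simple mean of the endpoint scale factors, Equation (7) the product of the average scale factor and average elevation factor, and Equation (8) the product of that combined factor and the observed horizontal distance (the exact expressions should be taken from the article itself):

R_E = 6371000.0

def elevation_factor(h_meters):
    return R_E / (R_E + h_meters)

def grid_distance(observed_horizontal, k1, k2, h1_meters, h2_meters):
    """Reduce one observed horizontal distance to its grid (map projection) length."""
    k_avg = (k1 + k2) / 2.0                                        # Equation (5), assumed form
    ef_avg = (elevation_factor(h1_meters) + elevation_factor(h2_meters)) / 2.0
    cf = k_avg * ef_avg                                            # Equation (7): combined factor
    return cf * observed_horizontal                                # Equation (8): grid distance

# Illustration only, using the endpoint stations of the traverse (a real reduction is
# applied course by course, and a line this long would call for Equation (6) instead):
print(grid_distance(10951.00, 0.9999592, 0.9999583, 355.225, 684.761))   # about 10,949.65 ft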

Using grid azimuths and grid distances, the traverse is computed using standard plane computational techniques. If the traverse in this example had been computed using the observed horizontal distances and grid azimuths, the traverse misclosure would be 1.25 ft with a relative precision of 1:9400. However, using the grid distances, the linear misclosure of the traverse is 0.32 ft with a relative precision better than 1:36,000.

The Problem with Ground Coordinates

Even though there is no recognized coordinate system on the surface of the Earth, some surveyors insist on trying to perform state plane coordinate computations on the ground. These surveyors realize that if they simply divide the state plane coordinates at each station by the combined factor for the station, they can obtain ground coordinates. While this much is true, caution must be taken when using ground coordinates in any computation or layout situation. The reason for this is the elevation of the stations.

Using the preceding example, we have a combined factor at Station 2 of 0.99990340. If this is divided into the given northing and easting coordinates for Station 2, we obtain NE ground coordinates of (413,932.405, 2,366,572.265). At Station 12, we have a combined factor of 0.99985080, which when divided into the coordinates for Station 12 yields NE ground coordinates of (423,452.887, 2,366,544.828).
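That scaling is just a division of each grid coordinate by the station's combined factor; a short sketch reproducing the values above:

def ground_coordinates(northing, easting, combined_factor):
    """Scale grid (state plane) coordinates up to ground coordinates."""
    return northing / combined_factor, easting / combined_factor

print(ground_coordinates(413892.42, 2366343.66, 0.99990340))   # Station 2  -> about (413,932.4, 2,366,572.3)
print(ground_coordinates(423389.71, 2366191.75, 0.99985080))   # Station 12 -> about (423,452.9, 2,366,544.8)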

Using the observed horizontal distances, Table 3 shows the initial traverse computations. The difference in the ground coordinates of Stations 2 and 12 is 9520.482 ft in latitude and −27.438 ft in departure. As can be seen in Table 3, these values differ from the sums of the latitudes and departures computed with the observed horizontal distances by −22.012 ft and −124.891 ft, resulting in a linear misclosure of 126.382 ft and a relative precision for the traverse of 1:100.

The reason for these large discrepancies is due to the difference in elevation between stations, which is why this particular traverse was chosen. As can be seen in Figure 3 , the grid coordinates lie on a plane surface. However, the ground coordinates are projected radially from the center of the Earth through their grid locations to the topographic surface of the Earth. Thus, the horizontal distance does not project to the second station correctly because the horizontal plane at any station is not parallel with the map projection surface. This difference is seen in the computational example. This error will also occur during layout. Thus, it is not prudent to use ground coordinates to lay out alignments or structures. They simply do not match the theory of map projections.

Now, some may state that they did not see any unreasonable error when using ground coordinates. This may be true if the area of the survey is reasonably flat. In this case the difference in elevations between stations is small and, similarly, the error caused by using ground coordinates may be small. But this is living very dangerously and could not be supported in a court of law if an error does occur.

The Project Combined Factor Method

The computations shown in this article are not complicated, but they certainly do require more effort than simply using the observed horizontal distance. If only the Earth were flat rather than round, this would not be a problem. Well, it is not, so we have a choice. We can use geodetic computations when laying out or observing a station, or we can use a map projection and pretend the Earth is flat. The latter will provide the same results as geodetic computations but requires that all observations are on the map projection surface.

To simplify the computations, manufacturers have developed routines in their survey controllers to handle map projection surfaces. In fact, one obvious method is displayed each time a project is started. Part of the settings for a project is a scale factor. When using state plane coordinates, this scale factor should be set to an average combined factor for the project.

Using the combined factors for Stations 2 and 12, we see that the average combined factor is 0.99987712. If the observed horizontal distances are all reduced using this average combined factor, the traverse misclosure would be 0.40 ft for a survey relative precision of 1:29,000. This differs from the correct computations by only a 0.08-ft difference in the 11,758-ft traverse.

The reader should remember that this survey was selected to demonstrate the proper procedures in reducing observations to a map projection surface. In typical surveys, this large a difference is seldom seen. In fact, it is more common to see differences of only a few thousandths of a foot between a single combined factor and the proper method. However, even with that said, it is only 0.08 ft in 11,758 ft, or 1:147,000. It is up to the professional to decide if this difference is acceptable for the purpose of the survey.

During layout, the survey controllers will use the single scale factor to determine the proper ground distance. To do this it inverses the map projection coordinates to determine the distance on the map projection surface between the two stations. It then uses a rearranged Equation (7) and the project scale factor to determine the horizontal length at the surface between the two layout stations. Simply put, today’s survey controllers can handle map projection computations both in the direct and inverse modes when a proper combined factor is supplied for the project, thereby removing the drudgery of doing these computations by hand and simplifying the survey.

The bottom line is that only map values should be used to compute mapping coordinates if you wish to compute state plane coordinates and maintain the accuracy of your survey. However, as was shown in this article, it is possible to minimize this work if a single combined factor can be used in distance reductions. When this is the case, a project scale factor is determined for the survey and entered into your survey controller project. The software will then correctly perform the computations whether it is an observation or stake-out situation. This project factor should be part of the metadata of the survey and retained for future reference, just as the original observations are retained.

In areas where geographic information systems (GIS) are being implemented, the state plane coordinate system provides a common reference plane, which allows surveys of various types to be combined into a cohesive map. However, if observations are not properly handled in the computations of the coordinates, the result will be a mismatch of mapping elements that are located by various surveys and, in the end, a map of little or no value.


1. Introduction

Rapid access to desired geospatial data is the key to data-driven geography in the context of big Earth data (Miller & Goodchild, 2015), which requires efficient geospatial data integration and sharing. Unfortunately, the processes of integration and sharing face many challenges. One of the challenges is semantic heterogeneity caused by the characteristics of multiple sources, types, and forms of geospatial data. Many efforts for geospatial data integration and sharing, based on the idea of metadata standards, have been made in the past. Examples include the Federal Geographic Data Committee (FGDC) 1 , the National Spatial Data Infrastructure (NSDI) 2 , the International Organization for Standardization (ISO) 19115 3 , and the National Map from the United States Geological Survey (USGS) (Budak Arpinar et al., 2006). However, these efforts only partially addressed the semantic issues (Baglioni, Giovannetti, Masserotti, Renso, & Spinsanti, 2008; Lutz & Klien, 2006).

A more promising approach to solve heterogeneity problems is to develop and use ontologies (Baglioni, Giovannetti, Masserotti, Renso, & Spinsanti, 2008; Hu, 2017). Ontologies are formal and explicit specifications of shared concepts in a machine-readable way (Gruber, 1993; Studer, Benjamins, & Fensel, 1998). Ontologies can be used to provide a semantic description for geospatial data and help computers to understand the semantic meaning implied in the content of geospatial data. Ontologies can also be used to describe the relationships between semantic entities, and the reasoning mechanism of ontologies can help to discover more implicit relationships (Peuquet, 2002). Thanks to these advantages, ontologies are among the best solutions to implement geospatial data integration and sharing on the semantic level.

In the 1990s, ontologies were introduced in the domain of geographic information science (Deliiska, 2007; Hahmann & Stephen, 2018; Hu et al., 2013; Li et al., 2017a; Liu, Li, Shen, Yang, & Luo, 2018; Reitsma, Laxton, Ballard, Kuhn, & Abdelmoty, 2009; Stock et al., 2012; Sun, Wang, Ranjan, & Wu, 2015; Winter, 2001). These past decades have witnessed a rapid development of geo-ontologies, namely extensions of ontologies in geographic information science (GIScience). The insights from the previous studies can be organized into two aspects (Agarwal, 2005). The first aspect is trying to provide a unified framework of shared geographic concepts by constructing a top-level geo-ontology, such as the seven-level hierarchy of geo-ontology (Couclelis, 2010), the 5-tier geo-ontology (Frank, 2001), and a semantic reference system (Kuhn, 2003). The second aspect is to develop a domain-level geo-ontology to facilitate specific tasks. Many task-oriented geo-ontologies focused on different applications, including information integration (Bittner, Donnelly, & Smith, 2009; Buccella, Cechich, & Fillottrani, 2009; Buccella et al., 2011; Chen, Sabri, Rajabifard, & Agunbiade, 2018; Comber, Fisher, & Wadsworth, 2011; Fonseca, Egenhofer, Agouris, & Camara, 2002; Hong & Kuo, 2015; Uitermark, van Oosterom, Mars, & Molenaar, 2005; Wang, Ma, & Chen, 2018; Zhao, Zhang, Wei, & Peng, 2008), information retrieval (Baglioni et al., 2008; Gui et al., 2013; Li, Goodchild, & Raskin, 2014; Lutz & Klien, 2006; Peachavanish & Karimi, 2007; Sun et al., 2015; Wiegand & García, 2007), data interoperability (Brodeur, Bedard, Edwards, & Moulin, 2010; Kuo & Hong, 2016), semantic representation of geospatial data (Dean, 2007; Fonseca & Llano, 2011; Schuurman & Leszczynski, 2006; Sun et al., 2015), geospatial clustering (Wang, Gu, Ziebelin, & Hamilton, 2010), spatial decision support (Li, Raskin, Goodchild, & Janowicz, 2012), disaster management and response (Xu, Nyerges, & Nie, 2014; Zhang, Zhao, & Li, 2010), web service discovery (Chen, Chen, Hu, & Di, 2011; Klien, Lutz, & Kuhn, 2006), composition of geoprocessing services (Li et al., 2015; Lutz, 2007), urban environment analysis (Fonseca, Egenhofer, Davis, & Borges, 2000), and environment modeling (Fallahi, Frank, Mesgari, & Rajabifard, 2008). This paper mainly focuses on the studies relevant to geospatial data integration and sharing.

An integration process includes two steps: semantic enrichment and mapping discovery (Buccella et al., 2009). Semantic enrichment is to annotate the data with essential semantic information. Mapping discovery is to find the mappings of semantic annotations of different data. According to the roles of ontologies in these steps, ontology-based methods for data integration can be classified into three categories: single-ontology methods, multiple-ontologies methods, and hybrid methods (Wache et al., 2001). A single top-level ontology was developed to describe all the relations between different kinds of basic entities that were frequently used to annotate the relations between data in the integration process (Bittner et al., 2009). Hong and Kuo (2015) established multiple bridge ontologies to determine the relations of concepts for cross-domain geospatial data integration. Chen et al. (2018) also transformed domain knowledge into multiple domain ontologies that were used to bond the data and ontology via semantic enrichment. Buccella et al. (2011) annotated geospatial data with multiple domain ontologies at the semantic enrichment step, and then a global ontology was introduced to complete the semantic mapping by combining the domain ontologies.

To enable geospatial data sharing, ontologies were used to implement geospatial data discovery at the semantic level in the previous studies. Generally, an ontology was developed to provide a formal and hierarchical semantic annotation for geospatial data, and users can carry out data discovery on the deeper semantic details of geospatial data based on the developed ontology (Stock et al., 2013). Lutz and Klien (2006) first proposed a relatively complete framework, including components for creating an ontology and registration mappings to describe the relationships between the ontology and feature types; additionally, a user interface was implemented so that the users could conduct data discovery. Andrade, Baptista, and Davis (2014) also used an ontology to improve data discovery in terms of space, time, and theme. Wiegand and García (2007) formalized the relationships between different tasks and information on data sources for each task using an ontology; thus, data sources required by a specific task could be found by reasoning on the ontology.

Another focus of geospatial data sharing is to enhance semantic interoperability of geospatial data (Fallahi et al., 2008). An ontology-based conceptual framework to describe the different configurations involved in geospatial data interoperability was proposed by Brodeur et al. (2010), which included five ontological phases (including reality, a cognitive model of reality, and a set of conceptual representations, etc.). Kuo and Hong (2016) leveraged a bridge ontology to generate semantic mappings of cross-domain concepts to facilitate geospatial data interoperability.

Geospatial metadata was also an important aspect for data sharing. Some studies focused on improving the quality of geospatial metadata using ontologies. Sun et al. (2015) initially measured two types of metadata uncertainty (incompleteness and inaccuracy) quantitatively via possibilistic logic and probabilistic statistics; subsequently, an ontology that includes these uncertainties was developed to improve the quality of metadata. Schuurman and Leszczynski (2006) proposed ontology-based geospatial metadata, which added some non-spatial fields to the existing metadata schemas to describe geospatial data, such as sampling methodologies or a measurement specification.

The aforementioned studies greatly facilitated the progress in this field. However, most of these studies developed general ontologies for the GIScience domain. The ontologies developed for geospatial data integration and sharing did not consider all aspects of semantic heterogeneity: only the spatial, temporal, and thematic information of geospatial data was used, ignoring information about provenance and morphology. Hence, integration and sharing of heterogeneous geospatial data require a specialized geospatial data ontology to help deal with the existing semantic issues.

The main contributions of this paper are:

  • a systematic analysis of the semantic problems in geospatial data integration and sharing;

  • an integral general framework to provide the overall structure and composition of GeoDataOnt; and

  • a multilayer modular GeoDataOnt base that includes the semantic knowledge about geospatial data.

The remainder of this paper is structured as follows. Section 2 describes the characteristics hierarchy of geospatial data. Section 3 presents a systematic analysis of the semantic problems in geospatial data integration and sharing. The general framework of GeoDataOnt is presented in Section 4. Section 5 shows the detailed design and implementation of GeoDataOnt. Section 6 discusses the key limitations, challenges, and broad applications of GeoDataOnt. Section 7 summarizes this paper.


2. Basic idea of the activity design for participatory geo-analysis

The activity that could constitute the problem-solving pathway is the key point of this proposed method for participatory geo-analysis (Voinov et al. 2018). These activities can generally be classified into four categories according to their different purposes: awareness-related activities, data-related activities, model-related activities, and application-related activities (Robson et al. 2008; Laniak et al. 2013; Blocken and Gualtieri 2012).

(1) The awareness-related activities include a series of tasks to gain awareness about how to conduct a geo-analysis. During these activities, geographic problem-related resources are collected to enhance our understanding of geographic phenomena in geographic problems. Additionally, the background, limitations, accessible methods, and other valuable information about the problems need to be clarified. Therefore, it is important that interdisciplinary participants communicate about ideas and demands, prepare sufficient resources, and sufficiently understand the geographic problems (Badham et al., 2019).

(2) The data-related activities include tasks that are primarily relevant to data operations (e.g., converting data formats and editing data). Data are crucial for demonstrating geographic phenomena and processes. However, due to the heterogeneity of data, it is sometimes infeasible to use data directly. The data-related activities aim to prepare appropriate data and discover valuable information by changing the format, structure, attribute, or representation of data. Through these activities, participants can share knowledge, communicate about data processing methods, and conduct data-related operations collaboratively.

(3) The model-related activities involve different tasks and operations that are related to models, such as conceptual model building, model calibration, and model evaluation. A geographic model is an abstract representation of knowledge on geographic systems, and this type of representation can be used in the simulation and analysis of geographic processes (Badham et al., 2019). However, using an appropriate method to build a qualified model is usually difficult. Therefore, these activities are needed during modeling practice to generate credible models. In the model-related activities, participant engagement can improve understanding of the interactions in the geographic environment and lead to better modeling outcomes (Jakeman, Letcher, and Norton 2006).

(4) The application-related activities involve the application of prepared data, models, and methods to address geographic problems in human life directly. Appropriate data and models that were previously processed or built are used in these activities. For example, once a forest growth model has been built and the field data have been processed, different tasks are required to use these data and models to forecast growth and yield for forest management. Furthermore, participatory application-related activities can help participants, including managers and stakeholders, to balance different viewpoints and obtain better outcomes while addressing geographic problems (Jakeman, Letcher, and Norton 2006; Jones et al. 2009).

To help manage different tasks and represent the geo-analysis process during participatory geo-analysis practices, eight core activities are defined from these categories, namely context definition and resource collection, data processing, data analysis, data visualization, geo-analysis model construction, model effectiveness evaluation, geographical simulation, and decision making, as shown in Figure 1 (Jakeman, Letcher, and Norton 2006; Elsawah et al. 2017; Badham et al., 2019). These activities are designed in accordance with the purpose of the different tasks in the geo-analysis process. For example, both context definition and resource collection are preparatory tasks for geo-analysis.

