I'm looking for a reference/text with an explicit formula to calculate a satellite position given pseudorange measurements from later epoch times. I've seen a lot of MATLAB code on the internet, but I was hoping to have these formulas laid out nicely.
Can anyone point me in the right direction?
Algebraic solutions are not very common for GPS; however, several do exist. The best known is probably Bancroft's. Others include Abel's and Kleusberg's.
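Bancroft's method is compact enough to sketch directly. The following is a minimal NumPy version (my own illustration, not taken from any of the libraries mentioned in this thread): it assumes satellite-clock-corrected pseudoranges in meters and ECEF satellite positions, and returns the receiver position plus clock bias in meters, choosing between the two roots of the quadratic by pseudorange residual.

```python
import numpy as np

def lorentz(u, v):
    """Minkowski inner product <u, v> = u1*v1 + u2*v2 + u3*v3 - u4*v4."""
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2] - u[3] * v[3]

def bancroft(sat_pos, pseudoranges):
    """Closed-form position/clock solution from >= 4 pseudoranges.

    sat_pos      : (n, 3) ECEF satellite positions in meters
    pseudoranges : (n,) satellite-clock-corrected pseudoranges in meters
    Returns (position_ecef, clock_bias_m).
    """
    B = np.column_stack([sat_pos, pseudoranges])
    M = np.diag([1.0, 1.0, 1.0, -1.0])
    a = 0.5 * np.array([lorentz(row, row) for row in B])
    Bplus = np.linalg.pinv(B)            # least-squares generalized inverse
    u = Bplus @ np.ones(len(B))
    v = Bplus @ a
    # Quadratic <u,u> L^2 + 2(<u,v> - 1) L + <v,v> = 0, with L = 0.5 <y, y>
    qa = lorentz(u, u)
    qb = 2.0 * (lorentz(u, v) - 1.0)
    qc = lorentz(v, v)
    disc = np.sqrt(qb * qb - 4.0 * qa * qc)
    candidates = [M @ (v + lam * u)
                  for lam in ((-qb + disc) / (2 * qa), (-qb - disc) / (2 * qa))]

    # Keep the root that best reproduces the measured pseudoranges
    def residual(y):
        pred = np.linalg.norm(sat_pos - y[:3], axis=1) + y[3]
        return np.linalg.norm(pred - pseudoranges)

    best = min(candidates, key=residual)
    return best[:3], best[3]
```

With consistent (noise-free) measurements the spurious mirror root produces large residuals, so the residual test reliably picks the physical solution; with real data you would also weight the measurements.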
If you are looking for libraries and a test environment in C++ or Python, you could have a look at the Piksi/Swift Navigation software tools. They are open source (LGPL), and the code is well documented in terms of function descriptions and literature sources.
The source-code documentation of the function gnss_analysis/solution.py:single_point_position cites Kaplan, E. (ed.), Understanding GPS: Principles and Applications, 2nd edition, Section 2.4.2, for example, and implements the procedure.
```python
def single_point_position(obs, max_iterations=15, tol=1e-4):
    """
    Computes the single point position by iteratively solving
    linearized least squares solutions.

    Parameters
    ----------
    obs : pd.DataFrame
        A DataFrame that holds 'pseudorange' observations (corrected
        for satellite clock errors), satellite positions ('sat_x',
        'sat_y', 'sat_z'), all in ECEF coordinates at the time of
        transmission, and a 'tow' variable which corresponds to the
        time of arrival.  All observations are assumed to have been
        propagated to a common time of arrival.
    max_iterations : int (optional)
        The maximum number of iterations, defaults to 15.
    tol : float (optional)
        Tolerance for convergence (the same is used for both position
        and time), defaults to 1e-4.

    Returns
    -------
    spp : dict
        A dictionary holding the single point position.  Keys included:
            pos_ecef     : a length-three array representing the
                           position in ECEF coordinates
            time         : the GPS system time at the solution
            clock_offset : the receiver clock error
            converged    : a boolean indicating convergence

    Reference: Kaplan, E. (ed.), Understanding GPS: Principles and
    Applications, 2nd edition, Section 2.4.2.
    """
    ...
```
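For readers without the library at hand, here is a minimal sketch of the iterative linearized least-squares procedure the docstring describes, written against plain NumPy arrays rather than the DataFrame interface. The function name, array interface, and synthetic units (meters, seconds) are illustrative, not the library's actual implementation.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def spp_least_squares(sat_pos, pseudoranges, max_iterations=15, tol=1e-4):
    """Iterative linearized least-squares single point position.

    sat_pos      : (n, 3) ECEF satellite positions at transmit time (m)
    pseudoranges : (n,) satellite-clock-corrected pseudoranges (m)
    Returns (pos_ecef, clock_offset_s, converged).
    """
    x = np.zeros(4)  # state [x, y, z, c*dt], starting at Earth's center
    for _ in range(max_iterations):
        los = sat_pos - x[:3]                   # line-of-sight vectors
        ranges = np.linalg.norm(los, axis=1)
        residuals = pseudoranges - (ranges + x[3])
        # Geometry matrix: partial derivatives of the predicted pseudorange
        G = np.hstack([-los / ranges[:, None], np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(G, residuals, rcond=None)
        x += dx
        if np.linalg.norm(dx) < tol:
            return x[:3], x[3] / C, True
    return x[:3], x[3] / C, False
```

With four or more satellites in reasonable geometry this typically converges in a handful of iterations, even from an all-zeros initial state.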
Calculating new satellite position given pseudorange measurements - Geographic Information Systems
Geospatial data capture depends on the ability to attach accurate real world coordinates to a feature. Creating today’s spatially enabled environment was only possible through a remarkable series of technological advancements and some enlightened public policies. As was described in 2009 in the Changing Geospatial Landscape report by the National Geospatial Advisory Committee:
Nearly all the data, technology, and applications we see today can be traced to innovative policies and government practices of the past. As such we require similar innovative policies now to keep pace with this remarkable sea change. Government-based geographic information providers can no longer think of themselves as a player outside of or immune from the community of private sector, state, local or even public stakeholders. (National Geospatial Advisory Committee 2009)
Clearly, the world of geospatial data capture was permanently altered when President Clinton opened the non-degraded GPS constellation for civilian use. Now, even an inexpensive smartphone knows where it is and will geotag a photo with coordinates that are within a few meters of the actual position. In fact, for a huge list of applications, the modern smartphone is a fairly accurate surveying device that can accurately place a user in the context of a detailed map or air photo. GPS technology also powers real-time passive and active sensor systems that inventory and monitor a limitless variety of terrestrial, atmospheric, oceanic, seismic, and hydrological conditions. Fleets of commercial and governmental satellites now provide daily high-resolution coverage of the world capable of finding missile sites or extracting the footprint of a building. The busiest airport in the world has created a complete unified three dimensional interior and exterior geospatial model to support scores of decisions. In other words, the Internet of things has an unlimited geospatial footprint. It is important to place the evolution of geospatial data capture in the wider context of GIS technology and use. As Goodchild (2011) suggested,
the entire Internet is quickly becoming one vast GIS. Over the past two decades, however, the widespread availability of GPS and mapping software has changed the balance in this equation, making it possible to create maps of virtually anything for almost nothing…. In the future, GIS will involve much more real-time situation monitoring and assessment and will need new kinds of tools that treat information as continually changing. (Goodchild 2011).
Data for input to GIS applications comes from many sources. The most common format is a raster or vector representation of a feature on the Earth's surface with associated geographic coordinates that are linked directly or defined relative to fixed positions. A raster is simply a geographically registered matrix of values with a system of registration to real-world coordinates. Vector data maintain the fidelity of geographical features as points, lines, or polygons. Historically, the need to integrate multiple data themes resulted in a competition between the two data models. The raster camp generated a matrix of numbers assigned to cells of a fixed size. It is a perfect structure for data acquired from sensors and for the representation of continuous surfaces such as elevation. However, considerable information is lost through the generalization process, and grid cells are not suited for representation of linear features. Over the past thirty years we have witnessed a coalescence of the raster and vector camps. The choice of data structure and analytical tools is governed by the requirements of the application (Figure 1).
Figure 1. 1978 photo interpreted land use polygons output as vectors on pen plotter and raster on electrostatic plotter. Source: authors.
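The "geographic registration" of a raster can be made concrete with a small sketch. The function below follows the common six-coefficient affine convention (as used, for example, by GDAL); the raster origin and cell size are made-up numbers for illustration.

```python
def pixel_to_world(gt, row, col):
    """Map a raster cell (row, col) to world coordinates using a
    six-coefficient affine geotransform:
    (origin_x, pixel_width, row_rotation,
     origin_y, col_rotation, pixel_height)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical 30 m north-up raster whose upper-left corner sits at
# (500000, 4600000) in a projected coordinate system
gt = (500000.0, 30.0, 0.0, 4600000.0, 0.0, -30.0)
```

For a north-up raster the rotation terms are zero and the pixel height is negative, since row numbers increase downward while northings increase upward.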
In the broadest sense, geospatial data is either captured directly by an instrument capable of determining its position or indirectly through a manual or automatic procedure. A surveyor's recording of a benchmark's latitude and longitude is an example of GIS-ready data. This also means that a smartphone with GPS in the hands of a citizen is also directly capturing GIS-ready data. A huge range of devices, such as digital cameras, are passive sensors that directly capture raster geospatial data. Other active sensors such as LiDAR and radar also acquire huge clouds of points that measure a wide range of characteristics.
Indirect geospatial data capture requires that a manual or automated procedure be used to create GIS-compatible data. The field of photogrammetry has perfected ways to calculate three-dimensional positions for features on air photos. Contour lines and digital elevation models are interpolated from surveyed measurements. Digital orthophotos produced with softcopy (digital) photogrammetry enable a user to capture GIS-ready features through heads-up digitizing. Alternatively, using georeferencing algorithms, traditional analog maps and photos can be adjusted to sources with known coordinates. In the early days of digital data creation, the capture of vector features from source materials required specialized digitizing tables that functioned like a large piece of digital graph paper (Figure 2). An operator mounted an existing map or photo on the digitizing table and used a stylus to trace points, lines, or polygons. Such devices were limited to large organizations or companies that provided commercial digitizing services. As GIS applications expanded and demand for digital data increased, these tables also became a common fixture in research labs and governmental organizations. Initially, operators traced features without any preview or editing capability. Over time, special workstations enabled the operator to interactively display and edit features on monochrome storage tubes. In today's modern GIS environment, data capture is conducted with desktop tools that support on-screen or heads-up digitizing with a mouse on standard color computer displays. Sophisticated editing tools support automated edge following, snapping, polygon completion, and enforcement of topological rules.
Figure 2. 1987 manual digitizing process at the National Wetlands Research Center, USFWS, Slidell, LA. Image source: James D. Scurry, used with permission.
The more than 55,000 scanned USGS topographic quadrangles are an example of a geographically rectified source material. Digitizing can be performed by mounting the material on a digitizing table or working with a scanned version. This procedure usually requires air photo interpretation to identify and select features. In effect, scanned versions of air photos have the same properties as any raster data acquired by a passive sensor such as a multispectral Earth observation satellite. That means the values of the pixels can be classified into categories such as land cover. These raster data sets can also be used to extract features such as roads and rooftops. Today, they can be analyzed with sophisticated machine learning tools to perform automated feature extraction. GIS software is ready to incorporate them as either raster or polygons.
Other forms of indirect capture are also critical to creating geospatial data. In many cases new features are linked to existing rectified features. This process can be performed interactively by snapping to relevant features and adding new attributes. It can also be performed through a spatial search that transfers attribute information from the closest point or within a buffer. Reverse geocoding that finds all the people who need to be notified in an emergency is an example of the latter. The surveyor’s records of metes and bounds can also be used to create polygons of land ownership and generate an authoritative digital tax map. Using the site address these parcels also provide a basis for automated address geocoding that links coordinate points to an address. In a similar manner, the US Census Bureau interpolates the coordinates of a street address from the address ranges in its TIGER line files. NOAA’s Coastal Mapping Program demonstrates how data from active and passive sensors can be fused to create a comprehensive representation of the coast. For example, a handheld Analytical Spectral Device that collects eighty spectral signatures is used in the field to capture coastal features that are integrated with LiDAR and multispectral imagery to create a comprehensive depiction of the coast. This integrated multi-layer data is used to monitor conditions and can be updated on demand.
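The address-range interpolation described above (the TIGER-style approach) can be sketched in a few lines. The street segment, address range, and coordinates below are hypothetical.

```python
def interpolate_address(number, low, high, start, end):
    """Estimate the coordinates of a street address by linear
    interpolation along a street segment whose endpoints carry a
    known address range."""
    t = (number - low) / float(high - low)
    return (start[0] + t * (end[0] - start[0]),
            start[1] + t * (end[1] - start[1]))

# House 150 on a block numbered 100-200 falls halfway along the segment
loc = interpolate_address(150, 100, 200, (0.0, 0.0), (100.0, 40.0))
```

Real geocoders also account for street side (odd/even ranges) and offset the point from the centerline, but the core placement is this interpolation.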
Over the past half century the process of data capture has changed dramatically, both through the introduction of novel technologies and significant improvements in existing ones. The evolution of GIS as an essential technology benefited directly from general advancements in the power of processors, the magnitude of storage systems, and the capabilities of IT network infrastructure. In terms of software tools, cartography, surveying, geodetic surveying, photogrammetry, computer-aided design, remote sensing, and machine learning can all be accommodated under the umbrella of GIS. That means there are pathways between these applications that enable a user to capture geospatial data from many sources, integrate them with other themes, convert them between data structures, and use analytical procedures such as interpolation or spatial search to create new data from existing features. In addition, numerous specific advancements have directly impacted the way geospatial data are now created, including:
- GNSS (Global Navigation Satellite System). Examples include Europe’s Galileo, the USA’s NAVSTAR, Russia’s GLONASS, and China’s BeiDou. These have enabled even smartphones to independently record coordinates, have revolutionized surveying, and have enabled crowd-sourced data capture.
- Digital photography (passive sensors). These directly generate raster data by assigning reflected and emitted light values to pixels. They can be mounted on a range of platforms from small UAVs to Earth observation satellites.
- LiDAR & other active sensors. These create huge point clouds of values returned from emitted pulses and have revolutionized the way that elevation data is created and manipulated.
- Digital orthophotographs produced from softcopy photogrammetry. The availability of such imagery means it is the preferred photo source for heads-up manual digitizing. They have eliminated the need for digitizing tables.
- Oblique air photos. These multiple directional images provide 3D perspectives that support and enhance data extraction.
- Classification algorithms for raster data. These include supervised and unsupervised procedures for generating land cover and other characteristics. They provide a way to update maps and automatically create new data themes.
- Automated feature recognition and extraction. Image processing algorithms that identify and separate edges of features and extract polygons.
- Address geocoding. These procedures assign coordinates to a textual string such as a street address, achieved through a direct database join or by interpolation from address ranges. Improvements in accuracy and completeness mean this approach has become an essential capability for data capture.
- Creation of accurate global datums and software to handle geographic and projected coordinates. Improved measurements and integration of these into GIS has facilitated the development of data themes.
- Spatial metadata standards. Advances in the development and deployment of these have improved interoperability, the development of new applications, and the assessment of the suitability of data for use.
- City Geography Markup Language (CityGML). This open data model provides a mature semantic information model for the representation of 3D urban objects at different levels of detail (LOD).
- GeoTIFF and GeoPDF file structures. These common formats facilitate the transfer of traditional digital data. They have provided GIS and non-GIS users tools to calculate coordinates, separate themes, and make measurements.
- Geographic search engines. These web-based tools provide easier search, discovery, access, and dissemination of geospatial data.
- Robust networks. The existence of robust networks has facilitated virtual connections rather than relying on more cumbersome file transfer methods.
- Web 2.0. This has enabled sharing and collaboration for extensive applications, including geospatial ones.
Certain advances in technologies have enabled GIS in particular to flourish. This section elaborates on a few of these innovations.
4.1 Direct Capture in the Field / Surveying
For centuries, surveyors have utilized specialized measurement and optical devices to estimate coordinates in the field. They created an elaborate framework of benchmarks for registration of additional data themes. Fortunately, we have witnessed a revolution in the way coordinates are now associated with features. Even before the establishment of the Global Positioning System (GPS), laser range finders and total stations had modernized the surveying profession. Theodolites and electronic distance measurement devices enabled surveyors to measure long distances and calculate angles based on line-of-sight rather than direct access. However, the deployment of the GPS network was a game changer. This was a remarkable achievement that opened the door to the geographically enabled world that we now enjoy. The GPS network enabled a receiver to determine latitude and longitude without linking to a network of existing control points. However, it took President Clinton’s executive order in 2000, which removed Selective Availability, to fully open the system to the civilian sector. The impact was immediate. Usery et al. (2018) provide an excellent discussion of the impact of GPS.
However, with the initial availability of GPS location signals in the 1980s, and particularly with the decryption of the signal and public availability in the 1990s, GPS and its counterpart global positioning systems in Russia and Europe and regional systems in a host of countries (China, India, and others) have become the standard of geolocation and have launched thousands of new location-based services, many of which changed the fabric of our social and business systems. (Usery, Varanka & Davis 2018, 386).
Over the past twenty years, the GPS network became part of an international Global Navigation Satellite System (GNSS). This umbrella term includes Russia’s GLONASS and the European Union’s Galileo constellations. The larger combined constellation greatly increases the chance of acquiring the desired number of satellites.
While GPS receivers can independently acquire latitude and longitude positions anywhere on Earth, they do not all generate the same level of accuracy. Improvements in precision depend on whether the receiver can accept more than one signal, whether it can synchronize with ground-based stations, and the length of time the receiver collects data. The US government, foreign partners, and the private sector have provided the tools to refine the accuracy of the coordinates. These devices are used by licensed surveyors as well as public employees and scientists for direct field data capture. These systems often refine their data to within a meter by linking to land-based Continuously Operating Reference Stations (CORS). When precision work in construction and engineering is required, surveyors collect at least two hours of data on a dual-frequency receiver and send it to the National Geodetic Survey’s (NGS) Online Positioning User Service (OPUS). This service adjusts the data from three national CORS sites to refine the horizontal and vertical coordinates, and the user receives centimeter-level position information via email. This system is used thousands of times each month by surveyors, who voluntarily contribute their highly accurate data files to densify the National Spatial Reference System. This system is designed “to tie together all geographic features to common, nationally used horizontal and vertical coordinate systems” or, in other words, to make sure that “everything matches up.”
The GNSS is also improving our ability to accurately measure the shape of the Earth. This impacts the establishment of local and global datums and reference frames. From the perspective of the GIS community, access to these datums established by the National Geodetic Survey (NGS) and software to handle more than 200 map projections has greatly simplified the integration of geospatial data.
While improvement in surveying technology has had an enormous impact on commercial and scientific endeavors, the real game changer was the incorporation of a GPS receiver in billions of smartphones. In 2009, Apple turned the iPhone 3GS into a surveying device. Suddenly, iPhone users were never lost, and they could find and navigate to positions anywhere in the world. While a smartphone with a single receiver can estimate a location within a few meters, the positional accuracy improves to a few centimeters when collecting many records on a modern dual-frequency phone. These new phones can compensate for interference from buildings and connect to the full range of GNSS satellites. This means that for some applications, the next generation of smartphone may compete with dedicated GPS instruments. Even today a common smartphone can directly capture the location of an unlimited number of features such as trees, trash bins, fire hydrants, urban furniture, bike racks, signs, poles, and meters.
For example, even the simple tools on an iPhone and Google Maps can capture very useful data such as an inventory of fire hydrants (Figure 3). It is interesting to note that HazardHub, a property risk company, has outsourced the capture of fire hydrants to a group in Southeast Asia, where nine analysts have identified thousands of hydrants by manually scanning Google Street View images. They found that this manual process was better than an artificial intelligence bot (Foust 2020).
Figure 3. Latitude and Longitude of a fire hydrant with information from iPhone photo displayed on Google Maps. Source: authors.
These same devices are being used by citizens to crowdsource the location of potholes, vandalism, and many other areas of concern. A good example of volunteered geographic information occurred in 2017, when 700 volunteers set a Guinness world record by using their phones to “survey” the King Tide in Virginia (Virginia Institute of Marine Science, 2017). However, the most remarkable application has been OpenStreetMap (OSM), created by volunteers who capture traces of streets and points of interest during mapping parties. In many areas OSM is the best available map for civilian use. Furthermore, the USGS uses a group of volunteers called the National Map Corps, who “have made more than 500,000 edits to over 400,000 structure points” (Usery, 2019).
4.2 Georeferencing Photos
The history of aerial photography includes a wide range of options from hot air balloons to orbiting satellites. The importance of imagery as the foundation for data capture cannot be overstated.
Air photos were a critical asset in World War I. Following the war, several aerial photography programs were initiated by the Department of Agriculture and the USGS to map and monitor changes on the Earth’s surface. Over time, photographs evolved from black and white, to color taken from manned aircraft, to output from multispectral sensors carried by satellites. From the perspective of GIS data capture, vertical photographs taken directly below an aircraft are the most useful format. However, the scale on the photo changes outward from the nadir position. Special optical devices such as a zoom transfer scope were used to manually align features on a photo to the base map. In a digital environment, photos and maps can be georeferenced to compensate for differences in scale and orientation. The term “rubber sheeting” is often used to describe the process. The accuracy of this georeferencing process depends on the selection of a good set of photo-identifiable control points with known coordinates. The process involves creating a series of links between the features on the image and corresponding features on the ground. The best sources of these ground control points are surveyed targets such as a white X in an open field or road. In other cases, street corners, hedges, building edges, roads, docks, and other features have been used. A set of links spaced across the image is used to calculate the parameters for a transformation that shifts and warps each pixel to a new position. Georeferencing is an iterative process that allows the operator to adjust the control points, links, and choice of transformation. Most software provides a choice of transformations such as polynomial, spline, projective, or similarity. An error table is generated to evaluate the accuracy of the transformation. Ultimately, the operator will visually inspect the results of the transformation and accept a level of error.
The final step is to rectify the georeferenced image to standardized coordinate systems. The rectified source material is then ready for display or to serve as a base map for extraction or update of features. Over several decades, countless photos have been georeferenced.
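The first-order (affine) case of the transformation described above can be sketched as a small least-squares fit from control-point links. The image and world coordinates below are made up for illustration; real workflows would rely on the error table produced by the GIS software.

```python
import numpy as np

def fit_affine(image_pts, world_pts):
    """Fit a first-order (affine) georeferencing transform from
    control-point links by least squares: [col, row, 1] @ coef -> [x, y].
    Also returns the RMSE reported in a typical error table."""
    image_pts = np.asarray(image_pts, float)
    world_pts = np.asarray(world_pts, float)
    A = np.column_stack([image_pts, np.ones(len(image_pts))])
    coef, *_ = np.linalg.lstsq(A, world_pts, rcond=None)
    rmse = float(np.sqrt(np.mean((A @ coef - world_pts) ** 2)))
    return coef, rmse

def apply_affine(coef, image_pts):
    """Warp image (col, row) points to world coordinates."""
    image_pts = np.asarray(image_pts, float)
    return np.column_stack([image_pts, np.ones(len(image_pts))]) @ coef

# Hypothetical control-point links (image col/row -> world x/y)
links_img = [[0, 0], [10, 0], [0, 10], [10, 10]]
links_world = [[100.0, 50.0], [115.0, 49.0], [102.0, 70.0], [117.0, 69.0]]
coef, rmse = fit_affine(links_img, links_world)
```

Higher-order polynomial, spline, or projective transformations follow the same pattern with different basis terms; the iterative part of georeferencing is the operator adding, removing, and re-weighing links until the RMSE is acceptable.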
The tools to georeference images are so straightforward that the Leventhal Map & Education Center of the Boston Public Library operates a web-based application that allows users to interactively georeference historical maps and photos. These georeferenced air photos and maps are ready for analysis and visualization with the popular swipe tool that graphically portrays two views of the same area. They are a quick way to illustrate change (Figure 4).
Figure 4. Comparing a georeferenced historical map of Washington, D.C., over aerial imagery, via Esri's swipe application. Image source: authors.
The orthophoto process warps the source image so that distance and area correspond with real-world measurements. This allows photos to be used directly in mapping applications, as it mitigates distortion otherwise present in aerial photography. Developed by photogrammetrists in the 1960s, the process takes overlapping stereo images and a digital elevation model and adjusts for variations in terrain and the tilt of the aircraft (Figure 5).
Figure 5. Comparison of georeferenced aerial photograph and an orthophoto. Note the difference in the representation of the straight pipeline. Source: USGS.
4.4 Automated Classification and Feature Extraction
The evolution of data capture has been highlighted by the conversion from manual to automated procedures. In terms of raster data from cameras, remote sensing researchers have devoted their careers to finding the best combination of spectral signals for identifying different types of land cover and monitoring conditions on the Earth. The classification algorithms are critical tools for finding new features (e.g., wetlands), measuring characteristics (e.g., crop yields), or detecting change (e.g., urban sprawl). These new tools have evolved from the broader field of image processing. Much of the current work on pattern recognition, artificial intelligence, and machine learning focuses on automated feature extraction. In surveillance applications, machine learning tools can be used to find specific targets. Radiologists employ the same tools to find tumors. In domestic applications, these tools can find specific objects like swimming pools, additions to buildings, or areas of unhealthy vegetation. In a similar fashion, LiDAR elevation and intensity values can also be analyzed to isolate trees, powerlines, structures, and other features on the surface (Figure 6).
Figure 6. Extraction of individual structures and trees from LiDAR. Image source: authors.
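One of the simplest supervised classification procedures mentioned above, minimum distance to means, can be sketched in a few lines. The two-band "signatures" below are invented for illustration; in practice the class means come from training sites chosen by an analyst.

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Minimum-distance-to-means supervised classification: each
    pixel's spectral vector is assigned to the nearest class mean."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Hypothetical two-band signatures: class 0 = water, class 1 = vegetation
means = np.array([[10.0, 5.0], [60.0, 90.0]])
labels = min_distance_classify(np.array([[12.0, 7.0], [55.0, 85.0]]), means)
```

More capable classifiers (maximum likelihood, random forests, neural networks) replace the distance rule but keep the same shape: per-pixel spectral vectors in, per-pixel class labels out.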
Many image processing routines will generate polygons from the classification images. A recent analysis by Microsoft highlights the status of automated feature extraction. Microsoft analysts accessed five million satellite images to capture the edges of buildings and create polygon footprints for 125,192,184 buildings in the United States (Figure 7). It is noteworthy that a major IT company wanted that information and then shared it. From a technical viewpoint the analysis puts a new marker on the phrase “even visible from space” and certainly indicates that geospatial data capture is ready to handle big data.
Figure 7. Microsoft building footprints for a section of Columbia, South Carolina. Data source: Esri's Living Atlas. Image source: authors.
The Microsoft rooftop example illustrated above demonstrates the current state of the art in terms of the supply and demand for geospatial data. However, it is only part of a larger movement that impacts the way individuals live, work, and play in a geospatially enabled society. As a result of technological advancements, demands for and investments in many programs exist, as they are able to draw upon a remarkable storehouse of geospatial data. Numerous geospatial platforms have emerged that provide easy-to-use tools for locating existing geospatial resources based on standardized metadata. These platforms include the Esri Living Atlas, which hosts thousands of authoritative layers contributed by its user community. Many of the nation’s major research institutions have built and shared geographic data discovery platforms using GeoBlacklight, a “multi-institutional open-source collaboration building a better way to find and share geospatial data.” These search engines let users discover, browse, and download geospatial data for a specific area of interest. Part 2 of this entry on the changes to geospatial data capture will focus on the implications of these changes, including the existence of vast new collections of data.
Foust, B. (2020). Personal communication.
Goodchild, M. (2011). Looking Forward: Five Thoughts on the Future of GIS. Esri ArcWatch https://www.esri.com/news/arcwatch/0211/future-of-gis.html
National Geospatial Advisory Committee (NGAC). (2009). NGAC Report: The Changing Geospatial Landscape. https://www.fgdc.gov/ngac/NGAC%20Report%20-%20The%20Changing%20Geospatia.
Usery, E. L., Varanka, D. E., and Davis, L. R. (2018). Topographic Mapping Evolution: From Field and Photographically Collected Data to GIS Production and Linked Open Data. The Cartographic Journal, 55:4, 378-390. DOI: 10.1080/00087041.2018.1539555
The Global Positioning System (GPS) consists of 24 Earth-orbiting satellites. These satellites allow any person who owns a GPS receiver to determine his or her precise longitude, latitude and altitude anywhere on the planet. For as little as $100, you can know exactly where you are and where you have been. For anyone who has ever been lost -- while hiking in the woods, boating in the ocean, driving in an unfamiliar city or flying a small airplane at night -- a GPS receiver is a miracle. When you use a GPS receiver, you're never lost.
How is this possible? Now, we'll look at the details of how the GPS satellites and GPS receivers work together to pinpoint a location. You'll find that the GPS system is an amazing technological tour de force!
Let's look at an example to see how trilateration works.
Let's say that you are somewhere in the United States and you are TOTALLY lost -- you don't have a clue where you are. You find a friendly-looking person and ask, "Where am I?" and the person says to you, "You are 625 miles from Boise, Idaho." This is a piece of information, but it is not really that useful by itself. You could be anywhere on a circle around Boise that has a radius of 625 miles, like this:
If you know you are 625 miles from Boise, you could be anywhere on this circle.
So you ask another person, and he says, "You are 690 miles away from Minneapolis, Minnesota." This is helpful -- if you combine this information with the Boise information, you have two circles that intersect. You now know that you are at one of two points, but you don't know which one, like this:
If you know you are 625 miles from Boise and 690 miles from Minneapolis, then you know you must be at one of two points.
If a third person tells you that you are 615 miles from Tucson, Arizona, you can figure out which of the two points you are at:
With three known points, you can determine that your exact location is near Denver, Colorado!
Trilateration is a basic geometric principle that allows you to find one location if you know its distance from other, already known locations. The geometry behind this is very easy to understand in two dimensional space.
This same concept works in three dimensional space as well, but you're dealing with spheres instead of circles. You also need four spheres instead of three circles to find your exact location. The heart of a GPS receiver is the ability to find the receiver's distance from four (or more) GPS satellites. Once it determines its distance from the four satellites, the receiver can calculate its exact location and altitude on Earth! If the receiver can only find three satellites, then it can use an imaginary sphere to represent the Earth and can give you location information but no altitude information.
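The two-dimensional case above can be written as a small linear solve: subtracting the first circle's equation from the other two eliminates the squared unknowns. A minimal sketch (the coordinates and distances are made up, not the actual Boise/Minneapolis/Tucson figures):

```python
import numpy as np

def trilaterate_2d(anchors, dists):
    """Locate a point from distances to three known 2-D points by
    subtracting circle equations to obtain a linear system."""
    p1, p2, p3 = (np.asarray(p, float) for p in anchors)
    d1, d2, d3 = dists
    A = 2.0 * np.array([p2 - p1, p3 - p1])
    b = np.array([d1**2 - d2**2 + p2 @ p2 - p1 @ p1,
                  d1**2 - d3**2 + p3 @ p3 - p1 @ p1])
    return np.linalg.solve(A, b)

# Three anchors and exact distances to the (unknown) point (3, 4)
pos = trilaterate_2d([(0, 0), (10, 0), (0, 10)],
                     (5.0, np.sqrt(65.0), np.sqrt(45.0)))
```

The 3-D GPS case works the same way with spheres, plus a fourth unknown for the receiver clock offset, which is why four satellites are needed.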
- The location of at least three satellites above you
- The distance between you and each of those satellites
GPS satellites send out radio signals that your GPS receiver can detect. But how does the signal let the receiver know how far away the satellite is? The simple answer is: A GPS receiver measures the amount of time it takes for the signal to travel from the satellite to the receiver. Since we know how fast radio signals travel -- they are electromagnetic waves and so (in a vacuum) travel at the speed of light, about 186,000 miles per second -- we can figure out how far they've traveled by figuring out how long it took for them to arrive.
Measuring the time would be easy if you knew exactly what time the signal left the satellite and exactly what time it arrived at your receiver, and solving this problem is key to the Global Positioning System. One way to solve the problem would be to put extremely accurate and synchronized clocks in the satellites and the receivers. The satellite begins transmitting a long digital pattern, called a pseudo-random code, as part of its signal at a certain time, let's say midnight. The receiver begins running the same digital pattern, also exactly at midnight. When the satellite's signal reaches the receiver, its transmission of the pattern will lag a bit behind the receiver's playing of the pattern. The length of the delay is equal to the time of the signal's travel. The receiver multiplies this time by the speed of light to determine how far the signal traveled. If the signal traveled in a straight line, this distance would be the distance to the satellite.
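As a rough numeric sanity check, the arithmetic is just delay times signal speed (the 0.07-second delay below is an illustrative value of about the right size for GPS altitude, not a measured one):

```python
C = 299_792_458.0   # speed of light in a vacuum, m/s
delay_s = 0.07      # example code delay in seconds (illustrative)

# Range = travel time x signal speed.
range_m = delay_s * C
print(round(range_m / 1000))  # -> 20985 (km), roughly satellite distance
```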
Implementing a system like this would require a level of accuracy found only in atomic clocks, because the times measured in these calculations amount to nanoseconds. To build a GPS using only synchronized clocks, you would need atomic clocks not only on all the satellites, but also in the receiver itself. Atomic clocks usually cost somewhere between $50,000 and $100,000, which makes them a little too expensive for everyday consumer use!
The Global Positioning System has a very effective solution to this problem: the receiver contains no atomic clock at all, just a normal quartz clock. The receiver looks at all the signals it is receiving and uses calculations to find both the exact time and the exact location simultaneously. When you measure the distance to four located satellites, you can draw four spheres that all intersect at one point, as illustrated above. If your clock is off, however, the four spheres will not intersect at a single point. Since the receiver makes all of its time measurements, and therefore its distance measurements, with the same clock, the distances will all be proportionally incorrect, so the receiver can easily calculate the single adjustment that makes the four spheres intersect at one point. That adjustment corrects both its distance measurements and its clock. For this reason, a GPS receiver actually keeps extremely accurate time, on the order of the actual atomic clocks in the satellites!
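In practice this simultaneous solve for position and clock error is an iterated linearized least-squares problem, in the spirit of the `single_point_position` routine quoted earlier. A minimal sketch, assuming satellite positions in ECEF metres (variable names are my own, not from any particular library):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, max_iterations=15, tol=1e-4):
    """Iteratively linearized least squares for receiver position and
    clock bias (a sketch in the style of Kaplan section 2.4.2)."""
    x = np.zeros(3)   # position guess, ECEF metres
    b = 0.0           # receiver clock bias, expressed in metres
    for _ in range(max_iterations):
        rho = np.linalg.norm(sat_pos - x, axis=1)      # geometric ranges
        residual = pseudoranges - (rho + b)
        # Geometry matrix: unit line-of-sight vectors, plus a column of
        # ones for the clock term common to every satellite.
        G = np.hstack([(x - sat_pos) / rho[:, None], np.ones((len(rho), 1))])
        delta, *_ = np.linalg.lstsq(G, residual, rcond=None)
        x, b = x + delta[:3], b + delta[3]
        if np.linalg.norm(delta) < tol:
            break
    return x, b / C   # position (m) and clock offset (s)

# Synthetic check: 5 satellites, a receiver on the x-axis, 3 km of clock bias.
sats = np.array([[26.6e6, 0, 0], [0, 26.6e6, 0], [0, 0, 26.6e6],
                 [18e6, 18e6, 5e6], [15e6, -10e6, 20e6]])
truth = np.array([6371e3, 0.0, 0.0])
pr = np.linalg.norm(sats - truth, axis=1) + 3000.0
pos, dt = solve_position(sats, pr)
```

The column of ones in `G` is exactly the "distance adjustment common to all satellites" described above: one unknown clock offset shifts every pseudorange by the same amount.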
One problem with this method is measuring the signal's speed. As we saw earlier, electromagnetic signals travel through a vacuum at the speed of light. The Earth, of course, is not a vacuum: its atmosphere slows the transmission of the signal according to the particular conditions at that atmospheric location, the angle at which the signal enters it, and so on. A GPS receiver estimates the actual speed of the signal using mathematical models of a wide range of atmospheric conditions, and the satellites can also transmit additional correction information to the receiver.
- The Global Positioning System needs 24 operational satellites so it can guarantee that there are at least four of them above the horizon for any point on Earth at any time. In general, there are usually eight or so satellites "visible" to a GPS receiver at any given moment.
- Each satellite contains an atomic clock.
- The satellites send radio signals to GPS receivers so that the receivers can find out how far away each satellite is. Because the satellites are orbiting at a distance of 12,660 miles (20,370 km) overhead, the signals are fairly weak by the time they reach your receiver. That means you have to be outside in a fairly open area for your GPS receiver to work.
Finding the Satellites
The other crucial component of GPS calculations is knowledge of where the satellites are. This isn't difficult because the satellites travel in very high, predictable orbits. The satellites are far enough from the Earth (12,660 miles) that they are not affected by our atmosphere. The GPS receiver simply stores an almanac that tells it where every satellite should be at any given time. Things like the pull of the moon and the sun do change the satellites' orbits very slightly, but the Department of Defense constantly monitors their exact positions and transmits any adjustments to all GPS receivers as part of the satellites' signals.
The most essential function of a GPS receiver is to pick up the transmissions of at least four satellites and combine the information in those transmissions with information in an electronic almanac, so that it can mathematically determine the receiver's position on Earth. The basic information a receiver provides, then, is the latitude, longitude and altitude (or some similar measurement) of its current position. Most receivers then combine this data with other information, such as maps, to make the receiver more user friendly. You can use maps stored in the receiver's memory, connect the receiver up to a computer that can hold more detailed maps in its memory or simply buy a detailed map of your area and find your way using the receiver's latitude and longitude readouts.
Geographers have mapped every corner of the Earth, so you can certainly find maps with the level of detail you desire. You can look at a GPS receiver as an extremely accurate way to get raw positional data, which can then can be applied to geographic information that has been accumulated over the years. GPS receivers are an excellent navigation tool, with seemingly endless applications!
A GPS receiver could certainly be a sensible purchase when you consider all of the things it can do for you. The basic function of a GPS receiver is to figure out its location on Earth. To everyone who's ever lost their way in the woods, driven off course on a cross-country trip or gotten turned around while piloting a boat or airplane, the advantages of this simple function are obvious. But most GPS receivers go far beyond providing this simple navigational data. They can act as an interactive map, and they have a number of recreational applications.
At its heart, a GPS receiver is simply a device that can locate itself on Earth. It does this by communicating with at least four satellites overhead (see this page for details). For this reason, a GPS receiver is limited as to where it can function. It has to be able to "see" the satellites to calculate latitude and longitude, which means it usually won't work inside. So, one of the basic characteristics of GPS receivers is that they find your location only when you are outside.
The simplest GPS receiver would give you just the coordinates of your location on Earth in latitude, longitude and altitude. Latitude and longitude are basically X and Y axes of a big imaginary grid wrapped around the planet, and altitude is a measure of your distance above sea level. If you had a GPS receiver that gave you these simple coordinates, and you had a map of your area that used this same coordinate system, you could find your location simply by reading the map. In this regard alone, a GPS receiver is an amazing device. Without a GPS receiver, you would have to find your position based on the position of the stars in the sky, using complicated tools and calculations. And you wouldn't have nearly the same level of accuracy!
But today's handheld GPS receivers give you much more than this raw data. Even low-end receivers have some sort of electronic map stored in memory, so you don't have to carry around a bunch of paper maps. The receiver takes the coordinate information and applies it to its electronic map, graphically pointing out to you where you are in relation to roads, bodies of water, etc. Maps vary a great deal in the level of detail they offer, but the basic idea behind this function is to give you a map that automatically marks your location, without you having to consider your coordinates. This is a great convenience any time you need to use a map, and is extremely helpful at times when you can't take the time to find your location on a map, such as when you're driving down the highway. Most receivers can also use the stream of position fixes to track useful trip data, such as:
- How far you've traveled (odometer)
- How long you've been traveling
- Your current speed (speedometer)
- Your average speed
- A "breadcrumb" trail showing you exactly where you have traveled on the map
- The estimated time of arrival at your destination if you maintained your current speed
Most receivers also let you mark waypoints, specific locations you want to remember, in either of two ways:
- You can tell the receiver to record its coordinates when you are at that location.
- You can find the location on a map (the internal map or another one) and enter its coordinates as a waypoint.
Waypoints are handy for marking things like:
- Good camp sites
- Favorite road-side shops
- Excellent fishing spots
- Scenic overlooks
- Where you left your car
Receivers with route capabilities will let you save a certain number of waypoints to memory so that you can use them again and again. If the receiver has a data port, you can also download your routes to a computer, which has much more storage memory, and then upload them again when you plan to follow those routes.
Because they have so much more storage capability, computers can do a lot more with GPS location data than your average receiver. A receiver with a data port can feed the raw coordinates of your location into a computer running more complicated software. There are a number of available software applications that can place you on detailed maps of particular areas. If you want to use your receiver for complicated navigation, down backroads for example, this capability will help you out tremendously. You can also update your computer maps, so that they include any surveying adjustments or changes in an area, whereas a receiver's onboard map usually can't be changed. When you use your receiver in conjunction with your computer, you increase the receiver's capabilities considerably. Also, your receiver won't be outdated as quickly, because in conjunction with a computer, all it needs to do is provide coordinates -- your computer does the rest.
Some recent receivers let you download detailed maps of an area into the GPS, or supply detailed maps with plug-in map cartridges. These maps can give you street-level detail in cities and the receiver may even provide driving directions as you drive!
GPS receivers have been a favorite of hikers, boaters and pilots for years, and are now becoming commonplace as prices fall. Check out the handy feature chart to help you decide which features you need!
| Component | Option | Description |
| --- | --- | --- |
| Receiver | Multiplex | Multiplex receivers have only one channel. They pick up one satellite signal at a time, cycling through a few satellites. They work much better in open environments, as their connection can easily be disrupted by buildings or other obstacles. The most affordable models use multiplex receivers. |
| Receiver | Parallel-channel | Parallel-channel receivers have several channels, and lock onto many satellites at the same time. They don't lose satellite connections very easily and they can pinpoint the location more exactly. These receivers were once fairly expensive, but there are several affordable models now on the market. If you plan to use your receiver in a big city or mountainous area, you should probably get one with parallel channels. |
| Antenna | Quadrifilar | Quadrifilar antennas are a length of coiled wire in a plastic housing that protrudes from the receiver. You may want to look for a model with a removable quadrifilar antenna, so you can place the antenna on your dashboard for a better "view" of the satellites. Quadrifilar antennas are best at receiving transmissions from satellites near the horizon, and not so adept at receiving signals from satellites overhead. |
| Antenna | Patch | Patch antennas are flat, and they are usually built into the receiver. They have the reverse strengths and weaknesses of a quadrifilar antenna: they are better at detecting satellites that are directly overhead and not as good at detecting satellites near the horizon. |
| Power | Battery | Hand-held receivers use batteries as a power source, which means portability. Be sure to find out what kind of batteries a hand-held unit uses, and how long they typically last. |
| Power | External Source | Some handheld receivers can accept external power, which is handy if you plan to be driving all day with your GPS on and don't want to drain the batteries. Car, boat or airplane in-dash GPS receivers run on an external power source provided by the larger unit each is hooked up to. These devices are not mobile. |
| Display | LCD Panel | All GPS receivers display information on an LCD panel. |
| Display | Color LCD Panel | Color displays make it easier to read maps on the receiver and help you distinguish between different routes you have created in the same area. Color panels often use more power than B&W panels, so they drain batteries faster. |
| Map Datum | WGS 84 | WGS 84 is the default map datum for any GPS receiver. It is a system developed around the emergence of GPS technology and is standardized for universal use. |
| Map Datum | Additional | Eventually, maps of the whole world will be converted to WGS 84, a GPS standard datum. In the meantime, check that the GPS receiver recognizes the map datums used in your area, or areas you plan to travel to. |
| Feature | Description |
| --- | --- |
| Internal Maps | All receivers will give you your latitude, longitude and altitude, but they don't all show you your location on a detailed map. When you're shopping for a receiver, decide what kind of map you'll need and make sure the receiver you purchase offers that type of map. Many receivers contain a general map of the world in memory, but this map may only show you major roads and bodies of water. Some receivers have a wide array of other maps stored in memory or can download detailed maps. |
| Map Cartridges | Some receivers accept special map cartridges with more detailed maps of specific areas. |
| Download Maps | Some of the newer GPS receivers can download maps stored on your computer into the receiver. |
| Waypoint Capability | With this feature, you can record certain waypoints -- locations along your path or on a map -- and arrange them in a route. Your receiver will then guide you from waypoint to waypoint along your route. This route mapping is handy because you can record the way you got somewhere so you can easily backtrack. You can also plan routes on detailed maps before you leave for a trip, and record all the information you need on your hand-held receiver. |
| Track Logging | Receivers with a track logging feature can record your path as you move. This is useful if you want to backtrack or document your exact route for future use. It's also helpful to view your progress this way while you are traveling. |
| Storage Memory | If you plan to use route mapping and track logging extensively, you'll want a receiver that has enough memory. Consider how many waypoints you would want to store and find out what a receiver's maximum storage capability is. Also, look for a receiver with a backup system that will hold onto your information while you change the receiver's batteries. |
| Data Port | One way to place yourself on a detailed map is to hook the receiver up to a computer (desktop, laptop or PDA). A data port provides such a connection so that you can use GPS data in conjunction with a number of software applications. Receivers with computer connection capability may also be able to download information to the computer. This is a good feature if you want to keep a collection of route maps (favorite hiking paths, tricky driving directions, good fishing spots). A receiver has limited memory, but you can store an entire catalog of route maps on your computer. |
| Sunrise/Sunset Times | Some receivers can give you the times for sunrise and sunset at any particular location. This helps you plan your trip so you don't have to travel in the dark, which can be very useful to hikers, sailors and pilots alike. |
| Odometer | Most modern receivers can track how far you have traveled. Just like the odometer in your car, this feature can be useful in any number of ways. |
| Speedometer | Most GPS receivers these days can track how fast you are moving. This is extremely helpful for estimating how long it will take you to get to your destination. Most receivers with speedometers will also give you an ETA. |
| Measurement Units | Make sure a receiver can display the measurement units you will be using. If you will use the receiver in sailing navigation, for example, you will probably want a receiver that can give you measurements in nautical miles. Another feature to look for is the ability to display multiple measurement systems at a time, so that you could have elevation in feet, say, and geographical distance in kilometers. |
| Accuracy Warning | Most receivers have some sort of system that tells you when something may be causing inaccurate positioning, whether poor satellite reception or a receiver malfunction. In a lot of GPS applications, accurate positioning is critical, so be sure to find a receiver that will tell you when there is an accuracy problem. |
| Differential GPS | Differential GPS is a technique that uses a second GPS receiver at a known location to correct for satellite signal inaccuracies. Because this second, stationary receiver already knows its exact location, it can check the accuracy of the signals it is receiving and broadcast accuracy adjustments to your receiver. |
| Built-in Database | GPS receivers designed specifically for airplanes or boats may have waypoints, or landmarks, such as airports and ports, already programmed into them. |
| Rotatable Screen | Some GPS receivers have a display that rotates from a vertical position to a horizontal position. This feature might be useful if you plan to mount your receiver horizontally in your car some of the time and carry it vertically in front of you at other times. |
| User-changeable Fields | Receivers with this feature give you some extra control over how you look at information. Basically, you can customize different fields so they show you only the information you need for a particular activity. |
| Waterproofing | If you will be using GPS on a boat or while hiking, you should look for a receiver with good waterproofing. Some receivers are sealed so that they are completely waterproof, while others are merely constructed to resist water. Consider the conditions in which you will be using your receiver, and look for an adequate amount of weatherproofing. |
When You Shop
We've created a GPS Receiver Feature Comparison chart for you to use as you research various receiver models. Read through the Features page, and then take this chart to the store with you. Fill in the blanks for each model you are interested in and you will find it is much easier to compare them! You may also want to keep an additional copy near your desk as you research GPS receiver models on the Internet.
The feature comparison chart is available to you as a PDF file. You will need the free Adobe Acrobat Reader to view it.
Some GPS receivers have speed limits.
GPS receiver manufacturers sometimes program speed limits into the devices, so that if the device is moving above a certain speed, it will not work properly. A receiver meant to be used in a car may not work on an airplane, which travels much faster than an automobile. This is more often the case in car-, airplane- or boat-mounted receivers than in the hand-held models.
GPS receivers have temperature limits.
Like most electronic devices, especially those with LCD screens, GPS receivers may not function properly above or below certain temperatures. If you plan to use your receiver in any extreme temperature situations, such as mountain climbing or hiking in the desert, you should check to make sure the receiver model can function in those conditions.
- What exactly are "longitude" and "latitude" measurements?
Geographic latitude and longitude are two axes in an imaginary coordinate system that covers the Earth. Your latitude tells you your distance from the equator. It is actually the degree of the angle formed by a line from your location to the center of the Earth and a line from the equator to the center of the Earth. Longitude is the degree of the angle formed by a line from your position to the center of the Earth and a line from the prime meridian to the center of the Earth. The prime meridian is an imaginary line running north to south around the Earth. It passes through the north and south poles and Greenwich, London. With both latitude and longitude you know how far north or south you are from the equator and how far east or west you are from the prime meridian, so you can exactly pinpoint your location on Earth.
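Treating the Earth as a sphere, those two angles define a direction from the Earth's centre. A toy conversion (spherical model only; real GPS coordinates use the WGS 84 ellipsoid, which this sketch ignores):

```python
import math

def latlon_to_unit_ecef(lat_deg, lon_deg):
    """Convert latitude/longitude (the two angles described above) to a
    unit vector from the Earth's centre, on a spherical Earth model."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),   # toward the prime meridian
            math.cos(lat) * math.sin(lon),   # toward 90 degrees east
            math.sin(lat))                   # toward the north pole

# On the equator at the prime meridian, the direction is the x-axis.
print(latlon_to_unit_ecef(0.0, 0.0))  # -> (1.0, 0.0, 0.0)
```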
For people living close to busy roads, traffic is a major source of air pollution. Few prospective data have been published on the effects of long-term exposure to traffic on the incidence of coronary heart disease (CHD).
In this article, we examined the association between long-term traffic exposure and incidence of fatal and nonfatal CHD in a population-based prospective cohort study.
We studied 13,309 middle-age men and women in the Atherosclerosis Risk in Communities study, without previous CHD at enrollment, from 1987 to 1989 in four U.S. communities. Geographic information system–mapped traffic density and distance to major roads served as measures of traffic exposure. We examined the association between traffic exposure and incident CHD using proportional hazards regression models, with adjustment for background air pollution and a wide range of individual cardiovascular risk factors.
Over an average of 13 years of follow-up, 976 subjects developed CHD. Relative to those in the lowest quartile of traffic density, the adjusted hazard ratio (HR) in the highest quartile was 1.32 [95% confidence interval (CI), 1.06–1.65; p-value for trend across quartiles = 0.042]. When we treated traffic density as a continuous variable, the adjusted HR per one unit increase of log-transformed density was 1.03 (95% CI, 1.01–1.05; p = 0.006). For residents living within 300 m of major roads compared with those living farther away, the adjusted HR was 1.12 (95% CI, 0.95–1.32; p = 0.189). We found little evidence of effect modification for sex, smoking status, obesity, low-density lipoprotein cholesterol level, hypertension, age, or education.
Higher long-term exposure to traffic is associated with incidence of CHD, independent of other risk factors. These prospective data support an effect of traffic-related air pollution on the development of CHD in middle-age persons.
Several prospective cohort studies suggest that long-term exposure to outdoor air pollution is associated with increased mortality from cardiopulmonary diseases (Abbey et al. 1999; Chen et al. 2005; Dockery et al. 1993; Filleul et al. 2005; Miller et al. 2007; Pope et al. 2002, 2004). Road traffic is a major contributor to outdoor air pollution in industrialized countries, contributing fine particulate matter (PM), carbon monoxide, oxides of nitrogen, and other pollutants. Assessment of traffic exposure can enhance studies of health effects of outdoor air pollution because local sources are important and because few people live close to the monitoring stations, which are often purposefully located away from local sources such as busy roads.
Recent studies have shown associations of long-term and short-term exposure to traffic air pollution with cardiovascular mortality, morbidity, and subclinical parameters (de Paula Santos et al. 2005; Finkelstein et al. 2004; Hoek et al. 2002; Hoffmann et al. 2006, 2007; Lanki et al. 2006; Peters A et al. 2004; Schwartz et al. 2005; Tonne et al. 2007; Volpino et al. 2004). In contrast, few prospective studies have examined traffic air pollution and coronary events. A recent study of survivors of myocardial infarction in Rome lacked information on smoking, an important potential confounder (Rosenlund et al. 2008). Two other prospective studies, one in Canada (Finkelstein et al. 2004) and the other in the Netherlands (Hoek et al. 2002), assessed only mortality. We need more prospective data on coronary events in healthy general populations, with detailed data on potential confounders, including smoking, collected at the individual level, to address the hypothesis that long-term traffic exposure influences the development of coronary heart disease (CHD).
In the present study, we examined the association between long-term residential traffic exposure and incident CHD events among participants in the Atherosclerosis Risk in Communities (ARIC) study, a prospective population-based cohort of middle-age men and women. This study included data on a wide range of risk factors for CHD collected prospectively at the individual level.
Materials and Methods
We studied participants from the ARIC study, which was designed to investigate the natural history and etiology of atherosclerosis and its sequelae. Details of the design, objectives, and quality control activities of the ARIC study have been previously reported (ARIC Investigators 1989). A probability sample of 15,792 residents 45–64 years of age was recruited in 1987–1989 from four U.S. communities: Forsyth County, North Carolina; Jackson, Mississippi; the northwest suburbs of Minneapolis, Minnesota; and Washington County, Maryland. The Jackson sample was 100% African American, and the other three were predominantly white. The institutional review boards of the four participating centers approved the study, and all participants gave written informed consent before the study.
Ascertainment of events
Study participants were followed for incident CHD until December 2002. Potential events were identified via annual telephone calls, community-wide hospital surveillance, and linkage with local and national death-certificate registries. We investigated potential events and deaths, validating events using hospital records and deaths using physician records and next-of-kin interviews. We defined incident CHD on the basis of published criteria as the first definite or probable myocardial infarction, silent myocardial infarction by electrocardiography, definite CHD death, or coronary revascularization (White et al. 1996). We classified events by a combination of computer algorithm and independent review of medical record abstractions and discharge summaries by one or two physicians.
We geocoded participant addresses using a commercial service (Mapping Analytics LLC, Rochester, NY), which assigned a latitude and longitude coordinate to each address. Geocoding was performed with the Centrus Enhanced Database, which was primarily based on the Topologically Integrated Geographic Encoding and Referencing (TIGER) system data.
We quantified small-scale spatial variations of traffic exposure by two measurements: geographic information system (GIS)–mapped traffic density assignments at place of residence, and the distance from place of residence to nearest roadways of various types. We used the participant’s address at the baseline visit (1987–1989) as the basis for calculating both exposure measures.
We obtained the roadway locations and annual average daily traffic volumes from Geographic Data Technology (GDT; now Tele Atlas Global Crossroads, Boston, MA). We selected GDT roadway geometry data because they provide 100% roadway coverage in the four communities and are the most extensively georeferenced (or repositioned), using aerial imagery to match real-world locations. It is estimated that most GDT roads in populated areas are located with ±12 m “position accuracy.” GDT bases the traffic volumes on state and county agency traffic counts on highways, arterials, and collector streets with more than approximately 1,000 vehicles per day. They assign traffic counts to neighboring roadway links with similar capacity.
We used the link-based traffic volumes to generate maps of traffic density with 10 × 10 m resolution using the ARCInfo Spatial Analyst software (Kan et al. 2007; Peters JM et al. 2004). We created traffic density maps with 300 m circular search radii that produce densities decreasing by approximately 90% between the edge of the roadway and 300 m away (perpendicular) from the roadways, which is consistent with the characteristics observed by Zhu et al. (2002, 2006). We used identical mapping procedures in all the communities so that the results are comparable across communities. The densities reflect proximity to traffic without consideration of differential exposures caused by meteorology. This method accounts for the combined relative influence of several roadways (and road types) with various traffic activity levels at different distances from each residence location. This metric generally behaves like an inverse-distance–weighted traffic volume, except that it specifically considers intersections and multiple roadways more accurately. Therefore, these density values provide a relative indication of which residence locations are likely to be most exposed to traffic activity and, as such, are dimensionless indicators of proximity to traffic volume.
Because the available traffic density data were from 2000, we back-extrapolated to the study period (1987–1992) based on change in population density using county-level census population data. Changes in traffic volumes over time are correlated with changes in population density (Polzin 2006).
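The back-extrapolation amounts to scaling the year-2000 density by the ratio of earlier to later county population. A sketch with invented figures (the paper does not publish its scaling factors):

```python
# Hypothetical county figures, not from the paper.
density_2000 = 5000.0                  # GIS-mapped traffic density in 2000
pop_2000, pop_1990 = 120_000, 100_000  # county census populations

# Assume traffic density scaled with county population over the period.
density_1990 = density_2000 * pop_1990 / pop_2000
print(round(density_1990, 1))  # -> 4166.7
```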
Distance to major roads
To estimate qualitatively the distribution of the distance from residence locations to roadways, we calculated straight-line distances. The distance-to-roadway data include the distance (in meters) from each unique residence location to the nearest roadways.
In a previous study, the concentrations of ultrafine PM from highway traffic became indistinguishable from the background concentration at distances > 300 m (Zhu et al. 2002). We therefore dichotomized distance to major roads (interstate and state highways, major arterials) at 300 m. To conduct analysis of sensitivity of the results to the choice of cut-points, we also categorized distance to major roads as ≤ 150 m and > 150 m (Hoffmann et al. 2006; Venn et al. 2001).
Background air pollution level
We acquired data on the background ambient concentrations of PM with aerodynamic diameter ≤ 10 μm (PM10), nitrogen dioxide (NO2), and ozone (O3) during the research period from the U.S. Environmental Protection Agency air quality data retrieval system. We abstracted 24-hr average concentrations for PM10 and NO2 and 8-hr (1000 hours to 1800 hours) average concentrations for O3. We spatially interpolated the average concentrations from air quality monitoring stations to the cohort residence locations using inverse distance weighting.
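Inverse distance weighting gives each monitor a weight proportional to 1/dᵖ. A generic sketch (the exponent `power` and the handling of co-located points are my assumptions; the paper does not specify them):

```python
import numpy as np

def idw(station_xy, station_vals, target_xy, power=2.0):
    """Interpolate monitor readings to a residence location by
    inverse-distance weighting."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0.0):                     # residence sits on a monitor
        return float(station_vals[d == 0.0][0])
    w = 1.0 / d ** power
    return float(np.sum(w * station_vals) / np.sum(w))

# A target equidistant from two monitors gets the average of their readings.
stations = np.array([[0.0, 0.0], [2.0, 0.0]])
readings = np.array([10.0, 30.0])
print(idw(stations, readings, np.array([1.0, 0.0])))  # -> 20.0
```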
We defined hypertension as systolic blood pressure of ≥140 mmHg, diastolic blood pressure of ≥90 mmHg, or use of antihypertensive medication during the previous 2 weeks. We defined diabetes mellitus as a fasting glucose level of ≥126 mg/dL (7.0 mmol/L), a nonfasting glucose level of ≥200 mg/dL (11.1 mmol/L), or a self-reported history of or treatment for diabetes. Trained, certified technicians determined anthropometric measures following a detailed, standardized protocol. We calculated body mass index (BMI) as weight (kg)/[height (m)]². Blood collection and processing for levels of total cholesterol, low-density lipoprotein (LDL), and high-density lipoprotein (HDL) are described elsewhere (National Heart, Lung, and Blood Institute 1988). Trained and certified interviewers also collected information on age, ethnicity, sex, smoking, environmental tobacco smoke (ETS), alcohol consumption status, occupation, education, family income, and family history of CHD. We calculated the family risk score based on the participant’s report of parents’ and five oldest siblings’ history of CHD (Li et al. 2000). Smoking variables included smoking status (never, former, and current smokers), age at starting to smoke, years smoked, and cigarettes per day. We classified never smokers and former smokers as exposed to ETS if they reported being in close contact with smokers for more than 1 hr/week (Howard et al. 1998). Thus, we obtained five strata for active and passive smoking: current smoker, former smoker with ETS, former smoker without ETS, never smoker with ETS, never smoker without ETS. Neighborhood-level socioeconomic status (SES), in addition to individual-level factors, may affect health status (Geronimus and Bound 1998), so we included 1990 census-tract–level data on median employment rate and poverty rate (U.S. Census Bureau 1992).
The end point of interest was incident CHD, so we excluded participants if they had prevalent CHD (n = 762). We also excluded persons who met the following criteria: ethnicity other than African American or white (n = 48) and, because of their small numbers, African Americans from Minnesota and Maryland field centers (n = 55) and missing geocoding information (n = 1,724). Exclusions overlapped in some instances, leaving 13,309 subjects for analysis.
We conducted all analyses using the statistical software package SAS, version 9.1 (SAS Institute Inc., Cary, NC). We calculated follow-up time as the time from baseline to an event, or the last follow-up contact, or through December 2002, whichever occurred first.
We used Cox proportional hazards regression analyses to assess the associations of traffic exposure with the risk of incident CHD. Distributions of traffic density are highly skewed (Figure 1); therefore, we analyzed traffic density both as quartiles and as a continuous variable after log transformation. We estimated the hazard ratios (HRs) of incident CHD for quartiles of traffic density relative to the lowest quartile, and for a one-unit increase of log-transformed density values. For tests for linear trends across increasing quartiles of traffic density, we used the median value in each quartile. We also estimated the risk for living close to major roads (≤300 m or ≤150 m), using living farther away as the reference.
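The exposure codings described above (quartile indicators, quartile-median trend scores, and a log-transformed continuous variable) can be sketched in Python with pandas. This is an illustrative reconstruction, not the authors' SAS code; `log1p` is an assumption, since the text says only "log transformation" and the lowest quartile's median density is 0, so zeros must be kept finite somehow.

```python
import numpy as np
import pandas as pd

def code_traffic_exposure(density):
    """Build the exposure codings used in the Cox models.

    density : pd.Series of traffic density at each residence.
    Returns a DataFrame with a quartile indicator (1-4), a log-transformed
    continuous variable, and a quartile-median score for the trend test.
    """
    out = pd.DataFrame(index=density.index)
    # Quartile membership, coded 1 (lowest) through 4.
    out["quartile"] = pd.qcut(density, 4, labels=False, duplicates="drop") + 1
    # log(x + 1) keeps the many zero-density residences finite (assumption).
    out["log_density"] = np.log1p(density)
    # Median density within each quartile, used to score the linear trend test.
    out["trend_score"] = density.groupby(out["quartile"]).transform("median")
    return out
```

These columns would then enter a proportional hazards fit along with the covariates, with follow-up time and the CHD event indicator as the outcome.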
Figure 1 Distribution of traffic density at ARIC participant residences (1987–1989). (A) Traffic density (n = 13,309). (B) PM10 (μg/m³; n = 13,309). (C) NO2 (ppb; n = 9,902). (D) O3 (ppb; n = 13,309).
Our basic models included age, sex, center, and ethnicity (Miller et al. 2007). In the adjusted models, we added factors that we identified a priori as potential confounders: BMI, physical activity, education, occupation, individual family income, census-tract–based SES (median employment rate and poverty rate), smoking status (current smoker, former smoker with ETS, former smoker without ETS, never smoker with ETS, never smoker without ETS), age at starting to smoke (0–15, 15–20, 20–29, and ≥30 years), years smoked, cigarettes per day, alcohol intake (never, former, and current drinker), hypertension, diabetes status, family risk score, HDL, LDL, total cholesterol, fibrinogen, and background air pollution level (PM10 and O3). NO2 data were missing in Jackson (Figure 1), so we did not include them in the adjusted models. Because other concomitants of traffic exposure, such as stress, could affect cardiovascular health (Williams et al. 2000), we did a sensitivity analysis to examine the impact of social stress (trait anger), measured at the second cohort examination, on the estimated effect of traffic exposure.
We also conducted stratified analyses by sex, smoking status, obesity, LDL level, hypertension, age, and education, to examine potential modifiers of the association between traffic exposure and incident CHD. We categorized BMI according to the standard definition: normal/underweight (BMI < 25) and overweight/obese [BMI ≥25 (Centers for Disease Control and Prevention 2007)]. We dichotomized LDL level as ≤130 mg/dL and > 130 mg/dL. We defined hypertension as systolic blood pressure of ≥140 mmHg, diastolic blood pressure of ≥90 mmHg, or use of antihypertensive medication during the previous 2 weeks. We classified education as low (less than high school), middle (high school or vocational school), or high (college or above).
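The stratification cut points above translate directly into a small categorization helper. This is a minimal sketch with hypothetical argument names (the study itself used SAS):

```python
def categorize(bmi, ldl_mg_dl, sbp, dbp, on_bp_meds, education):
    """Apply the stratification cut points described in the text.

    bmi        : body mass index (kg/m^2)
    ldl_mg_dl  : LDL cholesterol in mg/dL
    sbp, dbp   : systolic/diastolic blood pressure in mmHg
    on_bp_meds : antihypertensive medication use in previous 2 weeks
    education  : one of the three schooling descriptions below
    """
    return {
        # BMI dichotomized at 25 per the standard definition.
        "bmi_group": "overweight/obese" if bmi >= 25 else "normal/underweight",
        # LDL dichotomized at 130 mg/dL.
        "ldl_group": "high" if ldl_mg_dl > 130 else "normal",
        # Hypertension: SBP >= 140, DBP >= 90, or medication use.
        "hypertensive": sbp >= 140 or dbp >= 90 or on_bp_meds,
        "education": {"less than high school": "low",
                      "high school or vocational": "middle",
                      "college or above": "high"}[education],
    }
```

For example, a participant with BMI 27, LDL 120 mg/dL, and blood pressure 150/80 would be classified overweight/obese, normal LDL, and hypertensive.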
Table 1 presents selected characteristics of participants at baseline, according to whether or not they developed incident CHD during follow-up. Among the 13,309 study participants who were free of CHD at baseline, 976 subjects developed CHD (268 fatal and 708 nonfatal) over an average of 13 years of follow-up. As expected, subjects with incident CHD were slightly older; were more likely to be male, black, or current smokers; had higher BMI, total cholesterol, and LDL and lower HDL; and had a higher prevalence of hypertension and diabetes.
Table 1 Baseline characteristics of the ARIC participants by the status of incident CHD at end of follow-up (n = 13,309).a
| Characteristic | Incident CHD (n = 976) | No incident CHD (n = 12,333) |
|---|---|---|
| Sex (% male) | 59.3 | 41.4 |
| Age (years) | 55.8 ± 5.6 | 53.9 ± 5.8 |
| BMI (kg/m²) | 28.6 ± 5.4 | 27.6 ± 5.4 |
| Ethnicity (% black) | 31.4 | 28.4 |
| Total cholesterol (mmol/L) | 5.8 ± 1.2 | 5.5 ± 1.1 |
| HDL (mmol/L) | 1.2 ± 0.4 | 1.4 ± 0.4 |
| LDL (mmol/L) | 3.9 ± 1.1 | 3.5 ± 1.0 |
aValues are mean ± SD unless specified as percentage.
The estimated traffic density and background air pollutant (PM10, NO2, and O3) concentrations at the baseline home address varied greatly (Figure 1). Consistent with previous reports (Hoek et al. 2001), we did not find a strong correlation between traffic density and background air pollution level; the Pearson correlation coefficients of traffic density with PM10, NO2, and O3 were –0.12, –0.04, and –0.10, respectively. Background PM10 was moderately correlated with NO2 (Pearson correlation coefficient, r = 0.60) and O3 (r = 0.44); NO2 was weakly correlated with O3 (r = 0.03).
Greater traffic density was associated with increased risk of incident CHD in both basic and adjusted models (Table 2). Relative to those in the lowest quartile of traffic density, the adjusted HRs across increasing quartiles were 1.17 [95% confidence interval (CI), 0.93–1.47], 1.38 (95% CI, 1.11–1.72), and 1.32 (95% CI, 1.06–1.65) (p-value for trend across quartiles = 0.042). When we treated traffic density as a continuous variable, the adjusted HR per one-unit increase of log-transformed density was 1.03 (95% CI, 1.01–1.05; p = 0.006) (Figure 2).
Figure 2 Adjusted HRs (and 95% CIs) for incident CHD in relation to traffic density and by distance to major roads. Covariates were age, sex, center, ethnicity, BMI, physical activity, education, occupation, individual family income, census-tract–based SES, smoking status, age at starting to smoke, years smoked, cigarettes per day, alcohol intake, hypertension, diabetes status, family risk score, HDL, LDL, total cholesterol, fibrinogen, and background air pollution level.
Table 2 HRs (95% CIs) for incident CHD associated with traffic density.
| Model | Quartile 1 (lowest) | Quartile 2 | Quartile 3 | Quartile 4 | p-Value for trenda | One-unit increase (log-transformed) | p-Value |
|---|---|---|---|---|---|---|---|
| Median of quartiles | 0 | 2.87 | 14.97 | 41.83 | | | |
| Basic modelb | 1.00 | 1.13 (0.94–1.37) | 1.31 (1.09–1.57) | 1.28 (1.07–1.54) | 0.018 | 1.02 (1.01–1.04) | 0.004 |
| Adjusted modelc | 1.00 | 1.17 (0.93–1.47) | 1.38 (1.11–1.72) | 1.32 (1.06–1.65) | 0.042 | 1.03 (1.01–1.05) | 0.006 |
ap-Values for trend based on quartiles scaled by the quartile medians.
bCovariates were age, sex, center, and ethnicity.
cCovariates were age, sex, center, ethnicity, BMI, physical activity, education, occupation, individual family income, census-tract–based SES, smoking status, age at starting to smoke, years smoked, cigarettes per day, alcohol intake, hypertension, diabetes status, family risk score, HDL, LDL, total cholesterol, fibrinogen, and background air pollution level.
For residents living within 300 m of major roads compared with subjects living farther away, the adjusted HR (model 2) was 1.12 (95% CI, 0.95–1.32; p = 0.189) for incident CHD (Table 3, Figure 2). In the analysis with an alternative cut point of distance to major roads (150 m), we found similar patterns with incident CHD (Table 3, Figure 2).
Table 3 HRs (95% CIs) for incident CHD by distance to major roads.
| Model | < 300 m | ≥300 m | p-Value | < 150 m | ≥150 m | p-Value |
|---|---|---|---|---|---|---|
| Basic modela | 1.13 (0.98–1.30) | 1.00 | 0.085 | 1.12 (0.99–1.28) | 1.00 | 0.073 |
| Adjusted modelb | 1.12 (0.95–1.32) | 1.00 | 0.189 | 1.09 (0.94–1.26) | 1.00 | 0.264 |
aCovariates were age, sex, center, and ethnicity.
bCovariates were age, sex, center, ethnicity, BMI, physical activity, education, occupation, individual family income, census-tract–based SES, smoking status, age at starting to smoke, years smoked, cigarettes per day, alcohol intake, hypertension, diabetes status, family risk score, HDL, LDL, total cholesterol, fibrinogen, and background air pollution level.
We further examined whether sex, smoking status, BMI, LDL level, hypertension, age, and education modified the association of traffic density with incident CHD (Table 4). Although results did not always achieve statistical significance with the reduced sample sizes in subgroup analyses, there were positive associations in most strata and little evidence of effect modification.
Table 4 Adjusted HRs for incident CHD associated with higher traffic density, stratified by sex, smoking status, BMI, LDL level, hypertension, age, and education.a

| Characteristic | No. (%) cases | Adjusted HR (higher traffic density)b | p-Value for trendc | p-Value for interactiond |
|---|---|---|---|---|
| Sex | | | | |
| Female | 397 (5.2) | 1.27 (0.91–1.78) | 0.066 | 0.397 |
| Male | 579 (10.2) | 1.41 (1.04–1.91) | 0.138 | |
| Smoking status | | | | |
| Never | 293 (5.2) | 0.84 (0.56–1.24) | 0.785 | 0.380 |
| Current/former | 681 (8.9) | 1.61 (1.22–2.12) | 0.013 | |
| BMI (kg/m²) | | | | |
| < 25 | 254 (5.7) | 1.49 (0.98–2.28) | 0.051 | 0.271 |
| ≥25 | 719 (8.2) | 1.29 (0.99–1.68) | 0.208 | |
| LDL (mg/dL) | | | | |
| ≤130 | 320 (5.5) | 1.38 (0.93–2.03) | 0.133 | 0.567 |
| > 130 | 618 (8.8) | 1.29 (0.98–1.70) | 0.171 | |
| Hypertension | | | | |
| No | 443 (5.2) | 1.45 (1.04–2.02) | 0.041 | 0.659 |
| Yes | 528 (11.1) | 1.30 (0.96–1.77) | 0.213 | |
| Baseline age (years) | | | | |
| ≤60 | 733 (6.7) | 1.36 (1.05–1.75) | 0.028 | 0.624 |
| > 60 | 243 (10.3) | 1.10 (0.68–1.78) | 0.991 | |
| Education | | | | |
| Low | 321 (10.7) | 1.09 (0.73–1.64) | 0.725 | 0.705 |
| Middle | 372 (6.9) | 1.74 (1.20–2.52) | 0.016 | |
| High | 283 (5.8) | 1.22 (0.80–1.86) | 0.420 | |
aCovariates were age, sex, center, ethnicity, BMI, physical activity, education, occupation, individual family income, census-tract–based SES, smoking status, age at starting to smoke, years smoked, cigarettes per day, alcohol intake, hypertension, diabetes status, family risk score, HDL, LDL, total cholesterol, fibrinogen, and background air pollution level.
bComparing the fourth with the first quartiles of traffic density (95% CI).
cp-Value for trend based on quartiles scaled by the quartile medians.
dp-Value for interaction between traffic exposure and stratification factors.
Consistent with the previous study of Hoek et al. (2002), we did not observe significant effects of background air pollution on incident CHD; the HRs of incident CHD per 10-μg/m³ increase of PM10 and O3 were 1.28 (95% CI, 0.76–2.18) and 1.04 (95% CI, 0.45–2.42), respectively. In addition, the observed associations between traffic and CHD remained after we further adjusted for trait anger (data not shown).
Among adults from four U.S. communities followed prospectively over an average of 13 years, higher residential exposure to traffic, a major source of air pollution in urban areas, was associated with an increased risk of incident CHD events. To our knowledge, our study provides the first prospective evidence of the association between traffic exposure and incident cardiovascular morbidity in the general population.
Traffic emissions result in small-scale spatial variations and therefore mainly affect residents living close to busy roads (Roorda-Knape et al. 1999). Thus, air pollution data from fixed monitoring stations may be inadequate to study traffic-related air pollution and health outcomes, especially for those living near busy roads. For example, Hoek et al. (2002) identified a consistent association of cardiopulmonary mortality with traffic exposure, but not with estimated ambient background concentration of the traffic indicator pollutants black smoke and NO2. Similarly, we did not observe a significant effect of background air pollution on incident CHD. This is not surprising given that the ARIC study was not designed to examine air pollution and was conducted in only four communities. Furthermore, these four communities were not well supplied with air pollution monitors during the study period, resulting in little variation in measured air pollution within communities. In addition, from an analytic perspective, any effect of baseline air pollution on risk would be indistinguishable from differences in risk by community, a design variable for this study.
Our prospective results support previous findings from cross-sectional, case–control, and cohort studies examining the association between long-term traffic exposure and cardiovascular morbidity, mortality, or intermediate end points. In a prospective cohort analysis of myocardial infarction survival, Rosenlund et al. (2008) found that long-term exposure to traffic-related air pollution increased the risk of CHD, and the relative risk for incident coronary events per 10 μg/m³ of NO2 was 1.03 (95% CI, 1.00–1.07). In a case–control analysis in Boston, Massachusetts, an interquartile range increase in cumulative traffic near the home was associated with a 4% (95% CI, 2–7%) increase in the odds of acute myocardial infarction, suggesting an effect of long-term exposure to traffic (Tonne et al. 2007). In a cross-sectional analysis, Hoffmann et al. (2006) found that higher long-term exposure to traffic-related emissions, but not background air pollution, was associated with increased risk of CHD events (odds ratio = 1.85; 95% CI, 1.21–2.84) in a German population. In the same study, residents living within 50 m of a major road had an odds ratio of 1.63 (95% CI, 1.14–2.33), relative to subjects living > 200 m away, for elevated coronary artery calcification, an intermediate cardiovascular end point (Hoffmann et al. 2007).
In the present study, we examined the association between traffic exposure at the baseline residences (visit 1, 1987–1989) and incident CHD. We found similar associations of traffic with CHD when we used the first-year (1987) exposure data. This is comparable with previous studies of air pollution in relation to mortality that assessed exposure at the beginning of follow-up (Hoek et al. 2002; Pope et al. 2002, 2004). Although air pollution levels may vary over time because of changes in emission or economic activity, substantial changes are usually slow and affect the region in the same way.
Various factors may modify the health effects of air pollution. We did not find significant evidence for effect modification by sex, smoking status, obesity, LDL cholesterol level, hypertension, age, or education. The information on modification of long-term effects of air pollution by educational status is inconsistent (Hoffmann et al. 2006; Pope et al. 2002). Additional examination of modifying factors in future investigations will help in public policy making, risk assessment, and standard setting.
Some limitations of our analysis should be noted. We did not predict the air pollutant concentration based on traffic density data, and we could not validate our exposure assessment with actual measurements given that the exposure period was 1987–1989. Likewise, our traffic density metric does not reflect the local meteorologic conditions that could influence the emission, mixing, and transport of air pollutants. Some studies have suggested a stronger association with stop-and-go traffic than with moving traffic and with truck traffic compared with car traffic (Ryan et al. 2005). However, in most studies, including ours, it was not possible to separate traffic types. As in some other studies, our exposure assessment was limited to residential address, and we lacked information on home exposures to other sources of pollutants, such as cooking or heating. Because our study outcomes are CHD events, rather than a measure of CHD pathogenesis, it is possible that some of the CHD events may be attributable to short-term exposure to traffic (Lanki et al. 2006; Peters A et al. 2004). However, given evidence that the association for acute exposure to air pollution is smaller in magnitude than the associations for long-term exposure (Künzli et al. 2005), we suspect that any overestimation of effects of long-term exposure to traffic would be minimal. Also, as in any epidemiologic study, residual confounding is possible. However, we carefully adjusted for known and potential confounders, including demographic characteristics, personal and neighborhood-level socioeconomic characteristics, cigarette smoking, family risk factors, and background air pollution.
We are not able to rule out the possible effect of traffic noise, which at high levels may have adverse effects on cardiovascular physiology (Berglund et al. 1996). However, the associations between traffic noise and cardiovascular risk are far less consistent than those between air pollution and cardiovascular disease (Ising and Kruppa 2004). Moreover, given that noise-related cardiovascular events (e.g., hypertension) may follow a different pathway than does air pollution (Jarup et al. 2008), and we found a significant association between traffic and CHD in nonhypertensive subjects (Table 4), it seems unlikely that noise explains the observed effects of traffic.
We lacked assessment of traffic-related air pollution for the approximately 10.9% of subjects whose addresses could not be geocoded. Most of the missing geocodes were attributed to such problems as missing state (most often military addresses), address left blank, temporary address, address not in the United States, apartment name without address, or only a post office box given. This could raise concern about potential selection bias. However, for missing geocode data to have created a spurious association between higher traffic exposure and incident CHD, subjects with and without geocodes would need to differ in both traffic exposure and incident CHD. Although we have no data on their traffic exposure, they were similar to subjects with nonmissing geocodes in sociodemographic characteristics and incident CHD [see Supplemental Material, Table S1 (online at http://www.ehponline.org/members/2008/11290/suppl.pdf)]. Thus, it is unlikely that the missing geocode data could have created the observed associations.
Because we obtained the geocodes for participants’ addresses from the TIGER file by Mapping Analytics, error could result from the use of an older road network data. To assess this, we randomly selected 100 participants from each of the ARIC communities and regeocoded their residential addresses using the GDT software, which incorporates a more recent road network database. Using these new geocodes, we recalculated the traffic densities and distances to major roads and compared them with the original results. The two geocoding methods resulted in similar estimates for the distance to nearest major roads. For traffic density, the two methods yielded quite concordant values for the Forsyth, Jackson, and Minneapolis communities, but concordance was lower for Washington County. This might reflect a renaming of streets that occurred there, so we repeated our analyses excluding Washington County from our analysis. With the less precise exposure assessment in Washington County excluded, the association of traffic density with incident CHD remained [see Supplemental Material, Table S2 (online at http://www.ehponline.org/members/2008/11290/suppl.pdf)], suggesting that the association is relatively robust to geocoding error.
A major strength of our study is the use of an objective measure of traffic-related air pollution at residential addresses (e.g., GIS-based assessment of traffic density and distance to major roads) to capture exposure relevant for subjects living in close proximity to busy roads. A further strength of our study is the overall residential stability of the cohort: approximately 90% of the ARIC participants had lived in the same community for > 10 years at the baseline visit. An earlier study of ARIC study participants reported very high concordance between past decades and visit 1 for county and state of residence (Rose et al. 2004). Moreover, we based our analysis on carefully collected incidence data in a large cohort from four U.S. communities. We prospectively collected data on exposure, outcome, and a wide range of potential confounders at the individual level using standardized protocols and extensive quality assurance. In addition to doing detailed adjustment for individual-level confounders and evaluating potential effect modification, we also adjusted for a community-level measure of SES to help account for confounding (Marmot 2001).
In summary, in this prospective analysis, higher long-term exposure to traffic, a major source of air pollution, was related to increased risk of incident CHD. These findings add to the previous data on mortality and disease prevalence and suggest that traffic-related air pollution can influence the development of disease in an ostensibly healthy middle-aged population. Continued emphasis on the implementation of strategies for reducing traffic-related air pollution is likely to reap additional public health benefits.
The Atherosclerosis Risk in Communities study is a collaborative study supported by National Heart, Lung, and Blood Institute contracts N01-HC-55015, N01-HC-55016, N01-HC-55018, N01-HC-55019, N01-HC-55020, N01-HC-55021, and N01-HC-55022. This work was also supported by Z01 ES043012 from the Intramural Research Program, National Institute of Environmental Health Sciences.
In general, the present invention provides a traffic sensing system that includes multiple sensing modalities, as well as an associated method for normalizing overlapping sensor fields of view and operating the traffic sensing system. The system can be installed at a roadway, such as at a roadway intersection, and can work in conjunction with traffic control systems. Traffic sensing systems can incorporate radar sensors, machine vision sensors, etc.

The present invention provides a hybrid sensing system that includes different types of sensing modalities (i.e., different sensor types) with at least partially overlapping fields of view that can each be selectively used for traffic sensing under particular circumstances. These different sensing modalities can be switched as a function of operating conditions. For instance, machine vision sensing can be used during clear daytime conditions, and radar sensing can be used instead during nighttime conditions. In various embodiments, switching can be implemented across an entire field of view for given sensors, or can alternatively be implemented for one or more subsections of a given sensor field of view (e.g., to provide switching for one or more discrete detector zones established within a field of view). Such a sensor switching approach is generally distinguishable from data fusion. Alternatively, different sensing modalities can work simultaneously or in conjunction as desired for certain circumstances.

The use of multiple sensors in a given traffic sensing system presents numerous challenges, such as the need to correlate sensed data from the various sensors such that detections with any sensing modality are consistent with respect to real-world objects and locations in the spatial domain. Furthermore, sensor switching requires appropriate algorithms or rules to guide the appropriate sensor selection as a function of given operating conditions.
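The condition-based switching described above (machine vision for clear daytime, radar otherwise) can be sketched as a simple rule. The decision inputs and rule below are illustrative assumptions, since the text does not specify the actual switching criteria, and a real system might switch per detector zone rather than per whole field of view:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Modality(Enum):
    MACHINE_VISION = auto()
    RADAR = auto()

@dataclass
class Conditions:
    is_daytime: bool
    visibility_ok: bool  # e.g., no dense fog, heavy rain, or glare (assumed input)

def select_modality(cond: Conditions) -> Modality:
    """Rule-based sensor selection: machine vision for clear daytime
    conditions, radar for nighttime or degraded visibility."""
    if cond.is_daytime and cond.visibility_ok:
        return Modality.MACHINE_VISION
    return Modality.RADAR
```

In practice such a rule would be evaluated continuously, with hysteresis added so the system does not oscillate between modalities near a threshold.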
In operation, traffic sensing allows for the detection of objects in a given field of view, which allows for traffic signal control, data collection, warnings, and other useful work. This application claims priority to U.S. Provisional Patent Application Ser. No. 61/413,764, entitled “Autoscope Hybrid Detection System,” filed Nov. 15, 2010, which is hereby incorporated by reference in its entirety.
FIG. 1 is a plan view of an example roadway intersection 30 (e.g., signal-controlled intersection) at which a traffic sensing system 32 is installed. The traffic sensing system 32 includes a hybrid sensor assembly (or field sensor assembly) 34 supported by a support structure 36 (e.g., mast arm, luminaire, pole, or other suitable structure) in a forward-looking arrangement. In the illustrated embodiment, the sensor assembly 34 is mounted in a middle portion of a mast arm that extends across at least a portion of the roadway, and is arranged in an opposing direction (i.e., opposed relative to a portion of the roadway of interest for traffic sensing). The sensor assembly 34 is located a distance D1 from an edge of the roadway (e.g., from a curb) and at a height H above the roadway (e.g., about 5–11 m). The sensor assembly 34 has an azimuth angle θ with respect to the roadway, and an elevation (or tilt) angle β. The azimuth angle θ and the elevation (or tilt) angle β can be measured with respect to a center of a beam or field of view (FOV) of each sensor of the sensor assembly 34. In relation to features of the roadway intersection 30, the sensor assembly 34 is located a distance DS from a stop bar (synonymously called a stop line) for a direction of approach of traffic 38 intended to be sensed. A stop bar is generally a designated (e.g., painted line) or de facto (i.e., not indicated on the pavement) location where traffic stops in the direction of approach 38 of the roadway intersection 30. The direction of approach 38 has a width DR and 1 to n lanes of traffic, which in the illustrated embodiment includes four lanes of traffic having widths DL1, DL2, DL3 and DL4 respectively. An area of interest in the direction of approach of traffic 38 has a depth DA, measured beyond the stop bar in relation to the sensor assembly 34.
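As a worked example of the mounting geometry, the horizontal distance from the sensor to the point where its field-of-view centerline meets the road surface follows from the mounting height H and the downward tilt angle β. The formula below is illustrative; the text defines H and β but does not state this relationship:

```python
import math

def ground_range(height_m, tilt_deg):
    """Horizontal distance from the sensor base to where the FOV
    centerline intersects the road surface, given mounting height
    (H, meters) and downward tilt angle (beta, degrees):
    range = H / tan(beta).
    """
    return height_m / math.tan(math.radians(tilt_deg))
```

For example, a sensor mounted 10 m above the roadway and tilted down 45 degrees aims its centerline at a point 10 m out; shallower tilt angles reach farther down the approach.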
It should be noted that while FIG. 1 specifically identifies elements of the intersection 30 and the traffic sensing system 32 for a single direction of approach, a typical application will involve multiple sensor assemblies 34, with at least one sensor assembly 34 for each direction of approach for which it is desired to sense traffic data. For example, in a conventional four-way intersection, four sensor assemblies 34 can be provided. At a T-shaped, three-way intersection, three sensor assemblies 34 can be provided. The precise number of sensor assemblies 34 can vary as desired, and will frequently be influenced by roadway configuration and desired traffic sensing objectives. Moreover, the present invention is useful for applications other than strictly intersections. Other suitable applications include use at tunnels, bridges, toll stations, access-controlled facilities, highways, etc.
The hybrid sensor assembly 34 can include a plurality of discrete sensors, which can provide different sensing modalities. The number of discrete sensors can vary as desired for particular applications, as can the modalities of each of the sensors. Machine vision, radar (e.g., Doppler radar), LIDAR, acoustic, and other suitable types of sensors can be used.
FIG. 2 is a schematic view of the roadway intersection 30 illustrating one embodiment of three overlapping fields of view 34-1, 34-2 and 34-3 for respective discrete sensors of the hybrid sensor assembly 34. In the illustrated embodiment, the first field of view 34-1 is relatively large and has an azimuth angle θ1 close to zero, the second field of view 34-2 is shorter (i.e., shallower depth of field) and wider than the first field of view 34-1 but also has an azimuth angle θ2 close to zero, while the third field of view 34-3 is shorter and wider than the second field of view 34-2 but has an azimuth angle with an absolute value significantly greater than zero. In this way, the first and second fields of view 34-1 and 34-2 have a substantial overlap, while the third field of view 34-3 provides less overlap and instead encompasses additional roadway area (e.g., turning regions). It should be noted that fields of view 34-1, 34-2 and 34-3 can vary based on an associated type of sensing modality for a corresponding sensor. Moreover, the number and orientation of the fields of view 34-1, 34-2 and 34-3 can vary as desired for particular applications. For instance, in one embodiment, only the first and second fields of view 34-1 and 34-2 can be provided, and the third field of view 34-3 omitted.
FIG. 3 is a perspective view of an embodiment of the hybrid sensor assembly 34 of the traffic sensing system 32. A first sensor 40 can be a radar (e.g., Doppler radar), and a second sensor 42 can be a machine vision device (e.g., charge-coupled device). The first sensor 40 can be located below the second sensor 42, with both sensors 40 and 42 generally facing the same direction. The hardware should have a robust mechanical design that meets National Electrical Manufacturers Association (NEMA) environmental requirements. In one embodiment, the first sensor 40 can be a Universal Medium Range Resolution (UMRR) radar, and the second sensor 42 can be a visible light camera capable of recording images in a video stream composed of a series of image frames. A support mechanism 44 commonly supports the first and second sensors 40 and 42 on the support structure 36, while allowing for sensor adjustment (e.g., adjustment of pan/yaw, tilt/elevation, etc.). Adjustment of the support mechanism allows for simultaneous adjustment of the position of both the first and second sensors 40 and 42. Such simultaneous adjustment facilitates installation and set-up where the azimuth angles θ1 and θ2 of the first and second sensors 40 and 42 are substantially the same. For instance, where the first sensor 40 is a radar, aiming of the radar can be simplified by manually sighting the field of view of the second sensor 42 along the protective covering 46, because of the mechanical relationship between the sensors. In some embodiments, the first and second sensors 40 and 42 can also permit adjustment relative to one another (e.g., rotation, etc.). Independent sensor adjustment may be desirable where the azimuth angles θ1 and θ2 of the first and second sensors 40 and 42 are desired to be significantly different.
The protective covering 46 can be provided to help protect and shield the first and second sensors 40 and 42 from environmental conditions, such as sun, rain, snow and ice. Tilt of the first sensor 40 can be constrained to a given range to minimize protrusion from a lower back shroud and field of view obstruction by other portions of the assembly 34.
FIG. 4A is a schematic block diagram of an embodiment of the hybrid sensor assembly 34 and associated circuitry. In the illustrated embodiment, the first sensor 40 is a radar (e.g., Doppler radar) and includes one or more antennae 50, an analog-to-digital (A/D) converter 52, and a digital signal processor (DSP) 54. Output from the antenna(e) 50 is sent to the A/D converter 52, which sends a digital signal to the DSP 54. The DSP 54 communicates with a processor (CPU) 56, which is connected to an input/output (I/O) mechanism 58 to allow the first sensor 40 to communicate with external components. The I/O mechanism can be a port for a hard-wired connection, and alternatively (or in addition) can provide for wireless communication.
Furthermore, in the illustrated embodiment, the second sensor 42 is a machine vision device and includes a vision sensor (e.g., CCD or CMOS array) 60, an A/D converter 62, and a DSP 64. Output from the vision sensor 60 is sent to the A/D converter 62, which sends a digital signal to the DSP 64. The DSP 64 communicates with the processor (CPU) 56, which in turn is connected to the I/O mechanism 58.
FIG. 4B is a schematic block diagram of another embodiment of a hybrid sensor assembly 34. As shown in FIG. 4B, the A/D converters 52 and 62, DSPs 54 and 64, and CPU 56 are all integrated into the same physical unit as the sensors 40 and 42, in contrast to the embodiment of FIG. 4A where the A/D converters 52 and 62, DSPs 54 and 64, and CPU 56 can be located remote from the hybrid sensor assembly 34 in a separate enclosure.
Internal sensor algorithms can be the same as or similar to those for known traffic sensors, with any desired modifications or additions, such as queue detection and turning movement detection algorithms that can be implemented with a hybrid detection module (HDM) described further below.
It should be noted that the embodiment illustrated in FIG. 4 is shown merely by way of example, and not limitation. In further embodiments, other types of sensors can be utilized, such as LIDAR, etc. Moreover, more than two sensors can be used, as desired for particular applications.
In a typical installation, the hybrid sensor assembly 34 is operatively connected to additional components, such as one or more controller or interface boxes and a traffic controller (e.g., traffic signal system). FIG. 5A is a schematic block diagram of one embodiment of the traffic sensing system 32, which includes four hybrid sensor assemblies 34A-34D, a bus 72, a hybrid interface panel box 74, and a hybrid traffic detection system box 76. The bus 72 is operatively connected to each of the hybrid sensor assemblies 34A-34D, and allows transmission of power, video and data. Also connected to the bus 72 is the hybrid interface panel box 74. A zoom controller box 78 and a display 80 are connected to the hybrid interface panel box 74 in the illustrated embodiment. The zoom controller box 78 allows for control of zoom of machine vision sensors of the hybrid sensor assemblies 34A-34D. The display 80 allows for viewing of video output (e.g., analog video output). A power supply 82 is further connected to the hybrid interface panel box 74, and a terminal 84 (e.g., laptop computer) can be interfaced with the hybrid interface panel box 74. The hybrid interface panel box 74 can accept 110/220 VAC power and provides 24 VDC power to the sensor assemblies 34A-34D. Key functions of the hybrid interface panel box 74 are to deliver power to the hybrid sensor assemblies 34A-34D and to manage communications between the hybrid sensor assemblies 34A-34D and other components like the hybrid traffic detection system box 76. The hybrid interface panel box 74 can include suitable circuitry, processors, computer-readable memory, etc. to accomplish those tasks and to run applicable software. The terminal 84 allows an operator or technician to access and interface with the hybrid interface panel box 74 and the hybrid sensor assemblies 34A-34D to perform set-up, configuration, adjustment, maintenance, monitoring and other similar tasks.
A suitable operating system, such as WINDOWS from Microsoft Corporation, Redmond, Wash., can be used with the terminal 84. The terminal 84 can be located at the roadway intersection 30, or can be located remotely from the roadway 30 and connected to the hybrid interface panel box 74 by a suitable connection, such as via Ethernet, a private network or other suitable communication link. The hybrid traffic detection system box 76 in the illustrated embodiment is further connected to a traffic controller 86, such as a traffic signal system that can be used to control traffic at the intersection 30. The hybrid detection system box 76 can include suitable circuitry, processors, computer-readable memory, etc. to run applicable software, which is discussed further below. In some embodiments, the hybrid detection system box 76 includes one or more hot-swappable circuitry cards, with each card providing processing support for a given one of the hybrid sensor assemblies 34A-34D. In further embodiments, the traffic controller 86 can be omitted. One or more additional sensors 87 can optionally be provided, such as a rain/humidity sensor, or can be omitted in other embodiments. It should be noted that the illustrated embodiment of FIG. 5A is shown merely by way of example. Alternative implementations are possible, such as with further bus integration or with additional components not specifically shown. For example, an Internet connection that enables access to third-party data, such as weather information, etc., can be provided.
FIG. 5B is a schematic block diagram of another embodiment of the traffic sensing system 32′. The embodiment of system 32′ shown in FIG. 5B is generally similar to that of system 32 shown in FIG. 5A; however, the system 32′ includes an integrated control system box 88 that provides functions of both the hybrid interface panel box 74 and the hybrid traffic detection system box 76. The integrated control system box 88 can be located at or in close proximity to the hybrid sensors 34, with only minimal interface circuitry on the ground to plumb detection signals to the traffic controller 86. Integrating multiple control boxes together can facilitate installation.
FIG. 6 is a schematic block diagram of software subsystems of the traffic sensing system 32 or 32′. For each of n hybrid sensor assemblies, a hybrid detection module (HDM) 90-1 to 90-n is provided that includes a hybrid detection state machine (HDSM) 92, a radar subsystem 94, a video subsystem 96 and a state block 98. In general, each HDM 90-1 to 90-n correlates, synchronizes and evaluates the detection results from the first and second sensors 40 and 42, but also contains decision logic to discern what is happening in the scene (e.g., intersection 30) when the two sensors 40 and 42 (and subsystems 94 and 96) offer conflicting assessments. With the exception of certain Master-Slave functionality, each HDM 90-1 to 90-n generally operates independently of the others, thereby providing a scalable, modular system. The hybrid detection state machine 92 of the HDMs 90-1 to 90-n further can combine detection outputs from the radar and video subsystems 94 and 96 together. The HDMs 90-1 to 90-n can add data from the radar subsystem 94 onto a video overlay from the video subsystem 96, which can be digitally streamed to the terminal 84 or displayed on the display 80 in analog for viewing. While the illustrated embodiment is described with respect to radar and video/camera (machine vision) sensors, it should be understood that other types of sensors can be utilized in alternative embodiments. The software of the system 32 or 32′ further includes a communication server (comserver) 100 that manages communication between each of the HDMs 90-1 to 90-n and a hybrid graphical user interface (GUI) 102, a configuration wizard 104 and a detector editor 106. HDM 90-1 to 90-n software can run independently of GUI 102 software once configured, and incorporates communication from the GUI 102, the radar subsystem 94, the video subsystem 96 as well as the HDSM 92.
HDM 90-1 to 90-n software can be implemented on respective hardware cards provided in the hybrid traffic detection system box 76 of the system 32 or the integrated control system box 88 of the system 32′.
The radar and video subsystems 94 and 96 process and control the collection of sensor data, and transmit outputs to the HDSM 92. The video subsystem 96 (utilizing appropriate processor(s) or other hardware) can analyze video or other image data to provide a set of detector outputs, according to the user's detector configuration created using the detector editor 106 and saved as a detector file. This detector file is then executed to process the input video and generate output data which is then transferred to the associated HDM 90-1 to 90-n for processing and final detection selection. Some detectors, such as queue size detector and detection of turning movements, may require additional sensor information (e.g., radar data) and thus can be implemented in the HDM 90-1 to 90-n where such additional data is available.
The radar subsystem 94 can provide data to the associated HDMs 90-1 to 90-n in the form of object lists, which provide speed, position, and size of all objects (vehicles, pedestrians, etc.) sensed/tracked. Typically, the radar has no ability to configure and run machine vision-style detectors, so the detector logic must generally be implemented in the HDMs 90-1 to 90-n. Radar-based detector logic in the HDMs 90-1 to 90-n can normalize sensed/tracked objects to the same spatial coordinate system as other sensors, such as machine vision devices. The system 32 or 32′ can use the normalized object data, along with detector boundaries obtained from a machine vision (or other) detector file to generate detector outputs analogous to what a machine vision system provides.
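As an illustrative sketch only (not part of the original disclosure), the radar-based detector logic described above can be approximated as testing each normalized object position against a detector boundary polygon taken from a machine vision detector file. The data layout (objects as (x, y) pairs in the common coordinate system, zones as vertex lists) and all function names are assumptions:

```python
def point_in_zone(x, y, zone):
    """Ray-casting test: is the point (x, y) inside the polygon 'zone'
    (a list of (x, y) vertices in the common coordinate system)?"""
    inside = False
    n = len(zone)
    for i in range(n):
        x1, y1 = zone[i]
        x2, y2 = zone[(i + 1) % n]
        # Count edge crossings of a horizontal ray extending from (x, y).
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside


def zone_output(tracked_objects, zone):
    """Detector output analogous to a machine vision presence detector:
    True if any normalized radar object falls inside the zone."""
    return any(point_in_zone(x, y, zone) for (x, y) in tracked_objects)
```

A rectangular stop-bar zone would then report presence whenever any tracked vehicle's normalized position falls within its boundary.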
The state block 98 provides indication and output relative to the state of the traffic controller 86, such as to indicate if a given traffic signal is “green”, “red”, etc.
The hybrid GUI 102 allows an operator to interact with the system 32 or 32′, and provides a computer interface, such as for sensor normalization, detection domain setting, and data streaming and collection to enable performance visualization and evaluation. The configuration wizard 104 can include features for initial set-up of the system and related functions. The detector editor 106 allows for configuration of detection zones and related detection management functions. The GUI 102, configuration wizard 104 and detector editor 106 can be accessible via the terminal 84 or a similar computer operatively connected to the system 32. It should be noted that while various software modules and components have been described separately, these functions can be integrated into a single program or software suite, or provided as separate stand-alone packages. The disclosed functions can be implemented via any suitable software in further embodiments.
The GUI 102 software can run on a Windows® PC, Apple PC or Linux PC, or other suitable computing device with a suitable operating system, and can utilize Ethernet or other suitable communication protocols to communicate with the HDMs 90-1 to 90-n. The GUI 102 provides a mechanism for setting up the HDMs 90-1 to 90-n, including the video and the radar subsystems 94 and 96, to: (1) normalize/align fields of view from both the first and second sensors 40 and 42; (2) configure parameters for the HDSM 92 to combine video and radar data; (3) enable visual evaluation of detection performance (overlay on video display); and (4) allow collection of data, both standard detection output and development data. A hybrid video player of the GUI 102 will allow users to overlay radar-tracking markers (or markers from any other sensing modality) onto video from a machine vision sensor (see FIGS. 11B and 14). These tracking markers can show regions where the radar is currently detecting vehicles. This video overlay is useful to verify that the radar is properly configured, as well as to enable users to easily evaluate the radar's performance in real-time. The hybrid video player of the GUI 102 can allow a user to select from multiple display modes, such as: (1) Hybrid—shows current state of the detectors determined from hybrid decision logic using both the machine vision and radar sensor inputs; (2) Video/Vision—shows current state of the detectors using only machine vision input; (3) Radar—shows current state of the detectors using only radar sensor input; and/or (4) Video/Radar Comparison—provides a simple way to visually compare the performance of machine vision and radar, using a multi-color scheme (e.g., black, blue, red and green) to show all of the permutations of when the two devices agree and disagree for a given detection zone. In some embodiments, only some of the display modes described above can be made available to users.
The GUI 102 communicates with the HDMs 90-1 to 90-n via an API, namely additions to a client application programming interface (CLAPI), which can go through the comserver 100, and eventually to the HDMs 90-1 to 90-n. An applicable communications protocol can send and receive normalization information, detector output definitions, configuration data, and other information to support the GUI 102.
Functionality for interpreting, analyzing and making final detections or other such functions of the system are primarily performed by the hybrid detection state machine 92. The HDSM 92 can take outputs from detectors, such as machine vision detectors and radar-based detectors, and arbitrate between them to make final detection decisions. For radar data, the HDSM 92 can, for instance, retrieve speed, size and polar coordinates of target objects (e.g., vehicles) as well as Cartesian coordinates of tracked objects, from the radar subsystem 94 and the corresponding radar sensors 40-1 to 40-n. For machine vision, the HDSM 92 can retrieve data from the detection state block 98 and from the video subsystem 96 and the associated video sensors (e.g., camera) 42-1 to 42-n. Video data is available at the end of every video frame processed. The HDSM 92 can contain and perform sensor algorithm data switching/fusion/decision logic/etc. to process radar and machine vision data. A state machine determines which detection outcomes are used, based on input from the radar and machine vision data and post-algorithm decision logic. Priority can be given to the sensor believed to be most accurate for the current conditions (time of day, weather, video contrast level, traffic level, sensor mounting position, etc.).
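The priority-based arbitration described above can be sketched as follows. This is only an illustrative approximation under assumed rules (prefer radar when video contrast is low, prefer vision otherwise); the condition keys, threshold value and function names are hypothetical, not taken from the disclosure:

```python
def arbitrate(vision_state, radar_state, conditions):
    """Pick a final detector state from the two subsystem outputs.
    Assumed rule: when the two sensors agree, use that result; otherwise
    give priority to the sensor believed most accurate for current
    conditions (here approximated by a single video-contrast measure)."""
    if vision_state == radar_state:
        return vision_state
    # Hypothetical threshold: below 0.3 contrast (night, fog), the video
    # subsystem is considered unreliable, so the radar result wins.
    if conditions.get("video_contrast", 1.0) < 0.3:
        return radar_state
    return vision_state
```

A fuller implementation would weigh more of the factors the text lists (time of day, weather, traffic level, mounting position) rather than a single contrast value.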
The state block 98 can provide final, unified detector outputs to a bus or directly to the traffic controller 86 through suitable ports (or wirelessly). Polling at regular intervals can be used to provide these detector outputs from the state block 98. Also, the state block can provide indications of each signal phase (e.g., red, green) of the signal controller 86 as an input.
Numerous types of detection can be employed. Presence or stop-line detectors identify the presence of a vehicle in the field of view (e.g., at the stop line or stop bar); their high accuracy in determining the presence of vehicles makes them ideal for signal-controlled intersection applications. Count and speed detection (which includes vehicle length and classification) can be provided for vehicles passing along the roadway. Crosslane count detectors provide the capability to detect the gaps between vehicles, to aid in accurate counting. The count detectors and speed detectors work in tandem to perform vehicle detection processing (that is, the detectors show whether or not there is a vehicle under the detector and calculate its speed). Secondary detector stations compile traffic volume statistics. Volume is the sum of the vehicles detected during a specified time interval. Vehicle speeds can be reported either in km/hr or mi/hr, and can be reported as an integer. Vehicle lengths can be reported in meters or feet. Advanced detection can be provided for the dilemma zone (primarily focusing on presence detection, speed, acceleration and deceleration). The “dilemma zone” is the zone in which drivers must decide to proceed or stop as the traffic control (i.e., traffic signal light) changes from green to amber and then red. Turning movement counts can be provided, with secondary detector stations connected to primary detectors to compile traffic volume statistics. Turning movement counts are simply counts of vehicles making turns at the intersection (not proceeding straight through the intersection). Specifically, left turning counts and right turning counts can be provided separately. Often, traffic in the same lane may either proceed straight through or turn, and this dual lane capability must be taken into account. Queue size measurement can also be provided.
The queue size can be defined as the objects stopped or moving below a user-defined speed (e.g., a default 5 mi/hr threshold) at the intersection approach; thus, the queue size can be the number of vehicles in the queue. Alternately, the queue size can be measured from the stop bar to the end of the upstream queue or end of the furthest detection zone, whichever is shortest. Vehicles can be detected as they approach and enter the queue, with continuous accounting of the number of vehicles in the region defined by the stop line extending to the back of the queue tail.
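Both queue measures described above can be sketched in a few lines. This is an illustrative sketch only; the object representation (each object as a (distance-from-stop-bar, speed) pair) and the 100 m zone-end default are assumptions:

```python
STOP_SPEED_MPH = 5.0  # default user-configurable threshold from the text


def queue_size(tracked_objects, stop_speed=STOP_SPEED_MPH):
    """First measure: number of vehicles at the approach that are stopped
    or moving below the threshold. Each object is a (y_m, speed_mph) pair,
    with y measured upstream from the stop bar."""
    return sum(1 for (_y, speed) in tracked_objects if speed < stop_speed)


def queue_length_m(tracked_objects, stop_speed=STOP_SPEED_MPH,
                   zone_end_m=100.0):
    """Second measure: distance from the stop bar to the end of the
    upstream queue, capped at the end of the furthest detection zone
    (whichever is shortest)."""
    stopped = [y for (y, speed) in tracked_objects if speed < stop_speed]
    return min(max(stopped, default=0.0), zone_end_m)
```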
Handling of errors is also provided, including handling of communication errors, software errors and hardware errors. Regarding potential communication errors, outputs can be set to place a call to fail safe in the following conditions: (i) for failure of communications between hardware circuitry and the associated radar sensors (e.g., first sensors 40), and only for outputs associated with that radar sensor, the machine vision outputs (e.g., second sensors 42) can be used instead, if operating properly; (ii) for loss of a machine vision output, and only for outputs associated with that machine vision sensor; and (iii) for loss of detector port communications—associated outputs will be placed into call or fail safe for the slave unit whose communications is lost. A call is generally an output (e.g., to the traffic controller 86) based on a detection (i.e., a given detector triggered “on”), and a fail-safe call can default to a state that corresponds to a detection, which generally reduces the likelihood of a driver being “stranded” at an intersection because of a lack of detection. Regarding potential software errors, outputs can be set to place a call to fail safe if the HDM software 90-1 to 90-n is not operational. Regarding potential hardware errors, selected outputs can be set to place a call (sink current), or fail safe, in the following conditions: (i) loss of power, all outputs; (ii) failure of control circuitry, all outputs; and (iii) failure of any sensors of the sensor assemblies 34A-34D, only outputs associated with failed sensors.
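The fail-safe behavior described above reduces to a simple rule: any relevant failure forces the associated output into a call. A minimal sketch, with hypothetical argument names that are not part of the disclosure:

```python
def output_state(detector_on, sensor_ok, hdm_ok):
    """Fail-safe rule sketched from the text: if the sensor feeding this
    output has failed, or the HDM software is not operational, the output
    defaults to a 'call' (detection asserted) so that a driver is not
    stranded at the intersection for lack of a detection."""
    if not (sensor_ok and hdm_ok):
        return True          # fail-safe: place a call
    return detector_on       # normal operation: pass the detection through
```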
Although the makeup of software for the traffic sensing system 32 or 32′ has been described above, it should be understood that various other features not specifically discussed can be incorporated as desired for particular applications. For example, known features of the Autoscope® system and RTMS® system, both available from Image Sensing Systems, Inc., St. Paul, Minn., can be incorporated. For instance, such known functionality can include: (a) a health monitor—monitors the system to ensure everything is running properly; (b) a logging system—logs all significant events for troubleshooting and servicing; (c) detector port messages—for use when attaching a device (slave) for communication with another device (master); detector processing of algorithms—for processing the video images and radar outputs to enable detection and data collection; (d) video streaming—for allowing the user to see an output video feed; (e) writing to non-volatile memory—allows a module to write and read internal non-volatile memory containing a boot loader, operational software, plus additional memory that system devices can write to for data storage; (f) protocol messaging—message/protocol from outside systems to enable communication with the traffic sensing system 32 or 32′; (g) a state block—contains the state of the I/O; and (h) data collection—for recording I/O, traffic data, and alarm states.
Now that basic components of the traffic sensing system 32 and 32′ have been described, a method of installing and normalizing the system can be discussed. Normalization of overlapping sensor fields of view of a hybrid system is important so that data obtained from different sensors, especially those using different sensing modalities, can be correlated and used in conjunction or interchangeably. Without suitable normalization, use of data from different sensors would produce detections in disparate coordinate systems, preventing a unified system detection capability.
FIG. 7 is a flow chart illustrating an installation and normalization method for use with the system 32 and 32′. Initially, hardware and associated software are installed at a location where traffic sensing is desired, such as the roadway intersection 30 (step 100). Installation includes physically installing all sensor assemblies 34 (the number of assemblies provided will vary for particular applications), installing control boxes 74, 76 and/or 88, making wired and/or wireless connections between components, and aiming the sensor assemblies 34 to provide desired fields of view (see FIGS. 2 and 8). The sensor assemblies 34 can be mounted to any suitable support structure 36, and the particular mounting configuration will vary as desired for particular applications. Aiming the sensor assemblies 34 can include pan/yaw (left or right), elevation/tilt (up or down), camera barrel rotation (clockwise or counterclockwise), sunshield/covering overhang, and zoom adjustments. Once physically installed, relevant physical positions can be measured (step 102). Physical measurements can be taken manually by a technician, such as height H of the sensor assemblies 34, and distances D1, DS, DA, DR, DL1 to DL2, described above with respect to FIG. 1. These measurements can be used to determine sensor orientation, help normalize and calibrate the system, and establish sensing and detection parameters. In one embodiment, only sensor height H and distance to the stop bar DS measurements are taken.
After physical positions have been measured, orientations of the sensor assemblies 34 and the associated first and second sensors 40 and 42 can be determined (step 104). This orientation determination can include configuration of azimuth angles θ, elevation angles β, and rotation angle. The azimuth angle θ for each discrete sensor 40 and 42 of a given hybrid sensor assembly 34 can be a dependent degree of freedom, i.e., azimuth angles θ1 and θ2 are identical for the first and second sensors 40 and 42, given the mechanical linkage in the preferred embodiment. The second sensor 42 (e.g., machine vision device) can be configured such that a center of the stop-line for the traffic approach 38 substantially aligns with a center of the associated field of view 34-1. Given the mechanical connection between the first and second sensors 40 and 42 in a preferred embodiment, one then knows that alignment of the first sensor 40 (e.g., a bore sight of a radar) has been properly set. The elevation angle β for each sensor 40 and 42 is an independent degree of freedom for the hybrid sensor assembly 34, meaning the elevation angle β1 of the first sensor 40 (e.g., radar) can be adjusted independently of the elevation angle β2 of the second sensor 42 (e.g., machine vision device).
Once sensor orientation is known, the coordinates of that sensor can be rotated by the azimuth angle θ so that axes align substantially parallel and perpendicular to a traffic direction of the approach 38. Adjustment can be made according to the following equations (1) and (2), where sensor data is provided in x, y Cartesian coordinates:
Also, a second transformation can be used to harmonize axis-labeling conventions of the first and second sensors 40 and 42, according to equations (3) and (4):
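Equations (1) through (4) are referenced above but not reproduced in this excerpt. As an illustrative sketch only, the transformations can be approximated by a standard planar rotation for equations (1) and (2) and, as an assumption, a simple axis swap for the axis-labeling harmonization of equations (3) and (4); the exact forms in the original disclosure may differ:

```python
import math


def rotate_by_azimuth(x, y, theta_rad):
    """Assumed form of Eqs. (1)-(2): a standard planar rotation of sensor
    Cartesian coordinates by the azimuth angle, so the resulting axes run
    parallel and perpendicular to the traffic direction of the approach."""
    xr = x * math.cos(theta_rad) + y * math.sin(theta_rad)
    yr = -x * math.sin(theta_rad) + y * math.cos(theta_rad)
    return xr, yr


def harmonize_axes(x, y):
    """Assumed form of Eqs. (3)-(4): swap axes to map one sensor's
    (cross-range, down-range) labeling convention onto the other's."""
    return y, x
```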
A normalization application (e.g., the GUI 102 and/or the configuration wizard 104) can then be opened to begin field of view normalization for the first and second sensors 40 and 42 of each hybrid sensor assembly 34 (step 106). With the normalization application open, objects are positioned on or near the roadway of interest (e.g., roadway intersection 30) in a common field of view of at least two sensors of a given hybrid sensor assembly 34 (step 108). In one embodiment, the objects can be synthetic target generators, which, generally speaking, are objects or devices capable of generating a recordable sensor signal. For example, in one embodiment a synthetic target generator can be a Doppler generator that can generate a radar signature (Doppler effect) while stationary along the roadway 30 (i.e., not moving over the roadway 30). In an alternative embodiment using an infrared (IR) sensor, a synthetic target generator can be a heating element. Multiple objects can be positioned simultaneously, or alternatively one or more objects can be sequentially positioned, as desired. The objects can be positioned on the roadway in a path of traffic or on a sidewalk, boulevard, curtilage or other adjacent area. Generally at least three objects are positioned in a non-collinear arrangement. In applications where the hybrid sensor assembly 34 includes three or more discrete sensors, the objects can be positioned in an overlapping field of view of all of the discrete sensors, or of only a subset of the sensors at a given time, though eventually an object should be positioned within the field of view of each of the sensors of the assembly 34. Objects can be temporarily held in place manually by an operator, or can be self-supporting without operator presence. In still further embodiments, the objects can be existing objects positioned at the roadway 30, such as posts, mailboxes, buildings, etc.
With the object(s) positioned, data is recorded for multiple sensors of the hybrid sensor assembly 34 being normalized, to capture data that includes the positioned objects in the overlapping field of view, that is, multiple sensors sense the object(s) on the roadway within the overlapping fields of view (step 110). This process can involve simultaneous sensing of multiple objects, or sequential recording of one or more objects in different locations (assuming no intervening adjustment or repositioning of the sensors of the hybrid sensor assembly 34 being normalized). After data is captured, an operator can use the GUI 102 to select one or more frames of data recorded from the second sensor 42 (e.g., machine vision device) of the hybrid sensor assembly 34 being normalized that provide at least three non-collinear points that correspond to the locations of the positioned objects in the overlapping field of view of the roadway 30, and selects those points in the one or more selected frames to identify the objects' locations in a coordinate system for the second sensor 42 (step 112). Selecting the points in the frame(s) from the second sensor 42 can be done manually, through a visual assessment by the operator and actuation of an input device (e.g., mouse-click, touch screen contact, etc.) to designate the location of the objects in the frame(s). In an alternate embodiment, a distinctive visual marking can be attached to the object(s), and the GUI 102 can automatically or semi-automatically search through frames to identify and select the location of the markers and therefore also the object(s). The system 32 or 32′ can record the selection in the coordinate system associated with the second sensor 42, such as pixel location for output of a machine vision device.
The system 32 or 32′ can also perform an automatic recognition of the objects relative to another coordinate system associated with the first sensor 40, such as in polar coordinates for output of a radar. The operator can select the coordinates of the coordinate system of the first sensor 40 from an object list (due to the possibility that other objects may be sensed on the roadway 30 in addition to the object(s)), or alternatively automated filtering could be performed to select the appropriate coordinates. The selected coordinates of the first sensor 40 can be adjusted (e.g., rotated) in accordance with the orientation determination of step 104 described above. The location selection process can be repeated for all applicable sensors of a given hybrid sensor assembly 34 until locations of the same object(s) have been selected in the respective coordinate systems for each of the sensors.
After points corresponding to the locations of the objects have been selected in each sensor coordinate system, those points are translated or correlated to common coordinates used to normalize and configure the traffic sensing system 32 or 32′ (step 114). For instance, radar polar coordinates can be mapped, translated or correlated to pixel coordinates of a machine vision device. In this way, a correlation is established between the data of all of the sensors of a given hybrid sensor assembly 34, so that objects in a common, overlapping field of view of those sensors can be identified in a common coordinate system, or alternatively in a primary coordinate system and mapped into any other correlated coordinate systems for other sensors. In one embodiment, all sensors can be correlated to a common pixel coordinate system.
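One simple way to realize this correlation, sketched here as an assumption rather than the method of the disclosure, is a least-squares affine map fitted from the three or more non-collinear point correspondences (e.g., radar ground-plane coordinates of the positioned objects to machine-vision pixel coordinates):

```python
import numpy as np


def fit_affine(src_pts, dst_pts):
    """Fit a least-squares affine map from one sensor's coordinates (e.g.,
    radar ground plane, after azimuth rotation) to another's (e.g.,
    machine-vision pixels), given >= 3 non-collinear correspondences."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows are [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                                   # 3x2 coefficient matrix


def apply_affine(coeffs, pt):
    """Map a single (x, y) point through the fitted affine transform."""
    x, y = pt
    return tuple(np.array([x, y, 1.0]) @ coeffs)
```

An affine map cannot model perspective foreshortening; a production system mapping a ground plane into camera pixels would more likely fit a homography (requiring at least four correspondences), but the three-point affine version illustrates the correlation step.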
Next, a verification process can be performed, through operation of the system 32 or 32′ and observation of moving objects traveling through the common, overlapping field of view of the sensors of the hybrid sensor assembly 34 being normalized (step 116). This is a check on the normalization already performed, and an operator can adjust the normalization, or clear it and perform the previous steps again, to obtain a more desired normalization.
After normalization of the sensor assembly 34, an operator can use the GUI 102 to identify one or more lanes of traffic for one or more approaches 38 on the roadway 30 in the common coordinate system (or in one coordinate system correlated to other coordinate systems) (step 118). Lane identification can be performed manually by an operator drawing lane boundaries on a display of sensor data (e.g., using a machine vision frame or frames depicting the roadway 30). Physical measurements (from step 102) can be used to assist the identification of lanes. In alternative embodiments automated methods can be used to identify and/or adjust lane identifications.
Additionally, an operator can use the GUI 102 and/or the detection editor 106 to establish one or more detection zones (step 120). The operator can draw the detection zones on a display of the roadway 30. Physical measurements (from step 102) can be used to assist the establishment of detection zones.
The method illustrated in FIG. 7 is shown merely by way of example. Those of ordinary skill in the art will appreciate that the method can be performed in conjunction with other steps not specifically shown or discussed above. Moreover, the order of particular steps can vary, or can be performed simultaneously, in further embodiments. Further details of the method shown in FIG. 7 will be better understood in relation to additional figures described below.
FIG. 8 is an elevation view of a portion of the roadway intersection 30, illustrating an embodiment of the hybrid sensor assembly 34 in which the first sensor 40 is a radar. In the illustrated embodiment, the first sensor 40 is aimed such that its field of view 34-1 extends in front of a stop bar 130. For example, for a stop-bar positioned approximately 30 m from the hybrid sensor assembly 34 (i.e., DS=30 m), the elevation angle β1 for the radar (e.g., the first sensor 40) is set such that a 10 dB point off the main lobe aligns approximately with the stop-bar 130. FIG. 8 illustrates this concept for a luminaire installation (i.e., where the support structure 36 is a luminaire). The radar is configured such that a 10 dB point off the main lobe intersects with the roadway 30 approximately 5 m in front of the stop-line. Half of the elevation width of the radar beam is then subtracted to obtain an elevation orientation value usable by the traffic sensing system 32 or 32′.
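The aiming geometry above can be sketched numerically. This is an illustrative sketch under assumed values: the mounting height, the 12° beamwidth default and the function name are hypothetical, and only the 5 m lead distance comes from the text:

```python
import math


def radar_elevation_deg(height_m, stop_bar_dist_m, lead_m=5.0,
                        beamwidth_deg=12.0):
    """Sketch of the FIG. 8 geometry: aim so that the 10 dB point of the
    main lobe meets the roadway ~5 m in front of the stop bar, then
    subtract half the elevation beamwidth to get a boresight tilt value.
    All default parameter values here are assumptions for illustration."""
    # Depression angle from the sensor down to the 10 dB intersection point.
    depression = math.degrees(math.atan2(height_m, stop_bar_dist_m - lead_m))
    return depression - beamwidth_deg / 2.0
```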
FIG. 9 is a view of a normalization display interface 140 of the GUI 102 for establishing coordinate system correlation between multiple sensor inputs from a given hybrid sensor assembly 34. In the illustrated embodiment, six objects 142A-142F are positioned in the roadway 30. In some embodiments it may be desirable to position the objects 142A-142F in meaningful locations on the roadway 30, such as along lane boundaries, along the stop bar 130, etc. Meaningful locations will generally correspond to the type of detection(s) desired for a given application. Alternatively, the objects 142A-142F can be positioned outside of the approach 38, such as on a median or boulevard strip, sidewalk, etc., to reduce obstruction of traffic on the approach 38 during normalization.
The objects 142A-142F can each be synthetic target generators (e.g., Doppler generators, etc.). In general, synthetic target generators are objects or devices capable of generating a recordable sensor signal, such as a radar signature (Doppler effect) generated while the object is stationary along the roadway 30 (i.e., not moving over the roadway 30). In this way, a stationary object on the roadway 30 can be given the appearance of being a moving object that can be sensed and detected by a radar. For instance, mechanical and electrical Doppler generators are known, and any suitable Doppler generator can be used with the present invention as a synthetic target generator for embodiments utilizing a radar sensor. A mechanical or electro-mechanical Doppler generator can include a spinning fan in an enclosure having a slit. An electrical Doppler generator can include a transmitter to transmit an electromagnetic wave to emulate a radar return signal (i.e., emulating a reflected radar wave) from a moving object at a suitable or desired speed. Although a typical radar cannot normally detect stationary objects, a synthetic target generator like a Doppler generator makes such detection possible. For normalization as described above with respect to FIG. 7, stationary objects are much more convenient than moving objects. Alternatively, the objects 142A-142F can be objects that move or are moved relative to the roadway 30, such as corner reflectors that help provide radar reflection signatures.
Although six objects 142A-142F are shown in FIG. 9, only a minimum of three non-collinearly positioned objects need to be positioned in other embodiments. Moreover, as noted above, not all of the objects 142A-142F need to be positioned simultaneously.
FIG. 10 is a view of a normalization display 146 for establishing traffic lanes using machine vision data (e.g., from the second sensor 42). Lane boundary lines 148-1, 148-2 and 148-3 can be manually drawn over a display of sensor data, using the GUI 102. A stop line boundary 148-4 and a boundary of a region of interest 148-5 can also be drawn over a display of sensor data by an operator. Moreover, although the illustrated embodiment depicts an embodiment with linear boundaries, non-linear boundaries can be provided for different roadway geometries. Drawing boundary lines as shown in FIG. 10 can be performed after a correlation between sensor coordinate systems has been established, allowing the boundary lines drawn with respect to one coordinate system to be mapped or correlated to another or universal coordinate system (e.g., in an automatic fashion).
As an alternative to having an operator manually draw the stop line boundary 148-4, an automatic or semi-automatic process can be used in further embodiments. The stop line position is usually difficult to find, because there is only one somewhat noisy indicator: where objects (e.g., vehicles) stop. Objects are not guaranteed to stop exactly on the stop line (as designated on the roadway 30 by paint, etc.); they could stop up to several meters ahead of or behind the designated stop line on the roadway 30. Also, some sensing modalities, such as radar, can have significant errors in estimating positions of stopped vehicles. Thus, an error of +/− several meters can be expected in a stop line estimate. The stop line position can be found automatically or semi-automatically by averaging a position (e.g., a y-axis position) of a nearest stopped object in each measurement/sensing cycle. Taking only the nearest stopped objects helps eliminate undesired skew caused by non-front objects in queues (i.e., second, third, etc. vehicles in a queue). This dataset will have some outliers, which can be removed using an iterative process (similar to one that can be used in azimuth angle estimates):
(a) Take a middle 50% of samples nearest a stop line position estimate (inliers), and discard the other 50% of points (outliers). An initial stop line position estimate can be an operator's best guess, informed by any available physical measurements, geographic information system (GIS) data, etc.
(b) Determine a mean (average) of the inliers, and consider this mean the new stop line position estimate.
(c) Repeat steps (a) and (b) until the method converges (e.g., a 0.0001 delta between successive estimates) or a threshold number of iterations of steps (a) and (b) has been reached (e.g., 100 iterations). Typically, the method should converge within around 10 iterations. After convergence or reaching the iteration threshold, a final estimate of the stop line boundary position is obtained. A small offset can be applied, as desired.
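The iterative trimming of steps (a)-(c) can be sketched as follows; the function name, tolerance defaults, and sample data are illustrative, not taken from the specification:

```python
def trimmed_mean_estimate(samples, initial_guess, tol=1e-4, max_iters=100):
    """Iteratively keep the 50% of samples nearest the current estimate
    (inliers) and average them, per steps (a)-(c) above."""
    estimate = initial_guess
    for _ in range(max_iters):
        # (a) keep the half of the samples closest to the current estimate
        inliers = sorted(samples, key=lambda s: abs(s - estimate))
        inliers = inliers[:max(1, len(samples) // 2)]
        # (b) the mean of the inliers becomes the new estimate
        new_estimate = sum(inliers) / len(inliers)
        if abs(new_estimate - estimate) < tol:  # (c) converged
            return new_estimate
        estimate = new_estimate
    return estimate
```

Applied to the y-axis positions of the nearest stopped object from each sensing cycle, the routine discards the stray stops (outliers several meters off) and converges on the dominant cluster.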
It is generally necessary to provide orientation information to the system 32 or 32′ to allow suitable recognition of the orientation of the sensors of the hybrid sensor assembly 34 relative to the roadway 30 desired to be sensed. Possible methods for determining orientation angles are illustrated in FIGS. 11A, 11B and 11C. FIG. 11A is a view of a normalization display 150 for one form of sensor orientation detection and normalization. As shown in the illustrated embodiment of FIG. 11A, a radar output (e.g., of the first sensor 40) is provided in a first field of view 34-1 for four lanes of traffic L1 to L4 of the roadway 30. Numerous objects 152 (e.g., vehicles) are detected in the field of view 34-1, and a movement vector 152-1 is provided for each detected object. It should be noted that it is well-known for radar sensor systems to provide vector outputs for detected moving objects. By viewing the display 150 (e.g., with the GUI 102), an operator can adjust an orientation of the first sensor 40 recognized by the system 32 or 32′ such that vectors 152-1 substantially align with the lanes of traffic L1 to L4. Lines designating lanes of traffic L1 to L4 can be manually drawn by an operator (see FIG. 10). This approach assumes that sensed objects travel substantially parallel to lanes of the roadway 30. Operator skill can account for any outliers or artifacts in data used for this process.
FIG. 11B is a view of another normalization display 150′ for another form of sensor orientation detection and normalization. In the embodiment illustrated in FIG. 11B, the display 150′ is a video overlay of image data from the second sensor 42 (e.g., machine vision device) with bounding boxes 154-1 of objects detected with the first sensor 40 (e.g., radar). An operator can view the display 150′ to assess and adjust alignment between the bounding boxes 154-1 and depictions of objects 154-2 visible on the display 150′. Operator skill can be used to address any outliers or artifacts in data used for this process.
FIG. 11C is a view of yet another normalization display 150″ for another form of sensor orientation detection and normalization. In the embodiment illustrated in FIG. 11C, an automated or semi-automated procedure allows sensor orientation determination and normalization. The procedure can proceed as follows. First, sensor data of vehicle traffic is recorded for a given period of time (e.g., 10-20 minutes), and saved. An operator then opens the display 150″ (e.g., part of the GUI 102), and accesses the saved sensor data. The operator enters an initial normalization guess in block 156 for a given sensor (e.g., the first sensor 40, which can be a radar), which can include a guess as to azimuth angle θ, stop line position and lane boundaries. These guesses can be informed by physical measurements, or alternatively using engineering/technical drawings or distance measurement tools of electronic GIS tools, such as GOOGLE MAPS, available from Google, Inc., Mountain View, Calif., or BING MAPS, available from Microsoft Corp. The azimuth angle θ guess can match the applicable sensor's setting at the time of the recording. The operator can then request that the system take the recorded data and the initial guesses and compute the most likely normalization. Results can be visually displayed, with object tracks 158-1, lane boundaries 158-2, stop line 158-3, the sensor position 158-4 (located at origin of distance graph) and field of view 158-5. The operator can visually assess the automatic normalization, and can make any desired changes in the results block 159, with the plot refreshing after adjustment. This feature allows manual fine-tuning of the automated results.
Steps of the auto-normalization algorithm can be as described in the following embodiment. The azimuth angle θ is estimated first. Once the azimuth angle θ is known, the object coordinates for the associated sensor (e.g., the first sensor 40) can be rotated so that axes of the associated coordinate system align parallel and perpendicular to the traffic direction. This rotation simplifies estimation of the stop line and lane boundaries. As a starting point, the sensor coordinates can be rotated as a function of the azimuth angle θ the user entered as an initial guess. The azimuth angle θ is then computed by finding an average direction of travel of the objects (e.g., vehicles) in the sensor's field of view. It is assumed that on average objects will travel parallel to lane lines. Of course, vehicles executing turning maneuvers or changing lanes will violate this assumption. Those types of vehicles produce outliers in the sample set that must be removed. Several different methods are employed to filter outliers. As an initial filter, all objects with speed less than a given threshold (e.g., approximately 24 km/hr or 15 mi/hr) can be removed. Those objects are considered more likely to be turning vehicles or otherwise not traveling parallel to lane lines. Also, any objects at a distance outside of approximately 5 to 35 meters past the stop line are removed; objects in this middle zone are considered the most reliable candidates to be accurately tracked while travelling within the lanes of the roadway 30. Because the stop line location is not yet known, the operator's guess can be used at this point. Now, using this filtered dataset, an angle of travel for each tracked object is computed by taking the arctangent of the associated x and y velocity components. An average angle of all the filtered, tracked objects produces an azimuth angle θ estimate. However, at this point, outliers could still be skewing the result. A second outlier removal step can now be employed as follows:
(a) Take a middle 50% of samples nearest the azimuth angle θ estimate (inliers), and discard the other 50% of points (outliers);
(b) Take the mean of the inliers, and consider this the new azimuth angle θ estimate; and
(c) Repeat steps (a) and (b) until the method converges (e.g., a 0.0001 delta between successive estimates) or a threshold number of iterations of steps (a) and (b) has been reached (e.g., 100 iterations). Typically, this method should converge within around 10 iterations. After converging or reaching the iteration threshold, the final azimuth angle θ estimate is obtained. This convergence can be graphically represented as a histogram, if desired.
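The azimuth estimation described above, including the initial speed and distance filters and the iterative trimming of steps (a)-(c), might be sketched as follows. All names, the track tuple layout (x, y in meters; vx, vy in m/s), and the assumption that the y axis runs along the approach are illustrative, not from the specification:

```python
import math

def estimate_azimuth(tracks, stop_line_guess, min_speed_kmh=24.0,
                     zone=(5.0, 35.0), tol=1e-4, max_iters=100):
    """Estimate the azimuth angle (degrees, relative to the lane/y axis)
    from tracked-object velocities, assuming objects travel parallel to
    lane lines. `tracks` holds (x, y, vx, vy) tuples."""
    angles = []
    for x, y, vx, vy in tracks:
        speed_kmh = math.hypot(vx, vy) * 3.6        # m/s -> km/hr
        dist_past_stop = y - stop_line_guess
        # Initial filters: drop slow (likely turning) objects and objects
        # outside the most reliable 5-35 m zone past the stop line guess.
        if speed_kmh < min_speed_kmh or not (zone[0] <= dist_past_stop <= zone[1]):
            continue
        angles.append(math.degrees(math.atan2(vx, vy)))  # angle of travel
    if not angles:
        raise ValueError("no tracks survived filtering")
    # Iterative trimmed mean removes remaining outliers (lane changers, etc.).
    estimate = sum(angles) / len(angles)
    for _ in range(max_iters):
        inliers = sorted(angles, key=lambda a: abs(a - estimate))
        inliers = inliers[:max(1, len(angles) // 2)]
        new_estimate = sum(inliers) / len(inliers)
        if abs(new_estimate - estimate) < tol:
            break
        estimate = new_estimate
    return estimate
```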
FIGS. 12A-12E are graphs of lane boundary estimates for an alternative embodiment of a method of automatic or semi-automatic lane boundary establishment or adjustment. In general, this embodiment assumes objects (e.g., vehicles) will travel in approximately a center of the lanes of the roadway 30, and involves an effort to reduce or minimize an average distance to the nearest lane center for each object. A user's initial guess is used as a starting point for the lane centers (including the number of lanes), and then small shifts are tested to see if they give a better result. It is possible to leave lane widths constant at the user's guesses (which can be based on physical measurements), and only horizontal shifts of lane locations applied. A search window of +/−2 meters can be used, with 0.1 meter lane shift increments. For each search position, lane boundaries are shifted by the offset, then an average distance to center of lane is computed for all vehicles in each lane (this can be called an “average error” of the lane). After trying all possible offsets, the average errors for each lane can be normalized by dividing by a minimum average error for that lane over all possible offsets. This normalization provides a weighting mechanism that increases a weight assigned to lanes where a good fit to vehicle paths is found and reduces the weight of lanes with more noisy data. Then the normalized average errors of all lanes can be added together for each offset, as shown in FIG. 12E. The offset giving a lowest total normalized average error (designated by line 170 in FIG. 12E) can be taken as the best estimate. The user's initial guess, adjusted by the best estimate offset, can be used to establish the lane boundaries for the system 32 or 32′. As noted already, in this embodiment, a single offset for all lanes is used to shift all lanes together, rather than to adjust individual lane sizes to provide for different shifts between different lanes.
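The offset search described above can be sketched as follows; the data layout (observed x positions grouped per lane) and the guard against a zero minimum error are assumptions for illustration:

```python
def best_lane_offset(vehicle_xs_by_lane, lane_centers, window=2.0, step=0.1):
    """Search horizontal lane-shift offsets (+/- `window` meters in `step`
    increments) for the one minimizing the total normalized average
    distance-to-lane-center. A single offset shifts all lanes together."""
    n_steps = int(round(2 * window / step)) + 1
    offsets = [-window + k * step for k in range(n_steps)]
    # Average error (mean distance to shifted lane center) per lane,
    # for each candidate offset.
    errors = []
    for off in offsets:
        errors.append([
            sum(abs(x - (c + off)) for x in xs) / len(xs)
            for xs, c in zip(vehicle_xs_by_lane, lane_centers)
        ])
    # Normalize each lane's errors by that lane's minimum over all offsets,
    # weighting well-fitting lanes more than noisy ones.
    n_lanes = len(lane_centers)
    mins = [max(min(e[i] for e in errors), 1e-9) for i in range(n_lanes)]
    totals = [sum(e[i] / mins[i] for i in range(n_lanes)) for e in errors]
    return offsets[totals.index(min(totals))]
```

Adding the user's initial lane-center guesses to the returned offset gives the adjusted lane boundaries.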
FIG. 13 is a view of a calibration display interface 180 for establishing detection zones, which can be implemented via the detector editor 106. Generally speaking, detection zones are areas of a roadway in which the presence of an object (e.g., vehicle) is desired to be detected by the system 32 or 32′. Many different types of detectors are possible, and the particular number or types employed for a given application can vary as desired. The display 180 can include a menu or toolbar 182 for providing a user with tools for designating detectors with respect to the roadway 30. In the illustrated embodiment, the roadway 30 is illustrated adjacent to the toolbar 182 based upon machine vision sensor data. Detector zones, such as stop line detectors 184 and speed detectors 186 are defined relative to desired locations. Furthermore, other information icons 188 can be selected for display, such as signal state indicators. The display interface 180 allows detectors and related system parameters to be set that are used during normal operation of the system 32 or 32′ for traffic sensing. Configuration of detector zones can be conducted independent from the normalization process described above. The configuration of detection zones can occur in pixel/image space and is generally not reliant on the presence of vehicle traffic. Configuration of detection zones can occur after the coordinate systems for multiple sensors are normalized.
FIG. 14 is a view of an operational display 190 of the traffic sensing system 32 or 32′, showing an example comparison of detections from two different sensor modalities (e.g., the first and second sensors 40 and 42) in a video overlay (i.e., graphics are overlaid on a machine vision sensor video output). In the illustrated embodiment, detectors 184A to 184D are provided, one in each of four lanes of the illustrated roadway 30. A legend 192 is provided in the illustrated embodiment to indicate whether no detections are made (“both off”), only a first sensor makes a detection (“radar on”), only a second sensor makes a detection (“machine vision on”), or whether both sensors make a detection. As shown, vehicles 194 have triggered detections for detectors 184B and 184D for both sensors, while the machine vision sensor has triggered a “false” detection for detector 184A based on the presence of pedestrians 196 traveling in a cross-lane direction perpendicular to the direction of the approach 38, who did not trigger the other sensor (radar). The illustration of FIG. 14 shows how different sensing modalities can operate differently under given conditions.
As already noted, the present invention allows for switching between different sensors or sensing modalities based upon operating conditions at the roadway and/or type of detection. In one embodiment, the traffic sensing system 32 or 32′ can be configured as a gross switching system in which multiple sensors run simultaneously (i.e., operate simultaneously to sense data) but with only one sensor being selected at any given time for detection state analysis. The HDSMs 90-1 to 90-n carry out logical operations based on the type of sensor being used, taking into account the type of detection.
One embodiment of a sensor switching approach is summarized in Table 1, which applies to post-processed data from the sensors 40-1 to 40-n and 42-1 to 42-n from the hybrid sensor assemblies 34. A final output of any sensor subsystem can simply be passed through on a go/no-go basis to provide a final detection decision. This is in contrast to a data fusion approach that makes detection decisions based upon fused data from all of the sensors. The inventors have developed rules in Table 1 based on comparative field-testing between machine vision and radar sensing, and discoveries as to beneficial uses and switching logic. All the rules of Table 1 assume use of a radar deployed for detection up to 50 m after (i.e., upstream from) a stop line and then machine vision is relied upon past that 50 m region. Other rules can be applied under different configuration assumptions. For example, with a narrower radar antenna field of view, the radar could be relied upon at relatively longer ranges than machine vision.
FIG. 15 is a flow chart illustrating an embodiment of a method of sensor modality selection, that is, sensor switching, for use with the traffic sensing system 32 or 32′. Initially, a new frame is started, representing newly acquired sensor data from all available sensing modalities for a given hybrid sensor assembly 34 (step 200). A check for radar (or other first sensor) failure is performed (step 202). If a failure is recognized at step 202, another check for video (or other second sensor) failure is performed (step 204). If all sensors have failed, the system 32 or 32′ can be placed in a global failsafe mode (step 206). If the video (or other second sensor) is still operational, the system 32 or 32′ can enter a video-only mode (step 208). If there is no failure at step 202, another check for video (or other second sensor) failure is performed (step 210). If the video (or other second sensor) has failed, the system 32 or 32′ can enter a radar-only mode (step 212). In radar-only mode, a check of detector distance from the radar sensor (i.e., the hybrid sensor assembly 34) is performed (step 214). If the detector is outside the radar beam, a failsafe mode for radar can be entered (step 216), or if the detector is inside the radar beam then radar-based detection can begin (step 218).
If all of the sensors are working (i.e., none have failed), the system 32 or 32′ can enter a hybrid detection mode that can take advantage of sensor data from all sensors (step 220). A check of detector distance from the radar sensor (i.e., the hybrid sensor assembly 34) is performed (step 222). Here, detector distance can refer to a location and distance of a given detector defined within a sensor field of view in relation to a given sensor. If the detector is outside the radar beam, the system 32 or 32′ can use only video sensor data for the detector (step 224), or if the detector is inside the radar beam then a hybrid detection decision can be made (step 226). Time of day is determined (step 228). During daytime, a hybrid daytime processing mode (see FIG. 16) is entered (step 230), and during nighttime, a hybrid nighttime processing mode (see FIG. 17) is entered (step 232).
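The per-frame flow of FIG. 15 reduces to a small decision function; the mode labels returned here are illustrative names, not terms from the specification:

```python
def select_mode(radar_ok, video_ok, detector_in_radar_beam, daytime):
    """Per-frame sensor-mode selection following the FIG. 15 flow."""
    if not radar_ok:
        # Steps 202-208: radar failed; fall back to video or go failsafe.
        return "video_only" if video_ok else "global_failsafe"
    if not video_ok:
        # Steps 210-218: radar-only mode; detectors outside the beam
        # are placed in a radar failsafe state.
        return "radar_detection" if detector_in_radar_beam else "radar_failsafe"
    if not detector_in_radar_beam:
        # Steps 220-224: hybrid mode, but this detector uses video only.
        return "video_for_detector"
    # Steps 226-232: hybrid decision, branched by time of day.
    return "hybrid_daytime" if daytime else "hybrid_nighttime"
```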
The process described above with respect to FIG. 15 can be performed for each frame analyzed. The system 32 or 32′ can return to step 200 for each new frame of sensor data analyzed. It should be noted that although the disclosed embodiment refers to machine vision (video) and radar sensors, the same method can be applied to systems using other types of sensing modalities. Moreover, those of ordinary skill in the art will appreciate the disclosed method can be extended to systems with more than two sensors. It should further be noted that sensor modality switching can be performed across an entire common, overlapping field of view of the associated sensors, or can be localized for switching of sensor modalities for one or more portions of the common, overlapping field of view. In the latter embodiment, different switching decisions can be made for different portions of the common, overlapping field of view, such as to make different switching decisions for different detector types, different lanes, etc.
FIG. 16 is a flow chart illustrating an embodiment of a method of daytime image processing for use with the traffic sensing system 32 or 32′. The method illustrated in FIG. 16 can be used at step 230 of FIG. 15.
For each new frame (step 300), a global contrast detector, which can be a feature of a machine vision system, can be checked (step 302). If contrast is poor (i.e., low), then the system 32 or 32′ can rely on radar data only for detection (step 304). If contrast is good, that is, sufficient for machine vision system performance, then a check is performed for ice and/or snow buildup on the radar (i.e., radome) (step 306). If there is ice or snow buildup, the system 32 or 32′ can rely on machine vision data only for detection (step 308).
If there is no ice or snow buildup on the radar, then a check can be performed to determine if rain is present (step 309). This rain check can utilize input from any available sensor. If no rain is detected, then a check can be performed to determine if shadows are possible or likely (step 310). This check can involve a sun angle calculation or use any other suitable method, such as any described below. If shadows are possible, a check is performed to verify if strong shadows are observed (step 312). If shadows are not possible or likely, or if no strong shadows are observed, then a check is performed for wet road conditions (step 314). If there is no wet road condition, a check can be performed for a lane being susceptible to occlusion (step 316). If there is no susceptibility to occlusion, the system 32 or 32′ can rely on machine vision data only for detection (step 308). In this way, machine vision can act as a default sensing modality for daytime detection. If rain, strong shadows, wet road, or lane occlusion conditions exist, then a check can be performed for traffic density and speed (step 318). For slow moving and congested conditions, the system 32 or 32′ can rely on machine vision data only (go to step 308). For light or moderate traffic density and normal traffic speeds, a hybrid detection decision can be made (step 320).
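A sketch of the FIG. 16 daytime branch logic, with boolean inputs standing in for the individual checks (all names are illustrative assumptions):

```python
def daytime_decision(contrast_good, radar_iced, rain, shadows_possible,
                     strong_shadows, wet_road, occlusion_prone, congested):
    """Daytime modality selection following the FIG. 16 flow:
    returns 'radar', 'video', or 'hybrid'."""
    if not contrast_good:
        return "radar"              # step 304: low contrast, radar only
    if radar_iced:
        return "video"              # step 308: ice/snow on radome
    # Shadows matter only if possible AND actually observed (steps 310-312).
    shadow_issue = shadows_possible and strong_shadows
    if rain or shadow_issue or wet_road or occlusion_prone:
        # Step 318: degraded-vision condition; congested traffic favors
        # video only, otherwise make a hybrid decision (step 320).
        return "video" if congested else "hybrid"
    return "video"                  # step 308: default daytime modality
```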
FIG. 17 is a flow chart illustrating an embodiment of a method of nighttime image processing for use with the traffic sensing system 32 or 32′. The method illustrated in FIG. 17 can be used at step 232 of FIG. 15.
For each new frame (step 400), a check is performed for ice or snow buildup on the radar (i.e., radome) (step 402). If ice or snow buildup is present, the system 32 or 32′ can rely on machine vision data only for detection (step 404). If no ice or snow buildup is present, the system 32 or 32′ can rely on the radar for detection (step 406). When radar is used for detection, machine vision can be used for validation or other purposes as well in some embodiments, such as to provide more refined switching.
Examples of possible ways to measure various conditions at the roadway 30 are summarized in Table 2, and are described further below. It should be noted that although the examples given in Table 2 and accompanying description generally focus on machine vision and radar sensing modalities, other approaches can be used in conjunction with other types of sensing modalities (LIDAR, etc.), whether explicitly mentioned or not.
A strong shadows condition generally occurs during daytime when the sun is at such an angle that objects (e.g., vehicles) cast dynamic shadows on a roadway extending significantly outside of the object body. Shadows can cause false alarms with machine vision sensors. Also, applying shadow false alarm filters to machine vision systems can have an undesired side effect of causing missed detections of dark objects. Shadows generally produce no performance degradation for radars.
A multitude of methods to detect shadows with machine vision are known, and can be employed in the present context as will be understood by a person of ordinary skill in the art. Candidate techniques include spatial and temporal edge content analysis, uniform biasing of background intensity, and identification of spatially connected inter-lane objects.
One can also exploit information from multiple sensor modalities to identify detection characteristics. Such methods can include analysis of vision versus radar detection reports. If shadow condition is such that vision-based detection results in a high quantity of false detections, an analysis of vision detection to radar detection count differentials can indicate a shadow condition. Presence of shadows can also be predicted through knowledge of a machine vision sensor's compass direction, latitude/longitude, and date/time, and use of those inputs in a geometrical calculation to find the sun's angle in the sky and to predict if strong shadows will be observed.
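The geometrical sun-angle prediction can be sketched with the standard declination/hour-angle approximation; the elevation thresholds defining "strong shadows" below are illustrative assumptions, not values from the specification:

```python
import math

def sun_elevation_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle (degrees) from latitude, day of
    year, and local solar time in hours (12 = solar noon), using the
    standard declination and hour-angle formulas."""
    decl = 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = 15.0 * (solar_hour - 12.0)          # degrees from noon
    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(d)
                + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.asin(sin_elev))

def strong_shadows_likely(lat_deg, day_of_year, solar_hour,
                          low=5.0, high=40.0):
    """A low-but-above-horizon sun casts long shadows; the `low`/`high`
    elevation thresholds are assumed for illustration."""
    elev = sun_elevation_deg(lat_deg, day_of_year, solar_hour)
    return low < elev < high
```

In practice the machine vision sensor's compass direction would also be used, to predict which way the shadows fall relative to the lanes.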
Radar can be used exclusively when strong shadows are present (assuming the presence of shadows can reliably be detected) in a preferred embodiment. Numerous alternative switching mechanisms can be employed for strong shadow handling, in alternative embodiments. For example, a machine vision detection algorithm can instead assign a confidence level indicating the likelihood that a detected object is a shadow or a real object. Radar can be used as a false alarm filter when video detection has low confidence that the detected object is a real object and not a shadow. Alternatively, radar can provide a number of radar targets detected in each detector's detection zone (radar targets are typically instantaneous detections of moving objects, which are clustered over time to form radar objects). A target count is an additional parameter that can be used in the machine vision sensor's shadow processing. In a further alternative embodiment, inter-lane communication can be used, under the assumption that a shadow must have an associated shadow-casting object nearby. Moreover, in yet another embodiment, if machine vision is known to have a bad background estimate, radar can be used exclusively.
A nighttime condition generally occurs when the sun is sufficiently far below the horizon so that the scene (i.e., roadway area at which traffic is being sensed) becomes dark. For machine vision systems alone, the body of objects (e.g., vehicles) becomes harder to see at nighttime, and primarily just vehicle headlights and headlight reflections on the roadway (headlight splash) stand out to vision detectors. Positive detection generally remains high (unless the vehicle's headlights are off). However, headlight splash often causes an undesirable increase in false alarms and early detector actuations. The presence of nighttime conditions can be predicted through knowledge of the latitude/longitude and date/time for the installation location of the system 32 or 32′. These inputs can be used in a geometrical calculation to find when the sun drops below a threshold angle relative to a horizon.
Radar can be used exclusively during nighttime, in one embodiment. In an alternative embodiment, radar can be used to detect vehicle arrival, and machine vision can be used to monitor stopped objects, therefore helping to limit false alarms.
Rain and wet road conditions generally include periods during rainfall, and after rainfall while the road is still wet. Rain can be categorized by a rate of precipitation. For machine vision systems, rain and wet road conditions are typically similar to nighttime conditions: a darkened scene with vehicle headlights on and many light reflections visible on the roadway. In one embodiment, rain/wet road conditions can be detected based upon analysis of machine vision versus radar detection time, where an increased time difference is an indication that headlight splash is activating machine vision detection early. In an alternative embodiment, a separate rain sensor 87 (e.g., piezoelectric or other type) is monitored to identify when a rain event has taken place. In still further embodiments, rain can be detected through machine vision processing, by looking for actual raindrops or optical distortions caused by the rain. Wet road can be detected through machine vision processing by measuring the size, intensity, and edge strength of headlight reflections on the roadway (all of these factors should increase while the road is wet). Radar can detect rain by observing changes in the radar signal return (e.g., increased noise, reduced reflection strength from true vehicles). In addition, rain could be identified through receiving local weather data over an Internet, radio or other link.
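The detection-time analysis mentioned first above might be sketched as follows; the pairing of per-vehicle detection onset times and the threshold value are assumptions for illustration:

```python
def headlight_splash_suspected(vision_onsets, radar_onsets, threshold_s=0.5):
    """Compare per-vehicle detection onset times (seconds), paired per
    matched vehicle. If machine vision consistently fires earlier than
    radar by more than `threshold_s`, headlight splash on a wet road is
    likely triggering early vision detections."""
    if not vision_onsets:
        return False
    # Average lead of vision over radar across matched vehicles.
    lead = sum(r - v for v, r in zip(vision_onsets, radar_onsets)) / len(vision_onsets)
    return lead > threshold_s
```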
In a preferred embodiment, when a wet road condition is recognized, the radar detection can be used exclusively. In an alternative embodiment, when rain exceeds a threshold level (e.g., reliability threshold), machine vision can be used exclusively, and when rain is below the threshold level but the road is wet, radar can be weighted more heavily to reduce false alarms, and switching mechanisms described above with respect to nighttime conditions can be used.
Occlusion refers generally to an object (e.g., vehicle) partially or fully blocking a line of sight from a sensor to a farther-away object. Machine vision may be susceptible to occlusion false alarms, and may have problems with occlusions falsely turning on detectors in adjacent lanes. Radar is much less susceptible to occlusion false alarms. Like machine vision, though, radar will likely miss vehicles that are fully or near fully occluded.
The possibility for occlusion can be determined through geometrical reasoning. Positions and angles of detectors, and a sensor's position, height H, and orientation, can be used to assess whether occlusion would be likely. Also, the extent of occlusion can be predicted by assuming an average vehicle size and height.
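The similar-triangles reasoning can be made concrete as follows; the assumed average vehicle height and the flat-roadway assumption are illustrative:

```python
def occlusion_shadow_length(sensor_height_m, vehicle_dist_m,
                            vehicle_height_m=1.5):
    """Length of roadway hidden behind a vehicle, by similar triangles:
    the line of sight from a sensor at height H over a vehicle of height
    h at ground distance d reaches the road again at d * H / (H - h).
    Assumes H > h and a flat roadway."""
    H, d, h = sensor_height_m, vehicle_dist_m, vehicle_height_m
    return d * H / (H - h) - d
```

For example, a sensor mounted at 6 m looking over a 1.5 m vehicle 30 m away cannot see the next 10 m of roadway behind that vehicle, so detectors in that interval are occlusion candidates.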
In one embodiment, radar can be used exclusively in lanes where occlusion is likely. In another embodiment, radar can be used as a false alarm filter when machine vision thinks an occlusion is present. Machine vision can assign occluding-occluded lane pairs, then when machine vision finds a possible occlusion and matching occluding object, the system can check radar to verify whether the radar only detects an object in the occluding lane. Furthermore, in another embodiment, radar can be used to address a problem of cross traffic false alarms for machine vision.
Low contrast conditions generally exist when there is a lack of strong visual edges in a machine vision image. A low contrast condition can be caused by factors such as fog, haze, smoke, snow, ice, rain, or loss of video signal. Machine vision detectors occasionally lose the ability to detect vehicles in low-contrast conditions. Machine vision systems can have the ability to detect low contrast conditions and force detectors into a failsafe always-on state, though this presents traffic flow inefficiency at an intersection. Radar should be largely unaffected by low-contrast conditions. The only exception for radar low contrast performance is heavy rain or snow, and especially snow buildup on a radome of the radar; the radar may miss objects in those conditions. It is possible to use an external heater to prevent snow buildup on the radome.
Machine vision systems can detect low-contrast conditions by looking for a loss of visibility of strong visual edges in a sensed image, in a known manner. Radar can be relied upon exclusively in low contrast conditions. In certain weather conditions where the radar may not perform adequately, those conditions can be detected and detectors placed in a failsafe state rather than relying on the impaired radar input, in further embodiments.
Sensor failure generally refers to a complete dropout of the ability to detect for a machine vision, radar or any other sensing modality. It can also encompass partial sensor failure. A sensor failure condition may occur due to user error, power outage, wiring failure, component failure, interference, software hang-up, physical obstruction of the sensor, or other causes. In many cases, the sensor affected by sensor failure can self-diagnose its own failure and provide an error flag. In other cases, the sensor may appear to be running normally, but produce no reasonable detections. Radar and machine vision detection counts can be compared over time to detect these cases. If one of the sensors has far fewer detections than the other, that is a warning sign that the sensor with fewer detections may not be operating properly. If only one sensor fails, the working (i.e., non-failed) sensor can be relied upon exclusively. If both sensors fail, usually nothing can be done with respect to switching, and outputs can be set to a fail-safe, always on, state.
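The detection-count comparison might be sketched as follows; the ratio and minimum-sample thresholds are illustrative assumptions:

```python
def suspect_failed_sensor(radar_count, vision_count, ratio=0.2, min_total=50):
    """Flag a sensor reporting far fewer detections than its peer over the
    same interval. Returns 'radar', 'vision', or None. `ratio` and
    `min_total` are assumed tuning values."""
    total = radar_count + vision_count
    if total < min_total:
        return None                 # too little data to judge either sensor
    if radar_count < ratio * vision_count:
        return "radar"              # radar sees far fewer objects than vision
    if vision_count < ratio * radar_count:
        return "vision"             # vision sees far fewer objects than radar
    return None
```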
Traffic density generally refers to the rate of vehicles passing through an intersection or other area where traffic is being sensed. Machine vision detectors are not greatly affected by traffic density. There are an increased number of sources of shadows, headlight splash, or occlusions in high traffic density conditions, which could potentially increase false alarms. However, there is also less practical opportunity for false alarms during high traffic density conditions because detectors are more likely to be occupied by a real object (e.g., vehicle). Radar generally experiences reduced performance in heavy traffic, and is more likely to miss objects in heavy traffic conditions. Traffic density can be measured by common traffic engineering statistics like volume, occupancy, or flow rate. These statistics can easily be derived from radar, video, or other detections. In one embodiment, machine vision can be relied upon exclusively when traffic density exceeds a threshold.
Distance generally refers to real-world distance from the sensor to the detector (e.g., distance to the stop line DS). Machine vision has decent positive detection even at relatively large distances. Maximum machine vision detection range depends on camera angle, lens zoom, and mounting height H, and is limited by low resolution in a far-field range. Machine vision usually cannot reliably measure vehicle distances or speeds in the far-field, though certain types of false alarms actually become less of a problem in the far-field because the viewing angle becomes nearly parallel to the roadway, limiting visibility of optical effects on the roadway. Radar positive detection falls off sharply with distance. The rate of drop-off depends upon the elevation angle β and mounting height of the radar sensor. For example, a radar may experience poor positive detection rates at distances significantly below a rated maximum vehicle detection range. The distance of each detector from the sensor can be readily determined through the system's 32 or 32′ calibration and normalization data. The system 32 or 32′ will know the real-world distance to all corners of the detectors. Machine vision can be relied on exclusively when detectors exceed a maximum threshold distance to the radar. This threshold can be adjusted based on the mounting height H and elevation angle β of the radar.
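The distance-based switching logic above can be sketched as follows, assuming a simplified effective-range model in which the radar beam center intersects the roadway at `H / tan(β)`. The function names, the factor of 2, and the default rated range are illustrative assumptions, not values taken from the disclosure:

```python
import math

def radar_range_limit(mount_height_m, elevation_deg, rated_range_m):
    """Hypothetical effective-range model: the beam center reaches the
    road surface at H / tan(beta); allow detection out to twice that
    reach, capped at the radar's rated maximum range."""
    beam_reach = mount_height_m / math.tan(math.radians(elevation_deg))
    return min(2.0 * beam_reach, rated_range_m)

def use_vision_only(detector_distance_m, mount_height_m, elevation_deg,
                    rated_range_m=200.0):
    """True when the detector lies beyond the radar's effective range,
    so machine vision should be relied upon exclusively."""
    return detector_distance_m > radar_range_limit(
        mount_height_m, elevation_deg, rated_range_m)
```

The threshold thus adjusts automatically with the mounting height H and elevation angle β, as the text describes.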
Speed generally refers to a speed of the object(s) being sensed. Machine vision is not greatly affected by vehicle speed. Radar is more reliable at detecting moving vehicles because it generally relies on the Doppler effect. Radar is usually not capable of detecting slow-moving or stopped objects (below approximately 4 km/hr or 2.5 mi/hr). Missing stopped objects is less than optimal, as it could lead an associated traffic controller 86 to delay switching traffic lights to service a roadway approach 38, delaying or stranding drivers. Radar provides speed measurements each frame for each sensed/tracked object. Machine vision can also measure speeds using a known speed detector. Either or both mechanisms can be utilized as desired. Machine vision can be used for stopped vehicle detection, and radar can be used for moving vehicle detection. This can limit false alarms for moving vehicles, and limit missed detections of stopped vehicles.
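The stopped/moving split described above can be sketched as a simple per-object arbitration rule. The approximately 4 km/hr figure comes from the text; the helper name and structure are hypothetical:

```python
RADAR_MIN_SPEED_KPH = 4.0  # below this, Doppler radar typically misses objects

def fused_detection(vision_detect, radar_detect, radar_speed_kph):
    """Use radar for moving objects and machine vision for slow or
    stopped ones, limiting false alarms for moving vehicles and missed
    detections of stopped vehicles."""
    if radar_speed_kph is not None and radar_speed_kph >= RADAR_MIN_SPEED_KPH:
        return radar_detect    # moving object: trust the radar detection
    return vision_detect       # stopped/slow (or no radar speed): trust vision
```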
Sensor movement refers to physical movement of a traffic sensor. There are two main types of sensor movement: vibrations, which are oscillatory movements, and shifts, which are a long-lasting change in the sensor's position. Movement can be caused by a variety of factors, such as wind, passing traffic, bending or arching of supporting infrastructure, or bumping of the sensor. Machine vision sensor movement can cause misalignment of vision sensors with respect to established (i.e., fixed) detection zones, creating a potential for both false alarms and missed detections. Image stabilization onboard the machine vision camera, or afterwards in the video processing, can be used to lessen the impact of sensor movement. Radar may experience errors in its position estimates of objects when the radar is moved from its original position. This could cause both false alarms and missed detections. Radar may be less affected than machine vision by sensor movements. Machine vision can provide a camera movement detector that detects changes in the camera's position through machine vision processing. Also, or in the alternative, sensor movement of either the radar or machine vision device can be detected by comparing positions of radar-tracked vehicles to the known lane boundaries. If vehicle tracks don't consistently align with the lanes, then it is likely a sensor's position has been disturbed.
If only one sensor has moved, then the other sensor can be used exclusively. Because both sensors are linked to the same enclosure, it is likely both will move simultaneously. In that case, the least affected sensor can be weighted more heavily or even used exclusively. Any estimates of the motion as obtained from machine vision or radar data can be used to determine which sensor is most affected by the movement. Otherwise, radar can be used as the default when significant movement occurs. Alternatively, a motion estimate based on machine vision and radar data can be used to correct the detection results of both sensors, in an attempt to reverse the effects of the motion. For machine vision, this can be done by applying transformations to the image (e.g., translation, rotation, warping). With radar, it can involve transformations to the position estimate of vehicles (e.g., rotation only). Furthermore, if all sensors have moved significantly such that part of the area-of-interest is no longer visible, then affected detectors can be placed in a failsafe state (e.g., a detector turned on by default).
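The track-to-lane alignment check mentioned above can be illustrated as follows, assuming lane boundaries are expressed as lateral extents in road coordinates. The function name, coordinate convention, and 80% alignment fraction are illustrative assumptions:

```python
def tracks_aligned_with_lanes(track_points, lane_bounds, min_fraction=0.8):
    """Check whether radar-tracked vehicle positions fall inside known lanes.

    track_points: iterable of (x, y) positions, x being lateral road offset.
    lane_bounds: list of (x_min, x_max) lateral extents of each lane.
    Returns False when too few points align with any lane, suggesting a
    sensor's position has been disturbed.
    """
    points = list(track_points)
    if not points:
        return True  # nothing to judge from
    inside = sum(
        any(lo <= x <= hi for lo, hi in lane_bounds) for x, _ in points)
    return inside / len(points) >= min_fraction
```

A persistent failure of this check across many tracks, rather than a single outlier, would be the trigger for the movement-handling responses described above.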
Lane type generally refers to the type of the lane (e.g., thru-lane, turn-lane, or mixed use). Machine vision is usually not greatly affected by the lane type. Radar generally performs better than machine vision for thru-lanes. Lane type can be inferred from phase number or relative position of the lane to other lanes. Lane type can alternatively be explicitly defined by a user during initial system setup. Machine vision can be relied upon more heavily in turn lanes to limit misses of stopped objects waiting to turn. Radar can be relied upon more heavily in thru lanes.
The traffic sensing system 32 can provide improved performance over existing products that rely on video detection or radar alone. Improvements made possible with a hybrid system include improved vehicle classification accuracy, speed accuracy, stopped vehicle detection, wrong-way vehicle detection, vehicle tracking, cost savings, and setup, as well as improved positive detection and decreased false detection. Vehicle classification is difficult during nighttime and poor weather conditions because machine vision may have difficulty detecting vehicle features; however, radar is unaffected by most of these conditions and thus can generally improve upon basic classification accuracy during such conditions, despite known limitations of radar at measuring vehicle length. While one version of speed detector integration improves speed measurement through time of day, distance, and other approaches, another approach can further improve speed detection accuracy through a combination process that uses multiple modalities (e.g., machine vision and radar) simultaneously. For stopped vehicles, a “disappearing” vehicle in Doppler radar (even with tracking enabled) often occurs when an object (e.g., vehicle) slows to less than approximately 4 km/hr (2.5 mi/hr), though integration of machine vision and radar technology can help maintain detection until the object starts moving again, and can also provide the ability to detect stopped objects more accurately and quickly. For wrong-way objects (e.g., vehicles), the radar can easily determine via the Doppler effect if an object is traveling the wrong way (i.e., in the wrong direction on a one-way roadway), with a small probability of false alarm. Thus, when normal traffic is approaching from, for example, a one-way freeway exit, the system could provide an alert when a driver inadvertently drives the wrong way onto the freeway exit ramp.
For vehicle tracking through data fusion, the machine vision or radar outputs are chosen depending on lighting, weather, shadows, time of day, and other factors, enabling the HDM 90-1 to 90-n to map coordinates of radar objects into a common reference system (e.g., a universal coordinate system) in the form of post-algorithm decision logic. Increased system integration can help limit cost and improve performance. The cooperation of radar and machine vision while sharing common components such as the power supply, I/O, and DSP in further embodiments can help to reduce manufacturing costs further while enabling continued performance improvements. With respect to automatic setup and normalization, the user experience benefits from a relatively simple and intuitive setup and normalization process.
Any relative terms or terms of degree used herein, such as “substantially”, “approximately”, “essentially”, “generally” and the like, should be interpreted in accordance with and subject to any applicable definitions or limits expressly stated herein. In all instances, any relative terms or terms of degree used herein should be interpreted to broadly encompass any relevant disclosed embodiments as well as such ranges or variations as would be understood by a person of ordinary skill in the art in view of the entirety of the present disclosure, such as to encompass ordinary manufacturing tolerance variations, sensor sensitivity variations, incidental alignment variations, and the like.
While the invention has been described with reference to an exemplary embodiment(s), it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment(s) disclosed, but that the invention will include all embodiments falling within the scope of the appended claims. For example, features of various embodiments disclosed above can be used together in any suitable combination, as desired for particular applications.
"Two coupled cuts" ("deux coupes couplées") to analyse three-dimensional devices
In all fields, but especially in electricity, there is growing concern about the safety of users as well as of goods. This requires equipment that automatically switches off the power supply as soon as an electrical fault occurs in an installation. This protection is provided by differential devices, whose main component is a high-sensitivity polarized relay. Improving this protection requires complete control of the characteristics of the relay. However, the various parts of the component have tolerances which, during assembly, cause drifts in its electrical characteristics in proportions that remain to be determined. Controlling these drifts requires new tools to quantify the influential parameters and also to improve the electrical properties of the relay. The first tool is an analytical modelling technique that enables us to capture electromagnetic phenomena such as saturation, eddy currents, the polarization points of the magnet, and the influence of contact points between the fixed part and the moving one. The second is a numerical model using a finite-element representation, with an original method that couples two two-dimensional cuts to analyse three-dimensional devices.
1. A method for Real Time Kinematic satellite positioning comprising:
at a mobile receiver, receiving a plurality of navigation satellite carrier signals, wherein each navigation satellite carrier signal is associated with a navigation satellite from a plurality of navigation satellites;
at the mobile receiver, receiving a plurality of correction signals from a reference station, wherein each correction signal corresponds to a navigation satellite of the plurality of navigation satellites;
determining a phase and a pseudo-range for each of the navigation satellite carrier signals;
calculating a set of integer phase ambiguity hypotheses from the pseudo-range and the phase using a measurement equation that does not include a baseline vector, wherein calculating the set of integer phase ambiguity hypotheses comprises performing hypothesis testing on the set of integer phase ambiguity hypotheses, comprising: ceasing hypothesis testing when a ratio between probabilities of a highest and a next highest integer phase ambiguity hypothesis exceeds a threshold value, and generating a second set of integer phase ambiguity hypotheses in response to a hypothesis search space becoming smaller than a threshold value; and
at the mobile receiver, calculating a position of the mobile receiver from the set of integer phase ambiguity hypotheses and the double-differenced measurements of pseudo-range and phase.
2. The method of claim 1, wherein performing hypothesis testing further comprises removing an integer phase ambiguity hypothesis from further testing based on a pseudo-likelihood of the integer phase ambiguity hypothesis passing a removal threshold.
3. The method of claim 1, wherein calculating the set of integer phase ambiguity hypotheses further comprises performing a decorrelating reparameterization of the hypothesis search space.
4. The method of claim 1, wherein the probabilities are computed as single precision values.
5. The method of claim 1, wherein calculating the set of integer phase ambiguity hypotheses is performed at the mobile receiver.
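The termination criterion recited in claim 1 (and again in claim 14) — ceasing hypothesis testing once the ratio between the highest and next-highest hypothesis probabilities exceeds a threshold — can be illustrated as follows. This is an illustrative sketch, not the claimed implementation; the helper name and threshold are hypothetical, and the comparison is carried out in log space for numerical safety:

```python
import math

def ratio_test(hypothesis_probs, ratio_threshold=1e3):
    """Return True when testing can cease: the best integer-ambiguity
    hypothesis is at least ratio_threshold times more probable than the
    runner-up. Probabilities must be strictly positive."""
    if len(hypothesis_probs) < 2:
        return True  # nothing to compare against
    log_p = sorted((math.log(p) for p in hypothesis_probs), reverse=True)
    return (log_p[0] - log_p[1]) > math.log(ratio_threshold)
```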
6. A method for Real Time Kinematic satellite positioning comprising:
at a mobile receiver, receiving a plurality of navigation satellite carrier signals, wherein each navigation satellite carrier signal is associated with a navigation satellite from a plurality of navigation satellites;
receiving a plurality of correction signals from a reference station, wherein each correction signal corresponds to a navigation satellite of the plurality of navigation satellites;
determining a phase and a pseudo-range for each of the navigation satellite carrier signals;
generating a set of integer phase ambiguity hypotheses from the pseudo-ranges and the phases using a measurement equation that does not include a baseline vector; and
at the mobile receiver, calculating a position of the mobile receiver from the set of integer phase ambiguity hypotheses and the double-differenced measurements of pseudo-range and phase.
7. The method of claim 6, further comprising estimating a baseline vector based on the phase for each navigation satellite carrier signal and an integer phase ambiguity hypothesis from the set of integer phase ambiguity hypotheses, wherein the measurement equation includes the estimated baseline vector.
8. The method of claim 6, wherein generating the set of integer phase ambiguity hypotheses comprises generating the set of integer phase ambiguity hypotheses without a dynamic transition model.
9. The method of claim 6, further comprising testing a subset of the set of integer phase ambiguity hypotheses.
10. The method of claim 9, wherein testing the subset of integer phase ambiguity hypotheses further comprises testing the subset of integer phase ambiguity hypotheses using a Bayesian update algorithm.
11. The method of claim 9, wherein testing the subset of integer phase ambiguity hypotheses further comprises:
generating an updated subset of integer phase ambiguity hypotheses and testing the updated subset of integer phase ambiguity hypotheses.
12. The method of claim 11, wherein generating the updated subset comprises computing an outer product of the subset of integer phase ambiguity hypotheses and a second subset of integer phase ambiguity hypotheses, wherein the second subset of integer phase ambiguity hypotheses includes at least one integer phase ambiguity hypothesis corresponding to an additional navigation satellite, wherein the subset of integer phase ambiguity hypotheses does not include an integer phase ambiguity hypothesis corresponding to the additional navigation satellite.
13. The method of claim 12, further comprising initializing a probability of the updated subset based on a probability of the subset of integer phase ambiguity hypotheses and a number of integer phase ambiguity hypotheses in the updated subset.
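The hypothesis-set expansion of claims 12 and 13 — taking an outer product of the existing subset with candidate ambiguities for a newly tracked satellite, and initializing probabilities from the prior subset — can be sketched as follows. The representation (tuples of integers, even probability splitting) is an illustrative assumption:

```python
def expand_hypotheses(subset, new_sat_ambiguities, subset_probs):
    """Outer product of an existing integer-ambiguity hypothesis subset
    with candidate ambiguities for an additional navigation satellite.

    subset: list of tuples of integer ambiguities (one entry per satellite).
    new_sat_ambiguities: candidate integers for the new satellite.
    subset_probs: prior probability of each existing hypothesis.
    Each prior probability is split evenly over the new candidates so the
    total probability mass of the updated subset is preserved.
    """
    updated, probs = [], []
    for hyp, p in zip(subset, subset_probs):
        for n in new_sat_ambiguities:
            updated.append(hyp + (n,))
            probs.append(p / len(new_sat_ambiguities))
    return updated, probs
```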
14. The method of claim 9, further comprising:
ceasing testing of the subset of integer phase ambiguity hypotheses when a ratio between probabilities of a highest and a next-highest probability integer phase ambiguity hypothesis exceeds a threshold.
15. The method of claim 9, wherein testing the subset of integer phase ambiguity hypotheses comprises testing the subset of integer phase ambiguity hypotheses in logarithmic space.
16. The method of claim 9, wherein testing the subset of integer phase ambiguity hypotheses further comprises determining a probability for each integer phase ambiguity hypothesis in single precision.
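The log-space Bayesian update referenced in claims 10, 15, and 16 can be illustrated as follows. Working in log space avoids underflow when many small probabilities are multiplied across epochs; this sketch uses double precision for clarity, whereas claim 16 recites single precision:

```python
import math

def bayes_update_log(log_priors, log_likelihoods):
    """One Bayesian update over hypothesis log-probabilities.

    Returns normalized log-posteriors, using the log-sum-exp trick so
    the normalization constant is computed without underflow.
    """
    unnorm = [lp + ll for lp, ll in zip(log_priors, log_likelihoods)]
    m = max(unnorm)
    log_z = m + math.log(sum(math.exp(u - m) for u in unnorm))
    return [u - log_z for u in unnorm]
```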
17. The method of claim 6, wherein the pseudo-range and the phase comprise double differenced measurements of the pseudo-range and the phase.
18. The method of claim 6, wherein generating the set of integer phase ambiguity hypotheses comprises generating the set of integer phase ambiguity hypotheses using a Kalman filter.
19. The method of claim 6, wherein generating the set of integer phase ambiguity hypotheses comprises generating the set of integer phase ambiguity hypotheses using means and covariances generated by a Bierman-Thornton filter and at least one of a LAMBDA and an MLAMBDA algorithm.
20. The method of claim 6, further comprising reparametrizing the set of integer phase ambiguity hypotheses to decorrelate each integer phase ambiguity hypothesis.
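The decorrelating reparameterization of claims 3, 19, and 20 is typically built from integer Gauss transformations, the elementary step of the LAMBDA-style decorrelation. A minimal 2×2 sketch, assuming a symmetric ambiguity covariance matrix (the function name and plain-list matrix representation are illustrative):

```python
def decorrelate_2d(Q):
    """One integer Gauss transformation on a symmetric 2x2 ambiguity
    covariance Q = [[q11, q12], [q12, q22]].

    Returns (Z, Q') where Z is an integer unimodular matrix and
    Q' = Z Q Z^T has a reduced off-diagonal (less correlated) entry.
    """
    mu = round(Q[0][1] / Q[1][1])        # nearest-integer multiplier
    Z = [[1, -mu], [0, 1]]
    q11 = Q[0][0] - 2 * mu * Q[0][1] + mu * mu * Q[1][1]
    q12 = Q[0][1] - mu * Q[1][1]
    return Z, [[q11, q12], [q12, Q[1][1]]]
```

A full decorrelation alternates such transformations with permutations until no integer multiplier reduces any off-diagonal entry further.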