
How to make query task synchronous in ArcGIS JSAPI


I am using ArcGIS JS API version 3.6. I want to make a query task synchronous, so that while the query task is executing, the remaining code waits until it completes.

For example, I have shared my code here, which queries a layer.

Case 1: I execute the code directly and print the log to the console window, where you can see that the log appears before the query task output.

Case 2: In this case I added an alert (as a time delay) to the code, which gives the query task time to return before the value is read.

I have researched on Google and gone through similar questions.

Case 1 screenshot :

Case 2 screenshot :

Any help would be great! Thanks in advance. :)


It is exactly as @Devdatta Tengshe already said, so this is just to expand. Any code you want to execute only after the queryTask has returned simply goes in the callback function you've already defined. So you can just move your console.log statement in there, after you've filled the array:

queryTask.execute(query, function (featureSet) {
    for (var x = 0; x < featureSet.features.length; x++) {
        dataArray.push(featureSet.features[x].attributes.SYMBOLNAME);
    }
    console.log("dataArray==" + dataArray.length);
});

You can also define a second function (the third parameter to queryTask.execute) that is called if the queryTask returns an error instead of success:

queryTask.execute(query, function (featureSet) {
    // stuff to do on success
}, function (error) {
    // stuff to do on error
});

You don't have to declare the functions in the queryTask call if you want to organise your code differently. You can declare them in the main body of your JavaScript and then just call them by name:

// code to prepare your query task here…

// execute your query, passing it the callback functions
queryTask.execute(query, querySuccess, queryError);

function querySuccess(featureSet) {
    // stuff to do on success
}

function queryError(error) {
    // stuff to do on error
}

Supporting collaborative sense-making in emergency management through geo-visualization

In emergency management, collaborative decision-making usually involves collaborative sense-making of diverse information by a group of experts from different knowledge domains, and needs better tools to analyze role-specific information, share and synthesize relevant information, and remain aware of the activities of others. This paper presents our research on the design of a collaborative sense-making system to support team work. We propose a multi-view, role-based design to help team members analyze geo-spatial information, share and integrate critical information, and monitor individual activities. Our design uses coordinated maps and activity visualization to aid decision-making as well as group activity awareness. The paper discusses design rationale, iterative design of visualization tools, prototype implementation, and system evaluation. Our work can potentially improve and extend collaborative tasks in emergency management.

Highlights

► The paper presents the design of a collaborative system for emergency management. ► The work draws on our empirical work with emergency management professionals. ► The system visualizes information related to group analysis and group awareness.




Graphics processing units (GPUs) are massively parallel computing environments with applications in graphics and general purpose programming. This entry describes GPU hardware, application domains, and both graphics and general purpose programming languages.

Mower, J. (2018). GPU Programming for GIS Applications. The Geographic Information Science & Technology Body of Knowledge (4th Quarter 2018 Edition), John P. Wilson (Ed.). DOI:10.22224/gistbok/2018.4.5

This entry was published on October 24, 2018. No earlier editions exist.

Graphics Processing Unit (GPU): a computer hardware module composed of processors, registers, and dedicated random access memory. A GPU may be integrated within a desktop, mobile, or cloud computing platform.

Rendering Pipeline: a highly-optimized parallel processing environment intended for rendering 3D vertex data to pixels. Ideally, all vertices (and all pixels) are processed independently from one another on dedicated processors. In practice, GPU physical processors host groups of virtual processors.

Framebuffer: a collection of video random access memory (VRAM) locations containing numeric color values mapped to display regions.

General-purpose computing on graphics processing units (GPGPU): GPU programming that is not primarily directed toward graphic output.

Client-Server Processing: The processing model used for most GPU computing where a CPU (central processing unit) is the client that launches a program on a GPU acting as a server. In some GPGPU environments this relationship may be described as host-device processing.

Texture: a 1- or 2-dimensional data structure whose elements specify color or generic numeric values at locations.

Shader Programming Language: a primarily graphics-oriented programming language (such as GLSL or HLSL) for code development on GPUs.

Shader Program (or Shader): a program written in a shader programming language for execution on a GPU and interaction with the rendering pipeline.

Shader Stage: one of several programmable processing stages in the rendering pipeline. Each shader stage understands a unique programming language that varies slightly from others in the same language group.

Compute Shader Stage: a stage in a shader program that performs GPGPU-style computing without interacting with the rendering pipeline.

Shader Instance: a single execution thread associated with a unique virtual processor of a given shader stage.

Uniform variable: a read-only variable with respect to a shader stage with a value supplied by the client at the start of the encompassing shader program’s execution.

Shader Storage Buffer Object (SSBO): a client-side data buffer bound to a server buffer, frequently used for data transfer between the client and server.

This article discusses modern graphics processing unit (GPU) programming for spatial data handling on desktop and mobile computing platforms. Modern GPUs employ massively-parallel computing architectures that, for many application domains, improve overall data throughput over sequential computing alternatives (see Graphics Processing Units (GPUs)). Evolving from early GPUs containing as few as 16 processors (Robert et al. 1986), modern high-end desktop systems now contain over 3,800 processors (or cores) (see NVIDIA TITAN Xp specifications as an example). GPU cloud computing services host a variety of programming languages and development environments that provide remote access to high-performance GPUs for building graphics and general-purpose computing on graphics processing units (GPGPU) applications. These include machine learning and image processing tasks as well as 3D rendering applications (Esri currently offers the ArcGIS Desktop Virtualization environment as a cloud GPU solution for mapping graphics applications). The physical location of the GPU is largely irrelevant to the developer—the same programming languages apply to desktop and cloud systems.

Both graphics and GPGPU programs are launched from a "client" process (for graphics applications) or a "host" process (for GPGPU applications). Figure 1 illustrates the data flow between client and server for a typical graphics application.

Figure 1. The data flow between a client process, a server process, and a framebuffer in a graphics program (from Mower 2016).

The client (or host) process, most often executed on a local computing resource with a conventional CPU, compiles and links a set of GPU source code files into a program, binds (copies) the compiled program to the GPU (the server or device), and initiates its execution. The programming languages understood by a given GPU depend on the development tools and definitions contained in its device driver. The specific manner of server/device program compilation, linking, and execution is language-dependent. Table 1 categorizes and describes some of the more popular client-server (host-device) programming languages.

Table 1. GPU Programming Environment Characteristics

Environment   Maintained By    Application Domain   Additional Supported Device/Server Languages
OpenGL        Khronos Group    Graphics             GLSL
DirectX       Microsoft        Graphics             HLSL
CUDA          NVIDIA           GPGPU                Not applicable
OpenCL        Khronos Group    GPGPU                Not applicable
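
To make the compile-bind-execute sequence just described concrete, the following is a minimal C++ sketch of an OpenGL client building a two-stage shader program. It assumes an OpenGL context already exists and that vertexSrc and fragmentSrc hold GLSL source strings (illustrative names); error-log retrieval is omitted for brevity.

    #include <GL/glew.h>

    // Compile one shader stage from source; returns 0 on failure.
    GLuint compileStage(GLenum stage, const char* src) {
        GLuint shader = glCreateShader(stage);
        glShaderSource(shader, 1, &src, nullptr);
        glCompileShader(shader);
        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        return ok ? shader : 0;
    }

    // Link the two mandatory stages into a program and bind it to the GPU.
    GLuint buildProgram(const char* vertexSrc, const char* fragmentSrc) {
        GLuint program = glCreateProgram();
        glAttachShader(program, compileStage(GL_VERTEX_SHADER, vertexSrc));
        glAttachShader(program, compileStage(GL_FRAGMENT_SHADER, fragmentSrc));
        glLinkProgram(program);
        glUseProgram(program);   // the server-side program now services draw calls
        return program;
    }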

Most GPU server programs operate on data supplied by the client in one or more ways. Graphics client-side programs typically copy a set of vertices (possibly interleaved with other data such as surface normals, texture coordinates, and so on) to the server-side program (usually referred to as a shader program) as an input stream for the ‘rendering pipeline,’ a highly-optimized series of processing stages that transform 3D vertex coordinates to color-valued pixels within a video framebuffer. Other data channels, such as textures (mapped 1D or 2D image attribute data), shader storage buffer objects (similar to arrays of C struct types) and uniform variables (read-only scalar data copied from the client), can supply the server with an almost unlimited variety of attribute data. While some data channels are ‘read only’ with respect to the server, others allow modifications that are visible to the client on completion of a rendering pass. The price for visibility is the sometimes substantial cost in the time required to copy data from the server back to the client. Programs in the "general-purpose computing on graphics processing units" or GPGPU category apply similar mechanisms to the transfer of data between the client and server (or host and device) but rarely interact with the rendering pipeline.
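
As a brief sketch of two of these channels in OpenGL, the fragment below sets a uniform variable, binds a shader storage buffer object for a shader declared with binding point 0, and copies the buffer back after a pass. The names program, hostData, and nBytes are illustrative, and the final copy is exactly the server-to-client transfer whose cost is noted above.

    // Read-only scalar channel: a uniform supplied by the client.
    GLint loc = glGetUniformLocation(program, "cellSize");
    glUniform1f(loc, 30.0f);

    // Read-write channel: a shader storage buffer object (OpenGL 4.3+).
    GLuint ssbo;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, nBytes, hostData, GL_DYNAMIC_COPY);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);   // binding = 0 in GLSL

    // ... run a rendering or compute pass that writes to the buffer ...

    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);        // make writes visible
    glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, nBytes, hostData);  // costly copy back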

The efficiency of a parallel computing solution is largely a function of the proportion of processors that are kept busy working on a problem at any given time. Some types of problems, known as ‘embarrassingly parallel,’ require little synchronization through communication with other processors. For example, if every pixel in a framebuffer were represented by a unique processor, any one of them could query its current color value and swap it by consulting a lookup table. No communication with other pixels would be required. At the other extreme, ‘inherently sequential’ problems such as drainage accumulation modeling over a DEM require continuous flow updates through extensive interprocessor communication. When multiple processors need to reference a common memory location (as would two uphill vertices in a drainage network that drain to a third common downhill vertex), the programmer must somehow ensure that each uphill vertex sees up-to-date data in the common downhill vertex before adding its flow to its downhill neighbor (Figure 2).

Figure 2. A portion of a drainage network. Blue lines indicate direction of flow. Processors P1 through P4 are initialized with a reserve value R of 1 and accumulation value A of 0. On each pass, a processor first adds its R value to its A value, then adds its R value to its downhill neighbor’s R value, and then zeroes its own R value. Synchronous (atomic) addition ensures that every processor sees up-to-date values at other processors.

This can be achieved by allowing one uphill processor to "lock" the common downhill memory location until it has finished adding its contents through an "atomic" operation. This synchronization prevents the other uphill processor from adding its contents to a stale (invalid) value; without it, the results would likely be undefined (Figure 3). Blocked processors remain idle until the blocking condition is cleared, so synchronization usually implies a loss of efficiency as the number of simultaneously active processors is reduced.

Figure 3. An error due to asynchronous addition. P1 adds its reserve value of +1 to the current 0 value at P3. P2 does not see the posted value at P3 resulting from P1’s addition, instead observing the same 0 value that P1 saw (a stale value with respect to P2). P2 adds its reserve value of +1 to the stale value, and replaces R at P3 with 1 instead of the correct combined value of 2.
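
Mower (2016) implements this scheme with concurrent compute shaders; purely as an illustrative CUDA-style sketch of the atomic addition of Figures 2 and 3, the kernel below performs one accumulation pass. The downhill array (the index of each vertex's downhill neighbor, with -1 marking an outlet) is a hypothetical input, and the reserve values are double-buffered so that no thread zeroes a value another thread is still adding to: the client zeroes R_out and swaps the two reserve buffers between passes.

    // One accumulation pass over n drainage vertices, one thread per vertex.
    __global__ void accumulatePass(float* A, const float* R_in, float* R_out,
                                   const int* downhill, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float r = R_in[i];
        A[i] += r;                              // add own reserve to accumulation
        if (downhill[i] >= 0)
            atomicAdd(&R_out[downhill[i]], r);  // synchronized add at the shared
    }                                           // downhill location

Without the atomicAdd, two uphill threads draining to a common downhill vertex could both read the same stale value and one contribution would be lost, which is precisely the error illustrated in Figure 3.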

Most GPU environments provide a variation of single-instruction, multiple data (SIMD) computing in which a subset of processors (sometimes referred to as a workgroup) share a single, common processing thread executed in lockstep over each member processor’s local data. Processors within a given workgroup run asynchronously with respect to those in other workgroups, implying a single-program, multiple data (SPMD) programming model.

The performance advantage of a parallel program over a functionally-equivalent sequential program is referred to as its "speedup factor." If an array of 100 cells requires 100 units of time to process sequentially, the same array might be expected to require 1 unit of time on a GPU if 100 available physical processors, each hosting an individual array cell, were able to run their tasks in 1 time unit, completely asynchronously with respect to all other processors. Although this potential 100x speedup is unlikely to be attained on real systems, it can provide an upper-end estimate of performance expectations (see section 6, GPGPU Performance Characteristics, below for more realistic speedup comparisons).

A GPU can host a variety of graphic and GPGPU processing models. Graphics programming interfaces like OpenGL and DirectX are primarily intended for building rendering applications that require framebuffers to be filled at animation rates (at least 24 complete framebuffer renderings per second) to limit visible pauses in motion. Both interfaces have their own sets of server-side shader languages (GLSL for OpenGL, HLSL for DirectX) that optimize vertex processing by keeping inter-processor communication to a bare minimum. GPGPU languages like CUDA, which do not generally interact with the rendering pipeline and therefore are not subject to video refresh constraints, tend to provide a broader range of utilities for asynchronous processor control flow and inter-processor synchronization. The default behaviors of GLSL and HLSL limit synchronization options to promote high frame rates during rendering pipeline operations.

It is possible to execute GPGPU programs written in CUDA and graphics programs written in either OpenGL or DirectX from the same client program (see Stam 2009, What Every CUDA Programmer Should Know About OpenGL). Alternatively, both OpenGL and DirectX provide a compute shading stage that allows GPGPU computing without interacting with the rendering pipeline and that provides synchronization utilities approaching the levels of control supplied by pure GPGPU languages such as CUDA and OpenCL (see section 5 below, and see Khronos Group (2018), Core GLSL, and Microsoft (2018), HLSL). If the GPGPU tasks do not require elaborate blocking techniques, then the use of a compute shader before or after a rendering pipeline pass may be a viable alternative to cross linking a graphics-dominant application with CUDA.

Shaders are usually written to transform a set of 3D vertices to their 2D framebuffer counterparts such that each execution pass of the shader program creates a single image (or a single frame in an animation sequence). Figure 4 illustrates the rendering pipeline from the point of view of an OpenGL program (DirectX uses a very similar model).

Figure 4. The OpenGL rendering pipeline. Green indicates mandatory user programmable stages, orange stages are optional, and gray stages are both mandatory and non-user-programmable (From Mower 2016).

OpenGL and DirectX client-side programs can be written in any standard programming language, but C and C++ are generally preferred for the ease with which they link to the respective graphics library functions. GLSL and HLSL are somewhat different from one another in their keywords and syntax, but both inherit many of the conventions of the C programming language and perform similar graphics operations. GLSL and HLSL are actually groups of languages: different rendering stages require slightly different programming languages that may vary in the inclusion of stage-appropriate functions. For brevity, the remaining discussion pertains specifically to GLSL and OpenGL but is generally applicable to HLSL and DirectX at a pseudocode level.

OpenGL, currently maintained and licensed by the Khronos Group, is available to end users without licensing requirements. Derived as a functional alternative to IrisGL (the proprietary rendering interface introduced by Silicon Graphics in the 1980s), OpenGL 1.0, introduced in 1992, invoked GPU rendering pipeline functionality through a fixed-function interface in which users were limited to modifying GPU rendering parameters through pre-defined OpenGL functions (Khronos Group 2018, History of OpenGL). The first programmable pipeline version of OpenGL became available in 2004 with the introduction of the GLSL family of shading languages. Early versions limited direct user access to the vertex and fragment stages of the programmable pipeline; by version 4.0 (introduced in 2010), all of the programmable stages in Figure 4 had been made available. Subsequent versions of OpenGL (and GLSL) have continued to increase access to functionality at each stage, as manufacturers and language implementers "expose" additional features of the graphics hardware that were previously inaccessible to GPU programmers.

Client-side OpenGL programs are responsible for setting up the overall graphics environment, establishing a frame-by-frame rendering loop (for anything from a single frame to an animation sequence), and creating memory buffers for transferring data to and from the server. Library packages such as glut and glew standardize many of the video setup and display loop tasks across operating systems, making cross-OS development relatively straightforward.
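
A minimal client-side skeleton using the glut and glew libraries mentioned above might look like the following sketch; the window parameters are arbitrary, and drawScene stands in for the application's per-frame rendering code.

    #include <GL/glew.h>
    #include <GL/glut.h>

    void drawScene() {                       // invoked once per frame
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... issue draw calls against the bound shader program ...
        glutSwapBuffers();                   // present the finished framebuffer
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);               // standardized video setup
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
        glutInitWindowSize(800, 600);
        glutCreateWindow("GPU rendering client");
        glewInit();                          // expose the driver's OpenGL entry points
        glutDisplayFunc(drawScene);          // register the frame callback
        glutMainLoop();                      // the frame-by-frame rendering loop
        return 0;
    }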

Shader programs must include at least 2 mandatory stages: a vertex stage that typically transforms world or model coordinates to projected coordinates, and a fragment stage that determines the color values for fragments that in turn define mapped pixel values within a framebuffer. Referring back to Figure 4, the assembly of primitives (points, lines, triangles, and others) is performed by the non-programmable stages (in gray).

Each vertex shader instance transforms a single vertex in the client-side input stream independently from all other vertices. Each instance represents a unique execution thread hosted on a GPU processing core. A core may host more than one thread but threads do not communicate with one another, thereby eliminating any need for synchronization and any resulting decrease in performance.
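
For illustration, a near-minimal pair of GLSL stages appears below as C++ string literals of the kind a client would hand to the compilation sketch shown earlier; the vertex stage transforms each incoming vertex by a client-supplied model-view-projection matrix, and the fragment stage colors every fragment a constant orange.

    // Vertex stage: transforms each vertex independently of all others.
    const char* vertexSrc = R"(
        #version 430 core
        layout(location = 0) in vec3 position;   // one vertex from the input stream
        uniform mat4 mvp;                        // client-supplied transform
        void main() { gl_Position = mvp * vec4(position, 1.0); }
    )";

    // Fragment stage: sets the color of each rasterized fragment.
    const char* fragmentSrc = R"(
        #version 430 core
        out vec4 fragColor;
        void main() { fragColor = vec4(1.0, 0.5, 0.0, 1.0); }
    )";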

The transformed vertices emitted from the vertex stage may be passed to optional tessellation and geometry stages depending upon the needs of the application. Both optional stages allow additional vertices to be added to the rendering pipeline beyond those supplied by the client. Figure 5 illustrates the use of tessellation shaders to create a Bezier patch. See Vlachos and others (2001) for more details.

Figure 5. Making a Bezier patch. The tessellation control shader (TCS) groups 3 vertices from the previous vertex stage to form an abstract patch. In this example, the labeled control points of the Bezier surface (defined in the TCS) determine the evaluation of intermediate points (at any desired resolution) calculated within the tessellation evaluation shader (TES) stage.

The mandatory, non-programmable primitive generator takes its input (as a collection of vertices representing a point, line, or shape) from the preceding vertex, tessellation, or geometry stage and constructs the selected type of graphic object (primitive). Primitives are passed to the clipping stage, which rejects any vertices that fall outside the viewing volume, defined by the user's viewing parameters and represented as a truncated pyramid (Figure 6). The rasterizing stage that follows converts the remaining vertices to screen coordinates and produces a rectangular tessellation of the primitive into fragments. Each resulting fragment has a unique location in the framebuffer specified by a UV coordinate pair in the range 0 to 1. Following rasterization, each fragment is represented by its own fragment shader instance, which is primarily responsible for setting the color value of its associated pixel in the framebuffer. OpenGL and DirectX environments allow the creation of multiple framebuffers, one of which is for immediate display while the others serve as hosts for storing images as textures.

Figure 6. The clipping stage. In OpenGL, a triangle primitive with vertices falling outside the truncated viewing pyramid is clipped to its boundaries by interpolating new vertices (in red) at the intersection of the primitive and the bounding planes. Vertices outside the viewing pyramid are rejected.

A single client-side application may choose to execute more than one rendering pass to build up complex image products. Each preliminary pass uses a unique shader program that ultimately writes its output to a texture (or some other buffer) that is accessible to subsequent passes. The fragment shader of the final pass typically writes its output to the on-screen framebuffer for display.
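
In OpenGL, one plausible shape for such a preliminary pass is a framebuffer object with a texture attached as its color target, as in the sketch below; tex is assumed to be an already-allocated RGBA texture, and the depth attachment a real pass usually needs is omitted.

    // Direct a preliminary pass into a texture instead of the screen.
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);   // tex receives the output

    // ... draw this pass with its own shader program; later passes sample tex ...

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // final pass targets the on-screen buffer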

OpenGL and DirectX both provide optional compute shaders that have no direct access to the rendering pipeline. Compute shaders are best used for performing non-graphic transformations of data where little inter-processor data sharing is required. If an application is not required to create any graphic output, then another language with finer control flow (such as CUDA) is often preferable.
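
As a sketch of the sort of non-graphic transformation a compute shader suits, the GLSL stage below doubles every element of a shader storage buffer without touching the rendering pipeline. Once it is compiled as a GL_COMPUTE_SHADER and bound, the client dispatches it in workgroups of 64 invocations; bufferLength is an illustrative element count assumed to be a multiple of 64.

    const char* computeSrc = R"(
        #version 430
        layout(local_size_x = 64) in;                 // 64 invocations per workgroup
        layout(std430, binding = 0) buffer Data { float values[]; };
        void main() {
            uint i = gl_GlobalInvocationID.x;         // one buffer element per invocation
            values[i] *= 2.0;
        }
    )";

    // Client side, after compiling, linking, and binding the compute program:
    glDispatchCompute(bufferLength / 64, 1, 1);       // launch the workgroups
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);   // make the writes visible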

Many excellent tutorials on GPU graphics programming are available. The author has made extensive use of the OGLdev Modern OpenGL Tutorials. Such tutorials provide essential information on coding, compilation, linking, and execution for a variety of client programming languages and operating systems.

GIS and remote sensing applications such as image stretching, coordinate transformations, image warping, and many others can benefit from the data throughput available through GPGPU programming. GPGPU programs, like shader programs, typically associate an individual data element with a unique virtual processor. See Christophe, Michel, and Inglada (2011) for a history of parallel solutions for remote sensing applications ranging from multi-core CPU implementations through early GPU solutions.

Table 2 lists a selection of the most common proprietary and open source GPGPU programming languages. Languages such as CUDA, developed and maintained as a proprietary product of NVIDIA, and OpenCL, an open-source language overseen by the Khronos Group, share many GPU programming methods and data structures. Because CUDA applications are limited to NVIDIA hardware, OpenCL is more commonly applied for cross-platform development; however, OpenCL programs may require more manual optimization to match the performance of a functionally equivalent CUDA program. CUDA and OpenCL library bindings exist for many standard programming languages including C, C++, and Python. DirectCompute shares many characteristics of OpenCL but is proprietary and limited to execution within Microsoft Windows DirectX media environments.

Table 2. Common GPGPU Programming Languages

Language        Open Source   Developer                            Characteristics and Use
CUDA            No            NVIDIA                               Direct GPU programming and backend GPU processing through CPU applications (limited to NVIDIA hardware)
OpenCL          Yes           Khronos Group                        Cross-platform GPGPU standard
DirectCompute   No            Microsoft                            GPGPU programming for Microsoft Windows DirectX environments
OpenACC         Yes           Cray, CAPS Enterprise, NVIDIA, PGI   Standard for mixed CPU/GPU parallel programming

CUDA, OpenCL, and DirectCompute programs begin execution on a host CPU and generally make data available to the GPU (the device) by using functions that create a memory space visible to both the host and the GPU. CUDA programs must additionally specify the number of threads that will run in parallel to solve the given problem and add a qualifying tag before a function definition indicating whether it should be allowed to run in parallel on the GPU. Simple CUDA applications benefit from their strong resemblance to comparable sequential applications with the addition of a relatively small number of keywords and functions. For example, the functions in Table 3 have familiar C++ counterparts:

Table 3. CUDA and C++ Cognate Functions

CUDA Library Function   Operation                           Comparable C++ Function
cudaMemcpy()            Copy data between host and device   memcpy()
cudaFree()              Free device memory                  free()
cudaBindTexture()       Bind a memory area to a texture     glBindTexture() (OpenGL library function)
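
A minimal CUDA host-device round trip using several of the functions above might read as follows; the array size and launch geometry are arbitrary, and the __global__ qualifier is the tag, mentioned earlier, that marks a function for parallel execution on the device.

    #include <cuda_runtime.h>

    // Device code: each thread scales the single array element it hosts.
    __global__ void scale(float* data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float* host = new float[n];
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        float* device = nullptr;
        cudaMalloc(&device, bytes);                        // device-side allocation
        cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);

        scale<<<(n + 255) / 256, 256>>>(device, 2.0f, n);  // 256 threads per block

        cudaMemcpy(host, device, bytes, cudaMemcpyDeviceToHost);
        cudaFree(device);                                  // cf. free() in Table 3
        delete[] host;
        return 0;
    }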

CUDA also provides a number of vector data types that will be familiar to those who have used MATLAB or a modern graphics library. Some examples are float2, float3, and float4 for 2D, 3D, and 4D vector types, with attributes x, y for float2; x, y, z for float3; and x, y, z, w for float4. Matrix algebra operations compatible with these vector types (such as the MatAdd() and MatMul() kernels presented in NVIDIA's CUDA programming guide) can be written in the same style.

See NVIDIA 2018, GPU Accelerated Computing with C and C++ for explicit instructions for building and running sample CUDA applications, and The Khronos Group 2015, An Introduction to OpenCL C++ for similar information and sample code for OpenCL.

Many programmers will choose to use CUDA or another GPGPU language indirectly through an application such as MATLAB. Under this approach, the developer can continue to use a familiar programming interface without having to learn the details of GPU program execution. Many machine learning projects written for MATLAB use this approach.

OpenACC, introduced as a standard in 2012 and developed jointly by Cray, CAPS Enterprise, NVIDIA, and Portland Group (PGI), provides higher-level parallel processing solutions than do CUDA, OpenCL, DirectCompute, and other languages supporting manual coding of GPGPU programs. Promoted as a parallel programming environment requiring limited 'hands on' GPU coding, OpenACC programs are written in C, C++, or FORTRAN and contain embedded annotations tagging sections of code to be run on available multi-core CPU and GPU resources where present. As of August 2018, the open-source gcc compiler supports OpenACC but other common development environments, notably Microsoft Visual Studio, do not.
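
In the OpenACC style, the parallelism lives in annotations on otherwise ordinary loops rather than in hand-written kernels. A minimal C++ sketch follows; the directive is a standard OpenACC pragma, and a compiler without OpenACC support simply ignores it and runs the loop sequentially.

    // Scale an array, letting the OpenACC compiler map the loop onto
    // whatever multi-core CPU or GPU resources are present.
    void scale(float* data, float factor, int n) {
        #pragma acc parallel loop copy(data[0:n])
        for (int i = 0; i < n; ++i)
            data[i] *= factor;
    }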

The speedup factor of a parallel computing solution over a comparable sequential solution depends on many factors, the most important of which are listed here:

  • The degree of optimization for the compared sequential and parallel solutions
  • The number and performance characteristics of the tested CPU cores
  • The number and performance characteristics of the tested physical GPU cores
  • The amount of available RAM and VRAM
  • The data transfer speeds between GPU and CPU resources

Similarly, the relative speeds at which 2 functionally comparable parallel implementations run when written in 2 different languages depend largely upon the efficiency of the coded parallel programming solutions and the resulting compiled code. Gregg and Hazelwood (2011) advocate for controlled benchmarking in comparing GPU performance factors. Showing that in some cases data transfer elapsed times between the GPU and CPU can exceed 50x the GPU execution time, the authors provide a comparison framework accounting for all CPU and GPU program resources. Without such controls, speed comparisons between parallel implementations or between parallel solutions and sequential solutions are at best anecdotal.

Nonetheless, good examples of sufficiently controlled measurements of speedup values for GPGPU solutions can be found in the literature. For example, Moustafa and others (2016) report GPU speedup values of 12x to 70x for an image processing task that varied by the number of threads allocated to each image. Clearly, well-written GPGPU programs have the capacity for attaining large speedup values over comparable sequential solutions.

Cloud computing services are rapidly becoming preferred platforms for GPGPU applications. NVIDIA Nimbix, Amazon AWS, IBM Cloud, and Microsoft Azure all provide GPGPU environments for CUDA as well as more limited support for OpenCL, OpenGL, and DirectX. Vendors typically charge hourly rates for compute time. Of these services, NVIDIA Nimbix provides service interfaces that most resemble client-server programming tasks in a desktop environment with an attached GPU. Many other services provided by these vendors make implicit use of GPU computing in the background.

As high-performance GPUs in desktop, mobile, and cloud computing environments become more pervasive, it is reasonable to expect that graphics applications in cartography and GIS, and compute-intensive problems in remote sensing, will see an increasing number of solutions implemented on GPU platforms. This will likely increase the range and complexity of available augmented reality and animated cartographic products and facilitate projects in GIS and remote sensing that operate over large spatial data collections.

Christophe, E., Michel J., Inglada, J. (2011). Remote Sensing Processing: From Multicore to GPU. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 4(3):643-652. DOI: 10.1109/JSTARS.2010.2102340.

Gregg, C., and Hazelwood, K. (2011). Where is the data? Why you cannot debate CPU vs. GPU performance without the answer, (IEEE ISPASS) IEEE International Symposium on Performance Analysis of Systems and Software, Austin, TX, 2011, pp. 134-144. DOI: 10.1109/ISPASS.2011.5762730.

Moustafa, M., Ebeid, H., Helmy, A., Nazmy, T. M., Tolba, M. F. (2016). Rapid Real-time Generation of Super-resolution Hyperspectral Images through Compressive Sensing and GPU. International Journal of Remote Sensing, 37(18): 4201-4224. DOI: 10.1080/01431161.2016.1209314

Mower, J. E. (2016). Real-Time Drainage Basin Modeling Using Concurrent Compute Shaders. Proceedings, AutoCarto 2016, Albuquerque, September 2016, pp. 113-127. Online at http://www.cartogis.org/docs/proceedings/2016/Mower.pdf, last accessed August 8, 2018.

Robert, F., Hopgood, A., Hubbold, R. J., Duce, D. A., Eds. (1986). Advances in Computer Graphics II. Berlin: Springer Verlag.

Vlachos, A., Peters, J., Boyd, C., Mitchell, J. L. (2001). Curved PN triangles. ACM Symposium on Interactive 3D Graphics. New York, ACM Press: 159–166. DOI: 10.1145/364338.364387.


Abstract

Planning subterranean inner-city railway tracks is an interdisciplinary and highly complex task involving many different stakeholders. Currently, the different planners work more or less separately, in an asynchronous manner. To facilitate a collaborative planning process between these stakeholders, we developed a collaboration platform. Clearly, the integration of geographical information and geoprocessing results into the planning process and the different modelling tools will improve this process significantly. In this paper, we show how to describe the needed geographical information in a suitable way using so-called Geospatial Web Service Context Documents, and how to integrate these pieces of information into the different planning tools via the collaboration platform in a unified, dynamic, and generic way.





[ESRI software] Acronym for Arc Extensible Markup Language. A file format that provides a structured method for communication between all ArcIMS components. ArcXML defines content for services and is used …

[standards] A metric areal unit of measure equal to 100 square meters. One are is equal to 1,076.39 square feet, or 0.025 acres.

1 [Euclidean geometry] A closed, two-dimensional shape defined by its boundary or by a contiguous set of raster cells. 2 [Euclidean geometry] A calculation of the size of a two-dimensional …


Terms

In order to facilitate an understanding of the systems and methods discussed herein, a number of terms are defined below. The terms defined below, as well as other terms used herein, should be construed broadly to include, without limitation, the provided definitions, the ordinary and customary meanings of the terms, and/or any other implied meanings for the respective terms. Thus, the definitions below do not limit the meaning of these terms, but only provide example definitions.

Ontology: Stored information that provides a data model for storage of data in one or more databases. For example, the stored data may comprise definitions for object types and property types for data in a database, and how objects and properties may be related.

Database: A broad term for any data structure for storing and/or organizing data, including, but not limited to, relational databases (for example, Oracle database, mySQL database, and the like), spreadsheets, XML files, and text files, among others. The various terms “database,” “data store,” and “data source” may be used interchangeably in the present disclosure.

Data Item (Item), Data Object (Object), or Data Entity (Entity): A data container for information representing a specific thing, or a group of things, in the world. A data item may be associated with a number of definable properties (as described below). For example, a data item may represent an item such as a person, a place, an organization, an account, a computer, an activity, a market instrument, or other noun. A data item may represent an event that happens at a point in time or for a duration. A data item may represent a document or other unstructured data source such as an e-mail message, a news report, or a written paper or article. Each data item may be associated with a unique identifier that uniquely identifies the data item. The terms “data item,” “data object,” “data entity,” “item,” “object,” and “entity” may be used interchangeably and/or synonymously in the present disclosure.

Raw Data Item: A data item received by a data analysis system for analysis. Raw data items may be received, for example, from one or more network monitors and/or other data sources, as described below. It is understood that the term “raw data item,” as used in the present disclosure, may include data obtained through the performance of enrichments, including enrichments performed during pre-processing and/or post-processing.

Data Item Lead: A raw data item that has a calculated score, metascore, or alert level above a certain threshold, or has otherwise been flagged or designated for further analysis.

Item (or Entity or Object) Type: Type of a data item (for example, Person, Event, or Document). Data item types may be defined by an ontology and may be modified or updated to include additional data item types. A data item definition (for example, in an ontology) may include how the data item is related to other data items, such as being a sub-data item type of another data item type (for example, an agent may be a sub-data item of a person data item type), and the properties the data item type may have.

Properties: Also referred to herein as “attributes” or “metadata” of data items. A property of a data item may include any item of information associated with, and/or relevant to, the data item. At a minimum, each property of a data item has a property type and a value or values. For example, properties associated with a person data item may include a name (for example, John Doe), an address (for example, 123 S. Orange Street), and/or a phone number (for example, 800-0000), among other properties. In another example, properties associated with a computer data item may include a list of users (for example, user1, user2, and the like), and/or an IP (internet protocol) address, among other properties.

Property Type: The type of data a property is, such as a string, an integer, or a double. Property types may include complex property types, such as a series of data values associated with timed ticks (for example, a time series), and the like.

Property Value: The value associated with a property, which is of the type indicated in the property type associated with the property. A property may have multiple values.

Link: A connection between two data objects, based on, for example, a relationship, an event, and/or matching properties. Links may be directional, such as one representing a payment from person A to B, or bidirectional.

Link Set: Set of multiple links that are shared between two or more data objects.
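
Purely to fix ideas, the C++ sketch below shows one way the terms above might map onto concrete types; the disclosure defines the terms abstractly, so every name here is illustrative rather than part of the described system.

    #include <map>
    #include <string>
    #include <vector>

    // Illustrative only: a property has a type and one or more values.
    struct Property {
        std::string type;                      // e.g. "Name" or "IP Address"
        std::vector<std::string> values;       // a property may have multiple values
    };

    // A link connects two data items, optionally with a direction.
    struct Link {
        std::string targetId;                  // unique identifier of the linked item
        std::string basis;                     // relationship, event, or matching property
        bool directional;                      // e.g. a payment from person A to B
    };

    // A data item: a container for information representing a thing in the world.
    struct DataItem {
        std::string id;                        // unique identifier
        std::string itemType;                  // ontology-defined type, e.g. "Person"
        std::map<std::string, Property> properties;
        std::vector<Link> links;               // links to other data items
    };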

Embodiments of the present disclosure relate to a data analysis system (also referred to herein as “the system”) that may retrieve and enrich data from a monitored network or other data source, and present the data to an analyst via a user interface for further analysis and triage.

Detection of the presence of malware and/or other malicious activity occurring on a network is a highly important, but oftentimes challenging task. Detection of malware and malicious activity is of particular importance to organizations (for example, businesses) that maintain internal networks of computing devices that may be connected to various external networks of computing devices (for example, the Internet) because infection of a single computing device of the internal network may quickly spread to other computing devices of the internal network and may result in significant data loss and/or financial consequences.

Detection of the presence of malware and/or malicious activity on a monitored network may be performed through the examination of activity occurring on the monitored network over time. Previously, determination and identification of malware or malicious activity through the examination of network activity was a labor intensive task. For example, an analyst may have had to pore through numerous tracking logs and other information of the monitored network, manually discern patterns and perform analyses to gain additional context, and compile any information gleaned from such analyses.

In various embodiments of the data analysis system described herein, the system may receive data comprising a plurality of raw data items from one or more data sources, such as a monitoring agent located in a monitored network. The raw data items may comprise any type of information that may be relevant for analyzing and detecting network activity (for example, the presence of malware and/or malicious behavior on the network). For example, such information may include proxy requests from endpoints (or other devices) within the monitored network to outside domains, requests between network devices in the monitored network, processes running on network devices in the monitored network, user logins on network devices in the monitored network, etc. In the context of malware detection, one example of a raw data item may be information associated with a software process running on a computer (for example, the name of the process, any associated processes, a time the process was activated, any actions taken by the process, and/or the like). Another example of a raw data item in the context of malware detection may be information associated with communications between a network device and an external domain or IP address (for example, an identifier of the network device, a time of the connection, an internal IP address of the network device, the external domain and/or IP address connected to, an amount of data transferred, and/or the like).

Network devices of the monitored network may include, for example, any type of computerized device, such as a desktop or laptop computer, a point of sale device, a smartphone, a server (for example, a proxy server), a network router, and/or the like. Monitoring agents may include, for example, software applications running on a network device, a dedicated hardware device (for example, a router configured to monitor network traffic), and/or the like.

The received data may undergo initial filtering or analysis in order to eliminate non-relevant raw data items, such as by running the data against a whitelist and/or one or more rules. In addition, the data may be automatically subject to one or more enrichments in order to provide additional context to a user for analysis and/or triage. For example, the data may be run against one or more third party analysis services, such as virus or malware detection services. The data may also undergo contextual or temporal analysis (to, for example, analyze a frequency or spread of a particular event in the network, and/or identify other events that occur temporally close to a particular event) in order to provide the user with additional context.

In addition, the received data may be sorted, scored, or prioritized using one or more scoring rules/algorithms. The system may generate a score, multiple scores, and/or metascores for each received raw data item, and may optionally rank or prioritize the data items based on the generated scores and/or metascores. For example, high priority data items may indicate a higher likelihood of malware or malicious behavior, and thus be of greater interest to an analyst. Raw data items satisfying one or more score thresholds may be designated as “data item leads” such that they may be further investigated by a user using the system (as described below).

According to various embodiments, the data analysis system may be used by a user (also referred to herein as an “analyst”) to execute searches and/or additional enrichments against the received data item leads. Searches allow the user to access the various raw data items (including any enrichments, as mentioned above) associated with a data item lead in order to investigate a likelihood that the data item lead represents a data item of interest (for example, an indication of malicious activity, such as by malware). The user may use the system to narrow the raw data item search results and/or view new sets of raw data items associated with the data item lead for analysis. Enrichments may be used by the user to supplement displayed raw data items with additional context beyond that provided by the initial analysis and/or initial enrichment. A user may, in some embodiments, also pre-apply one or more enrichments to a search, such that the search will be executed and the selected enrichments automatically applied to the retrieved search results (for example, the raw data items satisfying the search).

According to various embodiments, the data analysis system may group received raw data items based upon shared attribute values, allowing a user, instead of having to pore through raw data items on an individual level, to drill down or perform other types of actions on batches of raw data items that share common attribute values.

According to various embodiments, the data analysis system may be used to categorize received data and construct timelines, histograms, and/or other visualizations based upon the various attributes associated with the raw data items, allowing a user to quickly visualize the distribution of raw data items among different attribute values. For example, a user may categorize certain received raw data items, and construct a timeline of the raw data items of that category, allowing the user more insight into a chronology of events. Accordingly, in various embodiments, the user may determine a likelihood that a data item lead is associated with malicious (or other) activity by searching, enhancing, and analyzing various raw data items associated with the data item lead.

In various embodiments, and as mentioned above, the data analysis system may be used in various data analysis applications. Such applications may include, for example, financial fraud detection, tax fraud detection, beaconing malware detection, malware user-agent detection, other types of malware detection, activity trend detection, health insurance fraud detection, financial account fraud detection, detection of activity by networks of individuals, criminal activity detection, network intrusion detection, detection of phishing efforts, money laundering detection, and/or financial malfeasance detection. While, for purposes of clarity, the present disclosure describes the system in the context of malware (and/or other malicious activity) detection, examples of other data analysis applications are described in U.S. patent application Ser. No. 14/473,920, titled “External Malware Data Item Clustering and Analysis,” filed on Aug. 29, 2014, and in U.S. patent application Ser. No. 14/139,628, titled “Tax Data Clustering,” filed on Dec. 23, 2013. The entire disclosure of each of the above items is hereby made part of this specification as if set forth fully herein and incorporated by reference for all purposes, for all that it contains.

In the following description, numerous specific details are set forth to provide a more thorough understanding of various embodiments of the present disclosure. However, it will be apparent to one of skill in the art that the systems and methods of the present disclosure may be practiced without one or more of these specific details.


HydroHillChart Francis module. Software used to Calculate the Hill Chart of the Francis Hydraulic Turbines

ANALELE UNIVERSITĂŢII EFTIMIE MURGU REŞIŢA, ANUL XXII, NR. 1, 2015

Dorian Nedelcu, Adelina Bostan, Florin Peris-Bendu

The paper presents the HydroHillChart - Francis module application, used to calculate the hill chart of Francis hydraulic turbine models by processing the data measured on the test stand. After describing the interface and menu, the input data are presented graphically and the universal characteristic is calculated for the a0 = const. and n11 = const. measuring scenarios. Finally, the two calculated hill charts are compared through a graphical superimposition of their isolines.

Keywords: turbine, Francis, runner, hill chart, Python

1. Introduction

The hill chart expresses graphically the functional dependence of the Francis turbine parameters (n11, Q11, a0, η) and is the instrument on which the design of the industrial turbine is based. This dependence arises from extensive experimental research carried out on Francis turbine models, leading to a library of optimized characteristics for the specific flows and heads of these turbine types. Tests on models have a complex character, because they are made under several operating conditions, with high measuring accuracy and at low cost compared to tests on the prototype of the industrial turbine. Tests can be carried out on the industrial turbine when it is placed in service, but acceptance is usually based on testing the model parameters in order to verify the guaranteed parameters.

2. The HydroHillChart software

The HydroHillChart software [1] is used to calculate and plot the hill chart for Pelton [2], [3], Francis, and Kaplan turbines. The application was developed in the Python programming language; the mathematical tool used for interpolation is the cubic spline function.

3. The Francis module

The Francis turbine option in the main menu displays a window with the Francis module interface (Figure 1), composed of: a toolbar; a measured-data table called Puncte măsurate, into which the measured data for a model runner are loaded; and a table called Puncte de intersecție cu randament constant, where the application stores the values that result from the intersection of the primary curves with constant efficiency values.

The input data are taken from Excel and placed in the Puncte măsurate table by completing the following fields:

  • ID point - the current number of the measured point
  • n11 [rot/min] - the unitary speed
  • Q11 [m3/s] - the unitary flow
  • a0 [mm] - the wicket gate opening
  • η [%] - the efficiency
  • Punct eliminat - allows the removal of a measured point by selecting a check box

Because the entire turbine operating range cannot be explored through measurements, the measurements are made point by point with one parameter held constant, and the hill chart of the model arises from the interpolation of the parametric curves. For a Francis turbine model, measurements can be performed holding either the wicket gate opening constant (a0 = const.) or the unitary speed constant (n11 = const.). Therefore, when reading the input data, the user must notify the Francis module of the measuring scenario that was used (a0 = const. or n11 = const.) by specifying the option when a new database is created. Although the input data fields are identical for both measurement scenarios, the graphic representations and calculation algorithms differ. The resulting curves are different, but if the interpolations are sufficiently precise, the hill charts should coincide. Thereby, for a data set in which each starting point (n11, Q11, a0, η) lies at the intersection of an a0 = const. range of values with an n11 = const. range of values, the hill chart which arises from the primary data considered to be measured at a0 = const. should overlap the one which arises from the primary data considered to be measured at n11 = const.

3.1. The Francis module toolbar

The Francis module toolbar is located at the top of the window and includes control buttons marked with specific icons (Figure 1), which fulfill the following functions:

  • Francis runner icon - informative, without a related function
  • New - creates a new database for Francis runners; the following information must be completed: turbine type name, database filename, the period when the measurements took place, the person responsible for the measurements, and additional information; the measurement values are taken from an Excel file
  • Open - opens and loads an existing database for Francis runners; the Puncte măsurate table (Figure 1) is emptied and then rewritten with the values from the selected database
  • Info - provides information about the current database: turbine type name, database filename, the period when the measurements took place, the person responsible for the measurements, the file from which the measured data were taken, the number of measured and eliminated points, and the minimum and maximum values of the unitary speed, unitary flow, wicket gate opening, and efficiency
  • Data - visualizes the input data in graphic form: η = f(n11, Q11) 3D surfaces, 3D curves, and 2D parametric curves
  • Hill Chart - calculates and plots the hill chart for a set of specified efficiency values
  • a0 / n11 - imposes a parameter (double unitary speed n11 or wicket gate opening a0) and intersects the characteristic η = f(n11, Q11) with this parameter
  • Q11 - n11 - imposes a double unitary speed n11 and a unitary flow Q11, then intersects the characteristic η = f(n11, Q11) in order to calculate the efficiency at the point (Q11, n11)
  • Excel - exports the results to an Excel file: the input data and the numerical and graphical processing carried out
  • Word - exports the graphics to a Word file
  • PDF - exports the graphics to a PDF file
  • Exit - returns to the main window of the HydroHillChart software

3.2. The graphics toolbar

For each graph generated by the HydroHillChart software, a toolbar with command buttons marked with specific icons can be found at the bottom of the window, performing the following functions: Home (return to the initial view), Back (return to the previous view), Forward (go to the next view), Pan (left click and hold to pan, zoom in/out with the right mouse button pressed), Zoom (enlarge a selected area), Subplots (chart configuration), and Save (save the chart in EPS, JPG, PGF, PDF, PNG, PS, RAW, SVG, or TIF format).

3.3. The "DATA" button

The DATA button enables the graphical view of the input data for each measuring scenario: η = f(n11, Q11) and a0 = f(n11, Q11) as 3D surfaces and 3D curves, overlaid 2D curves, and 2D parametric curves (Figures 2-9 for the a0 = const. scenario and Figures 10-17 for the n11 = const. scenario). Figures 8 and 16 are identical to Figures 9 and 17, with the difference that the first figures show only the measurement points, while the last figures also show the interpolated points arising from the intersection of the primary curves with the constant efficiency values, which are used in the calculation of the hill chart.

[Figures 2-9: 3D surfaces, 3D curves, and 2D curves of η and Q11 for the a0 = const. scenario, with the parametric curves of Figures 8-9 drawn for a0 = 24. Figures 10-17: the corresponding graphs for the n11 = const. scenario, with the parametric curves of Figures 16-17 drawn for n11 = 70.]

3.4. The "HILL CHART" button

The Hill Chart button allows the user to specify the desired efficiency values for calculating and plotting the hill chart. Pressing this button opens a window that reports the minimum and maximum efficiency of the current database and allows a minimum efficiency, a maximum efficiency, and the step at which the hill chart will be calculated and drawn to be imposed. In the same window, particular values of the intersection efficiency can be specified in the Valori particulare field, and the color map for the hill chart display can be selected.

Plotting of the hill chart is done in several steps. In the first stage, the measured parametric primary curves are intersected with the imposed efficiency values; these points are written to the Puncte de intersecție cu randament constant table. In the second stage, the η = f(n11, Q11) surface is intersected with the constant efficiency values. For a set of input data considered to be measured at a0 = const., Figure 18 shows the η = f(n11, Q11) 3D surface, Figure 19 shows the 3D intersection curves with the constant efficiency values, and Figure 20 shows the hill chart (the universal characteristic for the Francis runner at a0 = const.). For the same set of input data considered to be measured at n11 = const., Figure 21 shows the η = f(n11, Q11) 3D surface, Figure 22 shows the 3D intersection curves with the constant efficiency values, and Figure 23 shows the hill chart (the universal characteristic for the Francis runner at n11 = const.).

3.5. The comparison of the hill charts

Figure 24 shows the set of discrete points (Q11, n11), with associated values (a0, η), used as input data for the a0 = const. and n11 = const. measuring scenarios and for the calculation of the two hill charts of Figures 20 and 23. The a0 = const. curves and their associated points are plotted in Figure 24. For the n11 = const. version, the points used as input data are those located in the speed range of … rpm, with a step of 5 rpm. The comparison of the two characteristics is shown in Figure 25, where the continuous lines represent the isolines of the hill chart calculated for the a0 = const. measuring scenario and the dotted lines represent the isolines of the hill chart calculated for the n11 = const. measuring scenario. As the figure shows, the difference between the isolines is insignificant, which validates the interpolation algorithms used by the HydroHillChart application to calculate the hill chart.

3.6. The "EXCEL" button

The Excel button exports the data to an Excel file that contains the measured points and the numerical and graphical results of the processing operations performed on the data. The Excel file will contain a sheet called Date măsurate…


1.4. Conclusion

Designing efficient systems with fast access to lots of data is exciting, and there are lots of great tools that enable all kinds of new applications. This chapter covered just a few examples, barely scratching the surface, but there are many more—and there will only continue to be more innovation in the space.

This work is made available under the Creative Commons Attribution 3.0 Unported license. Please see the full description of the license for details.


Style and approach

This book includes detailed explanations of the GIS functionality and workflows in ArcGIS Pro. These are supported by easy-to-follow exercises that will help you gain an understanding of how to use ArcGIS Pro to perform a range of tasks.


Site Scan for ArcGIS Operator License

Software that provides drone flight planning, fleet (drone) management, image processing, and analysis functions, delivered as SaaS (Software as a Service). Site Scan for ArcGIS provides a complete end-to-end solution for drone imaging projects.

Programming ArcGIS 10.1 with Python Cookbook [eBook], Eric Pimpler

This book is written in a helpful, practical style with numerous hands-on recipes and chapters to help you save time and effort by using Python to power ArcGIS to create shortcuts, scripts, tools, and customizations. Programming ArcGIS 10.1 with Python Cookbook is written for GIS professionals who wish to revolutionize their ArcGIS workflow with Python. Basic Python or programming knowledge is essential.

Building Web Applications with ArcGIS [eBook], Hussein Nasser

If you are a GIS user or a web programmer, this book is for you. It is also intended for those who have basic web development knowledge and no prior experience of ArcGIS but are keen to venture into the world of ArcGIS technology. The book will equip you with the skills to comfortably start your own ArcGIS web development project.

Introduction to Human Geography Using ArcGIS Online, J. Chris Carter

Esri Press, 2019. Paperback, English. ISBN: 9781589485181.

ArcGIS 8で地域分析入門 (Introduction to Regional Analysis with ArcGIS 8), 大場亨

成文堂 (Seibundo), April 2003. JAN: 9784792380519. Used copy without a dust jacket.

GIS Tutorial 2: Spatial Analysis for ArcGIS 10
