I'm looking at different configurations of ArcGIS Server 10.2.1, and I can't understand how an ArcGIS Server installation runs in different clusters.
Is a cluster only a configuration setting that uses the resources of one machine, or does it require separate server machines, each with its own license?
Given Vince's answer, is my understanding correct?
There are multiple meanings of "cluster", and I fear you may be confusing them…
In ArcGIS Server 10.1+ parlance, a "cluster" is a bound subset of servers within an ArcGIS Server site. The documentation contains this image, which shows how a subset of servers can be allocated to one aspect of GIS services.
In this diagram there are three (3) hosts running ArcGIS Server, all bound in one (1) site, with two (2) clusters (one for mapping services, and one for geoprocessing). Depending on the characteristics of the nodes (CPU cores & RAM), and the frequency of requests, this site could serve dozens or scores or hundreds of clients.
There are other ways to allocate services across servers, but clusters provide a convenient way to organize allocation of services to hosts in a large enterprise configuration.
In practice, you need to be careful to not have too many nodes in a single cluster, lest their internal communication impact overall throughput.
All of the nodes on which ArcGIS Server services are running must be licensed for ArcGIS.
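To make the configuration-vs-hardware distinction concrete: in the 10.x releases that support them, clusters are created administratively against machines already joined to the site, not installed separately. The sketch below builds the request for the Administrator Directory's clusters/create operation; the endpoint and parameter names are from memory, so verify them against your own server's /admin documentation before relying on this.

```python
# Sketch: creating a cluster through the ArcGIS Server Administrator Directory.
# Endpoint and parameter names (clusters/create, clusterName, machineNames)
# are recalled from the 10.x Admin API docs -- confirm against your server.

def cluster_create_request(admin_url, token, cluster_name, machines):
    """Build the URL and form payload for a cluster-create call."""
    url = admin_url.rstrip("/") + "/clusters/create"
    payload = {
        "clusterName": cluster_name,
        # machineNames is a comma-separated list of machines registered in the site
        "machineNames": ",".join(machines),
        "token": token,
        "f": "json",
    }
    return url, payload

url, payload = cluster_create_request(
    "https://gisserver.example.com:6443/arcgis/admin",  # hypothetical server
    "<token>",
    "mappingCluster",
    ["NODE1.DOMAIN.COM", "NODE2.DOMAIN.COM"],
)
# The payload would then be POSTed to the URL, e.g. with urllib.request.
```

Note the machines are identified by name because they are already part of the site; licensing applies per machine regardless of which cluster it joins.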
Inside an ArcGIS Server site
ArcGIS Server, the work center of ArcGIS Enterprise, brings your organization's geographic information, analysis, and products to the web using infrastructure you manage.
Desktop products such as map documents, geoprocessing tools, and address locators are published to ArcGIS Server to become GIS services, available to your organization within its firewall and, optionally, to the greater internet. These services are consumable in web clients, from map viewers to mobile apps, and make it easy to share your resources across clients - even those without specialized GIS software.
This topic explains the structure and functions of ArcGIS Server from an administrator's perspective.
Esri Technical Support Blog
In a previous blog post, my esteemed colleague and board game nemesis Kelly Gerrow-Wilcox discussed the basics of capturing web traffic in a web browser using the built-in browser developer tools. But what about when you're consuming services in ArcGIS Pro, ArcMap, or any other non-browser client? Enter: web traffic capturing tools. There are numerous free tools (such as Fiddler, Wireshark, Charles, and others) that allow users to capture web traffic from their computers. This blog will focus on capturing HTTP/HTTPS traffic using Fiddler. I've chosen Fiddler because of its relatively simple interface and broad adoption within Esri Technical Support.
Basics: download, configuration and layout
Fiddler can be downloaded here. After installation, the only critical configuration step is enabling it to capture traffic over HTTPS.
- With Fiddler open go to Tools > Options
- In the pane that opens, check Capture HTTPS CONNECTs and Decrypt HTTPS traffic. (This allows you to capture any requests sent using HTTPS, which is slowly but inevitably replacing HTTP as the protocol for transferring data across the web.)
The Fiddler application itself is split into two main sections: the Web Sessions list and the…other pane (I couldn't find an official name, so for the purposes of this blog we'll refer to it as the Details pane).
The Web Sessions pane includes a sequential list of every request sent by the client to a web server. Important information contained here includes:
- The type of protocol used (HTTP or HTTPS)
- Which server the request was sent to, and the full URL
- The HTTP/HTTPS response code (here’s a description of what these mean)
- The size of the body of the response (in bytes)
- The content type (image, text, etc)
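To make the columns concrete, here is a rough stand-in showing where each field comes from. Fiddler computes all of these natively; this sketch is purely illustrative of what each column represents.

```python
# Illustrative only: derive Fiddler's Web Sessions columns from a captured
# request URL, the response headers, and the body size.
from urllib.parse import urlsplit

def session_summary(url, response_headers, body_size):
    parts = urlsplit(url)
    return {
        "protocol": parts.scheme.upper(),     # HTTP or HTTPS
        "host": parts.netloc,                 # which server the request hit
        "url": url,                           # the full URL
        "body_bytes": body_size,              # size of the response body
        "content_type": response_headers.get("Content-Type", "unknown"),
    }

row = session_summary(
    "https://sampleserver6.arcgisonline.com/arcgis/rest/services"
    "/SampleWorldCities/MapServer?f=json",
    {"Content-Type": "application/json"},
    1842,  # invented body size for illustration
)
```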
Note: the columns in the Web Sessions pane can be customized by right-clicking anywhere in the headers and selecting Customize Columns. Some useful fields to turn on can be:
- Overall Elapsed (available under Collection: Session Timers. This is the overall time it takes for the request to be sent and returned)
- ClientBeginRequest (available under Collection: Session Timers. This is the time your software first began sending the request, using your computer’s time)
- X-HostIP (select Collection: Session Flags and manually enter the header name. This is the IP address of the server destination of the request)
Clicking a single web session populates the Details pane with a wide variety of information about that specific request.
Intermediate: What do these details mean? Do they mean things? Let's find out!
There are numerous tabs in the Details pane. The most useful (for our purposes) are Timeline , Statistics , and Inspectors . The others are all advanced functionality outside of the scope of this blog.
The Statistics and Timeline tabs are both helpful when investigating any performance related issue, for example if a service is taking a long time to load in the Map Viewer. The Timeline tab is useful for identifying which request in a multi-request process is acting as a bottleneck. To utilize the Timeline tab, select multiple requests in the Web Sessions list. The timeline will display the requests in a sequential “cascade” format. Any requests taking an unusually long time will clearly stand out with a significantly longer bar in the timeline.
Statistics displays the exact times every step of the request took, from the client initially making a connection to the last step of the client receiving the response. This breakdown is useful to potentially identifying which step in the process of a single request is acting as a bottleneck. For example, if every step is taking a fraction of a second, but there is a multi-second pause between ServerGotRequest and ServerBeginResponse that would indicate that something on the server side is causing a slowdown.
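The bottleneck-hunting logic in the Statistics tab amounts to taking deltas between adjacent session timers and looking for the largest one. The timer names below mirror the ones shown in the Statistics tab; the numbers are invented for illustration.

```python
# Given Fiddler-style session timers (name -> seconds since session start,
# in chronological order), find the slowest phase: the likely bottleneck.
def slowest_phase(timers):
    names = list(timers)
    deltas = {
        f"{a} -> {b}": timers[b] - timers[a]
        for a, b in zip(names, names[1:])
    }
    bottleneck = max(deltas, key=deltas.get)
    return bottleneck, deltas[bottleneck]

timers = {
    "ClientBeginRequest": 0.00,
    "ClientDoneRequest": 0.01,
    "ServerGotRequest": 0.02,
    "ServerBeginResponse": 3.52,   # multi-second gap: server-side slowdown
    "ClientDoneResponse": 3.60,
}
phase, seconds = slowest_phase(timers)
# phase == "ServerGotRequest -> ServerBeginResponse"
```

Here the large ServerGotRequest-to-ServerBeginResponse delta points at the server, exactly the diagnosis described above.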
Lastly, the Inspectors tab is where the bulk of information is displayed and likely where the vast majority of any troubleshooting will be done. Here is where the curtain is drawn back to reveal the nitty-gritty of how applications interact with web services. Inspectors is further divided into two main sections: the Request information (everything related to the request sent by the client) and the Response information (everything related to the response returned by the server). Both divisions have a nearly identical set of subdivisions which display the content of the request/response in different formats. Below are the useful tabs for our purposes:
- Headers – A list of additional information that is not part of the main request. This may include information like security/authentication information, the data format of the request or response, the type of client making the request, etc. This is a good place to find an ArcGIS Online token, when relevant.
- WebForms (request specific) – Depending on the type of request, this will display a breakdown of each request parameter and the value of that parameter. For example, when submitting a search query this section will display the parameters of the query (like keywords, date ranges, etc).
- ImageView (response specific) – If the request is for an image, the ImageView will display the image which is returned. Obviously, this is particularly useful for requests involving tiled services.
- Raw – This will display the entire request or response in text format.
- JSON – If the request/response includes content in JSON format, this tab displays the content in a more human readable format. This is particularly useful for requests/responses to the REST API of ArcGIS Enterprise servers.
- XML – If the request/response includes content in XML format, this tab displays the content in a more human readable format. This is particularly useful for requests/responses to OGC services.
Advanced: That’s great Alan. But what am I supposed to actually do with this information?
How you use network traffic information depends on what you're trying to learn or solve. Checking network traffic can help identify the where and what of a problem, but it cannot tell you the solution. This is where your knowledge of your app, your web services, and, if all else fails, some good old-fashioned web searching come into play. Here are a few common examples of ways to isolate the problem you're facing:
- Check the HTTP/HTTPS response code in the Web Sessions pane. Anything that isn't 200 should be investigated (it might not necessarily be a problem, but it's worth looking at). Again, here's a description of what these mean. Even a 200 response could contain error messages or other useful information.
- A 304 response from a server will trigger the client (web browser, ArcMap, etc.) to use its cache, so Fiddler is not actually capturing a complete response from the server. If there is a 304 response on a critically important request, try again in Incognito mode or after clearing your client's cache.
- A 401 or 403 response typically means the server requires some sort of authentication. This can help identify, for example, an unshared feature service in a publicly shared web map.
- A 504 response typically means something timed out. Use this in conjunction with the Timeline, Statistics, and Overall Elapsed column mentioned above to troubleshoot performance issues.
- Raw, JSON and XML contain the exact same information, just formatted differently.
- When errors occur, the error listed in the response may be more detailed than the error provided in the user interface of whichever application was being utilized.
- Find a way to ignore irrelevant requests!!
- One of the most challenging factors in troubleshooting network traffic is the volume of requests that are sent and received for even minor actions. Below are strategies to help avoid cluttering your log with unnecessary requests.
- Turn off Capture (File > uncheck Capture Traffic) when you know Fiddler’s not capturing relevant information.
- Close any browser windows or background processes that don’t need to be running.
- If Fiddler is capturing traffic you know is not related to what you’re investigating, Filter it out of the Web Sessions by right clicking a session > Filter > select what session parameter you want to filter.
- If you’ve captured a number of requests that you know you don’t need, select and delete them.
- Target Fiddler to only capture requests from a single application by clicking the ‘Any Process’ button (next to the small bullseye icon), holding and then releasing your mouse over the application you want to capture from. This would be useful, for example, to capture all traffic coming from ArcMap while ignoring everything that occurs with your browsers.
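The response-code rules of thumb above can be condensed into a tiny triage helper you could run over a list of captured codes. This is a sketch of the blog's heuristics, not an exhaustive reading of the HTTP specification.

```python
# Triage a captured HTTP status code using the rules of thumb from this post.
def triage(status):
    if status == 200:
        return "OK, but check the body for embedded error messages"
    if status == 304:
        return "served from client cache; retry after clearing the cache"
    if status in (401, 403):
        return "authentication/authorization problem, e.g. an unshared service"
    if status == 504:
        return "timeout; check Timeline/Statistics for the slow step"
    return "unexpected; look up the code and inspect the response"

codes = [200, 304, 401, 504, 500]
report = {c: triage(c) for c in codes}
```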
Once you have isolated the request(s) relevant to the issue you’re investigating, the following tips can help determine what the actual problem is.
- If you can isolate the problematic request, consider the nature of that request to help determine next steps.
- Is it a request of the service’s basic metadata? (e.g. https://sampleserver6.arcgisonline.com/arcgis/rest/services/SampleWorldCities/MapServer?f=json) That would imply there’s a problem with the service (or server) itself.
- Is the service responding fine, but a single query, tile, or job request is failing? (e.g. http://sampleserver6.arcgisonline.com/arcgis/rest/services/SampleWorldCities/MapServer/export?dpi=96) That would imply there's a problem with the data, or perhaps with the parameters of the specific request being submitted.
- Are ALL requests to the service failing? Perhaps the entire server is down or inaccessible.
- This is especially helpful for isolating a specific header or request parameter that might be problematic. Modify the information under WebForms or Headers to see if that fixes the problem you’re encountering or reproduces the problem you’re investigating.
- If you have a request that’s succeeding and one that’s failing, copy the headers or WebForms parameters one at a time from the request that’s working to the request that’s failing. Once the request works, you’ve successfully isolated the parameter/header in the requests that’s causing the problem.
- This is helpful for capturing issues which might be intermittent. Send the request 20 or 30 times automatically and see if you hit the issue you're looking for.
- To view a JSON response as a formatted page, right-click the query session.
- Copy > Copy the URL.
- Paste it into a browser window.
- Change "…f=json…" in the URL to "…f=html…".
- Press Enter to browse to the page.
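The manual URL edit in the steps above can also be scripted. The helper below rewrites the f parameter through the parsed query string, which is safer than a plain text substitution when f is not the last parameter.

```python
# Rewrite an ArcGIS REST request URL so the endpoint returns HTML instead
# of JSON, preserving all other query parameters (e.g. a token).
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def as_html(url):
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["f"] = "html"              # ask the REST endpoint for its HTML view
    return urlunsplit(parts._replace(query=urlencode(query)))

print(as_html(
    "https://sampleserver6.arcgisonline.com/arcgis/rest/services"
    "/SampleWorldCities/MapServer?f=json&token=abc"
))
```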
Fiddler and other network capture software are not silver bullets to solve all web traffic related GIS issues, but they are useful tools to help. With a bit of practice, utilizing this type of software can help resolve a wide variety of issues when accessing web services in GIS applications.
Got any good Fiddler (or general network traffic logging) tips? Feel free to leave them in the comments!
ArcGIS Server site configurations
The standard ArcGIS Server site supports multiple machines deployed within a single site configuration, with each machine supporting a common set of published service configurations. ArcGIS 10.5 retains a legacy option for multiple clusters within a single site, with each cluster of machines able to support a different set of published service configurations. Multiple-cluster site configurations include an internal cluster-aware load-balancing service.
ArcGIS Server component functions
Figure 4.2 provides an overview of the key web components used to define the ArcGIS Server architecture.
ArcGIS Server key component functions.
- Web gateway: Web server software installed with the web adaptor.
- Web adaptor: Provides site-aware load distribution, reverse proxy, and failover.
- GIS server: Provides ArcGIS software for service computing.
- ConfigStore: File share that stores site service configuration parameters.
- SvrDirectories: File share that stores site directories.
- Data source: Common data source for site services.
ArcGIS Server site capabilities can be modified for improved performance and scalability.
Read-only mode (introduced with the ArcGIS 10.4 release)
Figure 4.3 shows a view of the ArcGIS Server read-only architecture. ArcGIS Server read-only mode improves site stability by reducing dependency on connection to the ConfigStore network share.
- ConfigStore cached on each local GIS Server machine.
- Service publishing and changes are not supported.
- Allows read operations in degraded mode.
When the connection to the file share is lost, the site operates in degraded mode:
- Geoprocessing services will not work.
- Consumption of cached services will not work unless they are highly available.
- Exporting a map or image service using a URL is not possible.
Optimum configuration for static read-only publishing
Read-only mode selection
Beginning at 10.4, ArcGIS Server provides a read-only mode that allows you to control changes to the site. Enabling this mode prohibits the publishing of new services and blocks most administrative operations. Your existing services function as they previously did. For example, you can still edit data in feature services when the site is in read-only mode.
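Read-only mode is toggled through the Administrator Directory. The sketch below builds the request for a mode/update operation with a siteMode parameter (READ_ONLY or EDITABLE); these names are my recollection of the 10.4+ Admin API, so confirm the exact endpoint and values in your own server's /admin documentation.

```python
# Sketch: toggling ArcGIS Server read-only mode via the Administrator
# Directory. Endpoint path and parameter names are assumptions -- verify
# against your server's /admin documentation.
def site_mode_request(admin_url, token, read_only=True):
    url = admin_url.rstrip("/") + "/mode/update"
    payload = {
        "siteMode": "READ_ONLY" if read_only else "EDITABLE",
        "runAsync": "false",
        "token": token,
        "f": "json",
    }
    return url, payload

url, payload = site_mode_request(
    "https://gisserver.example.com:6443/arcgis/admin",  # hypothetical server
    "<token>",
)
# POST the payload to the URL to switch the site into read-only mode.
```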
To find general information about SQL Server security, see the following SQL Server topics:
Security Checklists for the Database Engine
To find general information about SQL Server security, visit the following Microsoft website. (This information includes best practices, various security models, and security bulletins.)
For more information about additional antivirus considerations on a cluster, see the following article:
250355 Antivirus software may cause problems with Cluster services
For general recommendations from Microsoft for scanning on Enterprise systems, see the following article:
822158 Virus scanning recommendations for Enterprise computers that are running currently supported versions of Windows
Applications that are installed on an instance of SQL Server can load certain modules into the SQL Server process (sqlservr.exe). You can do this to achieve a specific business logic requirement or enhanced functionality or for intrusion monitoring. To detect if an unknown module or a module from a third-party software was loaded into the process's memory space, check whether any of those modules is present in the output of the sys.dm_os_loaded_modules Dynamic Management View (DMV).
For information about third-party detours or similar techniques in SQL Server, see the following article:
920925 Detours or similar techniques may cause unexpected behaviors with SQL Server
How Failover Clusters Work
While CA failover clusters are designed for 100 percent availability, HA clusters attempt 99.999 percent availability—also known as “five nines.” This downtime amounts to no more than 5.26 minutes yearly. CA clusters offer greater availability, but they require more hardware to run, which increases their overall cost.
High Availability Failover Clusters
In a high availability cluster, groups of independent servers share resources and data throughout the system. All nodes in a failover cluster have access to shared storage. High availability clusters also include a monitoring connection that servers use to check the "heartbeat" or health of the other servers. At any time, at least one of the nodes in a cluster is active, while at least one is passive.
In a simple two-node configuration, for example, if Node 1 fails, Node 2 uses the heartbeat connection to recognize the failure and then configures itself as the active node. Clustering software installed on every node in the cluster makes sure that clients connect to an active node.
Larger configurations may use dedicated servers to perform cluster management. A cluster management server constantly sends out heartbeat signals to determine if any of the nodes are failing, and if so, to direct another node to assume the load.
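The heartbeat-and-promote behavior can be modeled as a toy election rule: a node is considered failed after some threshold of consecutive missed heartbeats, and the first healthy node in a priority order becomes (or stays) active. This is a didactic sketch, not how any particular cluster manager is implemented.

```python
# Toy model of heartbeat-based failover: promote the first node whose
# missed-heartbeat count is below the failure threshold.
def elect_active(nodes, missed_beats, threshold=3):
    """nodes: priority-ordered list; missed_beats: node -> missed beats."""
    for node in nodes:
        if missed_beats.get(node, 0) < threshold:
            return node          # first healthy node becomes/stays active
    return None                  # total outage: no healthy node left

# Node 1 stops responding to heartbeats; Node 2 takes over.
active = elect_active(["node1", "node2"], {"node1": 5, "node2": 0})
```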
Some cluster management software provides HA for virtual machines (VMs) by pooling the machines and the physical servers they reside on into a cluster. If failure occurs, the VMs on the failed host are restarted on alternate hosts.
Shared storage does pose a risk as a potential single point of failure. However, the use of RAID 6 together with RAID 10 can help to ensure that service will continue even if two hard drives fail.
If all servers are plugged into the same power grid, electrical power can represent another single point of failure. The nodes can be safeguarded by equipping each with a separate uninterruptible power supply (UPS).
Continuous Availability Failover Clusters
In contrast to the HA model, a fault-tolerant cluster consists of multiple systems that share a single copy of a computer’s OS. Software commands issued by one system are also executed on the other systems.
CA requires the organization to use purpose-configured computer equipment and a secondary UPS. In a CA failover cluster, the operating system (OS) has an interface where a software programmer can check critical data at predetermined points in a transaction. CA can only be achieved by using a continuously available and nearly exact copy of a physical or virtual machine running the service. This redundancy model is called 2N.
CA systems can compensate for many different sorts of failures. A fault tolerant system can automatically detect a failure of
- A hard drive
- A computer processor unit
- An I/O subsystem
- A power supply
- A network component
The failure point can be immediately identified, and a backup component or procedure can take its place instantly without interruption in service.
Clustering software can be used to group together two or more servers to act as a single virtual server, or you can create many other CA failover setups. For example, a cluster might be configured so that if one of the virtual servers fails, the others respond by temporarily removing the virtual server from the cluster. The cluster then automatically redistributes the workload among the remaining servers until the downed server is ready to go online again.
An alternative to CA failover clusters is use of a "double" hardware server in which all physical components are duplicated. These servers perform calculations independently and simultaneously on the separate hardware systems. These "double" hardware systems perform synchronization by using a dedicated node that monitors the results coming from both physical servers. While this provides security, this option can be even more expensive than other options. Stratus, a maker of these specialized fault tolerant hardware servers, promises that system downtime won't amount to more than 32 seconds each year. However, the cost of one Stratus server with dual CPUs for each synchronized module is estimated at approximately $160,000 per synchronized module.
17.2.1. Use of Secondary File Systems
Many installations create their database clusters on file systems (volumes) other than the machine's "root" volume. If you choose to do this, it is not advisable to try to use the secondary volume's topmost directory (mount point) as the data directory. Best practice is to create a directory within the mount-point directory that is owned by the PostgreSQL user, and then create the data directory within that. This avoids permissions problems, particularly for operations such as pg_upgrade, and it also ensures clean failures if the secondary volume is taken offline.
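The layout recommendation can be captured as a simple check: the data directory must sit at least one level below the mount point, never at the mount point itself. Paths here are POSIX-style and purely illustrative.

```python
# Check that a PostgreSQL data directory is strictly inside the secondary
# volume's mount point rather than being the mount point itself.
import posixpath

def data_dir_ok(mount_point, data_dir):
    mount = posixpath.normpath(mount_point)
    data = posixpath.normpath(data_dir)
    # data_dir must be at least one level below the mount point
    return data != mount and data.startswith(mount + "/")

# Recommended: an owned intermediate directory, then the data directory.
good = data_dir_ok("/pgvol", "/pgvol/pgsql/data")   # True
bad = data_dir_ok("/pgvol", "/pgvol")               # False: mount point itself
```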
What SQL Server Clustering Can and Cannot Do
Many DBAs seem to have difficulty understanding exactly what clustering is. Following is a good working definition:
Microsoft Windows Failover Clustering is a high-availability option designed to increase the uptime of SQL Server instances. A cluster includes two or more physical servers, called nodes; identical configuration is recommended. One is identified as the active node, on which a SQL Server instance runs the production workload, and the other is a passive node, on which SQL Server is installed but not running. If the SQL Server instance on the active node fails, the passive node becomes the active node and begins to run the SQL Server production workload with some minimal failover downtime. Additionally, you can deploy a Windows Failover Cluster with both nodes active, meaning they run different SQL Server instances, and any SQL Server instance can fail over to the other node.
This definition is straightforward, but it has a lot of unclear implications, which is where many clustering misunderstandings arise. One of the best ways to more fully understand what clustering can and cannot do is to drill down into the details.
Clustering is designed to improve the availability of the physical server hardware, operating system, and SQL Server instances, but not of the shared storage. Should any of these protected components fail, the SQL Server instance fails over. The other node in the cluster automatically takes over the failed SQL Server instance to reduce downtime to a minimum.
Additionally, the use of a Windows Failover Cluster can help reduce downtime when you perform maintenance on cluster nodes. For example, if you need to update hardware on a physical server or install a new service pack on the operating system, you can do so one node at a time. To do so, follow these steps:
1. First, you upgrade the passive node that is not running a SQL Server instance.
2. Next, manually failover from the active node to the now upgraded node, which becomes the active node.
3. Then upgrade the currently passive node.
4. After it is upgraded, if you choose, you can fail back to the original node. This cluster feature helps to reduce the overall downtime caused by upgrades.
When running an upgrade, ensure that you do not manually fail over to a node that has not been upgraded, because that would cause instability since its binaries would not have been updated.
A Windows 2003 Failover Cluster cannot be upgraded to a Windows 2008 Failover Cluster because architecturally the two versions are different. Instead, create a Windows 2008 Failover Cluster and migrate the databases.
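The four-step rolling upgrade above reduces to one ordering rule: always patch the passive node first, then fail over, then patch the remaining node. A sketch, with hypothetical node names:

```python
# Encode the rolling-upgrade order for a two-node failover cluster:
# passive node first, then fail over, then the (now passive) original node.
def rolling_upgrade_plan(active, passive):
    return [
        f"upgrade {passive} (passive, no SQL Server instance running)",
        f"fail over: {active} -> {passive}",
        f"upgrade {active} (now passive)",
        f"optionally fail back: {passive} -> {active}",
    ]

plan = rolling_upgrade_plan("NODE-A", "NODE-B")
```

The ordering also enforces the warning above: a failover never lands on a node that has not yet been upgraded.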
What Clustering Cannot Do
The list of what clustering cannot do is much longer than the list of what it can do, and this is where the misunderstandings start for many people. Clustering is just one part of many important and required pieces in a puzzle to ensure high availability. Other aspects of high availability, such as ensuring redundancy in all hardware components, are just as important. Without hardware redundancy, the most sophisticated cluster solution in the world can fail. If all the pieces of that puzzle are not in place, spending a lot of money on clustering may not be a good investment. The section “Getting Prepared for Clustering” discusses this in further detail.
Some DBAs believe that clustering can reduce downtime to zero. This is not the case; clustering can mitigate downtime, but it can't eliminate it. For example, the failover itself causes an outage lasting from seconds to a few minutes while the SQL Server services are stopped on one node and started on the other, and database recovery is performed.
Nor is clustering designed to intrinsically protect data, as the shared storage is a single point of failure in clustering. This comes as a great surprise to many DBAs. Data must be protected using other options, such as backups, log shipping, or disk mirroring. In actuality, the same database drives are shared (though never mounted simultaneously) by all servers in the cluster, so corruption in one would carry over to the others.
Clustering is not a solution for load balancing either. Load balancing is when many servers act as one, spreading your load across several servers simultaneously. Many DBAs, especially those who work for large commercial websites, may think that clustering provides load balancing between the cluster nodes. This is not the case; clustering helps improve only the uptime of SQL Server instances. If you need load balancing, then you must look for a different solution. A possibility might be Peer-to-Peer Transactional Replication.
Clustering requires the Enterprise or Datacenter edition of the Windows operating system and the Standard, Enterprise, or BI edition of SQL Server. These can get expensive, and many organizations may not be able to justify the expense. Clustering is usually deployed within the confines of a data center but can be used over geographic distances (geoclusters). To implement a geocluster, work with your storage vendor to synchronize the disk arrays across the geographic distance. SQL Server 2012 also supports another option, multi-site clustering across subnets; the same-subnet restriction was eliminated with that release.
Clustering requires experienced DBAs to be highly trained in hardware and software, and DBAs with clustering experience command higher salaries.
Although SQL Server is cluster-aware, not all client applications that use SQL Server are. For example, even if the failover of a SQL Server instance is relatively seamless, a client application may not have reconnect logic. Applications without reconnect logic require users to exit and restart the client application after the SQL Server instance has failed over, and users may lose any data displayed on their current screen.
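What "reconnect logic" means in practice is a retry loop around the connection call, with a capped backoff that rides out the failover window. The connect() callable below stands in for whatever driver call your client actually uses; this is a sketch, not a drop-in component.

```python
# Retry a connection attempt with exponential backoff, so a client can
# survive the brief outage while a SQL Server instance fails over.
import time

def connect_with_retry(connect, attempts=5, base_delay=1.0, sleep=time.sleep):
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                     # give up after the last attempt
            # exponential backoff, capped at 30 seconds
            sleep(min(base_delay * 2 ** attempt, 30.0))

# Simulated failover: the first two attempts fail, the third succeeds.
state = {"calls": 0}
def fake_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("instance failing over")
    return "connected"

result = connect_with_retry(fake_connect, sleep=lambda s: None)
```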
Choosing SQL Server 2012 Clustering for the Right Reasons
When it comes right down to it, the reason for a clustered SQL Server is to improve the high availability of whole SQL Server instances, which includes all user/system databases, logins, and SQL Agent jobs. But this justification makes sense only if the following are true:
- You have experienced DBA staff to install, configure, and administer a clustered SQL Server.
- The cost (and pain) resulting from downtime is more than the cost of purchasing the cluster hardware and software and maintaining it over time.
- You have in place the capability to protect your storage redundancy. Remember that clusters don’t protect data.
- For a geographically dispersed cluster across remote data centers, you have a Microsoft certified third-party hardware and software solution.
- You have in place all the necessary peripherals required to support a highly available server environment (for example, backup power and so on).
If all these things are true, your organization is a good candidate for installing a clustered SQL Server, and you should proceed. If your organization doesn't meet these criteria and you are not willing to implement them, you would probably be better served by an alternative high-availability option, such as one of those discussed next.
Maptitude for Redistricting Software
Maptitude for Redistricting is now available as an ArcGIS extension. As a result, you have access to the redistricting functionality and ease-of-use of Maptitude for Redistricting while leveraging your in-house expertise in ArcGIS software and all of your existing Oracle, SQL Server, DB2, and other data through the ArcGIS DBMS support.
With the ArcGIS redistricting extension, you can integrate online content from Esri and Microsoft's Virtual Earth to create more informative maps for building districts and more professional maps for printing. You can also export your plans to Google Earth.
The ArcGIS redistricting extension features a Plan Manager that serves as your redistricting control center. Use it to organize plans, create any number of plan types (e.g., congressional, state house, state senate, and school district), and save them as Plan Templates. To create a plan, simply choose a template or an existing plan and enter a new name. You can organize plans in libraries by plan type, user, security access, etc., and you can locate, copy, change settings, and password-protect plans.
The ArcGIS redistricting extension includes a custom menu and toolbox that let you:
- Create a new plan from a map or template. You only have to enter the settings once. From then on, you can create a new plan by picking the appropriate template or existing plan.
- Import and merge plans created with the extension or other redistricting software.
- Designate the control field, number of districts, ideal value, and summary fields.
- Include any number of data fields in the same plan; a one-button toggle between field sets lets you display only the data of interest at any particular time.
- Assign both an ID and a long name to districts, rename districts, lock districts, and manage multiple districts.
- Add areas from any geographic layer to a district by clicking, by dragging a box, by circle, by polygon, or by line. You can select from the entire jurisdiction or limit the selection to unassigned areas or one district. As you add areas to a district, Maptitude for Redistricting redraws the district boundaries and updates the control and summary fields to reflect changes to the current plan.
- Capture the current status of a plan as a snapshot. Each plan can have one or more snapshots organized by date and time under the same plan name. Return to any snapshot, and use it as a departure point in the evolution of the plan or as the starting point for a new plan.
- Find unassigned areas and noncontiguous districts. Toolboxes make it easy to zoom to and fix problems.
- Identify communities of interest. Keep them intact within the same district, and lock them so that you cannot accidentally reassign them to different districts. Alternatively, for communities that you do split into multiple districts, run the Communities of Interest reports to calculate the total and percent population of the community in each district.
- Compute measures of compactness to assess or defend the districts in a plan. Maptitude for Redistricting computes all of the recognized measures of geographic compactness including the Reock, Schwartzberg, Perimeter, Polsby-Popper, Length-Width, Population Polygon, Population Circle, and Ehrenburg metrics.
- Export to standard equivalency file formats that can be read by other redistricting software and the Department of Justice.
- Generate and print over 35 reports including population summary, error check, political subdivision splits, incumbents, plan statistics, plan components (bill language), plan comparison, communities of interest, measures of compactness, and more. Create custom reports and add them to the report menu.
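Several of the compactness measures named above are standard, published formulas. As a minimal illustration (not Caliper's implementation), here is a Python sketch of two of them, Polsby-Popper and Schwartzberg, assuming a district is a simple polygon given as a list of (x, y) coordinates:

```python
import math

def polygon_area_perimeter(pts):
    """Area via the shoelace formula; perimeter as the sum of edge lengths."""
    area = 0.0
    perim = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def polsby_popper(pts):
    """4*pi*A / P^2 -- equals 1.0 for a circle, approaches 0 for contorted shapes."""
    a, p = polygon_area_perimeter(pts)
    return 4.0 * math.pi * a / (p * p)

def schwartzberg(pts):
    """Ratio of the district perimeter to the circumference of the
    equal-area circle -- 1.0 is most compact, larger is less compact."""
    a, p = polygon_area_perimeter(pts)
    return p / (2.0 * math.pi * math.sqrt(a / math.pi))

# A unit square: area 1, perimeter 4.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(round(polsby_popper(square), 4))  # pi/4 ~= 0.7854
print(round(schwartzberg(square), 4))   # 2/sqrt(pi) ~= 1.1284
```

Real redistricting software works on projected geographic coordinates and handles multi-part polygons; this sketch only shows the arithmetic behind the scores.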
The Maptitude for Redistricting Extension for ArcGIS is compatible with ArcGIS 9 and newer.
Easy to Learn and Use
Maptitude for Redistricting Extension for ArcGIS includes online help and detailed manuals packed with step-by-step instructions and tutorials. The Plan Manager leads you through the process of creating your first plan and speeds the creation of all additional plans. Caliper also offers classroom training at your site or at our headquarters in Newton, Massachusetts.
Complete Organizational Solution
With the Maptitude for Redistricting Extension for ArcGIS Plan Manager you can easily manage an unlimited number of plans stored on a computer network. You can quickly find a plan by type, creator, date, key word, etc. The Plan Manager lets you distribute plans for viewing and perform plan management functions across your organization. Data and plans can be stored on individual machines or on one central server with access controlled by both the extension and the network administration software.
For more information on the Maptitude for Redistricting ArcGIS Extension, please download the brochure or contact Caliper Corporation (email: [email protected] or call +1 617-527-4700).
Other Redistricting Services
Caliper provides database development, classroom and web-based training, software customization, web design, telephone support, on-site support, priority support, and other related consulting services on a time and materials basis.
Caliper Corporation is a leading developer of transportation and Geographic Information System (GIS) software and applications. Caliper software is used by more than 70,000 users in over 70 countries.
Efficiency Gap and other partisan competitiveness reports
Check out the district measures and reports that have always been included with Maptitude for Redistricting!
Unable to add new 2019 HyperV node to 2012R2 HyperV Cluster
I could really use some help here. I evicted a failed Hyper-V node from our two-node 2012 R2 Hyper-V cluster. The node has been rebuilt with Server 2019 and I am trying to rejoin it to the cluster.
NOTE1: The host was rebuilt using the same node name.
NOTE2: The network adaptor layout is slightly different; however, communication is working from all adaptors.
When I try to add the node I get the following error:
'Cluster service on node SERVER2 did not reach the running state. The error code is 0x5b4. For more information check the cluster log and the system event log from node SERVER2. This operation returned because the timeout period expired.'
In the FailoverClustering event log i see:
cxl::ConnectWorker::operator (): HrError(0x00000002)' because of '[SV] Schannel Authentication or Authorization Failed'
mscs_security::SchannelSecurityContext::AuthenticateAndAuthorize: (2)' because of 'Cluster certificate doesn't exist'
I did a search for the above errors however none of the below has worked:
- Confirmed I am able to ping the hosts via the management, production, cluster communication, and live migration network interfaces.
- Confirmed iSCSI could connect to CSV volumes
- Ensured IPV6 is enabled
- Ensured the DHCP Client service is running
- Ensured the system could access the QUORUM witness (able to write to the directory)
- Checked 'Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' for 'DisabledComponents'
- Did a search in AD for a duplicate computer account hostname matching the node to be added.
- Tried re-adding the node at least 12 times
- Checked the registry path 'Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography' to ensure the Crypto RSA machine key within 'C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys' matched the GUID.
I am out of ideas; the only unique thing here is that I am trying to add a 2019 Hyper-V node to a 2012 R2 Hyper-V cluster.
- One of the most challenging factors in troubleshooting network traffic is the volume of requests that are sent and received for even minor actions. Below are strategies to help avoid cluttering your log with unnecessary requests.