
Syntax error running script in PythonWin after success in ArcPy


I am building a script to automate a water outage map (I'm new to Python). In the ArcGIS Desktop 10.1 Python window the script runs fine. However, in PythonWin and IDLE I get syntax errors (PythonWin reads: Failed to run script - syntax error - invalid syntax). The goal is to use Task Scheduler to run the script outside of ArcGIS.

import arcpy

mxd = arcpy.mapping.MapDocument(r"X:\Mikes_Workspaces\Outage\Outage.mxd")
df = arcpy.mapping.ListDataFrames(mxd, "Layers")[0]

# Script arguments
OutCurrent = arcpy.GetParameterAsText(0)
if OutCurrent == '#' or not OutCurrent:
    OutCurrent = r"X:\Geodatabases\WebData\Water_Service.gdb\OutCurrent"  # provide a default value if unspecified

# Local variables:
Service_Group = "Service_Group"
Update_ = r"X:\Mikes_Workspaces\Outage\Outage_Board.xls\Update$"
Group_Out = "Service_Group"

# Process: Add Join
arcpy.AddJoin_management(Service_Group, "Group_", Update_, "Group_Out", "KEEP_COMMON")

# Process: Copy Features
arcpy.CopyFeatures_management(Group_Out, OutCurrent, "", "0", "0", "0")

# Process: Symbology
arcpy.ApplySymbologyFromLayer_management("OutCurrent", r"X:\Mikes_Workspaces\Online Shapefiles\Outage_today.lyr")

# Process: Remove Join
arcpy.RemoveJoin_management(Service_Group, "")

mxd.save()
arcpy.RefreshActiveView()

It appears your Service_Group and Group_Out are just string variables and not actual GIS layers. You are not setting an environment workspace, so the various methods that reference these variables will fail. Try setting the environment workspace before the layer variables are assigned.
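
A minimal sketch of that idea, reusing the paths and field names from the question (they are assumptions here), with arcpy.MakeFeatureLayer_management creating a real layer that AddJoin can operate on:

import arcpy

# Assumed workspace and names taken from the question; adjust to your data.
arcpy.env.workspace = r"X:\Geodatabases\WebData\Water_Service.gdb"

# Create an actual layer object from the feature class before joining.
arcpy.MakeFeatureLayer_management("Service_Group", "Service_Group_lyr")
arcpy.AddJoin_management("Service_Group_lyr", "Group_",
                         r"X:\Mikes_Workspaces\Outage\Outage_Board.xls\Update$",
                         "Group_Out", "KEEP_COMMON")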


arcpy.GetParameterAsText is for accessing parameters in Script Tools.

If you're going to run this outside of ArcGIS, you'll need another way to pass parameters in a command line, such as sys.argv.
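
A minimal sketch of the difference, reusing the default output path from the question (the script file name is illustrative):

import sys

# Run as: python outage_script.py X:\SomeWorkspace\OutCurrent
# sys.argv[0] is the script name, sys.argv[1] the first argument.
if len(sys.argv) > 1:
    OutCurrent = sys.argv[1]
else:
    OutCurrent = r"X:\Geodatabases\WebData\Water_Service.gdb\OutCurrent"  # default value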

Have a look at this page for some example uses: http://www.tutorialspoint.com/python/python_command_line_arguments.htm

Here's more information on the sys module: https://docs.python.org/2/library/sys.html


Check your code at http://pep8online.com/. It will find things like line 19, where "# Process: Symbology" is indented and Python doesn't like that.

I often use print statements to see progress and variable values. However, if it already runs in the ArcGIS Python window, your logic should be OK; IDLE and PythonWin are likely failing on the bad syntax and formatting errors. That website should help clean it up.


The beforeSend property should be set to function() { textreplace(description) } instead of textreplace(description). The beforeSend property needs a function.

One also may use the following to catch the errors:

I was having the same issue and fixed it by simply adding a dataType = "text" line to my ajax call. Make the dataType match the response you expect to get back from the server (your "insert successful" or "something went wrong" error message).

You can implement error-specific logic as follows:

This may be an old post, but I realized there is nothing being returned from the PHP, and your success function does not take an input, like this: success: function(e) {}. I hope that helps you.

This may not solve all of your problems, but the variable you are using inside your function (text) is not the same as the parameter you are passing in (x).

seems like it would do some good.

You are sending a POST type with data implemented for a GET. Your form must be the following:


I use the following to enable history in the Python shell.

This is my .pythonstartup file. The PYTHONSTARTUP environment variable is set to this file's path.

You will need the readline and rlcompleter modules to enable this.
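
A minimal .pythonstartup sketch along these lines (the history file location is an assumption):

# .pythonstartup: enable tab completion and persistent history in the interactive shell
import atexit
import os
import readline
import rlcompleter

readline.parse_and_bind("tab: complete")

histfile = os.path.expanduser("~/.python_history")  # assumed history file
try:
    readline.read_history_file(histfile)
except IOError:
    pass
atexit.register(readline.write_history_file, histfile)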

In IDLE, go to Options -> Configure IDLE -> Keys and there select history-next and then history-previous to change the keys.

Then click on Get New Keys for Selection and you are ready to choose whatever key combination you want.

Alt + p for the previous command from history, Alt + n for the next command from history.

This is the default configuration, and you can change these key shortcuts to your preference from Options -> Configure IDLE.

You didn't specify which environment. Assuming you are using IDLE.

From IDLE documentation: Command history:

Ctrl+p is the normal alternative to the up arrow. Make sure you have gnu readline enabled in your Python build.

ALT + p works for me on Enthought Python in Windows.

On Ubuntu Server 12.04, I had this problem after installing a version of Python from source (Python3.4).

Some of the comments here recommend installing Ipython and I want to mention that I have the same behavior even with Ipython. From what I can tell, this is a readline problem.

For Ubuntu 12.04 server, I had to install libncurses-dev and libreadline-dev and then install Python from source for up-history (readline) behavior to be enabled. I pretty much did this:

After that, I deleted the previously installed Python (NOT THE SYSTEM PYTHON, the one I had installed from source!) and reinstalled it from source and everything worked as expected.

I did not have to install anything with pip or edit .pythonstartup.

By default, Alt + p recalls the previous command. You can change it to the up arrow instead in the IDLE GUI >> Options >> Configure IDLE >> Keys >> Custom Key Bindings. It is not necessary to run a custom script; besides, the readline module doesn't run on Windows. Hope that helps. :)

and then recompile python 3.4.

On OpenSUSE, I fix this by

Referring to this answer:https://stackoverflow.com/a/26356378/2817654. Perhaps "pip3 install readline" is a general solution. Haven't tried on my CentOS.

I found the information copied below, which answers the question:

Adapt yourself to IDLE: Instead of hitting the up arrow to bring back a previous command, if you just put your cursor on the previous command you want to repeat and then press "enter", that command will be repeated at the current command prompt. Press enter again, and the command gets executed.

Force IDLE to adapt itself to you: If you insist on making the arrow keys in the IDLE command prompt window work like those in every other command prompt, you can do this. Go to the "Options" menu, select "Configure IDLE", and then "Keys". Change the keys associated with the "previous command" and "next command" actions to the up arrow and down arrow, respectively.



Review these topics to understand and to use the Pre-Upgrade information tool ( preupgrade.jar ).

The Pre-Upgrade information tool is a legacy utility. Oracle recommends that you use the AutoUpgrade utility instead.


    To determine if your system is ready for upgrading, you can use the legacy Pre-Upgrade Information Tool ( preupgrade.jar )
    You can run preupgrade scripts that the Pre-Upgrade Information Tool generates to fix many issues before you upgrade to the new Oracle Database release.
    After the upgrade, you can run the postupgrade scripts that the Pre-Upgrade Information Tool generates to complete fixups of your upgrade target database.
    Before you run the Pre-Upgrade Information Tool, set up the user environment variables for the Oracle user that runs the tool.
    Use Pre-Upgrade Information Tool ( preupgrade.jar ) commands to check your system before upgrades.
    The Pre-Upgrade Information Tool ( preupgrade.jar ) creates fixup scripts and log files in the output directory that you specify with the DIR command-line option.
    In this example, you can see how the Pre-Upgrade Information Tool displays recommended fixes, but does not carry out fixes automatically.
    Analyze any Pre-Upgrade Information Tool warnings before you upgrade to the new release of Oracle Database. For each item that the tool reports, it provides you with information about how to fix the issue or warning.

About the Pre-Upgrade Information Tool

To determine if your system is ready for upgrading, you can use the legacy Pre-Upgrade Information Tool ( preupgrade.jar )

To help to ensure a successful upgrade, Oracle strongly recommends that you run the AutoUpgrade Utility in Analyze processing mode, and to use the AutoUpgrade Utility Fixup, and then complete your upgrade with the method that you prefer. However, you can continue to perform these tasks by using the Pre-Upgrade Information Tool before you begin your upgrade.

If you use the legacy Pre-Upgrade Information tool, then after the upgrade is complete, you can use the postupgrade scripts that it generates to help you fix any issues that the tool discovers. To obtain the latest updates, Oracle recommends that you download the most recent version of the tool from My Oracle Support Note 884522.1.

You can run the tool from the operating system command line. The Pre-Upgrade Information Tool creates preupgrade scripts, which fix some issues before you start an upgrade, and postupgrade scripts, which fix some issues after an upgrade is completed.

The Pre-Upgrade Information Tool ( preupgrade.jar ) creates the following files:

The log file preupgrade.log .

The log file contains the output of the Pre-Upgrade Information Tool.

The preupgrade_fixups_pdbname.sql script (for PDBs, where pdbname is the name of the PDB).

Before you run the upgrade, you can run the preupgrade fixups script manually in SQL*Plus to resolve many of the issues identified by the preupgrade tool.

The postupgrade_fixups_pdbname.sql script (for PDBs, where pdbname is the name of the PDB) or the postupgrade_fixups.sql script (for Non-CDB databases).

You can run this script to fix issues after the database upgrade is completed.

Related Topics

Preupgrade Scripts Generated By the Pre-Upgrade Information Tool

You can run preupgrade scripts that the Pre-Upgrade Information Tool generates to fix many issues before you upgrade to the new Oracle Database release.

The location of the preupgrade_fixups.sql and log files depends on how you set output folders, or define the Oracle base environment variable.

If you specify an output directory by using the dir option with the Pre-Upgrade Information Tool, then the output logs and files are placed under that directory in the file path /cfgtoollogs/dbunique_name/preupgrade, where dbunique_name is the name of your source Oracle Database. If you do not specify an output directory when you run the Pre-Upgrade Information Tool, then the output is directed to one of the following default locations:

If you do not specify an output directory with DIR , but you have set an Oracle base environment variable, then the generated scripts and log files are created in the following file path:

Oracle-base/cfgtoollogs/dbunique_name/preupgrade

If you do not specify an output directory, and you have not defined an Oracle base environment variable, then the generated scripts and log files are created in the following file path:

Oracle-home/cfgtoollogs/dbunique_name/preupgrade

The fixup scripts that the Pre-Upgrade Information Tool creates depend on whether your source database is a Non-CDB database, or a CDB database:

A log file ( preupgrade.log ).

The log file contains log output for the Pre-Upgrade Information Tool.

Pre-upgrade fixups SQL scripts, depending on your source database type:

CDB : Two different sets of scripts:

preupgrade_fixups.sql : A consolidated script for all PDBs.

Multiple preupgrade_fixups_pdbname.sql scripts, where pdbname is the name of the PDB for which a script is generated: Individual scripts, which you run on specific PDBs.

Run the scripts either by using catcon.pl , or by using SQL*Plus commands. You must run these scripts to fix issues before you start the database upgrade. The scripts resolve many of the issues identified by the preupgrade tool.

Each issue that the scripts identify includes a description of the problem, and a task that you can carry out to resolve the problem. The preupgrade tool itself does not make changes to your database to correct errors. However, you can run the scripts that it generates to correct identified errors. The scripts fix only those issues that an automated script can fix safely. Preupgrade issues that the automated script cannot fix safely typically require DBA knowledge of user applications. You can address those issues manually.

Postupgrade Scripts Generated By the Pre-Upgrade Information Tool

After the upgrade, you can run the postupgrade scripts that the Pre-Upgrade Information Tool generates to complete fixups of your upgrade target database.

The Pre-Upgrade Information Tool generates postupgrade fixup scripts, which you can run after the upgrade to fix issues that can be fixed after the upgrade.

The location of the postupgrade SQL scripts and log files depends on how you set output folders, or define the Oracle base environment variable. The postupgrade fixup scripts are placed in the same directory path as the preupgrade fixup scripts.

If you specify an output directory by using the dir option with the Pre-Upgrade Information Tool, then the output logs and files are placed under that directory in the file path /cfgtoollogs/dbunique_name/preupgrade, where dbunique_name is the name of your source Oracle Database. If you do not specify an output directory when you run the Pre-Upgrade Information Tool, then the output is directed to one of the following default locations:

If you do not specify an output directory with DIR , but you have set an Oracle base environment variable, then the generated scripts and log files are created in the following file path:

Oracle-base/cfgtoollogs/dbunique_name/preupgrade

If you do not specify an output directory, and you have not defined an Oracle base environment variable, then the generated scripts and log files are created in the following file path:

Oracle-home/cfgtoollogs/dbunique_name/preupgrade

The postupgrade fixup scripts that the Pre-Upgrade Information Tool creates depend on whether your source database is a Non-CDB database, or a CDB database:

CDB : Two different sets of scripts:

postupgrade_fixups.sql : A consolidated script for all PDBs

Multiple postupgrade_fixups_pdbname.sql scripts, where pdbname is the name of the PDB for which a script is generated: Individual scripts, which you run on specific PDBs.

Postupgrade issues that the automatic script cannot fix safely typically require DBA knowledge of user applications. You can address those issues manually.

Guidelines for Running Postupgrade Fixup Scripts for Non-CDB Databases

Oracle recommends that when you run the postupgrade scripts, you set the system to spool results to a log file so you can read the output. However, do not spool results to the admin directory:

After you run postupgrade scripts, you can run the Post-Upgrade Status Tool to check the status of your server.

Related Topics

Setting Up Environment Variables for the Pre-Upgrade Information Tool

Before you run the Pre-Upgrade Information Tool, set up the user environment variables for the Oracle user that runs the tool.

You must set up the user environment variables for the Pre-Upgrade Information Tool. This example shows how to use shell commands to set up user environment variables to point to an earlier release Oracle home. For multitenant architecture upgrades, you must also open up all the PDBs that you want the tool to analyze.

In this example, the operating system is Linux or Unix, the system identifier is sales01 , and the earlier release Oracle home path is /u01/app/oracle/product/12.1.0/dbhome_1

  1. Log in as the Oracle installation owner ( oracle ).
  2. Set up the user environment variables to point to the earlier release Oracle home that you want to upgrade.

Pre-Upgrade Information Tool (preupgrade.jar) Command

Use Pre-Upgrade Information Tool ( preupgrade.jar ) commands to check your system before upgrades.

The Pre-Upgrade Information Tool is in the new release Oracle home, in the file path ORACLE_HOME/rdbms/admin/preupgrade.jar . Oracle has configured it with the system checks necessary for the new Oracle Database release. However, the checks that the tool performs are carried out on the earlier release Oracle Database home. Set up the Oracle user environment variables so that they point to the earlier release Oracle home.

Run the Pre-Upgrade Information Tool by using the Java version in your earlier release Oracle home. For multitenant architecture (CDB and PDB) upgrades, open up all the PDBs that you want the tool to analyze before you run the tool.

Set the environment variables for your user account to point to the earlier release ORACLE_HOME, ORACLE_BASE, and ORACLE_SID.

The preupgrade.jar file is located in the new Oracle home:

You can also copy the preupgrade.jar binaries to a path of your choosing. For example:

Script output location. Use FILE to direct script output to a file. Use TERMINAL to direct output to the terminal. If you do not specify a value, then the default is FILE . If you specify TERMINAL , then screen output is directed to the display, and scripts and logs are placed in the output directory path.

Output type. Use XML to specify XML output. If you do not specify an output type, then the default is TEXT .

Directs the output to a specific directory. If you do not specify an output directory with the DIR option, then the output is directed to one of the following default locations:

If you do not specify an output directory with DIR, but you define an ORACLE_BASE environment variable, then the generated scripts and log files are created in the following path:

ORACLE_BASE/cfgtoollogs/dbunique_name/preupgrade

If you do not specify an output directory, and ORACLE_BASE is not defined, then the generated scripts and log files are created in the following path:

ORACLE_HOME/cfgtoollogs/dbunique_name/preupgrade

-c 'pdb1 pdb2 pdb3' (Linux and UNIX)

Specifies a list of containers inside a CDB that you want to include for processing (an allow list). Provide a space-delimited list of PDBs that you want processed. To specify the list, use single quotes on Linux and UNIX operating systems, and use double quotes on Windows systems.

If you do not specify either -c or -C , then all PDBs in a CDB are processed.

-C 'pdb1 pdb2 pdb3' (Linux and UNIX)

Specifies a list of containers inside a CDB that you want to exclude from processing (a block list). Provide a space-delimited list of PDBs that you want to exclude from processing. To specify the list, use single quotes on Linux and UNIX operating systems, and use double quotes on Windows systems.

If you do not specify either -c or -C , then all PDBs in a CDB are processed.

Loads the DBMS_PREUP package into the database when it is in READ WRITE mode, without carrying out any other action.

You can use this parameter to prepare a given Non-CDB or CDB database so that the DBMS_PREUP package is loaded when you run the Pre-Upgrade Information Tool, and the DB (DB or Container) is in READ-ONLY mode. If you want to use the tool to analyze a database in read-only mode, then you must use this command to load the DBMS_PREUP package into the database while it is in READ WRITE mode, before you set it to READ-ONLY mode.

You can also use this parameter with a Read-Only Standby database, in which you load the package in the primary database, and run the package in the standby.

Provides the password for the user.

If you do not use operating system authentication to connect to the database, then use the -p option to specify a password on the command line. If a username is specified on the command line with -u , but no password specified with -p , then the tool prompts you for a password.

Provides the user name of the user that you want to use to connect as SYSDBA to the database that you want to check. Use this option only if you do not use operating system authentication to connect to the database

For example, suppose you log in as a user that is not a member of the OSDBA group for the database that you want to check. In that case, the user account does not have operating system authentication privileges for the SYSDBA system privilege. Use the -u and -p options to provide data dictionary authentication to log in as a user with SYSDBA system privileges.

Specifies an Oracle home that you want to check. Provide the path of the Oracle home that you want to check.

If you do not specify an Oracle home path to check, then the Pre-Upgrade Information Tool defaults to the path specified by the user environment variable for the Oracle home. That variable is $ORACLE_HOME on Linux and Unix systems, and %ORACLE_HOME% on Windows systems.

Specifies an Oracle system identifier that you want to check. Provide the ORACLE_SID of the database that you want to check.

Displays the command-line syntax help text.

Example A-1 Non-CDB In the Source Oracle Home Example

Set your user environment variables to point to the earlier release Oracle home.

Run the new release Oracle Database Pre-Upgrade Information Tool on the earlier release Oracle Database server using the environment settings you have set to the earlier release Oracle home. For example:

Example A-2 CDB in a Source Oracle Home

Open all the pluggable databases

Set your user environment variables to point to the earlier release Oracle home.

Run the Pre-Upgrade Information Tool with an inclusion list, using the -c option. In this example, the inclusion list is PDB1 and PDB2, and the command is run on a Linux or UNIX system. The output of the command is displayed to the terminal, and the output is displayed as text.

Output of the Pre-Upgrade Information Tool

The Pre-Upgrade Information Tool ( preupgrade.jar ) creates fixup scripts and log files in the output directory that you specify with the DIR command-line option.

When you run the Pre-Upgrade Information Tool, it generates the following files inside the directory that you specify as the output directory.

The file preupgrade.log is the report that the Pre-Upgrade Information Tool generates whenever you run the command with the FILE option. The log file contains all the tool recommendations and requirements for upgrade. The log file is located in the following path, where timestamp is the date and time when the command is run: $ORACLE_BASE/cfgtoollogs/dbua/upgrade timestamp /SID/ . If you run the command with the TERMINAL option, then the content of this file is output to the display. Refer to the section "Pre-Upgrade Information Tool Output Example" for an example of a log file.

If you specify XML file output on the Pre-Upgrade Information Tool command line, then it generates the upgrade.xml file instead of preupgrade.log .

Preupgrade Fixup File (preupgrade_fixups.sql) and Postupgrade Fixup File (postupgrade_fixups.sql)

The Pre-Upgrade Information Tool identifies issues that can block or hinder an upgrade.

Some issues require a DBA to resolve, because it is not possible for the automated script to understand the specific goals of your application. However, other issues do not present any difficulty in resolving. In these cases, the Pre-Upgrade Information Tool automatically generates scripts that contain the SQL statements necessary to resolve the issues. Using these scripts can simplify and help track the work that DBAs must do to resolve potential upgrade issues. The SQL statements that resolve issues before upgrade are placed in the preupgrade_fixups.sql script. The SQL statements that resolve issues after upgrade are placed in the postupgrade_fixups.sql script. When you run the Pre-Upgrade Information Tool on a multitenant architecture Oracle Database, you can run the consolidated preupgrade_fixups.sql and postupgrade_fixups.sql scripts across all the containers. Run the consolidated scripts using catcon.pl.

Both of these fixup files are generated in the output directory that you specify with the Pre-Upgrade Information Tool DIR command-line option.

The script carries out the following steps to resolve pre-upgrade or post-upgrade issues:

For each issue that the Pre-Upgrade Information Tool identifies, it reruns the individual Pre-Upgrade Information Tool check again, to determine if the issue is still present.

If there is an Oracle-supplied fixup routine for that issue, then the script executes that fixup routine. It then reruns the Pre-Upgrade Information Tool check again, to confirm that the issue is resolved. If the issue is resolved, then the script displays a message that the issue is resolved.

If there is no Oracle-supplied fixup routine, then the script displays a message that the issue is still present.


Troubleshooting topics

Make sure certificates are set up correctly

Docker Desktop ignores certificates listed under insecure registries, and does not send client certificates to them. Commands like docker run that attempt to pull from the registry produce error messages on the command line, like this:

As well as on the registry. For example:

For more about using client and server side certificates, see How do I add custom CA certificates? and How do I add client certificates? in the Getting Started topic.

Volumes

Permissions errors on data directories for shared volumes

When sharing files from Windows, Docker Desktop sets permissions on shared volumes to a default value of 0777 ( read , write , execute permissions for user and for group ).

The default permissions on shared volumes are not configurable. If you are working with applications that require permissions different from the shared volume defaults at container runtime, you need to either use non-host-mounted volumes or find a way to make the applications work with the default file permissions.

Volume mounting requires shared folders for Linux containers

If you are using mounted volumes and get runtime errors indicating an application file is not found, access is denied to a volume mount, or a service cannot start, such as when using Docker Compose, you might need to enable shared folders.

With the Hyper-V backend, mounting files from Windows requires shared folders for Linux containers. Click the Docker icon, then Settings > Shared Folders, and share the folder that contains the Dockerfile and volume.

Support for symlinks

Symlinks work within and across containers. To learn more, see How do symlinks work on Windows? in the FAQs.

Avoid unexpected syntax errors, use Unix style line endings for files in containers

Any file destined to run inside a container must use Unix style line endings. This includes files referenced at the command line for builds and in RUN commands in Docker files.

Docker containers and docker build run in a Unix environment, so files in containers must use Unix style line endings: \n, not Windows style: \r\n. Keep this in mind when authoring files such as shell scripts using Windows tools, where the default is likely to be Windows style line endings. These commands ultimately get passed to Unix commands inside a Unix based container (for example, a shell script passed to /bin/sh). If Windows style line endings are used, docker run fails with syntax errors.

For an example of this issue and the resolution, see this issue on GitHub: Docker RUN fails to execute shell script.
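
If a script was authored with Windows tools, one quick way to normalize it before building the image is a small Python snippet like this (the file name is illustrative):

from pathlib import Path

# Convert CRLF line endings to LF so the script runs cleanly inside a Linux container.
script = Path("entrypoint.sh")
script.write_bytes(script.read_bytes().replace(b"\r\n", b"\n"))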

Virtualization

Your machine must have the following features for Docker Desktop to function correctly.

WSL 2 and Windows Home

  1. Virtual Machine Platform
  2. Virtualization enabled in the BIOS
  3. Hypervisor enabled at Windows startup

Hyper-V

On Windows 10 Pro or Enterprise, you can also use Hyper-V with the following features enabled:

  1. Hyper-V installed and working
  2. Virtualization enabled in the BIOS
  3. Hypervisor enabled at Windows startup

Docker Desktop requires Hyper-V as well as the Hyper-V Module for Windows Powershell to be installed and enabled. The Docker Desktop installer enables it for you.

Docker Desktop also needs two CPU hardware features to use Hyper-V: Virtualization and Second Level Address Translation (SLAT), which is also called Rapid Virtualization Indexing (RVI). On some systems, Virtualization must be enabled in the BIOS. The steps required are vendor-specific, but typically the BIOS option is called Virtualization Technology (VTx) or something similar. Run the command systeminfo to check all required Hyper-V features. See Pre-requisites for Hyper-V on Windows 10 for more details.

To install Hyper-V manually, see Install Hyper-V on Windows 10. A reboot is required after installation. If you install Hyper-V without rebooting, Docker Desktop does not work correctly.

From the start menu, type Turn Windows features on or off and press enter. In the subsequent screen, verify that Hyper-V is enabled.

Virtualization must be enabled

In addition to Hyper-V or WSL 2, virtualization must be enabled. Check the Performance tab on the Task Manager:

If you manually uninstall Hyper-V, WSL 2 or disable virtualization, Docker Desktop cannot start. See Unable to run Docker for Windows on Windows 10 Enterprise.

Hypervisor enabled at Windows startup

If you have completed the steps described above and are still experiencing Docker Desktop startup issues, this could be because the Hypervisor is installed, but not launched during Windows startup. Some tools (such as older versions of Virtual Box) and video game installers disable hypervisor on boot. To reenable it:

  1. Open an administrative console prompt.
  2. Run bcdedit /set hypervisorlaunchtype auto .
  3. Restart Windows.

You can also refer to the Microsoft TechNet article on Control Flow Guard (CFG) settings.

Windows containers and Windows Server

Docker Desktop is not supported on Windows Server. If you have questions about how to run Windows containers on Windows 10, see Switch between Windows and Linux containers.

You can install a native Windows binary which allows you to develop and run Windows containers without Docker Desktop. However, if you install Docker this way, you cannot develop or run Linux containers. If you try to run a Linux container on the native Docker daemon, an error occurs:

Running Docker Desktop in nested virtualization scenarios

Docker Desktop can run inside a Windows 10 VM running on apps like Parallels or VMware Fusion on a Mac provided that the VM is properly configured. However, problems and intermittent failures may still occur due to the way these apps virtualize the hardware. For these reasons, Docker Desktop is not supported in nested virtualization scenarios. It might work in some cases, and not in others.

For best results, we recommend you run Docker Desktop natively on a Windows system (to work with Windows or Linux containers), or on Mac to work with Linux containers.

If you still want to use nested virtualization

Make sure nested virtualization support is enabled in VMWare or Parallels. Check the settings in Hardware > CPU & Memory > Advanced Options > Enable nested virtualization (the exact menu sequence might vary slightly).

Configure your VM with at least 2 CPUs and sufficient memory to run your workloads.

Make sure your system is more or less idle.

Make sure your Windows OS is up-to-date. There have been several issues with some insider builds.

The processor you have may also be relevant. For example, Westmere based Mac Pros have some additional hardware virtualization features over Nehalem based Mac Pros and so do newer generations of Intel processors.

Typical failures we see with nested virtualization

Slow boot time of the Linux VM. If you look in the logs, you may find some entries prefixed with Moby. On real hardware, it takes 5-10 seconds to boot the Linux VM, roughly the time between the Connected log entry and the * Starting Docker . [ ok ] log entry. If you boot the Linux VM inside a Windows VM, this may take considerably longer. We have a timeout of 60s or so. If the VM hasn't started by that time, we retry. If the retry fails, we print an error. You can sometimes work around this by providing more resources to the Windows VM.

Sometimes the VM fails to boot when Linux tries to calibrate the time stamp counter (TSC). This process is quite timing sensitive and may fail when executed inside a VM which itself runs inside a VM. CPU utilization is also likely to be higher.

Ensure “PMU Virtualization” is turned off in Parallels on Macs. Check the settings in Hardware > CPU & Memory > Advanced Settings > PMU Virtualization.

Networking issues

IPv6 is not (yet) supported on Docker Desktop.


Powershell script executed by SQL Server doesn't take effect

I'm doing some testing and I want to execute a PowerShell script through xp_cmdshell. Below is the PowerShell script (c:\temp\sqltotext.ps1).

Then I execute the script through xp_cmdshell like this.

It runs successfully and I can find the file c:\temp\haha.txt with the content haha.

However, when I change the content of c:\temp\sqltotext.ps1 to:

and execute the same aforementioned TSQL command, the TSQL reports success but I didn't get the expected result (the execution policies in all scopes were NOT changed).

When I execute the PowerShell script manually (in a PowerShell console, typing c:\temp> .\sqltotext.ps1), it works as expected (the execution policies in all scopes were changed). Why does this happen?

I did some investigation though. EXEC xp_cmdshell 'whoami.exe' reports I'm running as nt service\mssqlserver. I also manually added nt service\mssqlserver into the local Administrators group. By using Process Explorer, I can confirm the PowerShell session was indeed started and all related processes have admin permission (Integrity = High).

The cmd.exe command line looks like this:

When I execute this command manually, it can change the execution policy without any issue (I changed all execution policy back to the original values after this).

The PowerShell process command line looks like this:

When I execute the PowerShell process command line manually, I can change the execution policies too. I don't have any idea why when I run the command through xp_cmdshell , it doesn't change anything.

BTW, I know there are multiple ways to write the command. I'm just talking about the technical skills here, so please don't suggest me changing command syntax etc.


Melissa Thrush's UWF GIS Online Blog

Module 7 was definitely a challenge. We covered a lot of material in two chapters (6 and 7) of our text. We learned how to check for, describe, and list data. Next, we worked with lists, tuples, and dictionaries. Lists are used to facilitate batch processing and exist for different types of elements. "For" loops can be used to iterate over the elements in a list. Elements in a list can be modified through operations such as deleting, appending, and removing. Tuples are similar to lists but their elements are immutable. If you use an operation that would modify the elements in a tuple it returns another tuple. Elements in lists and tuples can be identified by their index number or location in the list or tuple. Dictionaries contain item pairs. The pair matches a key to its corresponding value. Dictionaries can be modified however "keys" have to be unique but "values" do not. For example a dictionary containing city and state pairs might have a "key" of the city "Greenville". The corresponding state "values" for the city of Greenville could be different, such as Alabama, North Carolina, South Carolina.
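
A tiny illustration of the key/value idea described above (the city and state names are just examples):

# Keys must be unique; values may repeat or change.
seats = {"Greenville": "South Carolina", "Santa Fe": "New Mexico"}
seats["Greenville"] = "North Carolina"   # reassigning an existing key replaces its value
print(seats["Greenville"])               # North Carolina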

Chapter 7 introduced us to using cursors to access data. Cursors work similarly to list functions and can be used to iterate over rows in a table. There are three types of cursors. Search cursors are used to retrieve rows. Insert cursors are used to insert rows. Update cursors are used to update and remove rows. All cursor types have two required arguments: an input table and a list of field names. A SQL query can be used to establish criteria for the optional "where_clause" parameter of the cursor object.

This assignment tasked us to use Python code to create an empty geodatabase (gdb) and then copy the shapefiles from our Module7/Data folder into the new gdb. Using the "cities" feature class that was now in the gdb we needed to populate a newly created dictionary with the names and population of every "County Seat" city in the state of New Mexico. To perform these tasks it was important to understand the examples provided in our text and module exercise. Syntax was very important in creating the python code needed for the search cursor that would find the cities that were defined as "County Seats". The syntax format differs depending on the feature class type. Each beginning single quote, double quote, parenthesis, or bracket needs to have a corresponding ending. Placing "print" statements within the script helped identify the location of errors when the script failed to execute entirely.
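
A rough sketch of the kind of search cursor described here; the geodatabase path and the field names (NAME, POP_2000, FEATURE, STATE_NAME) are assumptions and may differ from the actual assignment data:

import arcpy

arcpy.env.workspace = r"C:\Module7\Results\module7.gdb"  # assumed path to the new gdb

county_seats = {}
where = "FEATURE = 'County Seat' AND STATE_NAME = 'New Mexico'"

with arcpy.da.SearchCursor("cities", ["NAME", "POP_2000"], where) as cursor:
    for name, population in cursor:
        county_seats[name] = population  # city name is the key, population is the value

print(county_seats)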

Learning how to update a dictionary was also new. This step was the most difficult for me to understand. I had to think about where the search cursor was in the table and think about what were the "key" and "value" pairs. Using the examples in the text for printing rows in the table, helped me understand where the cursor was in the table and how to update the dictionary.


How To: Use Alteryx.installPackages() in Python tool

Installing a package from the Python tool is an important task. In this article, we will review all the possible functionality included with the Python method Alteryx.installPackages().

Prerequisites

Background Information

First of all, don't get confused: you can use Alteryx.installPackage(), Alteryx.installPackages(), or Package.installPackages() to achieve the same result.

By default, packages are installed under:

%ALTERYX%\bin\Miniconda3\PythonTool_venv\Lib\site-packages until 2019.2

%ALTERYX%\bin\Miniconda3\envs\JupyterTool_vEnv\Lib\site-packages for 2019.3+

As a result, you may need to start Designer with administrator rights if the installation folder does not allow write access to a standard account (as with an admin install, for example).

Typically, people use Alteryx.installPackages() with a single argument (the package name(s)). But, looking at the method itself, there are in fact 3 parameters:

package: a string or list of strings of package name(s)

install_type: optional, default value "install"; the pip command to use

debug: optional, default value False; adds some details to the output

In reality, Alteryx.installPackages() is nothing more than a wrapper for the pip (Python Package Manager) command.

This means that a call to Alteryx.installPackages() simply runs the corresponding pip command under the hood.
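
For instance, a call like the following (run inside the Python tool; the package name is illustrative) ends up invoking an ordinary pip install:

# Roughly equivalent to running: pip install wordcloud
Alteryx.installPackages("wordcloud")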

This can be seen by using the debug parameter:

Output (duplicates due to debug mode):

Procedure: Standard Installation

In this case, only the package argument is specified.
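
For instance (the package name is illustrative):

Alteryx.installPackages("wordcloud")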

Procedure: Installation from GitHub

Git must be installed on the machine to use this method. It can be downloaded from https://git-scm.com/downloads.

Instead of the package name, specify the git URL prefixed with git+.
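
Something along these lines, with a placeholder repository URL:

Alteryx.installPackages("git+https://github.com/someuser/somepackage.git")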

Procedure: Installation of a Module in the User Folder

This method uses the parameter --user to specify that the package must be installed in the user folder (%APPDATA%/Python/Python36, as per https://www.python.org/dev/peps/pep-0370/#windows-notes).

Now, in order to use it, the package location must be added to the default path: %APPDATA%\Python\Python36\site-packages
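
A sketch of both steps, with an illustrative package name:

Alteryx.installPackages(package="wordcloud", install_type="install --user")

import os, sys
sys.path.append(os.path.join(os.environ["APPDATA"], "Python", "Python36", "site-packages"))
import wordcloud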

Procedure: Installation of a Module in a Different Folder

This method uses parameter --target to specify the destination and creates it if needed.

Now, in order to use it, the package needs to be imported using Alteryx.importPythonModule(%MODULE_PATH%) [2018.4+]
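
A sketch with an assumed target folder (exactly which folder to pass to importPythonModule depends on where the module lands):

Alteryx.installPackages(package="wordcloud", install_type=r"install --target C:\Temp\alteryx_packages")
Alteryx.importPythonModule(r"C:\Temp\alteryx_packages\wordcloud")  # assumed module path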

Remark: With this method, the module does not appear as installed in the Python tool environment.

Procedure: Installation from local directory or tar.gz

The package must exist in a place accessible by the machine (such as C:\Users\Documents\Personal\PythonPckg).
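
For example, pointing at a local archive (the path and file name are illustrative):

Alteryx.installPackages(r"C:\Users\Documents\Personal\PythonPckg\wordcloud-1.8.1.tar.gz")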

Procedure: Installation with a proxy in place

This allows the option of adding a proxy and proxy credentials to the installation argument. Credentials can be left off or included depending on the environment.
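
For example (the proxy host, port, and credentials are placeholders):

Alteryx.installPackages(package="wordcloud", install_type="install --proxy http://user:password@proxy.example.com:8080")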

Procedure: Installation from Wheels

In this case, we use --no-index and --find-links with the local repository to ensure that package is not going to be downloaded.
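
For example, assuming the wheel files sit in the local folder mentioned below:

Alteryx.installPackages(package="wordcloud", install_type=r"install --no-index --find-links=C:\ProgramData\PythonWheels")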

Here, C:\ProgramData\PythonWheels contains the following files (numpy and Pillow are dependencies of wordcloud):

Procedure: Uninstall Package(s)

Specify uninstall as the install_type parameter and either a string with the package name or a list of strings with package names.
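
For example (the package name is illustrative; the -y flag answers pip's confirmation prompt):

Alteryx.installPackages(package="wordcloud", install_type="uninstall -y")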

Procedure: Download Wheels or Archive Files

In this case, the command to use is download instead of the default install . One may also specify a destination folder with parameter --dest
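
For example (the destination folder is illustrative):

Alteryx.installPackages(package="wordcloud", install_type=r"download --dest C:\ProgramData\PythonWheels")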

Procedure: List the Currently Installed Modules

The following procedure provides a basic way to list the module names and versions installed along with Python tool.
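
One way to do this from the Python tool (not necessarily the article's exact snippet) is via pkg_resources:

import pkg_resources

for dist in sorted(pkg_resources.working_set, key=lambda d: d.project_name.lower()):
    print(dist.project_name, dist.version)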

Additional Resources

This article is awesome! Thanks for all the help @PaulN

I am trying to point --find-links to a network drive that has spaces in some of the folder names.

Alteryx.installPackages(package="hyperapi",
install_type='install --no-index --find-links="xyx.org.comdwSpace HereAlteryxSystemsFor System UseHyper API"')

I tried multiple ways with quotes. Is using network drive even possible? I wish whoever made this didn't put spaces in file path names.

Thanks for the article. The machine I use doesn't have admin rights to install any software.

In that case I am not able to adapt code that was running in my IDE to Alteryx, due to package unavailability.

It's practically not possible for us to freely develop code using the Alteryx Python module when we have user restrictions on installing packages. Is having admin rights the only way to install packages into Miniconda in Alteryx?

Can anything be done on this?

If this is not doable, I would suggest having all the basic packages for data analysis preinstalled in the Alteryx bundle; without them, this feature is not going to add much value to the tool.

After going through the same things, yes, you absolutely need to have an admin install this for you. I tried every workaround I could think of, and at the end of the day had to wait for the admin to get back from vacation so I could get some tools installed.

We seem to be hitting some kind of SSL error when trying to import packages:

Does anyone have any insight or support with this?

Could you please try the following:

    Create File pip.ini (case is important) under %APPDATA%\pip (example C:\Users\my_account\AppData\Roaming\pip where my_account is the Windows login).
    pip folder will have to be created if it does not already exist.

3. Install package again via Python tool

Sr Customer Support Engineer, Alteryx

When running a workflow on alteryx server I can't install dependencies using Alteryx.InstallPackages(). It always causes a CalledProcessError with the error message just being that the command returned non-zero exit status 1. Even with debug=True set I don't get more information or even the stack trace like I would when running locally within the notebook. Are there any known issues when attempting to install python dependencies on the server? or is there configuration that may be interfering?

For reference the alteryx server is a corporate one and I do not have administrative access to it.

Your last comment is exactly why. You have to have admin privileges. The only way at this point to get it installed is to email your admin and make that request.

This is very helpful. Also, Is there a way to inventory all python libraries installed including ones that did not come with Alteryx?

Thanks for the comment! So if you are trying to list the libraries available to the Python tool, the last section of the article will do the trick. It will list the different packages "visible" in the Python environment used for the tool.

We have proxy internal URL to download/ install packages as follows.

pip install PyPDF4 -i (Internal URL)

from cmd. How can I achieve the same using Alteryx?

Please assist. Where to point source URL in the following

As mentioned in the article, Alteryx.installPackage() is a wrapper to pip (pip install by default).
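
A sketch of how that maps onto installPackages (the index URL is a placeholder for your internal URL):

Alteryx.installPackages(package="PyPDF4", install_type="install -i https://internal.example.com/simple")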

Awesome! Got it. It is working in my designer.

The solution is working fine in our Designer.

we need to publish in our Alteryx Private Gallery Server.

By Default, those non-standard packages will not be available.

1) How do we install those libraries on our Gallery server? Do we need to log in to the server, or can we install from the local Designer? Please suggest steps to move forward. Thanks.

Hence, we can publish the workflow and our user can trigger from Gallery.

@ganeshkumars82 Yes, you need to login to the server and install them. I was screen sharing with our admin when he installed some stuff I needed, and he opened up the alteryx on the server and installed it from there using the same way that we installed it on our local alteryx machine.

I am facing the same issue as @joejoe317 :

How to pass a path of a network drive that has spaces in some of the folder names?

Alteryx.installPackages(package="openpyxli",
install_type='install --no-index --find-links #FF0000">folder name with space subfolder"')

Also how to pass relative path in install_type when the workflow is saved at same location.

@NanChaw It's been a while now. I think I resolved it by using a different command. I also could have resolved it by moving my path, I really do not remember. I will see if I still have any of the testing workflows I created to see which path I chose.

EDIT: Quick answer is no spaces allowed.

Longer answer of different methods to get it to work.

I ended testing two things. I do not have access to our server, so I had to work with the server admin on this.

The first I already mentioned. I just moved it to a network drive that didn't have any spaces. On our alteryx server we have a public area that we can put files. The security is based on groups, so I can only see what I put in there. This did not have any spaces, and obviously that worked.

The second was working with our server admin.

I had him do the following.

These are the steps that it took to install the Python API for hyper data file manipulations.

RDP into the alteryx server

open ADMIN Command line window

c:> cd "%PROGRAMFILES%AlteryxinMiniconda3PythonTool_venvScripts"

c:\> pip install "\\server-path\DropBox\whl\tableauhyperapi-0.0.8953-py3-none-win_amd64.whl"

You can see that this path also does not have spaces, but we are installing using pip, so you can have spaces here. The problem, I believe, with Alteryx is that they use their own functional wrapper. It seems like they are using the space as a split into an array of items. You can kind of see this in the error you get.

I have tried it with an encoded path and other methods, but have not been successful.

The second method bypasses alteryx all together in order to install the python package. Once it is installed, you can use it in alteryx. Obviously if you go this route, make sure the paths are correct, they may be different than ours.


Brando's GIS Odyessy

Good Day GIS enthusiasts,
Welcome to my continuation of Homeland Security GIS topics. We are continuing our look from last week at Minimum Essential Data Sets (MEDS). But this week we are looking at practical application. This week revolves around taking the data sets and layers generated last week and applying them with some additional data, particularly some LiDAR derived rasters, and looking at a real world situation. The situation in question is the Boston Marathon bombing of 2013. Here we are taking the MEDS Boston data, and looking more so to prevention through establishment of surveillance, security, and observation points within view of the finish line and surrounding area. With these points I am utilizing specific analysis tools available through ArcMAP and ArcScene. The overall objectives for this week were to explore the LiDAR data using it to generate Hillshading, perform a Viewshed analysis, and create a Line Of Sight analysis utilizing our created observation points. Two maps were generated using predominately ArcMAP, and a little ArcScene on the second.
Let's look at the first map.

This is an overview map of the event area. This shows a 3 mile buffer area around the marathon finish line. Identified around the finish line area are the 10 closest hospitals and medical centers. All of these have been identified as needing increased security during the event. A 500 ft buffer has been placed around each of these critical infrastructure facilities. This is a fairly simplistic view of the area showing the various levels of road features throughout. The primary, secondary, and local roads are all symbolized appropriately for easy acquisition and understanding. The lower inset which is an up close look at the finish line highlights additional security locations by placing checkpoints at each road intersecting with the 500 foot buffer from the finish line. Additionally, another inset highlights the 6 counties that are a part of the Boston Metropolitan Statistical Area.

There is much more deliberate analysis in this map. The first section at the top is a straightforward look at 15 identified observation points around the block within view of the finish line, highlighted in the center. These are also labeled with the elevation of the best observation height for the point. The second frame down combines a multitude of analysis. Most clearly visible is the Viewshed analysis. This is the pink and green layer symbolized by pink meaning an area is not visible from the closest associated observation point, and green meaning the view is unobstructed. This layer is generated by a Hillshade layer which provides the gray shadowing underlying the Viewshed. Over top of this layer is a line of sight look from each point to the finish line. The red areas indicate some form of obstruction and the green areas are clear. This view is further broken down by the most obstructed point (# 4), and shown in the graph just under and to the right of the line of sight look. Additionally all of the line of sights are evaluated in the 3D environment of ArcScene. The lower left inset shows an exported feature from ArcScene depicting the lines of sight in a 3D relationship with the surroundings. This layer is also oriented with the same SW - NE look as the other data frames above for continuity of reference. Also an inset for this area is provided much closer in than the base map above.
I don't know the exact process that was in place on that fateful day. However, with this type of planning and technical ability, we can hope to better plan for and prevent such acts in the future. This is an excellent look at some of the analysis that goes into such large events, which draw tens to hundreds of thousands of people. Thank you.


Additional Docker container resource options. For a list of options, see " docker create options."

You can use special characters in path, branch, and tag filters.

  • * : Matches zero or more characters, but does not match the / character. For example, Octo* matches Octocat .
  • ** : Matches zero or more of any character.
  • ? : Matches zero or one single character. For example, Octoc?t matches Octocat .
  • + : Matches one or more of the preceding character.
  • [] Matches one character listed in the brackets or included in ranges. Ranges can only include a-z , A-Z , and 0-9 . For example, the range [0-9a-z] matches any digit or lowercase letter. For example, [CB]at matches Cat or Bat and [1-2]00 matches 100 and 200 .
  • ! : At the start of a pattern, it negates previous positive patterns. It has no special meaning if it is not the first character.

The characters * , [ , and ! are special characters in YAML. If you start a pattern with * , [ , or ! , you must enclose the pattern in quotes.

Patterns to match branches and tags

Pattern: feature/*
Description: The * wildcard matches any character, but does not match slash ( / ).
Example matches: feature/my-branch

Patterns to match file paths

Path patterns must match the whole path, and start from the repository's root.

