I have the data for several cities in a layer. I'd like to run a batch export that goes through a list, zooms in on one city (coordinate + zoom level), then the next, and so on, exporting each result to a PDF. Is this possible?
You can use the Atlas functionality of the Print Composer. Your center coordinate could come from a point layer, with a point for the center of each city map, or from the bounds of a polygon if your city is a polygon. The layer defining the bounds of the map is called the coverage layer. The zoom level is controlled either by a margin around the geometry or by a scale. You cannot (yet) data-bind the zoom level, even if you have it as an attribute of your coverage layer (city layer). So your zoom level is either based on a margin around a polygon, a fixed scale, or a best-fit scale.
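The "best fit" option amounts to picking the smallest scale denominator at which the feature's extent, grown by the margin, still fits the map frame. A minimal standalone sketch of that arithmetic (this is not part of the QGIS API; the function name and the sizes are illustrative):

```python
def best_fit_scale(extent_w_m, extent_h_m, map_w_mm, map_h_mm, margin=0.10):
    """Smallest scale denominator at which an extent (in metres), grown by
    a relative margin, fits a map frame (in millimetres)."""
    w = extent_w_m * (1 + margin)
    h = extent_h_m * (1 + margin)
    # scale denominator = ground distance / paper distance (same units)
    return max(w / (map_w_mm / 1000.0), h / (map_h_mm / 1000.0))

# A 4 km x 3 km city extent on a 200 mm x 150 mm map frame, 10% margin:
scale = best_fit_scale(4000, 3000, 200, 150)  # -> 1:22,000
```

In the Atlas settings this corresponds to the "margin around feature" mode, where QGIS performs the equivalent computation per coverage feature.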
Setting zoom level of multiple Atlas prints.
What you want is the Atlas Generator in the Print Composer. This is exactly what it was created for.
There are a couple of tutorials which will help you achieve this:
How to set up a Coverage Layer for Atlas Plugin?
BATCH CONVERT TOOL
Follow the steps below to convert location data from one format to another (OSGB36 to WGS84, or WGS84 to OSGB36). Your data can be tab- or comma-separated (the character between your data fields).
You can paste data directly from your Excel or Google spreadsheet (the format will be detected automatically as tab-separated); you only need to identify the columns holding your location data, i.e. the columns holding the lat/lng, grid reference or X/Y values.
Once the location data columns are identified, press convert. You can copy the converted data directly back into your spreadsheet.
For postcode data use our Batch Postcode Converter here
Copy and paste the three lines below into the text box in Step One, then press convert. The data will have the grid reference and X/Y co-ordinates appended to it in the text box in Step Six below.
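The automatic tab-vs-comma detection described above can be reproduced with Python's csv.Sniffer; a minimal sketch (the sample data is illustrative):

```python
import csv
import io

def parse_rows(text):
    """Detect whether fields are tab- or comma-separated and parse rows."""
    dialect = csv.Sniffer().sniff(text, delimiters="\t,")
    return [row for row in csv.reader(io.StringIO(text), dialect)]

# Comma-separated input is detected and split on commas:
rows = parse_rows("place,lat,lng\nBen Nevis,56.7969,-5.0036\n")
```

Tab-separated input pasted from a spreadsheet would be detected the same way, which is why only the location columns need identifying by hand.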
Spatial analysis in QGIS and R - An Introduction for Environmental Scientists
The 13-16 July 2021 course is sold out!
Professionals £449 (later £499)
(Early Bird discount; £50 more thereafter)
Short Course Description:
Running over four half-day sessions, this hands-on and interactive online course will give you an introduction to spatial data analysis in an open-source environment. The course will focus on the use of QGIS and R as well as providing a theoretical background to working with spatial data and Geographical Information Systems (GIS).
The course will help you to understand the key principles of working with spatial data in a Graphical User Interface (GUI) using QGIS and in a programming environment using R, and will show you how to perform the same tasks across both platforms.
You will learn best practices for processing spatial data and producing maps, allowing you to create high-quality outputs for environmental science. The course will have demonstrations alongside practical sessions, where you will learn by completing exercises, with skilled trainers on hand to guide and assist. Detailed documentation will also accompany the training, helping to walk you through the material and providing a useful resource for future learning.
By the end of the course, you will be able to:
- Import, edit and export different types of spatial data in QGIS and R
- Understand best practices in data processing
- Work with base maps, plugins and visualisation tools to design effective maps
- Process GIS datasets and understand the spatial relationship between them
- Perform basic data analysis and validate datasets
- Produce publication standard outputs
- Gain awareness of different spatial data types commonly used across environmental science
- Identify and fix common errors/issues in data
- Learn the basics of good visualisation and best practice for cartography
- Become familiar with the QGIS interface, including useful plugins and extensions
- Learn how to use R (in the RStudio environment) as a GIS, including useful packages and visualisation tools.
- Explore the capabilities of QGIS and R for creating interactive and visually striking content.
- Feel confident in using open source software for spatial data analysis
Hardware/software requirements:
You will need a laptop or desktop computer. A second external screen will be an advantage (but is not essential).
We will use Zoom to deliver the training course. There are 5 ways to join Zoom (and at least one of them will work for you!). We will provide more information about Zoom with the joining instructions and at the start of the course. You can find more information about Zoom on our FAQ page.
We will do lots of practical exercises, so you can continue working on your skills immediately after the training course. You will need to install this software before the course starts.
- R programming language 4.0+ (free open source software)
- R Studio Desktop (free open source software)
- QGIS version 3.16+ (free open source software)
The course joining instructions will detail which specific software versions to download as well as guiding the installation process. It will be helpful if all participants use the same version for the course.
Some knowledge of Geographical Information Systems and data analysis will be beneficial, but not essential. A basic understanding of the R programming language will be expected. UKCEH will provide links to introductory self-learning materials before the course.
Anyone who is looking to work with spatial data in a reproducible manner and currently works mainly in spreadsheets or is looking to prepare spatial data for analysis in R.
Anyone who is looking to gain an understanding of spatial data in the environmental sector and wants to learn about free and open source software for GIS analysis.
e.g. MSc / PhD / Early Career Researchers, Ecologists & Environmental Scientists, Environmental Consultants etc.
Philip Taylor, Environmental Data Scientist, UKCEH
Philip is a data scientist and GIS expert who has worked with a large variety of environmental data for over 15 years, specialising in ecology, forestry, climate change and hydrology. An active member of the UK QGIS community, he helps with the QGIS Scotland chapter and was on the organising committee for FOSS4G UK 2019 (http://uk.osgeo.org/foss4guk2019/). He has run training courses in data handling, QGIS and R both nationally and internationally and is a member of the Centre of Excellence in Environmental Science (CEEDS).
Ed Carnell, Spatial Data Analyst, UKCEH
Ed is a Spatial Data Analyst, specialising in the modelling of atmospheric emissions and their effect on human health and to sensitive habitats. His work includes producing high-resolution emission maps of air pollutants and greenhouse gases for the UK National Atmospheric Emission Inventory, as well as collaborating with international partners. He uses a code-based approach for data analysis and is a keen advocate of data transparency and quality assurance. He has taught training courses in QGIS, R and transforming environmental data.
Commercial Data in Sentinel Hub
The Commercial Data webinar was held live on January 14th, 2021. It presents the availability of commercial data (Airbus Pleiades, Airbus SPOT or PlanetScope) in Sentinel Hub, and the Network of Resources, an ESA-sponsored programme offering Sentinel Hub accounts and access to commercial data for research purposes and pre-commercial exploitation and validation. General agenda:
- Use Requests Builder to search and order commercial data (Airbus Pleiades, Airbus SPOT or PlanetScope) easily
- Organize your tiles using collections
- Create custom configurations and visualize your data in EO Browser
- Integrate commercial data into your own GIS using Sentinel Hub APIs
- Generate a yearly time series, visualize it and create statistics
- Learn how to apply for the Network of Resources to get commercial data for free
See our teaser below and dive into the recording of the webinar:
The workflow this script is set up for is the Data Merge >> Export PDF, which as you noted spits out a single, massive PDF.
The script runs in Acrobat. It takes the CSV file as input (said CSV must have a column named "filename" containing the individual name for each extracted PDF). It counts the total number of pages in the PDF and divides by the total number of records in the CSV to calculate the number of pages per individual PDF, then iterates through the document, extracting pages as it goes.
There is user input for text to prefix and/or suffix the output file names.
I've not tested this, but it looks straightforward (other than the parsing of the CSV, which came from someone on Stack Overflow and which I didn't review in detail).
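The page arithmetic the answer describes is easy to sketch outside Acrobat; a standalone Python version (the "filename" column comes from the answer above, everything else is illustrative):

```python
import csv
import io

def split_ranges(total_pages, csv_text):
    """Map each record's 'filename' to the (start, end) page range it
    occupies in the merged PDF (0-based, end exclusive). Assumes every
    record produced the same number of pages."""
    records = list(csv.DictReader(io.StringIO(csv_text)))
    per_record = total_pages // len(records)
    return {r["filename"]: (i * per_record, (i + 1) * per_record)
            for i, r in enumerate(records)}

# A 6-page merged PDF built from 3 records -> 2 pages each:
ranges = split_ranges(6, "filename,name\nsmith,Ann\njones,Bo\nlee,Cy\n")
# {'smith': (0, 2), 'jones': (2, 4), 'lee': (4, 6)}
```

The equal-pages assumption is the same one the Acrobat script relies on: if records vary in length, the division no longer lines up with record boundaries.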
- Georeferencing is crucial to making aerial and satellite imagery (usually raster images) useful for mapping, as it explains how other data, such as the above GPS points, relate to the imagery.
- Essential information may be contained in data or images that were produced at a different point in time. It may be desirable either to combine or to compare this data with that currently available; the latter can be used to analyze changes in the features under study over a period of time.
- Different maps may use different projection systems. Georeferencing tools contain methods to combine and overlay these maps with minimum distortion.
- Using georeferencing methods, data obtained from surveying tools like total stations may be given a point of reference from topographic maps already available.
- It may be required to establish the relationship between social survey results which have been coded with postal codes or street addresses and other geographic areas such as census zones or other areas used in public administration or service planning.
There are various GIS tools available that can transform image data to some geographic control framework. One can georeference a set of points, lines, polygons, images, or 3D structures. For instance, a GPS device will record latitude and longitude coordinates for a given point of interest, effectively georeferencing this point. A georeference must be a unique identifier. In other words, there must be only one location for which a georeference acts as the reference.
Images may be encoded using special GIS file formats or be accompanied by a world file.
To georeference an image, one first needs to establish control points, input the known geographic coordinates of these control points, choose the coordinate system and other projection parameters and then minimize residuals. Residuals are the difference between the actual coordinates of the control points and the coordinates predicted by the geographic model created using the control points. They provide a method of determining the level of accuracy of the georeferencing process.
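A toy illustration of residuals, using a deliberately simplified pixel-to-ground model in place of a full affine fit (the model and control points are made up for the example):

```python
import math

def residuals(controls, model):
    """Distance between each control point's known geographic coordinates
    and the coordinates predicted by the georeferencing model."""
    out = []
    for (px, py), (gx, gy) in controls:
        mx, my = model(px, py)
        out.append(math.hypot(gx - mx, gy - my))
    return out

# Toy model: pixel -> ground via a scale of 2 and a fixed offset
model = lambda px, py: (2 * px + 100, 2 * py + 200)
controls = [((0, 0), (100, 200)),    # predicted exactly -> residual 0
            ((10, 10), (120, 221))]  # off by 1 unit north -> residual 1
res = residuals(controls, model)
rmse = math.sqrt(sum(r * r for r in res) / len(res))
```

Minimizing a summary of these residuals (typically the RMSE) over the model parameters is what the "minimize residuals" step amounts to.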
In situations where data has been collected and assigned to postal or area codes, it is usually necessary to convert these to geographic coordinates by use of a definitive directory or gazetteer file. Such gazetteers are often produced by census agencies, national mapping organizations or postal service providers. At their simplest, these may comprise a list of area codes or place names alongside a list of corresponding codes, names or coordinate locations. The range and purpose of the codes available is country-specific. An example is the UK's National Statistics Postcode Directory, which shows each postcode's membership of census, administrative, electoral and other geographical areas. In this case, the directory also provides dates of creation and deletion, address counts and an Ordnance Survey grid reference for each postcode, allowing it to be mapped directly. Such gazetteer files support many web-based mapping systems, which will place a symbol on a map or undertake analysis such as route-finding on the basis of postal codes, addresses or place names input by the user.
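At its simplest, such a gazetteer lookup is just a join between coded records and a code-to-coordinate table; a minimal sketch (the postcodes and grid coordinates below are invented for illustration, not real directory entries):

```python
# Hypothetical gazetteer: postcode -> (easting, northing) grid reference
gazetteer = {
    "EH26 0QB": (323500, 659500),
    "OX10 8BB": (461500, 189500),
}

def geocode(records, gazetteer):
    """Attach coordinates to survey records keyed by postcode; records
    with no gazetteer entry are returned separately as unplaced."""
    placed, unplaced = [], []
    for rec in records:
        coords = gazetteer.get(rec["postcode"])
        target = placed if coords is not None else unplaced
        target.append({**rec, "coords": coords})
    return placed, unplaced

placed, unplaced = geocode(
    [{"id": 1, "postcode": "EH26 0QB"},
     {"id": 2, "postcode": "ZZ9 9ZZ"}],  # not in the gazetteer
    gazetteer)
```

Real directories add the lifecycle fields mentioned above (creation and deletion dates), so a production join also has to decide which vintage of each code to match against.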
Org-mode batch export: Missing syntax highlighting
I'm having issues achieving any syntax highlighting in the results of the following command:
I know this question has been asked a few times already in various forms, but none of the advertised solutions I've found have worked for me (linked below).
- spacemacs (via emacs 25.3) on the develop branch, current as of 2018 January 29
- htmlize-20171017.141 as a dependency of the org layer
- Attempting to highlight source blocks of haskell, lisp, yaml, and nix modes
What I'm Seeing
Naively performing the command above gives many lines of:
I tried copy-pasting the htmlize.el found in my .emacs.d/elpa/.. and referring to it manually:
…and this makes the "Cannot find…" errors go away, but the output is still not highlighted properly. Actually, the lisp block does have one of its keywords bolded, but nothing else looks right. Similar hacks to manually include lisp code from, say, haskell-mode can achieve bold/italics-only highlighting, which is not what I want.
--batch implies -q, which ignores all user config (to speed up start-up, I assume). This seems to have the effect of ignoring all the Emacs packages I have installed (via Spacemacs), so the batch process can't see htmlize or any of my major modes to achieve proper colouring.
- Is there a way to tell a batch'd emacs process to respect my config and see all the packages I have? (though this might slow each run down)
- Is there something else simple that I'm missing to achieve this highlighting properly?
My goal is to only commit my .org files for a blog, and have my server generate each .html properly on-the-fly during a deploy. I could just generate the nice .html files on my own machine and commit those, but I really don't want to do that.
That should be possible using ImageMagick, which is available cross-platform (Linux, Mac, Windows). It ships with a bunch of command-line utilities that fit your needs perfectly.
Let me give you an example: say you've stored a bunch of *.svg files in the current directory and want to convert them to *.png – that would be a one-liner:
A more detailed description can be found in the ImageMagick examples, or via a simple Google search for "imagemagick convert svg to png". As I've not used ImageMagick in this manner, I cannot give you the exact parameters for your "color issue" – but I'd be surprised if it weren't possible.
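If you'd rather drive the conversion from a script than a shell one-liner, the same batch can be built in Python; a minimal sketch that only constructs the `convert` commands (it assumes ImageMagick's `convert` is on PATH if you actually run them):

```python
import pathlib
import subprocess  # only needed if you execute the commands

def svg_to_png_commands(directory="."):
    """Build one ImageMagick 'convert' command per .svg file in the
    directory, writing a .png alongside each."""
    cmds = []
    for svg in sorted(pathlib.Path(directory).glob("*.svg")):
        cmds.append(["convert", str(svg), str(svg.with_suffix(".png"))])
    return cmds

# To actually run the conversions (requires ImageMagick installed):
#   for cmd in svg_to_png_commands():
#       subprocess.run(cmd, check=True)
```

Keeping command construction separate from execution makes it easy to print the commands first and check the filenames before touching anything.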
Make sure the 3DS importer and the Three.js exporter are installed and enabled.
Loop over all the files and perform the actions below for every single one.
You may use glob.glob() or os.walk(), unless you want to select the files via the UI (see the Operator Import template; you will need to provide files = bpy.props.CollectionProperty(type=bpy.types.PropertyGroup), docs).
Clear scene if desired, e.g.
Import the 3DS file: basically, call the operator bpy.ops.import_scene.autodesk_3ds(filepath="the_path_to_file.3ds")
The importer should select the imported objects, but there will usually not be an active object. To run operations on the imported data, or for some exporters, you may need to set one object active (or process each object individually, making one active after another).
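The steps above can be sketched as a batch loop. Outside Blender the bpy calls won't run, so only the file enumeration and path handling below are live code; the Blender-specific part is shown as comments, where the importer operator comes from the answer above and the exporter operator name is an assumption depending on your installed add-on:

```python
import glob
import os

def conversion_jobs(src_dir):
    """Pair every .3ds file in src_dir with the .json path its
    Three.js export should be written to."""
    jobs = []
    for path in sorted(glob.glob(os.path.join(src_dir, "*.3ds"))):
        out = os.path.splitext(path)[0] + ".json"
        jobs.append((path, out))
    return jobs

# Inside Blender, the loop body would look roughly like:
#   for src, dst in conversion_jobs("/path/to/models"):
#       bpy.ops.wm.read_homefile(use_empty=True)           # clear the scene
#       bpy.ops.import_scene.autodesk_3ds(filepath=src)    # from the answer above
#       bpy.context.view_layer.objects.active = bpy.context.selected_objects[0]
#       bpy.ops.export.three(filepath=dst)                 # exporter op name is an assumption
```

Running this as `blender --background --python batch_convert.py` keeps the whole conversion headless.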