I have a separate question going regarding the poor performance of GDAL CreateCopy() on a large (4096x4096) jp2 file, so here's a look at a different angle:
I have a C++ OpenCV program that takes a JP2 image and makes a copy after doing some manipulation. The new JP2 image file is missing the GDAL metadata (see example below). Would it be faster to load the GDALDataset from the original image and then call GDAL SetMetadata() for each metadata item?
If so, does anyone have sample C++ code for iterating through the metadata items in the original GDALDataset?
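The C++ pattern is the same as the Python one: enumerate the metadata domains, then read and write each domain's items. Here is a minimal sketch in Python (the document's answer code is Python); the function name is my own, and the arguments are assumed to behave like gdal.Dataset objects, e.g. from gdal.Open(), with the destination opened with gdal.GA_Update:

```python
def copy_all_metadata(src_ds, dst_ds):
    """Copy every metadata item, in every domain, from src_ds to dst_ds.

    src_ds and dst_ds are assumed to behave like gdal.Dataset: they expose
    GetMetadataDomainList(), GetMetadata(domain) -> dict of NAME=VALUE pairs,
    and SetMetadata(dict, domain). dst_ds must have been opened with
    gdal.GA_Update for the SetMetadata calls to be persisted.
    """
    # None means only the default domain exists; '' names the default domain
    domains = src_ds.GetMetadataDomainList() or ['']
    for domain in domains:
        items = src_ds.GetMetadata(domain)
        if items:                     # skip empty domains
            dst_ds.SetMetadata(items, domain)
```

Note that the georeferencing shown in the gdalinfo dump below (origin, pixel size, coordinate system) is not held in these metadata dicts; it would still need SetGeoTransform()/SetProjection() calls on the destination.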
Original gdalinfo metadata:
Driver: JP2OpenJPEG/JPEG-2000 driver based on OpenJPEG library
Files: mask.jp2
Size is 4096, 4096
Coordinate System is:
GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.257223563,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0],
    UNIT["degree",0.0174532925199433],
    AUTHORITY["EPSG","4326"]]
Origin = (-97.025756835937500,32.398681640625000)
Pixel Size = (0.000001341104507,-0.000001341104507)
Image Structure Metadata:
  INTERLEAVE=PIXEL
Corner Coordinates:
Upper Left  ( -97.0257568,  32.3986816) ( 97d 1'32.72"W, 32d23'55.25"N)
Lower Left  ( -97.0257568,  32.3931885) ( 97d 1'32.72"W, 32d23'35.48"N)
Upper Right ( -97.0202637,  32.3986816) ( 97d 1'12.95"W, 32d23'55.25"N)
Lower Right ( -97.0202637,  32.3931885) ( 97d 1'12.95"W, 32d23'35.48"N)
Center      ( -97.0230103,  32.3959351) ( 97d 1'22.84"W, 32d23'45.37"N)
Band 1 Block=1024x1024 Type=Byte, ColorInterp=Red
  Overviews: 2048x2048, 1024x1024, 512x512, 256x256
  Overviews: arbitrary
Band 2 Block=1024x1024 Type=Byte, ColorInterp=Green
  Overviews: 2048x2048, 1024x1024, 512x512, 256x256
  Overviews: arbitrary
Band 3 Block=1024x1024 Type=Byte, ColorInterp=Blue
  Overviews: 2048x2048, 1024x1024, 512x512, 256x256
  Overviews: arbitrary
One suggestion is to use a VRT (Virtual Raster). There are a number of ways to do this:

1. Use the VRT driver to create a copy of the original JP2 in memory (as a VRT XML string) on the /vsimem virtual memory filesystem, edit the in-memory VRT XML to change the SourceFilename element so it points to the new processed raster (using VSIFWrite), and then open the edited file with GDALOpen/GDALOpenShared.

2. Build the VRT XML as a string variable and use GDALOpen/GDALOpenShared to open the string variable directly. This works in Python; I don't know if it works in C++. If it doesn't, write the XML out to a file (on the filesystem or on the /vsimem virtual memory filesystem) and then open that file with GDALOpen/GDALOpenShared.

There are some other examples of programmatic VRT creation in the VRT Tutorial.

Finally, use the OpenJPEG driver's CreateCopy to create the final copy from the VRT.
Below is some Python code implementing item 1 (untested, as I'm not running GDAL >= 2.0dev):
from osgeo import gdal

def read_vsimem(filepath):
    """Read a GDAL /vsimem file."""
    vsifile = gdal.VSIFOpenL(filepath, 'r')
    gdal.VSIFSeekL(vsifile, 0, 2)
    vsileng = gdal.VSIFTellL(vsifile)
    gdal.VSIFSeekL(vsifile, 0, 0)
    data = gdal.VSIFReadL(1, vsileng, vsifile)
    gdal.VSIFCloseL(vsifile)
    return data

def write_vsimem(filepath, data):
    """Write a GDAL /vsimem file."""
    vsifile = gdal.VSIFOpenL(filepath, 'w')
    gdal.VSIFWriteL(data, 1, len(data), vsifile)
    return gdal.VSIFCloseL(vsifile)

def create_copy(ds, drivername, filepath, creation_options=None):
    """Create a copy of a GDAL dataset."""
    drv = gdal.GetDriverByName(drivername)
    return drv.CreateCopy(filepath, ds, options=creation_options or [])

def copy_everything(inpath, fixpath, outpath):
    """Copy _all_ georeferencing and other metadata from one file to another.

    Assumes the files match exactly...!
    """
    vrtpath = '/vsimem/foo.vrt'
    inds = gdal.Open(inpath)
    vrtds = create_copy(inds, 'VRT', vrtpath)
    vrtds = None  # flush the VRT to /vsimem before editing it
    vrtxml = read_vsimem(vrtpath)
    write_vsimem(vrtpath, vrtxml.replace(inpath, fixpath))
    fixds = gdal.Open(vrtpath)
    # USE_SRC_CODESTREAM=YES/NO only available in GDAL >= 2.0 (EXPERIMENTAL!)
    return create_copy(fixds, 'JP2OpenJPEG', outpath,
                       creation_options=['USE_SRC_CODESTREAM=YES'])
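Item 2, building the VRT XML as a string, can be sketched without any GDAL calls at all; only the final open needs GDAL. This is a deliberately bare-bones sketch under my own assumptions (function name, band count, and data type are illustrative; a real VRT would also carry the GeoTransform/SRS elements copied from the original dataset):

```python
def build_vrt_xml(source_path, xsize, ysize, bands=3):
    """Build a minimal VRT XML string whose bands point at source_path."""
    band_tmpl = (
        '  <VRTRasterBand dataType="Byte" band="{band}">\n'
        '    <SimpleSource>\n'
        '      <SourceFilename relativeToVRT="0">{path}</SourceFilename>\n'
        '      <SourceBand>{band}</SourceBand>\n'
        '    </SimpleSource>\n'
        '  </VRTRasterBand>\n'
    )
    bands_xml = ''.join(band_tmpl.format(band=b, path=source_path)
                        for b in range(1, bands + 1))
    return ('<VRTDataset rasterXSize="{x}" rasterYSize="{y}">\n{b}</VRTDataset>\n'
            .format(x=xsize, y=ysize, b=bands_xml))

# In Python the XML string can then be opened directly:
#   ds = gdal.Open(build_vrt_xml('processed.jp2', 4096, 4096))
# or, as the answer suggests for C++, written to /vsimem first:
#   gdal.FileFromMemBuffer('/vsimem/fixed.vrt', build_vrt_xml(...))
#   ds = gdal.Open('/vsimem/fixed.vrt')
```

The 'processed.jp2' filename is hypothetical; substitute the path of the raster produced by the OpenCV step.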
This problem was solved by mimicking the behavior of gdalcopyproj.py, but it required updating to the "not yet stable" GDAL 2.0 code set.
Write GeoTIFF metadata from one file to another
My task is to take one GeoTIFF, run some image segmentation on it, and save the result to a new GeoTIFF (with the existing coordinates). If I understand correctly, the coordinates are preserved in the GeoTIFF metadata. So I grab the metadata from the original file:
And when I do System.out.println("Metadata: " + metadata), I see the correct XML tree of metatags. So I do some magic with the image
and as a result I obtain a BufferedImage (resultBufferedImage) with the image segmentation successfully applied. And here my problems start: I try to save this BufferedImage with the old metadata:
I get "After write" printed, but the program is still running; I tried waiting, but with no result. When I kill the process, the file has been created successfully, even with the geodata. How can I determine that writing has finished and stop the program? P.S. The image looks fine in the default Ubuntu viewer, but when I open it in QGIS I get transparent fields; also, how can I make the gray background transparent?
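Although the question above uses Java, the underlying fix is the gdalcopyproj.py approach mentioned earlier: copy the geotransform and projection from the source file onto the freshly written result. A sketch in Python (the function name is my own, and the arguments are duck-typed stand-ins for gdal.Dataset objects; in real use src_ds = gdal.Open(src_path) and dst_ds = gdal.Open(dst_path, gdal.GA_Update)):

```python
def copy_georeferencing(src_ds, dst_ds):
    """Copy geotransform, projection and any GCPs from src_ds onto dst_ds.

    src_ds/dst_ds stand in for gdal.Dataset objects; dst_ds must be
    writable (opened with gdal.GA_Update) for the Set* calls to stick.
    """
    dst_ds.SetGeoTransform(src_ds.GetGeoTransform())
    dst_ds.SetProjection(src_ds.GetProjection())
    if src_ds.GetGCPCount() > 0:
        dst_ds.SetGCPs(src_ds.GetGCPs(), src_ds.GetGCPProjection())
```

This assumes the segmented raster has exactly the same size and extent as the original, so the original geotransform remains valid.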
This command will remove nearly all metadata but retain ExifIFD:ColorSpace, ExifIFD:Gamma, InteropIFD:InteropIndex, and ICC_Profile tags. ColorSpaceTags is an ExifTool shortcut for “standard tags which carry color space information” (added in ver 9.51).
There are some caveats. -all= will not delete the Adobe APP14 block in JPEGs, as this may affect the colors of the image. No personal info is held in this block, so there is normally no need to delete it. It will also not delete Exif tags in a TIFF or TIFF-based file such as Nikon or Canon raw images (NEF or CR2), as the image data itself is contained in the Exif block. -CommonIFD0= can be added to the command to clear out the most common Exif tags in these images (see the ExifTool Shortcut tags for the full list of tags contained in the CommonIFD0 shortcut).
I don't recommend removing metadata from your original images. It makes sense to do this for images that you want to share or publish, during the export stage, for the following reasons:
- You might take a look at the metadata of some of your beautiful images later, to see their exposure, GPS info, etc.
- Like Paul said, images with the sRGB profile will be 99.9% correctly viewed on any device or web browser, while images with other profiles can be displayed unpredictably for other people.
- Your images can be optimized for better and faster viewing when exporting
- You can do a lot of other post-processing things with your images during the export like sharpening, applying watermarks, etc.
So my suggestion is: don't remove metadata that is of value to you from your images. Instead, use photo management software to export your images to the right format, apply your profile, strip metadata, assign your copyright and contact info, and add your watermarks.
Digital Cinema describes the use of movies with a digital data representation in best quality. Traditionally, movies were shot on film and projected with film; today this is done with digital cameras and digital projectors. Because of the huge amount of data within this application area, data compression is necessary. In contrast to Electronic Cinema, which uses the digitization of film for new commercialization paths, Digital Cinema replaces only the film chain from acquisition to the film theatres. Therefore Digital Cinema must achieve and surpass the traditional best film quality. The parameters for the digital representation of the movie have to be much more extensive than in standard video.
Other compression standards have different limits for the use in Digital Cinema. This can be the maximum resolution, the compression possibilities (only lossy), the sampling type, the colorspace or the bit depth. However, JPEG2000 is an excellent compression standard for the use in Digital Cinema, because it delivers enough headroom in the description of digital movie data and has outstanding features, which can be used. Some features of JPEG2000 are intraframe coding for a simple editing access, lossless compression capabilities, metadata insertion, scalability in resolution and quality and so on. All features of the still Image JPEG2000 Standard 15444 - Part1 can also be used.
Requirements in data compression for Digital Cinema, including high dynamic range, different color spaces, the highest image resolutions, and best compression quality up to lossless compression, were all made possible by using JPEG 2000.
JPEG 2000 has been adopted by the broadcast industry as mezzanine compression in the live production workflows. The compression offers unique benefits suitable for video production as alternative to uncompressed video. Today JPEG 2000 is used for its high quality and low latency in video over IP applications such as Contribution Links (live events to studio transmission) and recent IP-based broadcast studio infrastructures. Moreover, it is also used as the master format for content storage. The broadcast implementations essentially rely on the JPEG 2000 Amd3 for Broadcast Contribution and the JPEG 2000 Amd8 for Interoperable Master Formats (IMF).
In 2014, several companies received an Emmy® Award for breakthroughs in the Standardization and Productization of transporting video with JPEG 2000 Broadcast Profile in MPEG-2 TS over IP Networks.
Several main benefits exist for JPEG 2000-based broadcast production workflows:
- Intra-frame compression JPEG 2000 is an intra-frame based encoding scheme, as it encodes each frame independently. This is a great advantage for content editing applications, as the video signal can be cut at any place without repercussion.
- Video quality JPEG 2000 offers an enhanced tool-set that is especially well-suited for high-quality video. While the driving force for MPEG has always been to deliver good video quality at low bitrates, e.g. for TV/Mobile/Internet broadcasting by improving the temporal prediction, the driving force behind JPEG 2000, on the other hand, has been to produce a standard with high-quality compression of single pictures or frames. JPEG 2000 can allocate more bits per sample, e.g. 10-bit video is offered, which is in line with broadcast production specifications and regular studio practice.
- Less visual artefacts Bit errors in a JPEG 2000 stream create fewer visual artefacts than MPEG solutions, as the errors appear as a short-lived blur in the picture, much less visually disturbing than blocking effects, which can also be much longer-lasting.
- Low latency Low latency processing is crucial for live TV contribution. JPEG 2000 technology is able to provide this required ultra low latency as there is no dependency between the signaled frames. In general, the technology is able to easily achieve encoding and decoding latencies of less than 1.5 frames, and some implementations can even offer latencies of less than one frame.
- Symmetrical complexity and cost structure for real-time video It takes the same horsepower to do a compression and a decompression and the same amount to do any quality of compression. The architecture of broadcast production systems has the same number of transmitters/encoders and receivers/decoders. The cost and complexity of the transmitter and the receiver should therefore preferably be the same, which is the case with JPEG 2000. Encoding and decoding can be done using the same chips.
- Robustness to transmission errors JPEG 2000 is a particularly good choice for contribution of video over IP as it is exceptionally robust to transmission errors. As described earlier, JPEG 2000 produces no blocking artefacts and any errors that are introduced are more pleasant to the eye. Additionally, because JPEG 2000 has no error propagation between frames, artefacts are much shorter-lived. Intrinsically, JPEG 2000 also has special mechanisms to increase robustness to transmission errors.
- Robustness to multiple encoding In a chain of successive compression-decompression processes over the same original material, JPEG 2000 compression technology sustains the same quality very well and it is robust to pixel shifts.
- Easy editing thanks to scalability in both resolution and quality The resolution scalability enables editors to easily manipulate a sequence by working on only a low-resolution (proxy) version of the movie. All operations done on the low resolution are then applied to the full-resolution version.
Image archives and databases
One early use of JPEG 2000 will be as a base file format in image archives and databases. Traditionally, image archives store multiple copies of an individual file at varying resolutions and quality levels so that they can supply appropriate image data on request. In addition, considerable metadata is held about each image to allow it to be easily classified and retrieved.
JPEG 2000 files typically can have extensive metadata stored with them, in a standard compliant XML environment. As well as allowing selected metadata from an image database to be distributed to its users, this does permit interchange of image files with metadata between databases, and removes the need for an extensive manual data entry stage when cataloguing new images. In addition, the files can be stored at high quality in a lossless, colour managed environment, with conversion to lower resolution or lower quality performed 'on the fly'. The ability of part of a JPEG 2000 file to be used for generation of such modified images also means that it becomes practical to provide other capabilities on demand.
One example might be to watermark each image as delivered, not only with details which communicate authorship or ownership, but also transactional information. This could include licensing restrictions, details of the customer, or information which would allow the image to be easily recognised through some automated process designed to test for breaches of copyright.
The new Part 8 of the JPEG 2000 standard (JPSEC) dealing with security addresses these possibilities, whilst Part 9 (JPIP) defines how interactive applications between a client and server can be created. This too will be very important in the image database arena - as examples it makes retrieval of selected parts of an image much faster and easier to control, permitting 'pan and zoom' operations on part of an image. Demonstrations of this technology already exist (for example using Kakadu) in which several areas of an image can be selected by a user and are delivered more rapidly than the remaining, less interesting parts. A range of novel browsing opportunities therefore exists for remote client software, making the delivery of large, high-quality image information under user control a practical reality.
JPEG 2000 has many characteristics which are useful to one of its target market areas, medical imaging. Some background to this has been covered in a JPEG committee document (N2782) which also gives some useful information about how JPEG 2000 works. One key aspect which often concerns the medical profession is the need to ensure that images can be communicated losslessly, without any distortions introduced by a compression process that may lead to mis-diagnosis. This often results in huge files, which can be difficult to store, handle, and communicate. JPEG 2000 can be used to encode files completely (or partially) losslessly, and provides good compression performance for this purpose (similar, for example, to that offered by JPEG's optimised method for such compression, JPEG-LS (IS 14495)). It does however have several additional features which make JPEG 2000 particularly attractive for medical imaging:
- volumetric image support through Part 10
- selected parts of the image can be defined as Regions of Interest - they can then be delivered before other parts of the image, or losslessly, whilst other parts of the image that are less critical use normal lossy compression.
- the JPEG 2000 codestream can be ordered to deliver images of lower resolution, or reduced quality, well before the full image can be transmitted. This helps significantly in browsing applications, and means that only one file is needed for several applications
- extensive metadata can be included with the image, in a tight association. This means that files can be transmitted between recipients which can easily be processed, or indexed into an existing database. Some applications, such as those relating to DICOM standards, have their own sophisticated methods for handling this metadata, and JPEG are working with the DICOM committee to ensure that these two important standards can be easily integrated
- many different forms of image can be usefully compressed using JPEG 2000 - for example radiological, MRI, CAT and other medical imaging modalities, which use non-visual sensors, and may use enhancement techniques such as pseudo-colouring the resulting image
Many cultural heritage institutions such as Museums and Art Galleries have very extensive collections which are not visible to the public because of display capacity and other reasons. Projects, such as 'NOF-Digitise' in the UK with a budget in excess of $80M, are being created to try and provide on-line learning resources and other solutions for global access. Natural disasters such as fire, earthquake and flooding, as well as the problems created by war, vandalism and terrorism show the need to preserve this information in as accurate a form as possible lest the heritage be lost forever. In addition, it is critically important to use widely adopted standards that have some chance of longevity in the face of technological change. The UK 'Domesday' project, for example, in which the BBC helped many schools and individuals put together a comprehensive record of the UK (in November 1986) to celebrate the 900th anniversary of the Domesday Book was created using a BBC Master computer with a proprietary interface to an analogue videodisc. The problems less than 20 years later are self-evident, and the large expenditure on the project stands to be lost, unless emulators for the material can be created.
The original JPEG standard has been in existence almost as long as the Domesday project videodiscs described above. Whilst there are only one or two working Domesday players, and some attempts to rescue the situation, there are hundreds of millions of devices which can image JPEG files. The JPEG 2000 standard has been designed from the ground up to try and address many of the areas which concern the users in the Cultural heritage sector. These include:
- high quality lossless compression with full colour management
- extensions to moving and 3D images with the same advantages
- a royalty and license fee free baseline for widescale deployment
- protection of moral and copyrights through a well defined security mechanism which can (for example) offer undistorted thumbnail viewing and encrypted high resolution viewing from the same image file
- a comprehensive client / server architecture for users to be able to zoom in and out on images, or to request regions of interest to be served ahead of background material
- extensive metadata possibilities, including the incorporation of digital camera information (such as is held in EXIF files), as well as the Dublin Core metadata used as a basis for many cultural heritage projects
- adherence to well accepted and defined standards, including the use of XML, HTTP and others within the defined architecture
- a track record of accuracy and acceptance by the industry
Wireless communications offers its own particular set of problems. More specifically, wireless networks are characterized by the frequent occurrence of transmission errors along with a low bandwidth. Hence, they put strong constraints on the transmission of digital images. Since JPEG2000 provides high compression efficiency, it is a good candidate for wireless multimedia applications. Moreover, due to its high scalability, JPEG2000 enables a wide range of Quality of Service (QoS) strategies for network operators.
To be widely adopted for wireless multimedia applications, JPEG 2000 has to be robust to transmission errors. To address this issue, the JPEG committee has established a new work item, JPEG 2000 Wireless (JPWL), as Part 11 of the standard. Its purpose is to standardise tools and methods to achieve the efficient transmission of JPEG 2000 imagery over an error-prone wireless network.
The main functionality of the JPWL system is to protect the codestream against transmission errors. More precisely, the protection technique modifies the codestream to make it more resilient to errors, e.g. by adding redundancy or interleaving the data. The decoding process detects the occurrence of errors and corrects them whenever possible.
A second functionality is to describe the degree of sensitivity of different parts of the codestream to transmission errors. This information can subsequently be used for unequal error protection. More specifically, sensitive parts of the codestream can be more heavily protected than less sensitive parts.
A third functionality is to describe the locations of residual errors in the codestream. This information can subsequently be used to make a decoder aware of the information loss and to prevent decoding corrupted parts of the stream.
Using the technologies standardised in JPWL, JPEG2000 becomes very resilient to transmission errors. Therefore, JPEG2000 is an ideal candidate for the efficient transmission of digital images and video in wireless applications. Indeed, recent studies have shown that Motion JPEG2000 is very well suited for video transmission over wireless channels. Specifically, it has been shown that Motion JPEG2000 outperforms the state-of-the-art MPEG-4 in terms of coding efficiency, error resilience, complexity, scalability and coding delay.
While the proposed solutions are not tuned to a specific network protocol, particular attention has been paid to three important use cases: 3rd generation wireless phone networks (3GPP/3GPP2), WLAN (IEEE 802.11 family of standards) and Digital Radio Mondiale (DRM).
Among potential killer applications for JPWL, Multimedia Messaging Service (MMS) is experiencing very rapid growth and is widely seen as one of the only bright spots in the wireless telecom industry. Other potential applications include video streaming and video conferencing.
Pre-press is the process used when digital files are prepared for printing. Two key requirements of this process are fidelity and consistency. In the past, the pre-press industry has depended on lossless image compression (for example using EPS or TIFF file formats) and colour calibration of all components in the process, using defined lighting and viewing conditions in order to achieve optimum results.
JPEG 2000 offers opportunities to the pre-press industry to both substitute its traditional formats with the more advanced aspects inherent in JPEG 2000, and to re-purpose its content to allow it to be used in Internet publishing or other contexts. The same JPEG 2000 image can generate thumbnails, screen images and print ready material simply by truncating a prepared codestream at different points. In addition, the powerful metadata handling and association in JPEG 2000 files means that Digital Asset Management or workflow processes can be easily linked into pre-press delivery, providing security to the photographer, image creator, and printer alike.
A key facet is the ability of JPEG 2000 to deliver true lossless compression - in one possible operational mode even the colour transform from a defined colour profile such as sRGB is lossless. The ability exists within JPEG 2000 to use a number of well defined colour management profiles, and in particular ICC colour profiles, supporting the CMYK spaces used within the pre-press industry. As the file format can include full colour space definitions (at least in the extended version of JPEG 2000 defined in Part 2 of the standard), including the formula used for transforming to another colour space, proprietary and accurate colour representations can be transferred between systems (at least within the limitations of output devices to render them).
Remote sensing and GIS
Geographic Information Systems (GIS) allow the viewing and analysis of multiple layers of spatially related information associated with a geographical location or region. GIS enables companies and governments to easily analyze the development, maintenance, and impact of roads, vegetation, utilities (water, electrical, communication, sewage).
GIS includes maps, vector information, and imagery. The collection of imagery is commonly achieved through remote sensing. Remote sensing started with aerial photography in the late 1800s onboard a balloon. Airplanes were used to collect information from above in the early 1900s, and the first image taken from space was aboard the Apollo spacecraft in 1969. In the early 1970s the first imaging satellite (ERTS-1) collected imagery of the Earth. Images continue to be collected from both space and aircraft and are available for commercial and personal use on the Internet. The challenge for remote sensing images for GIS and other applications is the size of the image. Currently, it is common to have images that are greater than 10,000 by 10,000 pixels, with multiple bands and greater than 8 bits per pixel per band. While JPEG DCT is currently being used for the collection, storage and delivery of several GIS applications and remote sensing systems, other compression and file format techniques have become popular because of greater efficiency in storing and accessing large images.
The original requirements for JPEG 2000 included requirements from the remote sensing and GIS community, which have been met. Greater bit depths, tiles, resolution progression, quality progression, and fast access to spatial locations all contribute to the capability and functionality of JPEG 2000, which make it an ideal technology for remote sensing and GIS applications. As an open standard, it is expected that JPEG 2000 will become more prevalent in remote sensing and GIS applications.
Photography has changed the way people record and remember images, events, and scientific information. Photography, which started in the mid 1800's, has continued to evolve over the last two centuries of scientific discovery. The development of flexible film in the late 1800s, color photography in the mid 1900s, and automatic cameras in the late 1900's changed the way photographs are taken and presented. The most recent addition of digital photography has also changed the way people collect, store, modify, disseminate and display images. Digital photography started with the advent of the first commercial digital cameras for consumers and professionals in the early 1990's, along with the first systems for digitizing film images. As the technology advanced, the cost of digital cameras and film digitization services has dropped, and the image quality has increased. The image size for professional portable digital cameras continues to grow, from about 1 Megapixel in 1993 to 10 Megapixels or more in 2003.
As digital cameras evolve, the requirements for the file format used to store the image data continue to evolve also. Digital cameras continue to increase the size and bit depth collected for an image to increase the resolution and extend the dynamic range and color gamut. Digital Photography requires the ability to compress three-band imagery of 8-to-16 bits per component. Digital photography requires efficient, high quality compression as well as rapid decoding of properly sized images for the camera's display screen. Metadata for the proper use and display of the image is a requirement for digital photography.
Scientific and Industrial
Many applications in the scientific and industrial sector are now turning to the use of image material to either replace or enhance existing data records. Examples include the use of satellite or aerial photography imagery to link to a mapping or GIS system, and the ease of use of digital cameras to provide evidence of satisfactory work completion - for example in road pipework excavation. To some extent these depend on the ubiquity and availability of the standard, and hence projects often use well tried and tested solutions. The widespread availability of digital cameras, and PC software to display and print the results, are likely to keep costs down - especially important in sectors where deployment has to have a close link to company profitability.
However it is commonplace to have many different versions of images of the same item. As an example, a car manufacturer may have pictures of a vehicle engine used in service, marketing, quality assurance, training and testing applications. As digital asset management becomes recognised as an important control that a company should use, so more attention will need to be spent on reusability and re-purposing of digital assets such as images. JPEG 2000 offers many useful features in this context - proper colour management, compression that can include both lossy and lossless versions of an image in the same file, and extensive options to add user defined and standard metadata to an image file.
In addition, many aspects of scientific and industrial usage involve subsequent processing of a digital image, for example to enhance features or count items. Using any form of lossy compression for images in this context may create problems: after all, the information thrown away during lossy compression is generally information that is imperceptible to the human eye, which does not necessarily show the same characteristics as computer image-processing software. It therefore becomes more important to ensure that archival material is stored at the highest fidelity possible - but is still rapidly searchable and viewable during a pre-processing stage, for example. Again JPEG 2000 can offer significant advantages in this environment.
Extensive software toolkits are available from a number of vendors which support the new JPEG 2000 standard. These range from the freely available Jasper and JJ2000 software that is linked to Part 5 of the JPEG 2000 standard (Reference Software), to commercial alternatives from KakaduSoft, Aware, Algovision Luratech, Leadtools, Pegasus and others. These allow integration of the comprehensive features of JPEG 2000 into a wide range of products and systems.
Many key applications of the new JPEG 2000 standard will use the Internet, and Internet technologies to distribute images. JPEG 2000 images have a number of properties which make them very suitable for use with the Internet. Typically, Internet users are constrained from downloading large, high quality images because of their physical file size. Often providers of images must create three or more versions of an image, varying from a tiny thumbnail through to a page size image.
Digital cameras have improved in quality and resolution to a level where they are now competing effectively with traditional film. The images they generate are often no longer directly suitable for Internet deployment - the quality and size is wasted on traditional computer monitors. In part, this is because the monitor might show no more than a quarter of the captured image without scrolling, and in part because the colour fidelity of the monitor does not match that of the camera.
Both of these issues are addressed by JPEG 2000 standards. Images saved in JPEG 2000 format can be coded so that the data when transmitted and imaged gradually increases in resolution, starting with a thumbnail, or gradually increases in quality. A combination of these (and other) quality measures can also be achieved - and the user can stop the image transmission once they have enough detail to make their next choice, as the data is ordered in the file in the correct way to simplify its delivery by image servers.
Parts of JPEG 2000 that were created to facilitate these delivery methods are for example:
- Part 8 (JPSEC) deals with image security - for example showing how to use watermarking and other technologies to provide the technical base that many e-commerce applications will need to be able to show their material without risking its piracy
- Part 9 (JPIP) defines new methods to link and deliver the image metadata (information about the image, like its creation time and place) with the image itself, and to deliver under user control the most important pieces of this information first. For example, a doctor looking at an X-Ray could zoom in on areas of interest which could be magnified, or delivered at much enhanced quality long before the rest of the image.
- Part 10 (JP3D) addresses how three-dimensional representations of images could be communicated.
- Part 11 (JPWL) looks at how the particular characteristics of wireless communications and mobile telephony might affect transmission of JPEG 2000 images. It is of course strongly related to the work in JPSEC and JPIP.
Traditional surveillance technology has been quite slow to embrace the advantages of digital image processing. In part this has been because the sheer volumes of data have required analogue storage methods such as lapsed-time video recorders, and in part because the costs of moving to a digital base have been prohibitive. In the last few years, however, costs have fallen dramatically, whilst processing power and capabilities have improved equally fast. This allows many of the shortcomings of traditional surveillance applications to be addressed, whilst also considering many of society's concerns about privacy and intrusion.
Movement detection, and many more sophisticated forms of image analysis, can be coupled with new sensor technology to allow much more pro-active monitoring and alarms. Use of 'region of interest' enhancement allows accurate identification of suspects while excluding from analysis, and subsequent public exposure, the innocent bystander. Tight control can be exerted over the use of surveillance technology - for example, showing sufficient detail to allow recognition of an individual found to have passed a stolen cheque, whilst not permitting enough detail to allow a corrupt viewer of the scene to view and copy a signature being made.
The need for stored evidence to be of sufficiently high quality, however, also raises concerns and a need for protection against tampering and fabrication. It is very easy within the digital environment to change, either subtly or completely, aspects of an image and the metadata surrounding it. Techniques such as encryption and watermarking can be used to help protect against this risk, but there is a real need for well-accepted media management techniques which can help reduce risks in this area, for example using trusted third parties and crypto-technology. In addition, it is important that evidence is not segmented, being kept in a single file to avoid the obvious risks of misinformation.
Many of these aspects point to the potential usefulness of JPEG 2000 in this environment:
- the use of Motion JPEG 2000 has obvious advantages in catching sequences of actions, in which the initial view could be at low resolution, switching under the monitor's control to higher resolutions, faster frame rates, and including more metadata and regions of interest
- the file formats defined for JPEG 2000 allow both standardised and user metadata to be stored with the image data
- the new parts of JPEG 2000 extend its usefulness by adding new security support, effective client server communications, and an ability to link its features into an error-prone wireless infrastructure
- as a standard, costs of implementing the technology should be considerably lower than using proprietary technology with less risk of 'lock in'
Document imaging applications are often a trade-off between quality and compression. As technology has improved, and colour has become the norm for many publication formats, user quality expectations have also increased. As there is often a requirement for accurate on-screen viewing, as well as the ability to print high-quality facsimiles of an original document, compression requirements are often conflicting. Many documents contain areas which are best communicated in character-coded text format (to allow for optimum compression and indexing), together with photographic or half-toned images, graphics and other image types.
Which metadata can \includegraphics read and report?
I am using TeX Live 2016 with LuaTeX. My TeX documents include images, which must be 8-bit flattened grayscale (no transparency) or 1-bit monochrome, in PNG format. I am good with graphics programs, and correctly prepare the images with the proper model and resolution, finishing with Magick mogrify -strip and verifying with identify -verbose.
Apparently \includegraphics knows how to size an image at 100% scale, based on its pixels and resolution. It reads the metadata, I assume. Now for my question: can \includegraphics (or some other command within LuaLaTeX, not shell) give a yes-no answer as to whether an image is really grayscale or monochrome? The images are always PNG.
My concern is that an unprocessed image (say in rgba model, using only gray shades) could accidentally be included. I do not expect TeX to convert the image or operate on it in any way. All I want is for it to be inspected. If not flattened gray or monochrome, then I'd write a warning message to the log. Then I'd know that I mistakenly included the wrong kind of image.
Possible, without reams of code? I have the PDF spec, but reading it is not for amateurs.
My question is broader than it might seem. Anyone preparing PDF for black and white print-on-demand would face similar issues. I realize that the images are not PNG within the PDF but that is how they are input to TeX.
EDIT: After reviewing David's response, below, I looked at the image format in more detail. Something similar applies to JPG, but I will focus on PNG.
The PNG specification requires an IHDR chunk early in the file. The tenth byte after the string IHDR is a code for the image color model. A grayscale (or monochrome) image, without alpha channel, has hex code 00 there. Thus, if Lua has the ability to directly read the image file bytes, it should be able to scan for IHDR, count to ten, and report the byte.
All of my explanations are centered around the current Zindi: Farm Pin Crop Detection Challenge. Let’s get started.
First of all, we need to download the data. There are two archives that we will use: train.zip, containing GeoJSON, and 201-01-01.zip, containing GeoTIFF files. All of it is open data from the Sentinel-2 satellite. Inside you will find .jp2 files.
- gdalinfo and ogrinfo - for printing metadata about a file (coordinate system, tiling, bands, attribute fields etc.). Tools with gdal in the name are for raster files and those with ogr for vector files.
- gdal_translate and ogr2ogr - for changing format and modifying files.
- gdalwarp and ogr2ogr - for changing coordinate system.
- gdal_merge and ogr2ogr - for joining files.
- gdal_edit - for changing raster files in place.
- GDAL additionally has many other tools; for a full reference see the GDAL programs page, where all tools and their options are described in detail.
It is important to notice that most tools create a new file, and some settings might change because GDAL defaults are used - for example raster inner tiling or compression. If you want to preserve these, check with gdalinfo what has been used and add the corresponding options to the commands.
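As a sketch of that workflow: read the tiling and compression from gdalinfo output, then pass them back explicitly as creation options. The helper below only assembles a gdal_translate argument list (TILED, BLOCKXSIZE/BLOCKYSIZE and COMPRESS are standard GTiff creation options; the file names are placeholders):

```python
def translate_command(src, dst, creation_options):
    """Build a gdal_translate command that carries explicit creation
    options instead of relying on the GDAL defaults."""
    args = ["gdal_translate", src, dst]
    for key, value in creation_options.items():
        args += ["-co", "%s=%s" % (key, value)]
    return args

# The values here would come from inspecting the source file with gdalinfo.
cmd = translate_command("input.tif", "output.tif",
                        {"TILED": "YES", "BLOCKXSIZE": 512,
                         "BLOCKYSIZE": 512, "COMPRESS": "DEFLATE"})
```

When GDAL is installed, the resulting list can be run with subprocess.run(cmd, check=True).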
I found an answer, just for site collection created on web application root.
Use this command to backup a site collection
It's important to use an absolute path as -Path parameter.
Then create a new web application and then run the following command
I hope that this can help you.
A couple of things you can do:
1) Back up the current site collection and then restore it with a different name & URL.
2) You can use Export Site Collection and Import Site Collection. You will lose the metadata etc.
3) You can use a 3rd-party tool for this, which gives you more power & flexibility with less effort. Try the ShareGate migration tool; they have a 15-day trial.
For SharePoint Online - the way to do this is via PnP:
This will get all of the site columns, site content types, term store, list definitions, (empty) lists, Site Structure and copy them to the new site.
If you want content moved across as well, then that needs more PowerShell, but is doable.
An easy way to do that is to use Copy-SPSite cmdlet. I had to create several site collections based on one original site collection that was created as a "template". Created the template site collection in its own db (use CA or New-SPSite to add a db to a web application). Then every time I need a new site collection:
- Create a new site collection db.
- Use Copy-SPSite to copy the template site collection to a new one on the new db.
My dba was a little annoyed by the number of databases created but we learned that to put each site collection in its own db eases SharePoint admin tasks (backup/restore/move/quotas/etc.).
Member Function Documentation
Set the PDS parameters from a source file.
The current parameters are cleared before the new parameters are moved in, but not before the new parameters have been successfully obtained from the source file. Then the projection information is initialized.
N.B.: The name of the parameters group is set to the source file pathname.
|pathname||The pathname to the source file that will be parsed for PVL parameters.|
|std::ios::failure||If the source file can not be accessed or read.|
|idaeim::PVL::Invalid_Syntax||If the source file contains invalid PVL syntax.|
Set the PDS parameters from a Parameter Aggregate.
The current parameters are cleared before the new parameters are moved in. Then the projection information is initialized.
N.B.: The name of the parameters group is set to the name of the Parameter Aggregate.
|parameters||A Parameter Aggregate that is the source of new parameters.|
Get the geo-transform array.
The geo-transform contains the six coefficients of a two-dimensional affine transformation. In the classic expression of the two-dimensional affine transformation matrix by Newman and Sproull (Newman, W.M. and Sproull, R.F., "Principles of Interactive Computer Graphics", McGraw-Hill, 1979, section 4-3 Matrix Representations, p. 57ff) a 3x3 matrix representation is used. The last column is elided away because it is always the identity vector:
For an image translation operation by Ox,Oy the matrix values are:
For an image scaling operation by Sx,Sy the matrix values are:
For an image rotation operation by an angle A the matrix values are:
These operations can be concatenated - each operation matrix, including the third column identity vector, multiplied by the subsequent operation matrix - to produce a single matrix that incorporates all of the individual transformation operations.
For the geo-transform array the corresponding values are, by array index: GT[0] = c, GT[1] = a, GT[2] = b, GT[3] = f, GT[4] = d, GT[5] = e, or:
The geo-transform array contains the six coefficients that result from concatenating an offset of the image back (negative) to the projection origin, rotating the image to align with the projection (north up), and scaling from pixel units to world units (meters):
- -(Sx * Ox * cos (A)) - (Sx * Oy * sin (A))
- Sx * cos (A)
- Sx * sin (A)
- (Sy * Ox * sin (A)) - (Sy * Oy * cos (A))
- Sy * -sin (A)
- Sy * cos (A)
- Ox,Oy - The offset, in pixels, of the projection origin relative to the upper-left corner of the upper-left image pixel.
- Sx,Sy - The pixel size dimensions, in meters.
- A - The clockwise angle of rotation about the projection origin relative to the vertical (north-pointing) axis.
For the normal case of an unrotated image projection this reduces to:
N.B.: Image pixel coordinates use a left-handed coordinate system in which the horizontal x-axis is positive to the right from the origin at the upper left corner of the upper left pixel along increasing sample numbers and the vertical y-axis is positive downwards from the origin along increasing line numbers. However the geo-transform operates in the conventional right-handed coordinate system (in meter units) in which the x-axis is positive to the right from the projection origin and the vertical y-axis is positive upwards from the origin. This reversal of the positive vertical direction can be the source of confusion when using the geo-transform values.
Returns: A pointer to the array of six double values of the geo-transform. N.B.: This array is owned by the PDS_Projection_Data object; modifying it will affect the results of its use in subsequent projection operations.
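As a sketch, applying the six coefficients to an image coordinate is a direct evaluation of the affine expression above (the function name and sample values are mine, for illustration only):

```python
def pixel_to_world(gt, sample, line):
    """Map image (sample, line) coordinates to world (x, y) coordinates
    using the six geo-transform coefficients described above."""
    x = gt[0] + sample * gt[1] + line * gt[2]
    y = gt[3] + sample * gt[4] + line * gt[5]
    return x, y

# Unrotated case: 1-meter square pixels, projection origin at the
# upper-left corner; note y decreases as line numbers increase.
gt = [0.0, 1.0, 0.0, 0.0, 0.0, -1.0]
```

With this gt, pixel (10, 5) maps to world coordinates (10.0, -5.0), illustrating the left-handed/right-handed vertical reversal noted above.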
Get the OGRSpatialReference.
Returns: If an OGRSpatialReference was assembled from the available parameter values, a copy is returned; otherwise NULL is returned.
The Projection_Type code, other than UNKNOWN_PROJECTION, can be used as an index into the Projection_Definitions array to obtain details of the mapping between PDS parameters and GDAL SRS parameters.
Returns: A Projection_Type code. If a projection could not be initialized this will be the UNKNOWN_PROJECTION value.
Get the name of the applicable projection.
Any given type of projection may have several aliases for its name. The name returned by this method is the one that is specified by the PDS PROJECTION_TYPE_PARAMETER_NAME parameter. To obtain the projection name recognized by the GDAL OGRSpatialReference class, use the projection type code, if it is not UNKNOWN_PROJECTION, as the index into the list of projection definitions and use the first entry in the Aliases array:
Returns: A string having the formal name of the applicable projection. If a projection could not be initialized this will be an empty string.
Initialize the geo-transform array values from the PDS parameter values.
There are three required parameters: PIXEL_SIZE_PARAMETER_NAME, HORIZONATAL_OFFSET_PARAMETER_NAME and VERTICAL_OFFSET_PARAMETER_NAME. If a parameter group with the IMAGE_MAP_PROJECTION_GROUP_NAME is present the parameters will be sought in this group otherwise they may occur anywhere in the current PDS_Data. If none of the required parameters can be found the geo-transform array will be set to the identity values.
The geo-transform values are set as follows:
The pixel is presumed to have a square shape thus the PIXEL_SIZE value of the PIXEL_SIZE_PARAMETER_NAME parameter applies to both the horizontal and vertical dimension.
The geo-transform values are in a real-world space in which distance is measured in meters. The PDS Data Dictionary specifies that the units for the PIXEL_SIZE_PARAMETER_NAME parameter should be "KM/PIX". However, for some data products (e.g. HiRISE) the pixel size is measured in meters. The only way to distinguish which units apply is to examine the units string associated with the value. However, if the value has no units string it must be assumed that the PDS Data Dictionary units are in effect. Thus the units_scale is 1000 if the parameter units string is empty or specifies kilometers; otherwise it is 1.
N.B.: The pixel vertical (GT[5]) dimension is negative to compensate for the left-handed image coordinate system, in which positive vertical is downwards, and the right-handed geographic coordinate system, in which positive vertical is upwards.
The HORIZONTAL_OFFSET and VERTICAL_OFFSET values of the HORIZONATAL_OFFSET_PARAMETER_NAME and VERTICAL_OFFSET_PARAMETER_NAME parameters, respectively, are the offsets of the map projection origin from the image origin in image coordinate space (sample,line). Because the PDS image origin is (1,1) - there is no (0,0) pixel - while the geographic projection origin is (0,0) the offset values are adjusted to a conventional (0,0) image origin.
Warning: Different PDS data products locate the image origin differently: some use the upper-left corner of the upper-left pixel, while others use the center of the upper-left pixel. Unfortunately there is nothing in the PDS label to indicate which location is being used. This implementation assumes that the image origin is located at the upper-left corner of the upper-left pixel.
Returns: This PDS_Projection_Data object. Exceptions:
|std::logic_error||If the PROJECTION_ROTATION_PARAMETER_NAME parameter is found and it has a non-zero value (hopefully this can be supported in a future version of this class).|
|idaeim::PVL_Invalid_Value||If any of the required PDS parameters is not an assignment of a single numeric value.|
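The unit scaling and sign conventions described above can be sketched as follows. This is an illustration, not the class's actual code: in particular, the one-pixel origin adjustment is my reading of the (1,1)-to-(0,0) remark, and the kilometer test is simplified to the literal "KM/PIX" string:

```python
def pds_geotransform(pixel_size, units, sample_offset, line_offset):
    """Build an unrotated geo-transform from PDS-style parameters."""
    # PDS Data Dictionary default units are KM/PIX; an empty units
    # string is therefore also treated as kilometers.
    units_scale = 1000.0 if units in ("", "KM/PIX") else 1.0
    size = pixel_size * units_scale          # meters per (square) pixel
    # Assumed adjustment of the PDS (1,1) image origin to (0,0).
    ox = sample_offset - 1.0
    oy = line_offset - 1.0
    sx, sy = size, -size                     # vertical dimension is negative
    return [-sx * ox, sx, 0.0, -sy * oy, 0.0, sy]
```

For a 1 m pixel in "M/PIX" units with offsets (1,1), this yields the identity-like transform [0, 1, 0, 0, 0, -1].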
Initialize the spatial reference information of a new OGRSpatialReference object.
A GDAL OGRSpatialReference object (http://www.gdal.org/ogr/classOGRSpatialReference.html) is initialized if all its required PDS parameter values are successfully found.
First the projection name is erased, the projection type is initialized to UNKNOWN_PROJECTION, any existing OGRSpatialReference object is deleted and the spatial reference set to NULL.
If a parameter group with the IMAGE_MAP_PROJECTION_GROUP_NAME is present all remaining PDS parameters will be sought in this group otherwise they may occur anywhere in the current PDS_Data.
A PROJECTION_TYPE_PARAMETER_NAME parameter is required that names the type of applicable projection. This determines the projection name which is used to search the list of known projection definitions for a Projection_Definition with a matching name in its Aliases list. If no match is found nothing more is done.
A tentative OGRSpatialReference object is constructed and initialized (OGRSpatialReference::SetProjection) with the formal projection name - the first name in the projection definition's Aliases list - which may be different from the PDS projection name. The value of each required PDS parameter of the projection definition is used to initialize (OGRSpatialReference::SetNormProjParm) the associated OGRSpatialReference object's spatial reference system parameter. In addition, each default value of the projection definition is used to initialize the associated OGRSpatialReference object's spatial reference system parameter.
The OGRSpatialReference object's projection coordinate system name is initialized (OGRSpatialReference::SetProjCS) with the formal projection name followed, after a single space character, by the planet name. The planet name is obtained from the PLANET_PARAMETER_NAME PDS parameter; if this can't be found, "Unknown" is used. The geographic coordinate system specification is initialized (OGRSpatialReference::SetGeogCS) with the following values:
- Planet name - The planet name, as determined above. If the projection type is EQUIRECTANGULAR a "_localRadius" suffix is applied. If the projection type is POLAR_STEREOGRAPHIC a "_polarRadius" suffix is applied.
- Geographic name - The unaltered planet name with a "GCS_" prefix.
- Datum name - The unaltered planet name with a "D_" prefix.
- Planet axis radius - The value of the SEMI_MAJOR_RADIUS_PARAMETER_NAME parameter. However, if the projection is not geocentric based and the projection type is POLAR_STEREOGRAPHIC, or STEREOGRAPHIC with a CENTER_LATITUDE_PARAMETER_NAME parameter absolute value of 90 (i.e. stereographic centered at a pole), then the SEMI_MINOR_RADIUS_PARAMETER_NAME parameter value is used.
The use of a geocentric based projection (i.e. planetographic) is determined by the COORDINATE_SYSTEM_PARAMETER_NAME parameter, or the LATITUDE_TYPE_PARAMETER_NAME if the former is not found. Unless the parameter value is "planetocentric" (case insensitive), planetographic applies. However, the EQUIRECTANGULAR, ORTHOGRAPHIC and SINUSOIDAL projection types are not subject to this condition; they are always deemed to be planetocentric.
- Planet inverse flattening - The inverse flattening of an ellipsoidal planet model is the ratio of the semi-major axis radius to the difference between the semi-major and semi-minor axis radii (using the parameters indicated above). Note that as the two values approach equality the ratio approaches infinity; if the difference is less than or equal to 0.00000001 the inverse flattening is set to zero, which indicates a spherical planet model. Also, if the projection is planetocentric (as determined above) or pole-centered stereographic (as determined above) then the inverse flattening is always set to zero.
- Prime meridian name - This descriptive name is set to "Reference_Meridian".
- Prime meridian offset - Set to 0.
If the tentative OGRSpatialReference object can not be initialized for any reason it is deleted. Otherwise it is set as the current spatial reference object. Note that it is possible to have identified a projection name and projection type but fail to initialize a spatial reference object.
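The inverse-flattening rule described above reduces to a few lines; this sketch uses hypothetical argument names:

```python
def inverse_flattening(semi_major, semi_minor,
                       planetocentric=False, pole_centered_stereo=False):
    """Inverse flattening per the rules above: zero means a spherical model."""
    if planetocentric or pole_centered_stereo:
        return 0.0
    if (semi_major - semi_minor) <= 0.00000001:
        return 0.0   # radii effectively equal: spherical planet model
    return semi_major / (semi_major - semi_minor)
```

For example, radii of 5.0 and 4.0 give an inverse flattening of 5.0, while equal radii, a planetocentric projection, or a pole-centered stereographic projection all give 0.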
Returns: This PDS_Projection_Data object. Exceptions:
|idaeim::Invalid_Argument||If a required PDS parameter can not be found.|
|idaeim::PVL_Invalid_Value||If a required PDS parameter is not an assignment of a single numeric (or string, as the situation requires) value.|
|std::logic_error||If an OGRSpatialReference initialization operation fails.|
Get a JP2_Box with GeoTIFF data content.
The GeoTIFF JP2_Box is a JP2_Box::UUID_BOX_TYPE with content that begins with the GEO_TIFF_UUID signature. Immediately following the signature is binary data produced by the GDAL GTIFMemBufFromWkt function, which operates on the Spatial Reference System Well-Known Text representation of the current projection specifications (generated by the OGRSpatialReference::exportToWkt method) and the current geo-transform array.
Returns: A pointer to a JP2_Box containing the GeoTIFF data, or NULL if no spatial reference object has been initialized. N.B.: Ownership of the JP2_Box is transferred to the caller. Exceptions:
|std::logic_error||If the GeoTIFF data production by GDAL fails.|
Get a JP2_Box with GML data content.
If either the image width or height argument is not specified, the PDS IMAGE_WIDTH_PARAMETER_NAME and IMAGE_HEIGHT_PARAMETER_NAME parameters will be used to obtain the required values.
The GML JP2_Box is a JP2_Box::ASSOCIATION_BOX_TYPE containing a JP2_Box::LABEL_BOX_TYPE with a "gml.data" label and a JP2_Box::ASSOCIATION_BOX_TYPE containing a JP2_Box::LABEL_BOX_TYPE with a "gml.root-instance" label followed by a JP2_Box::XML_BOX_TYPE containing a "gml:FeatureCollection" XML document that provides the current geo-transform specifications. N.B. This XML document is a hardcoded copy of the document format text taken verbatim from the GDAL GDALJP2Metadata::CreateGMLJP2 method.
If the current spatial reference object is flagged as projected (OGRSpatialReference::IsProjected) or geographic (OGRSpatialReference::IsGeographic) and its corresponding authority name (OGRSpatialReference::GetAuthorityName) is "epsg" (case insensitive), the XML "srsName" will be "urn:ogc:def:crs:EPSG::" followed by the spatial reference authority code (OGRSpatialReference::GetAuthorityCode).
If no EPSG authority code is found then the XML "srsName" will be "gmljp2://xml/CRSDictionary.gml#ogrcrs1" and an additional JP2_Box::ASSOCIATION_BOX_TYPE will be added to the primary JP2_Box. This will contain a JP2_Box::LABEL_BOX_TYPE with a "CRSDictionary.gml" label followed by a JP2_Box::XML_BOX_TYPE containing a "gml:Dictionary" XML document with the following references:
The XML dictionary document is produced by the OGRSpatialReference::exportToXML method of the current spatial reference object.
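The srsName selection described above amounts to a simple branch; a sketch (the function and argument names are mine, not the class's API):

```python
def gml_srs_name(is_projected, is_geographic, authority_name, authority_code):
    """Choose the GML srsName: an EPSG URN when an EPSG authority code is
    available, otherwise a reference to the in-file CRS dictionary."""
    if (is_projected or is_geographic) and \
            (authority_name or "").lower() == "epsg":
        return "urn:ogc:def:crs:EPSG::" + str(authority_code)
    return "gmljp2://xml/CRSDictionary.gml#ogrcrs1"
```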
WARNING: As of this writing no tests have been able to confirm that the GML content is correct.
|image_width||The width of the image to be referenced. If zero, the IMAGE_WIDTH_PARAMETER_NAME parameter value, if available, will be used.|
|image_height||The height of the image to be referenced. If zero, the IMAGE_HEIGHT_PARAMETER_NAME parameter value, if available, will be used.|
|idaeim::Exception||If the image width or height values can not be determined from the PDS_Data.|
|std::logic_error||If the GeoTIFF data production by GDAL fails.|
Try to match a name against an alias list.
The name is first uppercased and spaces are replaced with underbars, then compared against each name in the alias list in uppercase form. If this fails, the comparison is then made against each alias in uppercase form with any underbar characters removed. Note that the name matching is case insensitive. N.B.: Alias names containing any space characters will never be matched; don't use space characters in alias names.
|name||The name to be checked.|
|aliases||The NULL-terminated list of alias names.|
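A sketch of the two-pass comparison described above (stripping underbars from the candidate name in the second pass is my assumption; the description only mentions stripping them from the aliases):

```python
def matches_alias(name, aliases):
    """Case-insensitive name match against a list of aliases."""
    key = name.upper().replace(" ", "_")     # first pass: spaces -> underbars
    upper = [a.upper() for a in aliases]
    if key in upper:
        return True
    # Second pass: compare with underbar characters removed from both sides.
    return key.replace("_", "") in [a.replace("_", "") for a in upper]
```

For example, "polar stereographic" matches the alias "POLAR_STEREOGRAPHIC" on the first pass, and "PolarStereographic" matches it on the second.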
Get the projection type code for a projection name.
The list of known projection definitions is searched for a projection name alias that matches the specified projection name.
|projection_name||The projection name to match against a Projection_Definition aliases list.|
Best way to copy GDAL metadata from one JP2 image to another - Geographic Information Systems
In February 2005 the Open Geospatial Consortium (OGC) launched a new Encoding Interoperability Experiment relating to the use of the Geography Markup Language in JP2 (JPEG 2000) image files. The goal is to support a standardized mechanism for inclusion of geo-referencing information as XML-encoded metadata within the ISO 15444 JPEG 2000 image format.
The Geography Markup Language (GML) is the most widely supported open specification for representation of geographic (spatial and location) information. It defines XML encoding for the transport and storage of geographic information, including both the geometry and properties of geographic features. In keeping with OGC's IPR Policy for royalty-free OGC standards, the GML specification is freely available for use on royalty-free terms. GML provides a variety of kinds of objects for describing geography including features, coordinate reference systems, geometry, topology, time, units of measure and generalized values.
In March 2004 OGC approved the release of the GML Implementation Specification Version 3.1.0 as a publicly available OGC Recommendation Paper. The GML specification is now being edited jointly in the OGC GML Revision Working Group and in ISO/TC 211/WG 4 (Geographic Information/Geomatics). Standardized as ISO 19136 in the ISO/TC 211 context, GML is "a detailed XML implementation of the General Feature Model (GFM), and most of ISO 19123, 19107, 19108, 19111, along with some utility feature types such as observations, and utility components such as units of measure."
JPEG 2000 is the successor to the earlier ISO 10918-1 'JPEG' standard for digital images. JPEG 2000 "uses 'wavelet' technology; as well as being better at compressing images (20 percent plus), it can allow an image to be retained without any distortion or loss." According to LizardTech, JPEG 2000 (ISO/IEC 15444-1) is a high-end alternative to the popular JPEG image format which "offers high-quality lossy and lossless image compression in a multi-resolution (pyramidal) format that is internal to the file structure. JPEG 2000 is highly scalable in several dimensions: it supports file sizes into the gigabyte range and beyond, multispectral and hyperspectral datasets with increased bit-depths, and selective decompression of scenes within the image at user-controllable qualities. It provides for a rich set of primitives for transporting compressed image data in a Web services environment: in a network environment, JPEG 2000 images can be streamed from server to client while still in compressed form, allowing viewers to access only the data (pixels) they need and at only the resolution and quality they require."
As with other XML-based or XML-aware image file formats, the JPEG 2000 standard makes provision for several forms of XML-encoded metadata in the JPX file format within reserved 'boxes'. Normative Annex M 'JPX file format extended metadata definition and syntax' defines a comprehensive set of optional metadata elements that may be embedded in a JPX file within XML boxes. Metadata types are documented in four logical groups: image creation metadata, content description metadata, metadata history metadata, and intellectual property rights metadata. Section M.8 supplies the JPX extended metadata document type definition (DTD). The standardized part of the metadata model is based upon the DIG35 Metadata Standard for Digital Images. Information in the XML 'box' is not application specific, however, allowing for arbitrary conforming XML-encoded metadata.
The new Open Geospatial Consortium "GML in JPEG" Interoperability Experiment has been formed to "test and refine a draft implementation specification that defines how Geography Markup Language is to be used within JPEG 2000 data packages for geographic imagery. The Interoperability Experiment will implement several prototype GMLJP2 codecs (data compressor/decompressors) based upon an OGC draft specification, 04-045, titled 'GML in JPEG 2000 for Geographic Imagery'; this specification proposal was submitted to OGC by Galdos Systems Inc. and LizardTech. The purpose is to confirm that the specification will support the requirements of geospatially related imagery over the Internet, and to improve the specification if it does not support these requirements. The participants will perform several individual experiments of increasing complexity and will demonstrate encoding similar to GeoTIFF."
The initiator organizations are Galdos Systems, Inc. (Canada), LizardTech, Inc. (US), and the European Union Satellite Centre (Spain). Other organizations participating in the initiative include DM Solutions Group (Canada), ITT Industries Space Systems Division (US), SPOT Image (France), US Geological Survey Astrogeology, US NASA Jet Propulsion Laboratory, and Intergraph (Z/I Imaging) (US).
The OGC project team will produce a GMLJP2 Encoding Interoperability Experiment Report based upon the outline of goals identified in the proposed document, covering specification of the uses of GML within JPEG 2000, packaging mechanisms for including GML within JPEG 2000, and GML application schemas for encoding of OGC coverages within JPEG 2000. A number of experiments will be performed using GML application schemas (Coverage Range, Coverage Range Image Copy, Annotated Image, Embedded CRS, Embedded Units of Measure Definition, Bundled GML Features, Multi-image Case) as well as minimal instance experiments for Image Import/Export and Metadata Read. Other OGC specifications impacted by the study include Web Feature Service (WFS) and Web Coverage Service (WCS).
Initiatives classified as OGC Interoperability Experiments are part of the OGC Interoperability Program, which is "an essential part of OGC's fast, effective, inclusive user-driven process to develop, test, demonstrate, and promote the use of OGC Specifications." OGC Interoperability Experiments are designed as "brief, inexpensive, low-overhead initiatives led and executed by OGC members to achieve specific technical objectives that further the OGC Technical Baseline. They provide an opportunity for OGC members to launch and run an initiative without the more substantial sponsorship that supports OGC's traditional testbeds and pilot projects. These initiatives can be for specification development, refinement, or testing or for other purposes approved by the OGC Review Board." Other OGC Interoperability Programs include fast-paced, multi-vendor test beds, OGC Pilot Projects, and OGC Interoperability Support Services.
The Open Geospatial Consortium (OGC) is "an international industry consortium of more than 270 companies, government agencies and universities participating in a consensus process to develop publicly available interface specifications. OpenGIS Specifications support interoperable solutions that 'geo-enable' the Web, wireless and location-based services, and mainstream IT. The specifications empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications."
About the Geography Markup Language (GML)
"Geography Markup Language is an XML grammar written in XML Schema for the modelling, transport, and storage of geographic information. The key concepts used by Geography Markup Language (GML) to model the world are drawn from the OpenGIS Abstract Specification and the ISO 19100 series.
GML provides a variety of kinds of objects for describing geography including features, coordinate reference systems, geometry, topology, time, units of measure and generalized values.
A geographic feature is 'an abstraction of a real world phenomenon; it is a geographic feature if it is associated with a location relative to the Earth'. So a digital representation of the real world can be thought of as a set of features. The state of a feature is defined by a set of properties, where each property can be thought of as a {name, type, value} triple.
The number of properties a feature may have, together with their names and types, are determined by its type definition. Geographic features with geometry are those with properties that may be geometry-valued. A feature collection is a collection of features that can itself be regarded as a feature; as a consequence, a feature collection has a feature type and thus may have distinct properties of its own, in addition to the features it contains.
Geographic features in GML include coverages and observations as subtypes. A coverage is a sub-type of feature that has a coverage function with a spatial domain and a value set range of homogeneous 2 to n dimensional tuples. A coverage can represent one feature or a collection of features 'to model and make visible spatial relationships between, and the spatial distribution of, earth phenomena.'
An observation models the act of observing, often with a camera, a person or some form of instrument ('an act of recognizing and noting a fact or occurrence often involving measurement with instruments'). An observation is considered to be a GML feature with a time at which the observation took place, and with a value for the observation. A reference system provides a scale of measurement for assigning values 'to a location, time or other descriptive quantity or quality'.
A coordinate reference system consists of a set of coordinate system axes that is related to the earth through a datum that defines the size and shape of the earth. Geometries in GML indicate the coordinate reference system in which their measurements have been made. The 'parent' geometry element of a geometric complex or geometric aggregate makes this indication for its constituent geometries.
A temporal reference system provides standard units for measuring time and describing temporal length or duration. Following ISO 8601, the Gregorian calendar with UTC is used in GML as the default temporal reference system. A Units of Measure (UOM) dictionary provides definitions of numerical measures of physical quantities, such as length, temperature, and pressure, and of conversions between UOMs. " [from the version 3.0.1 specification Introduction]
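The feature/property model quoted above can be sketched as a small, self-contained Python example. The `app:Road` feature type, its property names, and the `example.com` namespace are invented for illustration; only the `gml` namespace URI and the idea that a feature is a set of named, typed properties (one of them geometry-valued) come from the text.

```python
# A minimal sketch of the GML feature/property model, using only the
# standard library. The feature type, property names, and "app" namespace
# are hypothetical; real features are defined by an application schema.
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"
APP = "http://example.com/app"  # hypothetical application namespace

ET.register_namespace("gml", GML)
ET.register_namespace("app", APP)

# A feature's state is a set of {name, type, value} properties; here two
# are simple values and one (gml:centerLineOf) is geometry-valued.
feature = ET.Element(f"{{{APP}}}Road", {f"{{{GML}}}id": "r1"})
name = ET.SubElement(feature, f"{{{APP}}}roadName")
name.text = "Main Street"
classification = ET.SubElement(feature, f"{{{APP}}}classification")
classification.text = "arterial"
geom_prop = ET.SubElement(feature, f"{{{GML}}}centerLineOf")
point = ET.SubElement(geom_prop, f"{{{GML}}}Point", srsName="EPSG:4326")
pos = ET.SubElement(point, f"{{{GML}}}pos")
pos.text = "-97.023 32.396"

# Serialize and re-parse to show the instance round-trips as plain XML,
# which is what lets it be packaged inside a JP2 file as an XML box.
xml_bytes = ET.tostring(feature)
parsed = ET.fromstring(xml_bytes)
print(parsed.find(f"{{{APP}}}roadName").text)  # Main Street
```

Note how the geometry carries its coordinate reference system via `srsName`, matching the passage's point that GML geometries indicate the CRS in which their measurements were made.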
"ISO 19136 (issued by Open GIS Consortium as GML 3.1) is a detailed XML implementation of the General Feature Model (GFM), and most of ISO 19123, 19107, 19108, 19111, along with some utility feature types such as observations, and utility components such as units of measure. The GML encoding is described directly using W3C XML Schema (WXS). However, rules for mapping GML to and from UML models are carefully described in Annex E and Annex F of ISO 19136. These rules can be applied providing the UML follows the profile described in ISO 19103. In particular GML provides a pattern for defining domain-specific feature-types. GML uses WXS to define components in the gml namespace. Each domain-specific language based on GML is termed a GML Application Language, and defines components (in particular, Features) in their own namespace, which are derived from or incorporate components drawn from GML. Effectively, the complete WXS representation for the GML Application Language is a formalisation of the Feature Type Catalogue for the application domain, plus some supporting components." [SEEGRID II Conference Wiki]
Overview of the OGC's JPEG 2000 and GML Initiative
Sensor Models: Original satellite and aerial imagery are now much more than just three 8-bit RGB bands, and JPEG 2000 supports many of the imagery requirements users now see, such as bit-depths of 16 or higher and hyperspectral bands. These images typically have rich sets of associated metadata. For example: (1) full descriptions of the cameras may include sensor characteristics such as the number and wavelengths of the spectral bands, precision and calibration information, and type of sensor (2) positioning information may include the usual coordinate system and position of the image relative to Earth (3) image quality information might include cloud cover estimates, NIIRS rating and air quality at time of collection. Using GML, an application schema can be written that captures this metadata structure; this could be defined and provided by the imagery vendor within their supplied JP2 imagery, and GML instance data would provide the actual metadata content.
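As a concrete sketch of the schema-plus-instance split described above, the hypothetical fragment below plays the role of vendor-defined sensor metadata carried inside a JP2 file, and a few lines of Python extract the band and quality information. The element names and namespace are invented for illustration; a real vendor would publish its own GML application schema.

```python
# Hypothetical sensor-metadata instance of the kind the passage describes.
# Element names and the namespace are illustrative, not from a real schema.
import xml.etree.ElementTree as ET

SENSOR_XML = """\
<sensorMetadata xmlns="http://example.com/sensor">
  <sensorType>pushbroom</sensorType>
  <band number="1" wavelengthNm="485"/>
  <band number="2" wavelengthNm="560"/>
  <band number="3" wavelengthNm="660"/>
  <cloudCoverPercent>12</cloudCoverPercent>
</sensorMetadata>
"""

NS = {"s": "http://example.com/sensor"}
root = ET.fromstring(SENSOR_XML)

# Pull out the per-band wavelengths (sensor description) and the
# cloud-cover estimate (image quality information).
wavelengths = [int(b.get("wavelengthNm")) for b in root.findall("s:band", NS)]
cloud_cover = int(root.find("s:cloudCoverPercent", NS).text)
print(wavelengths, cloud_cover)  # [485, 560, 660] 12
```

The point of the GML approach is exactly this separation: the application schema fixes the structure (what a `band` is, what units `wavelengthNm` uses), while each JP2 file carries only instance data.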
Multiple Images and Feature Identification: JPEG 2000 permits multiple images to be contained within the same file. Consider a workflow in which images of a particular region are to be captured and analyzed during a period of months: (1) multiple sets of stereopair imagery are to be stored, each pair has associated metadata, including date and time stamps (2) the images all share a common coordinate system, but all are at slightly different coordinate positions (3) any individual image may have specific features identified on that image. All of this information may be contained in a single JP2 file. In addition to the data corresponding to the individual images, the file may contain 'shared' GML data describing the coordinate system and feature descriptions for all images and 'private' GML data for each image describing specific feature instances and offsets within the image.
The Spatial Web: Increasingly geospatial data are being used in the Spatial Web environment, where Web services are used to transparently perform operations such as: (1) providing catalog access to large archives of geospatial data (2) exporting the data itself, based on query parameters (3) performing mosaicking and layering operations (4) performing simple feature classification, description and extraction (5) styling the data for presentation. Such workflows are already in use today with vector data; GML is the language of the GeoWeb, used to describe regions and extents, define and label features, and express queries. As users add raster imagery to this system, they must be able to use GML for characterizing the imagery and its associated features. "
GML - JPEG 2000: Earlier Discussion 2004
"The OGC proposes to use GML to store geo-locational, feature overlay, annotation and text information within JP2 files. Unfortunately this is likely to be incompatible with the existing GeoJP2 format. Specific metadata proposed by the OGC for storage are: (1) acquisition date/time (2) imagery type, source and rendering (3) spatial resolution and geolocation accuracy (4) SensorML, an XML variant used to hold sensor metadata (5) quality assessment metadata such as percentage cloud cover and missing pixels (6) ISO 19115 metadata elements such as title, source and classification.
One of the big issues to be addressed is whether to use JP2 or JPX as the OGC standard. JP2, being more restrictive and inflexible, is less able to hold the proposed set of metadata and be expanded in future. On the other hand, there is a danger that existing JP2 readers would be unable to handle a JPX-based geospatial format.
Mark Kyle (LizardTech) gave a presentation which "covered work by Galdos Systems, LizardTech and the EU Satellite Centre to use GML encoding of coverage georeferencing within JP2 (both for geo-rectified and geo-referencable grids). The metadata model incorporates coverage definition, coverage metadata and value metadata. Embedded image annotation consists of text and associated geographic regions. The geographic features are defined through an associated application schema, either in-line or via URL pointers. The styling of features and annotations is based on StyledLayerDescriptors and ignores the drawing primitives provided in the JP2 format specification. This avoids mixed styling but has the drawback that some vendors do make use of the JP2 drawing primitives."
Source: "Joint IES/GML Session on including GML in the JPEG2000 Image Format," Report on the Open Geospatial Consortium Technical Meeting at the Illinois Institute of Technology, Chicago.
About OGC Interoperability Experiments
"Interoperability Experiments, a new kind of OGC Interoperability Initiative, are brief, inexpensive, low-overhead initiatives led and executed by OGC members to achieve specific technical objectives that further the OGC Technical Baseline. They provide an opportunity for three or more OGC members to launch and run an initiative without the more substantial sponsorship that supports OGC's traditional testbeds and pilot projects. These initiatives can be for specification development, refinement, or testing or for other purposes approved by the OGC Review Board. Interoperability Experiments operate according to the OGC's 'Interoperability Experiment Policies and Procedures'.
To begin an Interoperability Experiment, interested members submit to the OGC Review Board an Activity Plan and letters of support from at least three voting members of the Open Geospatial Consortium. If the Review Board approves the Interoperability Experiment, the board designates an OGC Staff member to oversee the experiment.
Before the Interoperability Experiment begins, the OGC issues a press release announcing the IE so that organizations have an opportunity to participate or observe.
Upon completion of the Interoperability Experiment, the technical deliverables, draft specifications, are documented as "IE Reports". These may be introduced into the Specification Program for review, possible revision, and possible adoption by the OGC Technical and Planning Committees as Adopted OpenGIS Implementation Specifications. [from the IE Summary]
About 'JPEG' Patents
The JPEG 2000 technical effort seeks freedom from the well-known "patent terrorism" surrounding the earlier 'JPEG' standard. Here are excerpts from the FAQ 'What is the patent situation with JPEG 2000?':
"Firstly, the JPEG committee have tried to ensure in their standardisation work that many 'baseline' parts of the JPEG 2000 standard should be implementable either in full or in part without payment of either royalty fees (volume related) or license fees (non-volume related). How have they done this?
Many companies participate in the work of JPEG. In all cases where patents exist that may read on the core technology being proposed for what JPEG call the 'Baseline' of the standard (that part of the Standard that all implementations need to implement), the following actions are taken:
(1) Patent holders are requested to confirm that they will license such patents on a royalty and license fee-free basis, with the only constraint being that of reciprocity. By this we mean that the patent holder is expected to issue a license for use against the baseline implementation of a JPEG 2000 series standard [i] without charge, and [ii] requiring only that the licensee make a similar grant on any patents that the licensee might hold that they would also claim apply to that part of the standard
(2) If such agreement cannot be obtained, the JPEG committee will then look at alternatives that avoid the use of patented technology, a license for which cannot be obtained on the above basis. In the event that no such option exists, the required feature or specification is removed from the baseline specification defined in the Standard.
In the specific case of JPEG 2000 Part 1 (ISO/IEC IS 15444-1 | ITU-T T.800), as of 17th July 2003 a total of 27 such declarations from 11 companies or individuals were submitted. All patents that WG1 believes to be essential to practice JPEG 2000 Part 1 are available on ITU-T's patent policy 2.1, which is fee-free. Work is ongoing to compile an ITU-T patent database.
The JPEG committee recognise the importance of being able to implement their core standards free of the need to pay patent holders, and continue to strive to achieve this goal wherever possible.
In addition, the JPEG committee recognises that there may be occasions where the use of non fee-free technology is unavoidable, or where it offers significant technical advantages over a fee-free solution. In these cases, JPEG may include such technology in its standards in accordance with ISO and ITU patent policy, which requires that a signed declaration be obtained prior to publication stating that such patents are available on 'reasonable and non-discriminatory' terms (RAND). As an example, Part 2 of JPEG 2000 details many extensions to the baseline specification, some of which may require the use of patented technology only available on a RAND basis. As of 17th July 2003, only one such declaration was registered within the ITU-T database." [from the JPEG 2000 web site]