
Save image within bounding box programmatically


I want to save an image from QGIS but the save image functionality feels limited.

Is it possible to use the Python console to save an image at a specific zoomed-in scale and to set a bounding box over the region I want to print?

For example, I want to save an image at a scale of 1:1,000,000 within specific coordinates as a bounding box.


Can you try the latest QGIS development version? QGIS 3 will feature a much improved "save as image" function; the version is not yet officially released, but the nightly builds are available and seem quite stable. Presentation of the new feature: http://imhere-asia.com/blog/post/more-qgis-30-improvements-saving-map-canvas-as-image-pdf

Otherwise, you can also use the Composer: add a single map taking all the space, then you can set the scale and coordinates in the properties of the map. Documentation is available here: https://docs.qgis.org/2.18/en/docs/user_manual/print_composer/overview_composer.html

I am sure it is also possible in Python, but it seems overkill for your use case.
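
For reference, a minimal PyQGIS sketch of the idea, run from the QGIS Python console; the extent coordinates, the scale and the output path below are placeholders, and the extent is assumed to be in the project's map units:

    from qgis.core import QgsRectangle
    from qgis.utils import iface

    canvas = iface.mapCanvas()
    # bounding box over the region of interest (placeholder coordinates, map units)
    canvas.setExtent(QgsRectangle(243000, 6400000, 293000, 6450000))
    # force the scale to 1:1,000,000, keeping the extent's centre
    canvas.zoomScale(1000000)
    canvas.refresh()
    # save whatever the canvas currently renders
    canvas.saveAsImage('/tmp/map_export.png')

Depending on layer rendering times you may need to wait for the canvas to finish drawing before saving.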


Exkoria, I know you're using QGIS, but I figured I'd post this code from ArcMap in case you can get some help from it. It pans to each feature in a shapefile and exports an image, and then repeats for all features in the shapefile you specify. Like I said, it's not QGIS, but it might help somehow.

    # This script pans ArcMap, once for each feature in Layer (passed in value)
    #
    # Note the time.sleep(1), this was included because I needed to pause to
    # allow ArcBruTile time to download images. Depending on your mileage,
    # this probably needs to be adjusted.
    #
    # 4/19/2012
    # http://nodedangles.wordpress.com

    import sys,arcpy,datetime

    inLayer = sys.argv[1]

    def printit(inMessage):
        print inMessage
        arcpy.AddMessage(inMessage)

    mxd = arcpy.mapping.MapDocument("CURRENT")

    arcpy.MakeFeatureLayer_management(inLayer, "indexLayer")
    cur = arcpy.SearchCursor("indexLayer")

    df = arcpy.mapping.ListDataFrames(mxd)[0]
    newExtent = df.extent

    iCount = 0
    iTotal = (arcpy.GetCount_management("indexLayer").getOutput(0))

    for row in cur:
        thisPoly = row.getValue("Shape")
        newExtent.XMin, newExtent.YMin = thisPoly.extent.XMin, thisPoly.extent.YMin
        newExtent.XMax, newExtent.YMax = thisPoly.extent.XMax, thisPoly.extent.YMax
        df.extent = newExtent
        iCount += 1
        arcpy.mapping.ExportToJPEG(mxd, r"C:\scratch\Images\Map_%s.jpg" % (iCount))

How to find out if a PDF file has data not visible within the bounding box? [closed]


I am embedding a small area of a large webpage into a document compiled with pdflatex (using \includegraphics). A raster image/screenshot produces artifacts when zooming in, so instead I am saving the page as a PDF with the browser's print dialog and cropping the section I need.

The cropping tool changes the bounding box and produces the desired visual appearance, but the file preserves all the content of the original.

There is personal information outside the bounding box (usernames, timestamps, IDs, etc.) that I would like to ensure is not in the cropped PDF. Given a string of ASCII plaintext in the original document, e.g. my username, how would I go about finding whether it's present in the PDF? Can I expect ASCII text from the page to be contained literally, or would it be encoded in some vectorial form?

I mention that the file comes from a browser because, intuitively, browsers might encode their printed PDFs in a predictable way; maybe it's easy to clip areas if the PDF maps to the DOM directly.
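
One way to check, assuming the text is stored as extractable strings rather than outlined glyphs, is to dump all text from the cropped file with a third-party library such as pdfminer.six and search it (the file name and search strings below are placeholders):

    from pdfminer.high_level import extract_text

    text = extract_text("cropped.pdf")
    for needle in ["myusername", "2019-08-14"]:   # strings you consider sensitive
        print(needle, "found" if needle in text else "not found")

If nothing turns up this way, the information may still be present as outlined glyphs, embedded images or unusually encoded streams, which a plain text extraction cannot rule out.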


1 Answer

First of all, note that you can always use Point2D.Double for representing points in a plane.

Next, I would split your program into two methods: the first one for pruning points that are not within a specified bounding box, and the second one for reading the points from the standard input, and putting them into the first one.

As for coding conventions, I would have a blank line before the if statement. Also, you should surround the condition of an if statement with a single space. So, instead of …


13 Answers

You could set up artboards for each object. Or just adjust the artboard to fit only the object you want to export and then tick the "clip to artboard" option when saving/exporting.

You could hide everything you don't want to export first:

  • Select All
  • Shift-click the art you want to export
  • Choose Object > Hide from the menu
  • Export (leaving "clip to artboard" unchecked). You should only see the art that is not hidden.
  • Choose Object > Show All from the menu.
  • Repeat

You can also utilize the hidden "hide other" shortcut. Select the object you want to save/export and then hit Command-Option-Shift-3 (Mac) or Ctrl-Alt-Shift-3 (Win). This will hide everything that is not selected. Save/Export, then hit Command-Option-3 (Mac) or Ctrl-Alt-3 (Win) to show everything again. Select a new object and repeat.

Or I often find just copying to a new file faster.

  • Select the artwork you want to export
  • Edit > Copy
  • File > New
  • Edit > Paste
  • Export (leaving "clip to artboard" unchecked)
  • File > Close (don't save - just tap the d key)
  • Repeat

Do this often enough and it becomes a very fast process.

I just fit the artboard to the objects I want:

  1. Select desired objects.
  2. Object -> Artboards -> Fit to Selected Art
  3. ctrl+alt+shift+s to open the Save for Web dialog.
  4. ctrl + z to undo fit.

This requires both Illustrator and Photoshop, but is my favorite:

  1. Click to select the object/objects in Illustrator
  2. Do an "Edit > Copy", Ctrl+C or applekey-C to copy
  3. In Photoshop, do "File>New", Ctrl+N or applekey-N and keep the default options it gives you (exception: you may want to change Background to Transparent)
  4. In Photoshop, do an "Edit > Paste", Ctrl+V or applekey-V.

You will now have just the object/objects by itself (already cropped!), and can Save, Export or Export for Web from there. Not sure if this works on all platforms but it surely does on my macbook air.

In Illustrator CC 2015.3 (20), Adobe finally added an asset export tool similar to Sketch.

Now you can select any objects, right-click and choose "Export Selection…". If you want to export multiple different selections, you can click "Collect For Export".

In the modal that appears, you can:

  • Give each asset a name.
  • Pick a format for each asset; png, jpg, svg and pdf are supported.
  • For raster formats you can create multiple scaled outputs, e.g. 2x for retina screens. Each exported scale has a configurable suffix.

These configurations are saved so when you edit the objects later, you can re-export everything easily.

Here is a surprisingly simple trick I recently learned.

Open the AI file with Photoshop (Right click, Open with Adobe Photoshop…)

In the dialog box select the Images button and all the images in your document appear. Select the images you want to open and they will each open in a separate document at the embedded resolution.

If you are exporting SVGs here is a super easy way:

  1. Copy the shape you wish to export to the clipboard.
  2. Open Terminal
  3. Type pbpaste > someFileName.svg

Your SVG will be ready to go in whatever directory you'd like.

Not the ideal solution, but if you put the selected objects into a layer (either move them or duplicate them to a new layer temporarily), turn off the rest of the layers and export.

Not sure what format you're exporting to, but one solution is to turn off the visibility of the objects you don't want to export in the Layers panel, then export.

Easiest way I know is to select the objects you want to export and then select the Artboard Tool from the toolbar. Use this tool to click on the selected objects one by one (you might have to click twice on objects after the first one) and a new artboard will be created for each object.

Then you just have to hit Cmd + E (Mac) or Ctrl + E (PC) to export, then simply select the format you want and make sure the 'Use Artboards' checkbox is ticked. Each individual image will be exported as an instance of the filename you type according to its artboard number (i.e. filename-01, filename-02, etc.). Just bear in mind that this will export everything in the artboard area on all layers, so if you need transparent BGs you will need to hide any BG layers or other layers you don't want to include in the export.

Better solution: record an action in the Actions panel (assign a key to this action, for example F2):

  • select something on artboard (object, two object etc),
  • press Ctrl + C (copy),
  • press Ctrl + N (create new document),
  • select Insert Menu Item… from the Actions panel menu (top-right corner of the Actions panel)
  • select file -> save for web
  • stop recording actions

From now on, when you press F2 you automatically Save for Web the selected object :)

Easiest I've found: make your selection, then File > Export, and check the 'Selection' box to export only the selected elements.

So I do it this way: assign a hotkey Ctrl-Alt-Shift-4 to Object > Fit to Selected Art, and Ctrl-Shift-E to File > Export. This way:

  1. Select object
  2. Press Ctrl-Alt-Shift-3 (to hide all others)
  3. Press Ctrl-Alt-Shift-4 (to fit the artboard)
  4. Press Ctrl-Shift-E (to export)

PS: don't forget to check "Use artboards" on export dialog! And hit Ctrl-Z twice to undo temporary changes.

The fastest way is to make use of the awesome export features of Sketch.

Way 1:

  • Select the group in Illustrator
  • Paste it in Sketch (repeat with all the groups you want to export)
  • Select the groups in Sketch and click "Make exportable"
  • Export at the resolution you want.

Way 2:

  • Select the group in Illustrator
  • Paste it in Sketch (repeat with all the groups you want to export)
  • Select the groups from the Layers tab
  • Drag and drop to the desktop or a folder.



Creating bounding box in ArcGIS API for JavaScript?


I am trying to create a bounding box for my map in the ArcGIS API for JavaScript 3.28. I want the bounding box to stay in a given area based on the coordinates given. I found a class called Extent in the API Reference that can set a bounding box, but it does not look like there's a way to apply the bounding box. This is what I have at the moment:

Using this, is there a way to create a bounding box, or is there something I can add to make it work?



If you are more concerned with performance than clean code, use this. It restricts the number of calls to slow COM object property accessors, which is the main reason this solution is faster than the simple method above:

Edit: IAmNerd2000 's original approach fails when formatting lies outside the "RealUsedRange". Thus it was removed from this post.

Edit: As MacroMarc pointed out, very large used ranges will cause the optimal code to crash due to an Out of memory error. As a current workaround I fall back to VBasic2008's code if the error occurs. So at worst it will be as slow as VBasic2008's code, but at best it will be 10x faster.

Edit: RealUsedRange_VBasic2008_refac didn't work in some situations. The solution has now been changed to reflect this.

Edit: Changes based on Tinman's post. The main changes were removing Variant references, using CountLarge instead of .Rows.Count = 1 and .Columns.Count = 1, and using Value2 instead of Value.


OpenCV OCR and text recognition with Tesseract

In order to perform OpenCV OCR text recognition, we’ll first need to install Tesseract v4 which includes a highly accurate deep learning-based model for text recognition.

From there, I’ll show you how to write a Python script that:

  1. Performs text detection using OpenCV’s EAST text detector, a highly accurate deep learning text detector used to detect text in natural scene images.
  2. Once we have detected the text regions with OpenCV, we’ll then extract each of the text ROIs and pass them into Tesseract, enabling us to build an entire OpenCV OCR pipeline!

Finally, I’ll wrap up today’s tutorial by showing you some sample results of applying text recognition with OpenCV, as well as discussing some of the limitations and drawbacks of the method.

Let’s go ahead and get started with OpenCV OCR!

How to install Tesseract 4

Figure 1: The Tesseract OCR engine has been around since the 1980s. As of 2018, it now includes built-in deep learning capability making it a robust OCR tool (just keep in mind that no OCR system is perfect). Using Tesseract with OpenCV’s EAST detector makes for a great combination.

Tesseract, a highly popular OCR engine, was originally developed by Hewlett Packard in the 1980s and was then open-sourced in 2005. Google adopted the project in 2006 and has been sponsoring it ever since.

If you’ve read my previous post on Using Tesseract OCR with Python, you know that Tesseract can work very well under controlled conditions…

…but will perform quite poorly if there is a significant amount of noise or your image is not properly preprocessed and cleaned before applying Tesseract.

Just as deep learning has impacted nearly every facet of computer vision, the same is true for character recognition and handwriting recognition.

Deep learning-based models have managed to obtain unprecedented text recognition accuracy, far beyond traditional feature extraction and machine learning approaches.

It was only a matter of time until Tesseract incorporated a deep learning model to further boost OCR accuracy — and in fact, that time has come.

The latest release of Tesseract (v4) supports deep learning-based OCR that is significantly more accurate.

In the remainder of this section, you will learn how to install Tesseract v4 on your machine.

Later in this blog post, you’ll learn how to combine OpenCV’s EAST text detection algorithm with Tesseract v4 in a single Python script to automatically perform OpenCV OCR.

Let’s get started configuring your machine!

Install OpenCV

To run today’s script you’ll need OpenCV installed. Version 3.4.2 or better is required.

To install OpenCV on your system, just follow one of my OpenCV installation guides, ensuring that you download the correct/desired version of OpenCV and OpenCV-contrib in the process.

Install Tesseract 4 on Ubuntu

The exact commands used to install Tesseract 4 on Ubuntu will be different depending on whether you are using Ubuntu 18.04 or Ubuntu 17.04 and earlier.

To check your Ubuntu version you can use the lsb_release command:

As you can see, I am running Ubuntu 18.04 but you should check your Ubuntu version before continuing.

For Ubuntu 18.04 users, Tesseract 4 is part of the main apt-get repository, making it super easy to install Tesseract via the following command:

If you’re using Ubuntu 14, 16, or 17 though, you’ll need a few extra commands due to dependency requirements.

The good news is that Alexander Pozdnyakov has created an Ubuntu PPA (Personal Package Archive) for Tesseract, which makes it super easy to install Tesseract 4 on older versions of Ubuntu.

Just add the alex-p/tesseract-ocr PPA repository to your system, update your package definitions, and then install Tesseract:

Assuming there are no errors, you should now have Tesseract 4 installed on your machine.

Install Tesseract 4 on macOS

Installing Tesseract on macOS is straightforward provided you have Homebrew, macOS’ “unofficial” package manager, installed on your system.

Just run the following command and Tesseract v4 will be installed on your Mac:

2020-07-21 Update: Tesseract 5 (alpha release) is available. Currently, we recommend sticking with Tesseract 4. If you would like the latest Tesseract (as of this writing it is 5.0.0-alpha), then be sure to append the --HEAD switch at the end of the command.

If you already have Tesseract installed on your Mac (if you followed my previous Tesseract install tutorial, for example), you’ll first want to unlink the original install:

And from there you can run the install command.

Verify your Tesseract version

Figure 2: Screenshot of my system terminal where I have entered the tesseract -v command to query for the version. I have verified that I have Tesseract 4 installed.

Once you have Tesseract installed on your machine you should execute the following command to verify your Tesseract version:

As long as you see tesseract 4 somewhere in the output you know that you have the latest version of Tesseract installed on your system.

Install your Tesseract + Python bindings

Now that we have the Tesseract binary installed, we need to install the Tesseract + Python bindings so our Python scripts can communicate with Tesseract and perform OCR on images processed by OpenCV.

If you are using a Python virtual environment (which I highly recommend so you can have separate, independent Python environments) use the workon command to access your virtual environment:

In this case, I am accessing a Python virtual environment named cv (short for “computer vision”) — you can replace cv with whatever you have named your virtual environment.

From there, we’ll use pip to install Pillow, a more Python-friendly version of PIL, followed by pytesseract and imutils :

Now open up a Python shell and confirm that you can import both OpenCV and pytesseract :
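
For example, a quick check along these lines, run inside the activated environment, should complete without errors (the version numbers in the comments are what you would hope to see):

    import cv2
    import pytesseract

    print(cv2.__version__)                      # expect 3.4.2 or higher
    print(pytesseract.get_tesseract_version())  # expect a 4.x release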

If you don’t see any import errors, your machine is now configured to perform OCR and text recognition with OpenCV

Let’s move on to the next section (skipping the Pi instructions) where we’ll learn how to actually implement a Python script to perform OpenCV OCR.

Install Tesseract 4 and supporting software on Raspberry Pi and Raspbian

Note: You may skip this section if you aren’t on a Raspberry Pi.

Inevitably, I’ll be asked how to install Tesseract 4 on the Rasberry Pi.

The following instructions aren’t for the faint of heart — you may run into problems. They are tested, but mileage may vary on your own Raspberry Pi.

First, uninstall your OpenCV bindings from system site packages:

Here I used the rm command since my cv2.so file in site-packages is just a sym-link. If the cv2.so bindings are your real OpenCV bindings then you may want to move the file out of site-packages for safe keeping.

Now install two QT packages on your system:

For whatever reason, the trained English language data file was missing from the install so I needed to download and move it into the proper directory:

From there, create a new Python virtual environment:

And install the necessary packages:

You’re done! Just keep in mind that your experience may vary.

Understanding OpenCV OCR and Tesseract text recognition

Now that we have OpenCV and Tesseract successfully installed on our system we need to briefly review our pipeline and the associated commands.

To start, we’ll apply OpenCV’s EAST text detector to detect the presence of text in an image. The EAST text detector will give us the bounding box (x, y)-coordinates of text ROIs.

We’ll extract each of these ROIs and then pass them into Tesseract v4’s LSTM deep learning text recognition algorithm.

The output of the LSTM will give us our actual OCR results.

Finally, we’ll draw the OpenCV OCR results on our output image.

But before we actually get to our project, let’s briefly review the Tesseract command (which will be called under the hood by the pytesseract library).

When calling the tesseract binary we need to supply a number of flags. The three most important ones are -l , --oem , and --psm .

The -l flag controls the language of the input text. We’ll be using eng (English) for this example but you can see all the languages Tesseract supports here.

The --oem argument, or OCR Engine Mode, controls the type of algorithm used by Tesseract.

You can see the available OCR Engine Modes by executing the following command:

We’ll be using --oem 1 to indicate that we wish to use the deep learning LSTM engine only.

The final important flag, --psm controls the automatic Page Segmentation Mode used by Tesseract:

For OCR’ing text ROIs I’ve found that modes 6 and 7 work well, but if you’re OCR’ing large blocks of text then you may want to try 3 , the default mode.

Whenever you find yourself obtaining incorrect OCR results I highly recommend adjusting the --psm as it can have dramatic influences on your output OCR results.

Project structure

Be sure to grab the zip from the “Downloads” section of the blog post.

From there unzip the file and navigate into the directory. The tree command allows us to see the directory structure in our terminal:

Our project contains one directory and two notable files:

  • images/ : A directory containing six test images containing scene text. We will attempt OpenCV OCR with each of these images.
  • frozen_east_text_detection.pb : The EAST text detector. This CNN is pre-trained for text detection and ready to go. I did not train this model — it is provided with OpenCV; I've also included it in the “Downloads” for your convenience.
  • text_recognition.py : Our script for OCR — we’ll review this script line by line. The script utilizes the EAST text detector to find regions of text in the image and then takes advantage of Tesseract v4 for recognition.

Implementing our OpenCV OCR algorithm

We are now ready to perform text recognition with OpenCV!

Open up the text_recognition.py file and insert the following code:
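
A sketch of the import block described below; the ordering may differ from the original download:

    # import the necessary packages
    from imutils.object_detection import non_max_suppression
    import numpy as np
    import pytesseract
    import argparse
    import cv2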

Today’s OCR script requires five imports, one of which is built into OpenCV.

Most notably, we’ll be using pytesseract and OpenCV. My imutils package will be used for non-maxima suppression as OpenCV’s NMSBoxes function doesn’t seem to be working with the Python API. I’ll also note that NumPy is a dependency for OpenCV.

The argparse package is included with Python and handles command line arguments — there is nothing to install.

Now that our imports are taken care of, let’s implement the decode_predictions function:
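
A sketch of such a function, following the standard EAST decoding approach (here the minimum confidence is passed in as a parameter, so the line numbers mentioned below refer to the original script, not this sketch):

    def decode_predictions(scores, geometry, min_confidence=0.5):
        # grab the dimensions of the scores volume
        (numRows, numCols) = scores.shape[2:4]
        rects = []        # bounding boxes derived from the geometry volume
        confidences = []  # probability associated with each box

        for y in range(0, numRows):
            scoresData = scores[0, 0, y]
            xData0 = geometry[0, 0, y]
            xData1 = geometry[0, 1, y]
            xData2 = geometry[0, 2, y]
            xData3 = geometry[0, 3, y]
            anglesData = geometry[0, 4, y]

            for x in range(0, numCols):
                # ignore weak detections
                if scoresData[x] < min_confidence:
                    continue

                # the output feature maps are 4x smaller than the input image
                (offsetX, offsetY) = (x * 4.0, y * 4.0)
                angle = anglesData[x]
                cos = np.cos(angle)
                sin = np.sin(angle)

                # derive the box width/height and the horizontal bounding box
                h = xData0[x] + xData2[x]
                w = xData1[x] + xData3[x]
                endX = int(offsetX + (cos * xData1[x]) + (sin * xData2[x]))
                endY = int(offsetY - (sin * xData1[x]) + (cos * xData2[x]))
                startX = int(endX - w)
                startY = int(endY - h)

                rects.append((startX, startY, endX, endY))
                confidences.append(scoresData[x])

        return (rects, confidences)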

The decode_predictions function begins on Line 8 and is explained in detail inside the EAST text detection post. The function:

  1. Uses a deep learning-based text detector to detect (not recognize) regions of text in an image.
  2. The text detector produces two arrays, one containing the probability of a given area containing text, and another that maps the score to a bounding box location in the input image.

As we’ll see in our OpenCV OCR pipeline, the EAST text detector model will produce two variables:

  • scores : Probabilities for positive text regions.
  • geometry : The bounding boxes of the text regions.

…each of which is a parameter to the decode_predictions function.

The function processes this input data, resulting in a tuple containing (1) the bounding box locations of the text and (2) the corresponding probability of that region containing text:

  • rects : This value is based on geometry and is in a more compact form so we can later apply NMS.
  • confidences : The confidence values in this list correspond to each rectangle in rects .

Both of these values are returned by the function.

Note: Ideally, a rotated bounding box would be included in rects , but it isn’t exactly straightforward to extract a rotated bounding box for today’s proof of concept. Instead, I’ve computed the horizontal bounding rectangle which does take angle into account. The angle is made available on Line 41 if you would like to extract a rotated bounding box of a word to pass into Tesseract.

For further details on the code block above, please see this blog post.

From there let’s parse our command line arguments:

Our script requires two command line arguments:

  • --image : The path to the input image.
  • --east : The path to the pre-trained EAST text detector.

Optionally, the following command line arguments may be provided:

  • --min-confidence : The minimum probability of a detected text region.
  • --width : The width our image will be resized to prior to being passed through the EAST text detector. Our detector requires multiples of 32.
  • --height : Same as the width, but for the height. Again, our detector requires multiples of 32 for the resized height.
  • --padding : The (optional) amount of padding to add to each ROI border. You might try values of 0.05 for 5% or 0.10 for 10% (and so on) if you find that your OCR result is incorrect.

From there, we will load + preprocess our image and initialize key variables:
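
A sketch of this step, continuing the same script (the variable names are illustrative):

    # load the input image and grab its original dimensions
    image = cv2.imread(args["image"])
    orig = image.copy()
    (origH, origW) = image.shape[:2]

    # the new width and height must be multiples of 32 for EAST
    (newW, newH) = (args["width"], args["height"])
    rW = origW / float(newW)
    rH = origH / float(newH)

    # resize the image, ignoring the aspect ratio
    image = cv2.resize(image, (newW, newH))
    (H, W) = image.shape[:2]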

Our image is loaded into memory and copied (so we can later draw our output results on it) on Lines 82 and 83.

We grab the original width and height (Line 84) and then extract the new width and height from the args dictionary (Line 88).

Using both the original and new dimensions, we calculate ratios used to scale our bounding box coordinates later in the script (Lines 89 and 90).

Our image is then resized, ignoring aspect ratio (Line 93).

Next, let’s work with the EAST text detector:

Our two output layer names are put into list form on Lines 99-101. To learn why these two output names are important, you’ll want to refer to my original EAST text detection tutorial.

Then, our pre-trained EAST neural network is loaded into memory (Line 105).

I cannot emphasize this enough: you need OpenCV 3.4.2 at a minimum to have the cv2.dnn.readNet implementation.

The first bit of “magic” occurs next:
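
A sketch of the detection step; the mean-subtraction values are the ones commonly used with this EAST model:

    # construct a blob from the resized image and run a forward pass
    blob = cv2.dnn.blobFromImage(image, 1.0, (W, H),
        (123.68, 116.78, 103.94), swapRB=True, crop=False)
    net.setInput(blob)
    (scores, geometry) = net.forward(layerNames)

    # decode the predictions and suppress overlapping boxes
    (rects, confidences) = decode_predictions(scores, geometry,
        min_confidence=args["min_confidence"])
    boxes = non_max_suppression(np.array(rects), probs=confidences)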

To determine text locations we:

  • Construct a blob on Lines 109 and 110. Read more about the process here.
  • Pass the blob through the neural network, obtaining scores and geometry (Lines 111 and 112).
  • Decode the predictions with the previously defined decode_predictions function (Line 116).
  • Apply non-maxima suppression via my imutils method (Line 117). NMS effectively takes the most likely text regions, eliminating other overlapping regions.

Now that we know where the text regions are, we need to take steps to recognize the text! We begin to loop over the bounding boxes and process the results, preparing the stage for actual text recognition:
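
A sketch of the loop described below, again continuing the same script:

    # list of results: ((startX, startY, endX, endY), OCR'd text)
    results = []

    # loop over the bounding boxes returned by non-maxima suppression
    for (startX, startY, endX, endY) in boxes:
        # scale the coordinates back to the original image size
        startX = int(startX * rW)
        startY = int(startY * rH)
        endX = int(endX * rW)
        endY = int(endY * rH)

        # apply padding around the box, clamping to the image bounds
        dX = int((endX - startX) * args["padding"])
        dY = int((endY - startY) * args["padding"])
        startX = max(0, startX - dX)
        startY = max(0, startY - dY)
        endX = min(origW, endX + (dX * 2))
        endY = min(origH, endY + (dY * 2))

        # extract the padded region of interest
        roi = orig[startY:endY, startX:endX]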

We initialize the results list to contain our OCR bounding boxes and text on Line 120.

Then we begin looping over the boxes (Line 123) where we:

  • Scale the bounding boxes based on the previously computed ratios (Lines 126-129).
  • Pad the bounding boxes (Lines 134-141).
  • And finally, extract the padded roi (Line 144).

Our OpenCV OCR pipeline can be completed by using a bit of Tesseract v4 “magic”:
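
A sketch of the recognition step; these lines sit inside the loop above:

        # English language, LSTM OCR engine, treat the ROI as a single line
        config = ("-l eng --oem 1 --psm 7")
        text = pytesseract.image_to_string(roi, config=config)

        # store the bounding box and the OCR'd text
        results.append(((startX, startY, endX, endY), text))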

Taking note of the comment in the code block, we set our Tesseract config parameters on Line 151 (English language, LSTM neural network, and single-line of text).

Note: You may need to configure the --psm value using my instructions at the top of this tutorial if you find yourself obtaining incorrect OCR results.

The pytesseract library takes care of the rest on Line 152 where we call pytesseract.image_to_string , passing our roi and config string .

Boom! In two lines of code, you have used Tesseract v4 to recognize a text ROI in an image. Just remember, there is a lot happening under the hood.

Our result (the bounding box values and actual text string) is appended to the results list (Line 156).

Then we continue this process for other ROIs at the top of the loop.

Now let’s display/print the results to see if it actually worked:

Our results are sorted from top to bottom on Line 159 based on the y-coordinate of the bounding box (though you may wish to sort them differently).

From there, looping over the results , we:

  • Print the OCR’d text to the terminal (Lines 164-166).
  • Strip out non-ASCII characters from text as OpenCV does not support non-ASCII characters in the cv2.putText function (Line 171).
  • Draw (1) a bounding box surrounding the ROI and (2) the result text above the ROI (Lines 173-176).
  • Display the output and wait for any key to be pressed (Lines 179 and 180).

OpenCV text recognition results

Now that we’ve implemented our OpenCV OCR pipeline, let’s see it in action.

Be sure to use the “Downloads” section of this blog post to download the source code, OpenCV EAST text detector model, and the example images.

From there, open up a command line, navigate to where you downloaded + extracted the zip, and execute the following command:

Figure 4: Our first trial of OpenCV OCR is a success.

We’re starting with a simple example.

Notice how our OpenCV OCR system was able to correctly (1) detect the text in the image and then (2) recognize the text as well.

The next example is more representative of text we would see in a real-world image:

Figure 5: A more complicated picture of a sign with white background is OCR’d with OpenCV and Tesseract 4.

Again, notice how our OpenCV OCR pipeline was able to correctly localize and recognize the text; however, in our terminal output we see a registered trademark Unicode symbol — Tesseract was likely confused here as the bounding box reported by OpenCV’s EAST text detector bled into the grassy shrubs/plants behind the sign.

Let’s look at another OpenCV OCR and text recognition example:

Figure 6: A large sign containing three words is properly OCR’d using OpenCV, Python, and Tesseract.

In this case, there are three separate text regions.

OpenCV’s text detector is able to localize each of them — we then apply OCR to correctly recognize each text region as well.

Our next example shows the importance of adding padding in certain circumstances:

Figure 7: Our OpenCV OCR pipeline has trouble with the text regions identified by OpenCV’s EAST detector in this scene of a bake shop. Keep in mind that no OCR system is perfect in all cases. Can we do better by changing some parameters, though?

In the first attempt of OCR’ing this bake shop storefront, we see that “SHOP” is correctly OCR’d, but:

  1. The “U” in “CAPUTO” is incorrectly recognized as “TI”.
  2. The apostrophe and “S” are missing from “CAPUTO’S”.
  3. And finally, “BAKE” is incorrectly recognized as a vertical bar/pipe (“|”) with a period (“.”).

By adding a bit of padding we can expand the bounding box coordinates of the ROI and correctly recognize the text:

Figure 8: By adding additional padding around the text regions identified by EAST text detector, we are able to properly OCR the three words in this bake shop sign with OpenCV and Tesseract. See the previous figure for the first, failed attempt.

Just by adding 5% of padding surrounding each corner of the bounding box we’re not only able to correctly OCR the “BAKE” text but we’re also able to recognize the “U” and “’S” in “CAPUTO’S”.

Of course, there are examples where OpenCV flat out fails:

Figure 9: With a padding of 25%, we are able to recognize “Designer” in this sign, but our OpenCV OCR system fails for the smaller words due to the color being similar to the background. We aren’t even able to detect the word “SUIT” and while “FACTORY” is detected, we are unable to recognize the text with Tesseract. Our OCR system is far from perfect.

I increased the padding to 25% to accommodate the angle/perspective of the words in this sign. This allowed for “Designer” to be properly OCR’d with EAST and Tesseract v4. But the smaller words are a lost cause likely due to the similar color of the letters to the background.

In these situations there’s not much we can do, but I would suggest referring to the limitations and drawbacks section below for suggestions on how to improve your OpenCV text recognition pipeline when confronted with incorrect OCR results.

Limitations and Drawbacks

It’s important to understand that no OCR system is perfect!

There is no such thing as a perfect OCR engine, especially in real-world conditions.

And furthermore, expecting 100% accurate Optical Character Recognition is simply unrealistic.

As we found out, our OpenCV OCR system worked well in some images and failed in others.

There are two primary reasons we will see our text recognition pipeline fail:

  1. The text is skewed/rotated.
  2. The font of the text itself is not similar to what the Tesseract model was trained on.

Even though Tesseract v4 is significantly more powerful and accurate than Tesseract v3, the deep learning model is still limited by the data it was trained on — if your text contains embellished fonts or fonts that Tesseract was not trained on, it’s unlikely that Tesseract will be able to OCR the text.

Secondly, keep in mind that Tesseract still assumes that your input image/ROI has been relatively cleaned.

Since we are performing text detection in natural scene images, this assumption does not always hold.

In general, you will find that our OpenCV OCR pipeline works best on text that is (1) captured at a 90-degree angle (i.e., a top-down, bird’s-eye view) of the image and (2) relatively easy to segment from the background.

If this is not the case, you may be able to apply a perspective transform to correct the view, but keep in mind that the Python + EAST text detector reviewed today does not provide rotated bounding boxes (as discussed in my previous post), so you will still likely be a bit limited.

Tesseract will always work best with clean, preprocessed images, so keep that in mind whenever you are building an OpenCV OCR pipeline.

If you have a need for higher accuracy and your system will have an internet connection, I suggest you try one of the “big 3” computer vision API services:

…each of which uses even more advanced OCR approaches running on powerful machines in the cloud.

What's next? I recommend PyImageSearch University.

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you'll find:

  • 23 courses on essential computer vision, deep learning, and OpenCV topics
  • 23 Certificates of Completion
  • 35h 14m on-demand video
  • Brand new courses released every month, ensuring you can keep up with state-of-the-art techniques
  • Pre-configured Jupyter Notebooks in Google Colab
  • Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
  • Access to centralized code repos for all 400+ tutorials on PyImageSearch
  • Easy one-click downloads for code, datasets, pre-trained models, etc.
  • Access on mobile, laptop, desktop, etc.


Given a polygon and a point ‘p’, find if ‘p’ lies inside the polygon or not. The points lying on the border are considered inside.

We strongly recommend reading the following post first:
How to check if two given line segments intersect?
Following is a simple idea to check whether a point is inside or outside:

  1. Draw a horizontal line to the right of 'p' and extend it to infinity.
  2. Count the number of times this line intersects with the polygon's edges.
  3. The point is inside if either the count is odd or 'p' lies on an edge of the polygon; otherwise, it is outside.

How do we handle a point that is collinear with a polygon edge (like point 'g' in the original figure)?
Note that we should return true if the point lies on an edge or coincides with one of the vertices of the given polygon. To handle this, after checking whether the line from 'p' to the extreme point intersects a side, we check whether 'p' is collinear with the vertices of that side. If it is collinear, we check whether 'p' lies on that side: if it does, we return true; otherwise, false.
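
A Python sketch of the test described above (the textbook ray-casting version; the "extreme" x-value is an arbitrary large placeholder, and vertices lying exactly on the ray may need extra care):

    def on_segment(p, q, r):
        # True if q lies on segment pr, given that p, q and r are collinear
        return (min(p[0], r[0]) <= q[0] <= max(p[0], r[0]) and
                min(p[1], r[1]) <= q[1] <= max(p[1], r[1]))

    def orientation(p, q, r):
        # 0 = collinear, 1 = clockwise, 2 = counter-clockwise
        val = (q[1] - p[1]) * (r[0] - q[0]) - (q[0] - p[0]) * (r[1] - q[1])
        if val == 0:
            return 0
        return 1 if val > 0 else 2

    def segments_intersect(p1, q1, p2, q2):
        o1, o2 = orientation(p1, q1, p2), orientation(p1, q1, q2)
        o3, o4 = orientation(p2, q2, p1), orientation(p2, q2, q1)
        if o1 != o2 and o3 != o4:
            return True
        if o1 == 0 and on_segment(p1, p2, q1): return True
        if o2 == 0 and on_segment(p1, q2, q1): return True
        if o3 == 0 and on_segment(p2, p1, q2): return True
        if o4 == 0 and on_segment(p2, q1, q2): return True
        return False

    def is_inside_polygon(polygon, p):
        n = len(polygon)
        if n < 3:
            return False
        extreme = (10**9, p[1])  # a point "at infinity" to the right of p
        count, i = 0, 0
        while True:
            j = (i + 1) % n
            if segments_intersect(polygon[i], polygon[j], p, extreme):
                # if p is collinear with this side, it is inside only if it
                # actually lies on the side
                if orientation(polygon[i], p, polygon[j]) == 0:
                    return on_segment(polygon[i], p, polygon[j])
                count += 1
            i = j
            if i == 0:
                break
        return count % 2 == 1

    # example: the unit square contains (0.5, 0.5) and its border point (1, 0.5)
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(is_inside_polygon(square, (0.5, 0.5)))  # True
    print(is_inside_polygon(square, (1, 0.5)))    # True (on the border)
    print(is_inside_polygon(square, (2, 0.5)))    # False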


Parameters

The location or workspace in which the random points feature class will be created. This location or workspace must already exist.

The name of the random points feature class to be created.

Random points will be generated inside or along the features in this feature class. The constraining feature class can be point, multipoint, line, or polygon. Points will be randomly placed inside polygon features, along line features, or at point feature locations. Each feature in this feature class will have the specified number of points generated inside it (for example, if you specify 100 points, and the constraining feature class has 5 features, 100 random points will be generated in each feature, totaling 500 points).

Random points will be generated inside the extent. The constraining extent will only be used if no constraining feature class is specified.

The number of points to be randomly generated.

The number of points can be specified as a long integer number or as a field from the constraining features containing numeric values for how many random points to place within each feature. The field option is only valid for polygon or line constraining features. If the number of points is supplied as a long integer number, each feature in the constraining feature class will have that number of random points generated inside or along it.

The shortest distance allowed between any two randomly placed points. If a value of 1 Meter is specified, all random points will be farther than 1 meter away from the closest point.

Determines if the output feature class will be a multipart or single-part feature.

  • Unchecked—The output will be geometry type point (each point is a separate feature). This is the default.
  • Checked—The output will be geometry type multipoint (all points are a single feature).

If Create Multipoint Output is checked, specify the number of random points to be placed in each multipoint geometry.

Derived Output

The output random points feature class.

The location or workspace in which the random points feature class will be created. This location or workspace must already exist.

The name of the random points feature class to be created.

Random points will be generated inside or along the features in this feature class. The constraining feature class can be point, multipoint, line, or polygon. Points will be randomly placed inside polygon features, along line features, or at point feature locations. Each feature in this feature class will have the specified number of points generated inside it (for example, if you specify 100 points, and the constraining feature class has 5 features, 100 random points will be generated in each feature, totaling 500 points).

Random points will be generated inside the extent. The constraining extent will only be used if no constraining feature class is specified.

The number of points to be randomly generated.

The number of points can be specified as a long integer number or as a field from the constraining features containing numeric values for how many random points to place within each feature. The field option is only valid for polygon or line constraining features. If the number of points is supplied as a long integer number, each feature in the constraining feature class will have that number of random points generated inside or along it.

The shortest distance allowed between any two randomly placed points. If a value of 1 Meter is specified, all random points will be farther than 1 meter away from the closest point.

Determines if the output feature class will be a multipart or single-part feature.

  • POINT — The output will be geometry type point (each point is a separate feature). This is the default.
  • MULTIPOINT — The output will be geometry type multipoint (all points are a single feature).

If create_multipoint_output is set to MULTIPOINT , specify the number of random points to be placed in each multipoint geometry. The default is 10.

Derived Output

The output random points feature class.

Code sample

The following Python window script demonstrates how to use the CreateRandomPoints tool in immediate mode.
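
For instance, a hedged sketch of an immediate-mode call (the geodatabase, feature class name and parameter values are placeholders; the remaining optional parameters are left at their defaults):

    import arcpy

    # (out_path, out_name, constraining_feature_class, constraining_extent,
    #  number_of_points_or_field, minimum_allowed_distance)
    arcpy.CreateRandomPoints_management("C:/data/project.gdb", "random_points",
                                        "study_areas", "", 100, "1 Meters")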

The following stand-alone Python script demonstrates how to create random points with random values.

The following stand-alone Python script demonstrates several methods to use the CreateRandomPoints tool.


2. Configuration

Before being able to use Fabnami’s e-shop, you need to configure it properly. Some of the fields that appear in the various sections of the Fabnami Configuration and Administration app are compulsory whereas others are optional. Some fields, like the Materials, can contain the specifications and pricing configurations of any number of materials. Other fields can only contain single values.

2.1. Information

Most fields are self-explanatory: you’ll also find a brief help text below each field. All company-related information can appear in the e-shop front-end and be seen by your clients. Whether the information is displayed or not depends on the kind of front-end format you choose to have. Here are some remarks about the fields:

The Company logo URL is optional and is used, for example, in the editable invoices that you can access from the Business  ▸ Orders page of the Configuration and Administration app.

The Fixed phone is required, Mobile phone is optional and is currently not used.

The VAT number can be optionally displayed after checking out the cart if your clients are based in your same commercial area.

In Sales tax name you can override the localized "VAT" string in case the country you are based in uses a different name (for example, "GST")

With the settings in Terms and conditions you can require your clients to agree to your Terms and Conditions before proceeding with the purchase. In this case, you must provide the URL of the web page that displays them.

You need to check Enable production if you want to activate the production/live mode and start accepting orders and payments (if online payments are enabled and configured). If you access the production servers without enabling the production mode, you’ll receive an error notification.

If you pick a payment service, you also need to input the Payment Service Provider Public/Publishable API keys used for testing and those used in production (also called live keys). You can find API keys in the websites of your payment processor of choice. If you access the test servers, the test API key is used. If you enable the production mode and access the production servers, the live API key is used instead. To complete the configuration of the online payment service, you must configure the secret (private) API keys in the Configuration  ▸ Specs & Pricing page.

The Description field gives you the possibility to describe your business briefly. You can use plain text or HTML. You can also embed a map to display the location of your business. To do that by using Google Maps, search your business's location, click on the hamburger menu icon in the top-left corner of the page, and choose Share or Embed Map from the slide-in menu. Then choose a custom-size embedded map and set the height (the second field on the left) to 200. Copy the iframe code, paste it into the Fabnami Information Description field and set the width to "100%".

If you select Collect leads, Fabnami requires your clients to input a valid email address before computing any quote and makes that email (as well as the origin IP number and uploaded files) available to you in the Business  ▸ Quotes page of the Configuration and Administration app. Not requiring an email address at the beginning of the purchase process can result in an improved user experience and higher conversion rates. A client’s email address is in any case required during check-out.

2.2. Specs & Pricing

Setting the checks and pricing expressions is the most complex part of the e-shop configuration. Here you need to configure the manufacturability checks and pricing for all the materials, finishes and additional services that you want to offer online. Every material can be associated with its manufacturability checks and with several finishes. You can configure the additional services to be displayed in combination with either manufacturable or non-manufacturable models. For example, if the model is not manufacturable, you can offer to check and repair it manually.

Many fields are self-explanatory; here are some remarks and explanations about the others:

  • The VAT rate must be expressed as a fraction: if the tax rate is 20%, you need to type in 0.2.
  • The Time filter in days controls the maximum age of the orders and quotes displayed in the Business  ▸ Orders and Business  ▸ Quotes pages. The default and maximum number is 90.
  • The Payment Service Provider Private/Secret Keys are the API keys corresponding to the Public/Publishable keys mentioned in the Information section.

2.2.1. Integrations

Citrix Podio

Based on Citrix Podio, we developed a set of applications to let you manage your clients’ orders and let you and your colleagues collaborate easily and quickly during the fulfillment of the orders. If you have a Podio account (free up to 10 users), you can install the Fabnami applications from the 3D-printing order-fulfillment management app-pack in the Podio market. The apps integrated with Fabnami are the following:

All models uploaded by your clients for which Fabnami computed a quote are visible in this Podio app.

All model/material/finish combinations requested by your clients, as well as all purchased services, can be found in this app. Each 3D-print job belongs to one order only, and each order can be associated with multiple jobs. Also, each job references one and only one model.

All orders issued by your clients appear in this app. Each order is associated with one or more jobs.

To configure properly the Fabnami Podio apps, you need to add a Podio-specific JSON object to the Integrations section of the Configuration  ▸ Specs & Pricing page within the Configuration and Administration app, like the following one:

The app_id and token must correspond to those of your Podio apps. To view the app_id and token of a Podio app, click on the app icon in the top navigation bar, and then click on the gear in the top-left corner of the page. At the bottom of the pop-up list, click on the second-last item, Developer. You can find the app_id and token on the next page; just copy and paste them into the appropriate fields in the Configuration  ▸ Specs & Pricing page within the Configuration and Administration app.

Currency

Before being able to use Podio, you need to enable the currency in which the Fabnami pricing engine computes your print quotes (USD, EUR, GBP, and CHF are pre-configured). To add a currency like, for example, the Canadian Dollar CAD, please proceed as follows:

If you have not done it yet, install the 3D-printing order fulfillment management app-pack

Click on the 3D Print Orders app

Click on the small wrench in the upper-right corner

Scroll down to Price VAT excl.

Click on the down-pointing arrow beside the banknote icon

Choose Set currency from the drop-down menu

Repeat the same for Price VAT inc.

Dropbox

The Fabnami Dropbox integration allows you to have all 3D-models uploaded by your clients appear in real-time in your Dropbox folder. To enable the integration, log into the Configuration and Administration app, navigate to Configuration  ▸ Dropbox app and follow the instructions to give Fabnami access to your Dropbox folder. Additionally, Dropbox must be enabled by pasting the following code into the Integrations section of the Configuration  ▸ Specs & Pricing page within the Configuration and Administration app:

2.2.2. Webhooks

A webhook is an HTTP callback that can be triggered by certain events to notify external systems. In the Configuration and Administration app, you can configure webhooks that can be triggered when Fabnami computes a quote or when a client issues an order. In both cases, you can freely configure the payload that is POSTed or PUT to the external systems' endpoint. The payload must be in JSON format and can reference any quote or order data element, including your client’s email (if you configured Fabnami to request it), file names, body (shell) count, unit price for quotes, billing/shipping address, order amount, and order quantities. Please find an example in the Salesforce webhook section.

Salesforce

You can use webhooks to create leads in Salesforce whenever Fabnami computes a quote, or a client issues an order. To do that, you need to define a Salesforce web-to-lead form. The web-to-lead form page contains the following important pieces of information:

uniquely identifies your web-to-lead form

To create a web-to-lead whenever Fabnami computes a quote, the JSON payload should look like this:

The email is available if you enable Collect leads in the Configuration  ▸ Information page within the Configuration and Administration app, which makes Fabnami require your clients' email address before uploading a 3D model.

To create a web-to-lead whenever a client issues an order, you should configure the JSON webhook payload to look like this:

Do not hesitate to contact [email protected] to get advice and assistance with the integration of Salesforce with your Fabnami account.

2.2.3. Materials

The Materials section contains the manufacturability checks and the pricing logic for all materials and finishes. The fields Name, ID and Machine model ID are for internal use only, to let you keep track of your offer and share the IDs with other apps like Enterprise Resource Planning or inventory tracking apps. The Description field is the only one that is visible to your clients in the e-shop front-end. If you define finishes, the finish description follows the material’s description, after a comma. The Delivery time indicates the time needed by you to fulfill a print order in the specified material. If the delivery time is expressed with a shorthand like, for example, 6H, 3D or 1W, the expression is expanded to six hours, three days or one week.

2.2.4. Manufacturability checks

In the Manufacturability checks section, you can configure the controls performed on the 3D models uploaded by your customers. You can base the checks on any geometric and topological feature of the 3D mesh, on top of using our proprietary algorithms for thin-wall and minimum-detail size detection. The checks' logic needs to be coded in the Logic field as a JavaScript expression. You can configure the manufacturability checks for each material you define. Depending on the outcome of the check, Fabnami shows your clients the corresponding quote or, optionally, a service.

The flexibility of Fabnami’s configuration allows you to specify any manufacturability check logic. To avoid the difficulties of having to program the checks starting from basic functions and measures, we developed a library of helper functions that allows you to configure the manufacturability checks without having to learn to program.

contains a description of all model measures and geometric algorithms that you can use in the manufacturability check JavaScript expression.

contains a description of the helper functions that you can use to define the manufacturability checks easily and concisely and to raise issues that your customers need to be notified about.

contains a few manufacturability expression examples on which you can base your configuration.

2.2.5. 3D-print price

In the Price section of the Configuration and Administration app, you need to define a pricing model for the 3D print based on the geometric and topological features of the 3D meshes uploaded by your customers. The price needs to be coded in the Logic field as a JavaScript expression.

The price expression bears several similarities with the one used to configure the manufacturability checks. It must set the price of the 3D print for the material/finish you are configuring, which is eventually displayed to your customers. Although you can use all available low-level measures and functions, we developed a library of helper functions to reduce the difficulties related to programming the pricing logic.

You can find additional information in the following sections:

contains a description of all model measures and geometric algorithms that you can use in the pricing JavaScript expression.

contains a description of how to configure the 3D print price logic.

contains a few pricing expression examples on which you can base your configuration.

Finishes

You can associate any number of finishes to each material you define. The price of the finishes is configured via a JavaScript price expression, just like the materials’ prices. The finish’s price is added to the price of the relevant material before displaying the quote to your clients.

2.2.6. JavaScript expressions

During the configuration of the Fabnami e-commerce solution, you encounter two kinds of expressions: the manufacturability check expressions and the price expressions. The JavaScript logic of both kinds of expression has access to all of the 3D model’s geometric and topological measures and, additionally, to the results of several algorithms that are automatically applied to the 3D model. The manufacturability check expressions can also define a set of optional, configurable checks, whose results become automatically available within the expression.

Model measures

The JavaScript expression has access to an object named model that contains a comprehensive set of 3D model properties and measures. The properties marked with a star (*) need additional information to be computed, as explained below. The properties available to the JavaScript expression in the model object are the following:

model.bounding_box_size (array of three numeric values)

Additionally, the manufacturability check expressions can also access the following measures:

Lateral path

Some 3D printers use a technology that requires a tool to follow the borders of a model slice. For example, the paper-based MCor printers cut sheets of paper with a blade along the border of a model’s horizontal slices. If you enable the lateral-path calculation in the Manufacturability check logic (see the Advanced check coding section), model.lateral_path becomes available in both the manufacturability check and pricing JavaScript logic. The lateral path is proportional to the length of the path followed by a tool that runs along the contour of a model’s slices. The actual path length depends on the thickness of the slices.
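Assuming the lateral-path calculation has been enabled, a hedged pricing sketch could charge per unit of path length; the base fee and per-unit cost below are made-up values:

// Hedged sketch: made-up base fee plus a made-up cost per unit of lateral path.
var base_fee = 4;
var blade_cost_per_unit = 0.02;
price = base_fee + blade_cost_per_unit * model.lateral_path;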

Support material volume

To calculate the support material volume, you have several choices (a short configuration sketch follows the list of options):

Set model.support_material.algorithm = "simple" and set the property model.support_material.threshold_overhang_angle (in radians and larger than zero) or, alternatively, use the helper function compute_support_material(angle) . Fabnami computes the support material volume and sets model.support_material_volume to its computed value. Please note that this calculation method does not minimize the print duration. Optionally, you can set model.support_material.support_calculation_precision (integer and larger than zero) to increase or decrease the precision (beware: higher precisions require more time). If you do not define it, it defaults to 30.

Minimal shadow support material

Set model.support_material.algorithm = "optimize_rotation" and set the property model.support_material.threshold_overhang_angle (in radians and larger than zero) or, alternatively, use the helper function minimize_support_using_threshold_overhang_angle(angle) . Fabnami optimally rotates the 3D-mesh to minimize the support material volume and sets model.support_material_volume to its computed value. Please note that this calculation method does not minimize the print duration. Optionally, you can set model.support_material.support_calculation_precision (integer and larger than zero) to increase or decrease the precision (beware: higher precisions require more time). If you do not define it, it defaults to 30.

Flat-part support material

Set model.support_material.algorithm = "use_minimal_bounding_box_dimension" and set the property model.support_material.threshold_overhang_angle (in radians and larger than zero) or, alternatively, use the helper function minimize_support_using_minimal_bounding_box_dimension(angle) . Fabnami computes the support material volume for the case model is laid flat, so that its minimum dimension is the vertical one, and sets model.support_material_volume to its computed value. For some 3D print processes, this is the best way to minimize the print duration.

Set model.support_material.algorithm = "smart" to compute the volume of the support material generated by the Stratasys’ SMART method. Fabnami sets model.support_material_volume to its computed value.

Smart and minimal support material

Set model.support_material.algorithm = "smart_minimal_bounding_box_rotation" to compute the volume of the support material generated by the Stratasys’ SMART method, minimized by flipping the 3D-mesh bounding box in all possible ways. Fabnami sets model.support_material_volume to its computed value.

Embedding support material

Some printers almost completely embed the object in the support material. In this case, you can compute the support material volume by subtracting the object volume from the bounding-box volume and, optionally, multiplying the result by a factor lower than 1 to account for the fact that the bounding box is not completely filled and for the density of the support material.
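The following sketch summarizes the options above; the angle, precision and correction factor are made-up values, the object-volume property name is only an assumption of this sketch, and in practice you would pick a single approach:

// Built-in calculation: choose one algorithm; the angle-based algorithms
// also need a threshold overhang angle in radians.
model.support_material.algorithm = "simple";  // or "optimize_rotation",
// "use_minimal_bounding_box_dimension", "smart", "smart_minimal_bounding_box_rotation"
model.support_material.threshold_overhang_angle = 0.78;  // radians, larger than zero (illustrative)
model.support_material.support_calculation_precision = 30;  // optional, integer larger than zero
// Fabnami then sets model.support_material_volume.

// Embedding printers: approximate the support volume from the bounding box.
var bb = model.bounding_box_size;
var object_volume = model.volume;  // ASSUMED property name, for illustration only
var support_volume = 0.8 * (bb[0] * bb[1] * bb[2] - object_volume);  // 0.8 is a made-up factor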

Maximum sharpness

If you set the property model.max_sharpness_angle (expressed in radians), the property model.sharpness_ok becomes available to the JavaScript expression and is true if no overly sharp angles between facets are found. If you use the helper function require_max_sharpness(angle), Fabnami sets model.max_sharpness_angle automatically; you do not need to set it manually.

Maximum overhang angle

If you set the property model.max_overhang_angle (expressed in radians), the property model.overhanging_facet_count becomes available to the JavaScript expression. The expression model.overhanging_facet_count == 0 evaluates to true if no facet overhangs beyond the maximum angle. If your printer uses support material, you do not need to set this variable, but you can optionally optimize the support material as explained in the Support material volume section. If you use the helper function require_no_overhanging_surfaces(angle), model.max_overhang_angle is set automatically; you do not need to set it manually.

Minimal thickness

If you set the property model.min_thickness, model.thin_facet_count becomes available within the JavaScript expression. The expression model.thin_facet_count == 0 evaluates to true if no thin walls are found. If you use the helper function require_thicker_than(min_thickness), model.min_thickness is set automatically; you do not need to set it manually.
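As an illustration, the three optional checks above can be enabled with the helper functions; the limits below are made-up values (angles in radians, the thickness in the model’s length unit):

// Hedged sketch: Fabnami sets the corresponding model properties automatically.
require_max_sharpness(0.35);            // sets model.max_sharpness_angle
require_no_overhanging_surfaces(0.78);  // sets model.max_overhang_angle
require_thicker_than(1.0);              // sets model.min_thickness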

2.2.7. Mesh repair

All common mesh defects — for example, duplicated vertexes and edges and non-harmonized face normals — are automatically and transparently fixed by default only for the purpose of computing the quotes. If you need advanced automatic repair for specific use cases, please get in touch with us. If you are interested in this subject, please have a look at the following blog entry authored by Fabnami’s founders: The limits of automatic 3D mesh repair.

2.2.8. Manufacturability check expressions

The manufacturability check expressions allow you to determine whether the 3D models uploaded by your clients can be 3D-printed in a given material, to notify your clients of all the 3D mesh issues they should fix before they can successfully submit the mesh for printing, or to warn them about potential issues with the 3D models they submitted. The most frequent checks are implemented in the helper functions available in the JavaScript expression (see the Simplified check coding section). If you need to perform more exhaustive checks, please read the Advanced check coding section. You can mix advanced code and helper functions as needed.

Simplified check coding

The simplified coding is based on a set of helper functions that encapsulate multiple manufacturability checks and notifications to your clients. The helper functions available are the following:

This option tightens some mesh checks. If you enable it, Fabnami checks 3D meshes very strictly and does not compute the volume of a mesh unless the mesh satisfies several strict criteria. However, we noticed that the software bundled with some 3D printers, as well as the widely used Netfabb application, does not apply very strict checks. By default, therefore, Fabnami performs less strict checks and lets you consider as manufacturable models that do not fulfill the strictest requirements for manufacturability.

This option tightens some mesh checks. Use it if the slicer or tool-path generation software you use does not handle self-intersections well. By default, the is_two_manifold test corresponds to the result of a less strict check, whereas is_solid is not affected by this option and is always computed using a strict algorithm that detects holes in the mesh, which make the mesh not watertight.

This option enables the calculation of a variable that is proportional to the length of the path followed by a tool that runs along the horizontal slices of a model. When it is enabled, the path length becomes available as model.lateral_path. This option was specifically designed to compute the blade usage in MCor printers.

If the 3D mesh has self-intersections, the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

If the strict_two_manifoldness_detection option is enabled and the 3D mesh is not 2-manifold, the model is considered to be non-manufacturable and your customers are notified via the e-shop. By default, Fabnami adopts a tolerant heuristic.

This check considers a model as manufacturable if the 3D mesh is strictly two-manifold (the option strict_two_manifoldness_detection does not affect this specific check) and its inside/outside can be defined without ambiguity. If the model is considered to be non-manufacturable, your clients are notified via the e-shop.

If the number of distinct components found in the 3D mesh is higher than max_count, the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

If the number of distinct connected components found in the 3D mesh is higher than max_count, the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

If a wall with a thickness smaller than min_thickness is found, the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

If any edge with an angle smaller than angle is found, the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

If a facet has an overhang angle larger than angle (expressed in radians), the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

If any of the three bounding-box measures is smaller than min_dimension , the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

The max_bb parameter is an array with three numerical values (for example, [25, 35, 50] ). If the model cannot be flipped in any way to fit it inside the specified max_bb box, the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

The min_bb and max_bb parameters have to be arrays with three numerical values (for example, [25, 35, 50] ). If the model cannot be flipped in any way to fit it inside the specified max_bb box and, at the same time, to contain a min_bb sized box, the model is considered to be non-manufacturable, and your clients are notified via the e-shop.

All functions described above perform the checks and, when needed, notify your customers; they do not return values.

If you want to mix helper functions and advanced code, you might need to know the result of a check without issuing a notification. Helper functions have the following twin functions that return a boolean value ( true or false ) without issuing a notification:
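If you prefer not to use the twin functions, one documented way to obtain a check result without notifying your client is to set the relevant property and read the resulting measure directly; the limit below is a made-up value:

// Hedged sketch: evaluate thin walls without issuing a notification.
model.min_thickness = 1.0;                     // illustrative limit
var walls_ok = (model.thin_facet_count == 0);  // documented measure
// walls_ok can now drive custom logic, for example a warning instead of a failure.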

Advanced check coding

If your manufacturability check needs go beyond the functionalities offered by the helper functions, you can code your manufacturability check logic. In this case, you also need to specify which issues need to be notified to your customers via the e-shop by using the following function:

The parameter A must be one of the following error codes:

If A does not correspond to any permitted error code, it defaults to custom_issue and the optional parameter B takes the value of A. If A corresponds to a permitted error code, your clients are notified in their language of choice. You can specify an additional error message with the parameter B. This optional message is displayed next to the error message corresponding to the error code passed with the parameter A. The optional error message is not localized and is shown as it appears in the JavaScript expression.

You can enable the strict modes for volume calculation and two-manifoldness detection by setting the following variables to true:

You can enable the calculation of the lateral path by setting the following variable to true :

The lateral path was explicitly designed to compute the blade usage of MCor printers.

2.2.9. Manufacturability check warnings

As explained in the Manufacturability check expressions section, when a manufacturability issue is notified, the model is marked as not manufacturable. If you want to warn your customers about issues that can potentially affect the manufacturability of the model, but still consider the model manufacturable, you can issue a manufacturability check warning. Such warnings can be generated in the Manufacturability checks section of the Configuration ▸ Specs & Pricing page within the Configuration and Administration app by adding a line like the following:

No standardized messages are available: you can only use custom text. Language localization is not available.

2.2.10. Price expressions

The role of the price expression is to calculate a price for 3D printing a 3D model in a specified material and finish. Analogously to the manufacturability check JavaScript expressions, you can use a library of helper functions (documented in the Simplified pricing coding section) that cover some common pricing cases. If your pricing needs to go beyond the functionalities offered by the helper functions, you need to read the Advanced pricing coding section.

Simplified pricing coding

The simplified coding is based on a set of helper functions that encapsulate commonly used pricing models. In many cases, a few lines of code are enough to compute the price for a 3D print. The helper functions available are the following:

computes the 3D print price based exclusively on the model volume by using the price tiers defined in a table (see the example below).

var computed_price = get_price_from_table_using_value(table, value)

more generally, computes the 3D print price based on the value parameter by using the price tiers defined in a price tiers table. This function does not assign a value to the price variable, but you can process its result in your custom JavaScript code and finally assign the price amount to the price variable.

The format of the price tiers table is an array of objects. Each object has two properties: