OCR-D workflow configurations based on makefiles
This provides a first attempt at running OCR-D workflows configured and controlled via GNU makefiles. Makefilization offers the following advantages:
- incremental builds (steps already processed for another configuration or in a failed run need not be repeated) and automatic dependencies (new files will force all their dependents to update)
- persistency of configuration and results
- encapsulation and ease of use
- sharing configurations and repeating experiments
- less writing effort, fast templating
- parallelization across workspaces
Nevertheless, there are also some disadvantages:
- depends on directories (fileGrps) as targets, which is hard to get correct under all circumstances
- must mediate between the filesystem perspective (understood by `make`) and the METS perspective
- `make` cannot handle target names with spaces in them (at all)

(This means that fileGrp directories must not contain spaces. Local file paths may contain spaces, though, if the respective processors support that.)
To install system dependencies for this package, run…
…in a privileged context for Ubuntu (like a Docker container).
Or equivalently, install the following packages:
Additionally, you must of course install `ocrd` itself, along with its dependencies, in the current shell environment. Moreover, depending on the specific configurations you want to use (i.e. the processors they contain), additional modules must be installed. See the OCR-D setup guide for instructions. (Yes, workflow-configuration is already part of ocrd_all.)
You have 2 options, depending on your usage preferences:
For direct invocation of make
Simply copy or symlink all makefiles (i.e. both the specific workflow configurations `*.mk` and the general `Makefile`) to the target directory.
(The target directory is the directory where your OCR workspace directories can be found. A workspace directory is one which contains a `mets.xml` file.)
You can then run workflows in the target directory by calling…
make [OPTIONS] -f WORKFLOW-CONFIG.mk WORKSPACES...
- OPTIONS are the usual options controlling GNU `make` (e.g. `-j` for parallel processing).
- WORKFLOW-CONFIG.mk is one of the configuration makefiles you find here.
- WORKSPACES is a list of workspace directories, or `all` (the default) for all workspaces `make` can find.
For invocation via shell script
… if you are in a (Python) virtual environment. Otherwise, specify the installation prefix directory via environment variable. Assuming `$VIRTUAL_ENV/bin` is in your `PATH`, you can now call…
ocrd-make [OPTIONS] -f WORKFLOW-CONFIG.mk WORKSPACES...
… in the target directory with the same interface as above (only without the need for copying makefiles).
Workflows are processed like software builds: File groups (depending on one another) are the targets to be built in each workspace, and all workspaces are built recursively. A build is finished when all targets exist and none are older than their respective prerequisites (e.g. image files).
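As an illustration of this build analogy, a minimal dependency chain of file groups could look like the following sketch (the fileGrp names and processors are hypothetical examples, relying on the shared static pattern rule described further below):

```make
# Minimal sketch of fileGrps as make targets (names/tools are examples only):
OCR-D-BIN: OCR-D-IMG                  # binarization depends on the images
OCR-D-BIN: TOOL = ocrd-olena-binarize
OCR-D-SEG: OCR-D-BIN                  # segmentation depends on binarization
OCR-D-SEG: TOOL = ocrd-tesserocr-segment-region
.DEFAULT_GOAL = OCR-D-SEG             # overall target of this configuration
include Makefile                      # provides the shared static pattern rule
```

Here `OCR-D-SEG` is considered built when its directory exists and is newer than `OCR-D-BIN`, which in turn must be newer than `OCR-D-IMG`.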
To run a configuration…
- Activate working environment (virtualenv) and change to the target directory.
- Choose (or create) a workflow configuration makefile.
(Yes, you may have to look inside and browse its rules!)
[ocrd-]make -f CONFIGURATION.mk [all]
(The special target `all`, which is also the default goal, will look for all workspaces in the current directory.)
You can also run on a subset of workspaces by passing these as goals on the command line…
[ocrd-]make -f CONFIGURATION.mk PATH/TO/WORKSPACE1 PATH/TO/WORKSPACE2 ...
To get help:
To get a short description of the chosen configuration:
[ocrd-]make -f CONFIGURATION.mk info
To see the command sequence that would be executed for the chosen configuration (in the format of `ocrd process`):
[ocrd-]make -f CONFIGURATION.mk show
To remove the configuration makefiles in the current/target directory:
To prepare workspaces for processing by fixing certain flaws that kept happening during publication:
To create workspaces from directories which contain image files:
To get help for the import tool:
To derive flat directories from workspaces suitable for LAREX annotation:
ocrd-export-larex -I FILEGRP -O DIR
To get help for the LAREX export tool:
To spawn a new configuration file:
Furthermore, you can add any options that `make` understands (see `make --help` or `info make 'Options Summary'`). For example:
- `--dry-run` to just simulate the run
- `--question` to just check whether anything needs to be built at all
- `--silent` to suppress echoing of recipes
- `--jobs` to run on workspaces in parallel
- `--always-make` to consider all targets out-of-date (i.e. unconditionally rebuild)
Note that, because workspaces are built by recursive invocation, and `make` does not pass on those `MAKEFLAGS` which can affect dependency calculation, you cannot directly use the following options:
- `--old-file` to consider some target up-to-date w.r.t. its prerequisites (i.e. unconditionally keep) but older than its dependents (i.e. unconditionally ignore)
- `--new-file` to consider some target newer than its dependents (i.e. unconditionally update them)

However, you can wrap them in a special variable `EXTRA_MAKEFLAGS`, which gets expanded at the workspace level. For example, to rebuild everything after the fileGrp `OCR-D-BIN`:
[ocrd-]make -f CONFIGURATION.mk all EXTRA_MAKEFLAGS="-W OCR-D-BIN"
You can also use that variable to specify any target other than the `.DEFAULT_GOAL` of your configuration as the overall target. For example, to build everything up to the fileGrp `OCR-D-SEG-LINE`:
[ocrd-]make -f CONFIGURATION.mk all EXTRA_MAKEFLAGS="OCR-D-SEG-LINE"
(If you chdir into some workspace yourself, `make` won’t run recursively, so no `all` target exists and no `EXTRA_MAKEFLAGS` is necessary.)
There are 2 more special variables besides `EXTRA_MAKEFLAGS`. To process only a subset of pages in all fileGrps, use `PAGES`. For example, to only consider pages `PHYS_0005` and `PHYS_0007`:
[ocrd-]make -f CONFIGURATION.mk all PAGES=PHYS_0005,PHYS_0007
And to override the default (or configured) log levels for all processors and libraries, use `LOGLEVEL`. For example, to get debugging everywhere, do:
[ocrd-]make -f CONFIGURATION.mk all LOGLEVEL=DEBUG
To write new configurations, first choose a (sufficiently descriptive) makefile name, and spawn a new file for that:
[ocrd-]make NEW-CONFIGURATION.mk (or copy from an existing configuration).
Next, edit the file to your needs: write rules using file groups as prerequisites/targets in the normal GNU make syntax. The first target defined must be the default goal that builds the very last file group of that configuration; otherwise, a variable `.DEFAULT_GOAL` pointing to that target must be set anywhere in the makefile.
- Keep the comments and the `include Makefile` directive in the file.
- Change/customize at least the `info` target and the default goal.
- Copy/paste rules from the existing configurations.
- Define variables with the names of all target/prerequisite file groups, so rules and dependent targets can re-use them (and the names can easily be changed later).
- Try to utilise the provided static pattern rule (which takes the target as output file group and the prerequisite as input file group) for all processing steps. The rule covers any OCR-D compliant processor with no more than 1 output file group. Use it by simply defining the target-specific variable `TOOL` (and optionally `PARAMS` or `OPTIONS`) and giving no recipe whatsoever.
When any of your processors use GPU resources, you must prevent races for GPU memory during parallel execution. You can achieve this by simply setting `GPU = 1` for that target when using the static pattern rule, or by using `sem --id OCR-D-GPUSEM` when writing your own recipes. Alternatively, you can either prevent using GPUs globally by (un)setting `CUDA_VISIBLE_DEVICES=`, or prevent running parallel jobs (on multiple CPUs) by passing `--jobs=1`.
```make
INPUT = OCR-D-GT-SEG-LINE

$(INPUT):
	ocrd workspace find -G $@ --download
	ocrd workspace find -G OCR-D-IMG --download # just in case

# You can use variables for file group names to keep the rules brief:
BIN = $(INPUT)-BINPAGE

# This is how you use the pattern rule from Makefile (included below):
# The prerequisite will become the input file group,
# the target will become the output file group,
# the recipe will call the executable given by TOOL,
# also generating a JSON parameter file from PARAMS:
$(BIN): $(INPUT)
$(BIN): TOOL = ocrd-olena-binarize
$(BIN): PARAMS = "impl": "sauvola-ms-split"
# or equivalently:
$(BIN): OPTIONS = -P impl sauvola-ms-split

# You can also use the file group names directly:
OCR-D-OCR-TESS: $(BIN)
OCR-D-OCR-TESS: TOOL = ocrd-tesserocr-recognize
OCR-D-OCR-TESS: PARAMS = "textequiv_level": "glyph", "model": "frk+deu"
# or equivalently:
OCR-D-OCR-TESS: OPTIONS = -P textequiv_level glyph -P model frk+deu

# This uses more than 1 input file group and no output file group,
# which works with the standard recipe as well (but mind the ordering):
EVAL: $(INPUT) OCR-D-OCR-TESS
EVAL: TOOL = ocrd-cor-asv-ann-evaluate

# Because the first target in this file was $(BIN),
# we must override the default goal to be our desired overall target:
.DEFAULT_GOAL = EVAL

# ALWAYS necessary:
include Makefile
```
For the OCR-D ground truth from the `data_structure_text/dta` repository, which includes both layout and text annotation down to the textline level, but very coarse segmentation, the following character error rate (CER) was measured:
Hence, it appears that consistently (across different OCRs) …
- denoising with Ocropy (with `noise_maxsize=3.0`) does not help
- deskewing with Ocropy on the page level usually helps
- additional deskewing and flipping with Tesseract on the region level usually deteriorates results
- binarization with `sauvola-ms-split` is better than …
However, this result is still preliminary. Both the processor implementations evolve and the GT annotations get fixed over time.
To make writing (and reading) configurations as simple as possible, they are expressed as rules operating on METS file groups (i.e. workspace-local). For convenience, the most common recipe pattern, involving only 1 input and 1 output file group via some OCR-D CLI, is available via a static pattern rule, which merely takes the target-specific variables `TOOL` (the CLI executable) and optionally `PARAMS` (a JSON-formatted list of parameter assignments) or `OPTIONS` (a white-space separated list of parameter assignments). Custom rules are possible as well. If the makefile does not start with the overall target, it must specify its `.DEFAULT_GOAL`, so callers can run without knowledge of the target names.
Rules that are not configuration-specific (like the static pattern rule) are all shared by including a common `Makefile` at the end of configuration makefiles. That file has 2 sets of rules:
- a top-level set operating in the target directory (possibly in parallel); targets are the available workspaces, including the global default goal `all`,
- a low-level set operating in the workspace directory (always sequentially); targets are the configured file groups, including the local default goal.
The former calls the latter recursively for each workspace.
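This recursion can be sketched roughly as follows (a simplified assumption of how the shared `Makefile` might do it, not its actual code; `CONFIGURATION` is a hypothetical variable naming the workflow makefile):

```make
# Simplified sketch: every directory containing a mets.xml is a workspace,
# and building it means re-invoking make inside it with the same configuration:
WORKSPACES := $(patsubst %/mets.xml,%,$(wildcard */mets.xml))

all: $(WORKSPACES)

$(WORKSPACES):
	$(MAKE) -C $@ -f $(abspath $(CONFIGURATION)) $(EXTRA_MAKEFLAGS)

.PHONY: all $(WORKSPACES)
```

This also explains why `EXTRA_MAKEFLAGS` is needed: only flags passed explicitly at this point reach the workspace-level invocation.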
GPU vs CPU parallelism
When executing workflows in parallel across workspaces (with `--jobs`) on multiple CPUs, it must be ensured that not too many OCR-D processors which use GPU resources are running concurrently (to prevent over-allocation of GPU memory). Thus, `make` needs to know:
- which processors (have/want to) use GPU resources, and
- how many such processors can run in parallel.
It can then synchronize these processors with a semaphore. This is achieved by expanding the static pattern rule with a synchronisation mechanism (based on GNU parallel). Workflow configurations can use that by setting the target-specific variable `GPU` to a non-empty value for the respective rules. (Custom recipes will have to use `sem --id OCR-D-GPUSEM` themselves.)
That way, races are prevented, but GPUs also cannot become the bottleneck: when all GPUs are busy, processors will fall back to CPU.
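For a custom recipe, such a guarded step might look like this sketch (the fileGrp names and processor are examples only; `sem --fg` from GNU parallel runs the command in the foreground, blocking until the named semaphore is free):

```make
# Hypothetical custom recipe serializing a GPU-bound processor via GNU parallel:
OCR-D-OCR-CALAMARI: OCR-D-SEG-LINE
	sem --id OCR-D-GPUSEM --fg \
	  ocrd-calamari-recognize -I $< -O $@
```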
workspace vs page parallelism
When executing workflows in parallel across workspaces (with `--jobs`) on multiple CPUs, it must be ensured that OCR-D processors do not use local multiprocessing facilities themselves (to prevent over-allocation of CPUs).
In the current state of affairs, OCR-D processors cannot be run in parallel across pages via multiprocessing. (At least, they are never implemented that way.) That may change in the future with a new OCR-D API. But many processors do already use libraries like OpenMP or OpenBLAS, which use multiprocessing locally within pages. This can be controlled via environment variables like `OMP_NUM_THREADS`.
This is achieved by exporting these variables to all recipes: with a value of 1 if `-j` is in `MAKEFLAGS`, or half the number of physical CPUs otherwise (unless `NTHREADS` is given explicitly).
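The exported value can be sketched like this (an assumption of the logic, not the shared `Makefile`'s literal code):

```make
# Sketch: limit library threading per recipe.
# With parallel workspaces (-j in MAKEFLAGS), allow 1 thread per processor;
# otherwise use half the CPUs; an explicit NTHREADS overrides either choice.
NTHREADS ?= $(if $(findstring -j,$(MAKEFLAGS)),1,$(shell expr $$(nproc) / 2))
export OMP_NUM_THREADS = $(NTHREADS)
export OPENBLAS_NUM_THREADS = $(NTHREADS)
```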