Laurel User Manual 0.99.0

Laurel User Manual

CREATOR Mark Raynsford
DATE 2024-09-28T14:52:43+00:00
DESCRIPTION User manual for the Laurel package.
IDENTIFIER 980294e7-2da9-4911-95d7-a830aa4a82bb
LANGUAGE en
RIGHTS Public Domain
TITLE Laurel User Manual
The laurel application attempts to provide tools to assist with image captioning within the context of machine learning.
In particular, the application is geared towards the management of smaller datasets (in the range of thousands of images) for use in techniques such as LoRA training.
The laurel application provides the following features:

1.2.2. Features

  • A user interface for managing images and captions for those images.
  • A caption categorization system for assisting with keeping captions consistent across large datasets.
  • The ability to import captions and images into a dataset from a directory hierarchy.
  • The ability to export captions and images into a directory for use in training scripts.
  • A persistent undo/redo system that can store every change ever made to a dataset, including the ability to effectively revert to an earlier version at any time.
  • A carefully-engineered Java API for manipulating datasets; the command-line tools and user interface are thin shells over this API.
  • Datasets are backed by SQLite, providing reliable, transactional updates and a file format designed to endure for decades to come.
  • Command line tools for automating operations such as importing, exporting, and interrogating metadata.
  • The application is comprehensively documented; you are currently reading this documentation!
There are several ways to install the Laurel application.
The portable application distribution is simply a zip archive consisting of a couple of frontend shell scripts and the Java jar files that comprise the application. This distribution is mostly platform-independent [1], but requires some (fairly straightforward) manual setup.
The distribution uses your locally installed Java VM. First, check that a JDK 21 or newer JVM is installed:

2.2.3. Java Version

$ java -version
openjdk version "21.0.4" 2024-07-16
OpenJDK Runtime Environment (build 21.0.4+7)
OpenJDK 64-Bit Server VM (build 21.0.4+7, mixed mode, sharing)
The application distribution is a zip file with a laurel directory in the root of the zip archive.

2.2.5. Unzip

$ unzip com.io7m.laurel.distribution-0.99.0-distribution.zip
   creating: laurel/
   creating: laurel/bin/
  inflating: laurel/bin/laurel
  inflating: laurel/bin/laurel-ui
   creating: laurel/lib/
  inflating: laurel/lib/com.io7m.anethum.api-1.1.1.jar
  inflating: laurel/lib/com.io7m.blackthorne.core-2.0.2.jar
  inflating: laurel/lib/com.io7m.blackthorne.jxe-2.0.2.jar
  inflating: laurel/lib/com.io7m.darco.api-1.0.0.jar
  inflating: laurel/lib/com.io7m.darco.sqlite-1.0.0.jar
...
On UNIX-like platforms, ensure the included frontend scripts are executable:

2.2.7. chmod

$ chmod +x laurel/bin/*
Set the LAUREL_HOME environment variable to the directory:

2.2.9. LAUREL_HOME

$ export LAUREL_HOME=$(realpath laurel)
Now run either laurel/bin/laurel for the command-line tool, or laurel/bin/laurel-ui for the graphical user interface.

2.2.11. laurel

$ laurel/bin/laurel version
com.io7m.laurel 1.0.0-SNAPSHOT 7e810b7cda6e7d8db2032fdb936f9260aaf906f2

Footnotes

1
Unfortunately, JavaFX does not allow for platform-independence due to including rather incompetently-packaged platform-specific artifacts. The command-line tools are usable on the fairly huge range of underlying platforms that the sqlite-jdbc library supports.
This section of the documentation describes how to use the application without spending any time explaining the underlying model the application works with, and without describing how exactly the application works. The theory of operation section of the manual describes the inner workings of the application in a more formal manner.
The vast majority of operations in the application can be undone. When an operation is performed, it can typically be reverted by selecting Undo from the Edit menu. Any operation that has been undone can be performed again by selecting Redo from the Edit menu.
The application is slightly atypical in that there is no "save" functionality. Instead, every operation performed in the application that changes the state of the dataset is persisted into the dataset itself. This, effectively, provides an unbounded undo stack that survives application restarts.
The current state of the undo/redo stack can be viewed in the History tab. Details of the undo implementation are described in the theory of operation.
The application opens to an empty file view.
Via the File menu, it's possible to create a new dataset, or open an existing one.
With a dataset loaded, the file view shows a series of tabs.
The Images tab allows for loading images and assigning captions to images.
Click the Add Image button to load an image from the filesystem.
Once an image is loaded, it appears in the image list.
Clicking the image opens an image preview window that contains a larger copy of the image. This window continuously updates to show the currently selected image. The intended use case for the image preview window is to be left open on a separate screen so that a large version of the image is always visible while manually captioning images.
Click the Create Caption button to create a new caption.
When a caption is first created, it is visible in the set of unassigned captions for the selected image. Naturally, the set of unassigned captions is different for each image. The Assign Caption button can be used to assign one or more selected captions to the currently selected image.
Typically, in fine-tuning methods such as LoRA, there will be one or more captions that should be globally applied to all images, and that should also, when the captions are exported, always appear at the beginning of the list of captions for each image.
Click the Configure Global Prefix Captions button to configure global prefix captions.
The Configure Global Prefix Captions window allows for creating, deleting, modifying, and reordering captions.
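The ordering rule for exported captions follows from the above: global prefix captions come first, in their configured order, followed by the image's own captions. A minimal sketch of that rule (the caption names are made up, and this is not the application's actual export code):

```python
def export_caption_line(global_prefix_captions, image_captions):
    """Combine captions for one image on export: global prefix captions
    first, in their configured order, then the image's own captions."""
    return ", ".join(list(global_prefix_captions) + list(image_captions))

line = export_caption_line(
    ["my_concept"],                  # hypothetical global prefix caption
    ["red chair", "gold lamp"])      # hypothetical per-image captions
print(line)  # my_concept, red chair, gold lamp
```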
The Categories tab allows for grouping captions into categories.
Click the Add Category button to create a new category.
When a category is selected, the captions that are not assigned to that category will appear in the list of unassigned captions. Conversely, the captions that are assigned to the category will appear in the list of assigned captions. In a similar manner as for image caption assignment, captions can be assigned and unassigned to/from a category using the arrow buttons.
Categories can be marked as required using the buttons above the category list. When a category is required, all images must have at least one caption from that category assigned to pass validation checks.
The Metadata tab allows for embedding textual metadata into the dataset. This can be used to hold author information, license information, and so on.
Metadata values can be added using the Add Metadata button. Existing metadata values can be modified with the Modify Metadata button, and removed with the Remove Metadata button.
The History tab displays the undo and redo stack for the currently loaded dataset.
The history can be deleted using the Delete History button. Note that this operation cannot be undone, and requires confirmation. It is recommended that the history be deleted before datasets are distributed.
The Validation tab allows for running validation checks on the dataset. Validation is executed using the Validate button.
If validation succeeds, a success message is displayed.
If validation fails, the reasons for the failures are displayed.
The application supports importing directories filled with captioned images.
Importing can be accessed from the File menu.
Any errors encountered during the import process are shown in the dialog.
The import process will recursively walk through a given directory hierarchy searching for image files. When an image file is discovered, the process will look for a caption file associated with the image. A caption file must have the file extension caption. For example, if the process discovers an image file named example.png, the caption file associated with it must be called example.caption.
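The file-naming rule above can be sketched in a few lines. This illustrates only the name derivation, not the importer's actual directory walk:

```python
from pathlib import Path

def caption_file_for(image_path):
    """Derive the caption file name the importer looks for: the same base
    name as the image, with the extension replaced by 'caption'."""
    return Path(image_path).with_suffix(".caption")

print(caption_file_for("example.png"))  # example.caption
```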
Caption files must be provided in the documented caption file format.
The application supports exporting datasets to directories.
Exporting can be accessed from the File menu. If the Export Images checkbox is checked, image files will be written to the output directory. For very large datasets where captions are being repeatedly exported during development, it can be useful to switch off image exports in order to save time.
Any errors encountered during the export process are shown in the dialog.
Caption files will be exported to the documented caption file format.
The laurel package provides a command-line interface for performing tasks such as importing and exporting datasets. The base laurel command is broken into a number of subcommands which are documented over the following sections.

4.1.2. Command-Line Overview

laurel: usage: laurel [command] [arguments ...]

  The laurel command-line application.

  Use the "help" command to examine specific commands:

    $ laurel help help.

  Command-line arguments can be placed one per line into a file, and
  the file can be referenced using the @ symbol:

    $ echo help > file.txt
    $ echo help >> file.txt
    $ laurel @file.txt

  Commands:
    export     Export a dataset into a directory.
    help       Show usage information for a command.
    import     Import a directory into a dataset.
    version    Show the application version.

  Documentation:
    https://www.io7m.com/software/laurel/
All subcommands accept a --verbose parameter that may be set to one of trace, debug, info, warn, or error. This parameter sets the lower bound for the severity of messages that will be logged. For example, at debug verbosity, only messages of severity debug and above will be logged. Setting the verbosity to trace level effectively causes everything to be logged, and will produce large volumes of debugging output.
The laurel command-line tool uses quarrel to parse command-line arguments, and therefore supports placing command-line arguments into a file, one argument per line, and then referencing that file with @. For example:

4.1.5. @ Syntax

$ laurel import --input-directory /tmp/data --output-file output.ldb

$ (cat <<EOF
import
--input-directory
/tmp/data
--output-file
output.ldb
EOF
) > args.txt

$ laurel @args.txt
All subcommands, unless otherwise specified, yield an exit code of 0 on success, and a non-zero exit code on failure.
import - Import a directory into a dataset.
The import command imports a directory of captioned images into a dataset.

4.2.3.1. --input-directory

Attribute      Value
Name           --input-directory
Type           java.nio.file.Path
Default Value  (none)
Cardinality    [1, 1]
Description    The input directory.

4.2.3.2. --output-file

Attribute      Value
Name           --output-file
Type           java.nio.file.Path
Default Value  (none)
Cardinality    [1, 1]
Description    The output file.

4.2.3.3. --verbose

Attribute      Value
Name           --verbose
Type           com.io7m.quarrel.ext.logback.QLogLevel
Default Value  info
Cardinality    [1, 1]
Description    Set the logging level of the application.

4.2.4.1. Example

$ laurel import --input-directory /tmp/data --output-file output.ldb
export - Export a dataset into a directory.
The export command exports a dataset into a directory.

4.3.3.1. --export-images

Attribute      Value
Name           --export-images
Type           java.lang.Boolean
Default Value  true
Cardinality    [1, 1]
Description    Whether to export images.

4.3.3.2. --input-file

Attribute      Value
Name           --input-file
Type           java.nio.file.Path
Default Value  (none)
Cardinality    [1, 1]
Description    The input file.

4.3.3.3. --output-directory

Attribute      Value
Name           --output-directory
Type           java.nio.file.Path
Default Value  (none)
Cardinality    [1, 1]
Description    The output directory.

4.3.3.4. --verbose

Attribute      Value
Name           --verbose
Type           com.io7m.quarrel.ext.logback.QLogLevel
Default Value  info
Cardinality    [1, 1]
Description    Set the logging level of the application.

4.3.4.1. Example

$ laurel export --input-file example.ldb --output-directory /tmp/dataset
A caption is a string that can be applied to an image to describe some element of that image.
Captions must conform to the following format:

5.1.2.2. Caption Format

caption ::= [a-z0-9A-Z_-][a-z0-9A-Z_ \-']*
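The grammar above translates directly into a regular expression. A sketch of a validator (the category format later in the manual is identical; this is illustrative, not the application's own validation code):

```python
import re

# The caption grammar from above, as a Python regular expression: the
# first character may not be a space or apostrophe; the rest may.
CAPTION = re.compile(r"[a-z0-9A-Z_-][a-z0-9A-Z_ \-']*")

def is_valid_caption(text):
    return CAPTION.fullmatch(text) is not None

print(is_valid_caption("red chair"))   # True
print(is_valid_caption("'quoted"))     # False: apostrophe cannot lead
print(is_valid_caption("red, chair"))  # False: commas are separators
```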
A caption file is a file consisting of a comma-separated list of captions. More formally, the file conforms to the following format:

5.1.3.2. Caption File Format

caption_file ::= caption ("," caption)* [ "," ]
An example caption file is as follows:

5.1.3.4. Caption File Format

red drapes,
black and white zigzag floor,
red chair,
gold lamp,
coffee cup,
Note that the trailing comma on the last line is optional. All whitespace around commas is ignored.
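As an illustration (not the application's own parser), a file in this format can be read by splitting on commas and trimming the surrounding whitespace, per the rules above:

```python
def parse_caption_file(text):
    """Split a caption file on commas, trimming surrounding whitespace.
    Dropping empty entries also handles the optional trailing comma."""
    return [c.strip() for c in text.split(",") if c.strip()]

example = """red drapes,
black and white zigzag floor,
red chair,
gold lamp,
coffee cup,
"""
print(parse_caption_file(example))
```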
Categories allow for grouping captions in a manner that allows the application to assist with keeping image captioning consistent.
When adding captions to images for use in training models such as LoRAs, it is important to keep captions consistent. Consistency in this case means avoiding false positive and false negative captions. To understand what these terms mean and why this is important, it is necessary to understand how image training processes typically work.
Let m be an existing text-to-image model that we're attempting to fine-tune. Let generate(k, p) be a function that, given a model k and a text prompt p, generates an image. For example, if the model m knows about the concept of laurel trees, then we'd hope that generate(m, "laurel tree") would produce a picture of a laurel tree.
Let's assume that m has not been trained on pictures of rose bushes and doesn't know what a rose bush is. If we evaluate generate(m, "rose bush"), then we'll just get arbitrary images that likely don't contain rose bushes. We want to fine-tune m by producing a LoRA that introduces the concept of rose bushes. We produce a large dataset of images of rose bushes, and caption each image with (at the very least) the caption rose bush.
The training process then steps through each image i in the dataset and performs the following steps:

5.2.2.1.5. Per-Image Training Steps

  1. Take the set of captions provided for i and combine them into a prompt p. The exact means by which the captions are combined into a prompt is typically a configurable aspect of the training method. In practice, the most significant caption ("rose bush") would be the first caption in the prompt, and all other captions would be randomly shuffled and concatenated onto the prompt.
  2. Generate an image g with g = generate(m, p).
  3. Compare the images g and i. The differences between the two images are what the fine-tuning of the model will learn.
In our training process, assuming that we've properly captioned the images in our dataset, we would hope that the only significant difference between g and i at each step would be that i would contain an image of a rose bush, and g would not. This would, slowly, cause the fine-tuning of the model to learn what constitutes a rose bush.
Stepping through the entire dataset once and performing the above steps for each image is known as a single training epoch. It will take most training processes multiple epochs to actually learn anything significant. In practice, the model m can conceptually be considered to be updated on each training step with the new information it has learned. For the sake of simplicity of discussion, we ignore this aspect of training here.
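The prompt-assembly rule in step 1 can be sketched as follows. The generate function and the training loop itself are out of scope here, and the exact shuffling strategy is a configurable aspect of real trainers:

```python
import random

def build_prompt(primary_caption, other_captions, rng=random):
    """One common strategy from the steps above: the most significant
    caption first, remaining captions randomly shuffled after it."""
    rest = list(other_captions)
    rng.shuffle(rest)
    return ", ".join([primary_caption] + rest)

prompt = build_prompt("rose bush", ["grass", "soil", "sky"])
print(prompt)  # e.g. "rose bush, sky, grass, soil"
```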
Given the above process, we're now equipped to explain the concepts of false positive and false negative captions.
A false positive caption is a caption that's accidentally applied to an image when that image does not contain the object being captioned. For example, if an image does not contain a red sofa, and a caption "red sofa" is provided, then the "red sofa" caption is a false positive.
To understand why a false positive caption is a problem, consider the training process described above. Assume that our original model m knows about the concept of "red sofas".

5.2.2.2.3. False Positive Process

  1. The image i does not contain a red sofa. However, one of the captions provided for i is "red sofa", and so the prompt p contains the caption "red sofa".
  2. An image g is generated with g = generate(m, p). Because p contains the caption "red sofa", the generated image g will likely contain a red sofa.
  3. The process compares the images g and i. The source image i doesn't contain a red sofa, but the generated image g almost certainly does. The system then, essentially, erroneously learns that it should be adding red sofas to images!
Similarly, a false negative caption is a caption that's accidentally not applied to an image when it really should have been. To understand how this might affect training, consider the training process once again:

5.2.2.3.2. False Negative Process

  1. The image i contains a red sofa. However, none of the captions provided for i are "red sofa", and so the prompt p does not contain the caption "red sofa".
  2. An image g is generated with g = generate(m, p). Because p does not contain the caption "red sofa", the generated image g will probably not contain a red sofa.
  3. The process compares the images g and i. The source image i contains a red sofa, but the generated image g almost certainly does not. The system then, essentially, erroneously learns that it should be removing red sofas from images!
In practice, false negative captions happen much more frequently than false positive captions. The reason for this is that it is impractical to know all of the concepts known to the model being trained, and therefore it's impractical to know which concepts the model can tell are missing from the images it inspects.
Given the above understanding of false positive and false negative captions, the following best practices can be inferred for captioning datasets:

5.2.2.4.2. Best Practices

  • Include a single primary caption at the start of the prompt of every image in the dataset. This primary caption is effectively the name of the concept that you are trying to teach to the model. The reason for this follows from an understanding of the training process: By making the primary caption prominent and ubiquitous, the system should learn to primarily associate the image differences with this caption.
  • Caption all elements of an image that you do not want the model to associate with your primary caption. This will help ensure that the captioned objects do not show up as differences in the images that the training process will, as a result, learn.
  • Be consistent in your captioning between images with respect to which aspects of the image you caption. For example, if in one of your images, you caption the lighting or the background colour, then you should caption the lighting and background colour in all of the images. This assumes, of course, that you are not trying to teach the model about lighting or background colours! This practice is, ultimately, about reducing false negatives.
In our example training process above, we should use "rose bush" as the primary caption for each of our images, and we should caption the objects in each image that are not rose bushes (for example, "grass", "soil", "sky", "afternoon lighting", "outdoors", etc.)
When a category is marked as required, each image in the dataset must have at least one caption from that category assigned.
Unlike captions, which can share their meanings across different datasets, categories are a tool used to help ensure consistent captioning within a single dataset. It is up to users to pick suitable categories for their captions in order to ensure that they caption their images in a consistent manner. A useful category for most datasets, for example, is "lighting". Assign captions such as "dramatic lighting", "outdoor lighting", and so on, to a required "lighting" category. The validation process will then fail if a user has forgotten to caption lighting in one or more images.
Categories must conform to the following format:

5.2.4.2. Category Format

category ::= [a-z0-9A-Z_-][a-z0-9A-Z_ \-']*
An image is a rectangular array of pixels. The application does not do any special processing of images beyond storing them in the dataset. Images are, in practice, expected to be in one of the various popular image formats such as PNG.
Each image in the dataset may have zero or more captions assigned.
The application stores the complete, persistent history of every change ever made to the dataset.
The undo and redo stacks are stored in the file model.
Each command that is executed on the file model is invertible. That is, each command knows how to perform an action, and how to revert that action. By storing the complete sequence of executed commands, it is effectively possible to take a dataset and repeatedly undo operations until the dataset is back at the blank starting state.
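The invertible-command idea can be sketched with a toy command. Laurel's real commands live in its Java API and are persisted to SQLite, which this sketch does not attempt:

```python
class AddCaption:
    """A minimal invertible command: it knows how to perform an action
    and how to revert it, in the style described above."""
    def __init__(self, captions, text):
        self.captions = captions
        self.text = text
    def execute(self):
        self.captions.add(self.text)
    def undo(self):
        self.captions.discard(self.text)

captions, undo_stack, redo_stack = set(), [], []

# Execute a command and record it on the undo stack.
cmd = AddCaption(captions, "red chair")
cmd.execute()
undo_stack.append(cmd)

# Undoing moves the command to the redo stack and reverts its effect.
cmd = undo_stack.pop()
cmd.undo()
redo_stack.append(cmd)
print(captions)  # set() — back to the blank starting state
```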
Metadata in the application is a simple string key/value store. Keys are unique.
It is recommended that creators annotate datasets with the standard Dublin Core metadata terms, summarized in the following table:

5.5.2.2. Recommended Metadata

Name Description
dc:title The dataset title.
dc:creator The dataset creator.
dc:subject The dataset subject.
dc:description A human-readable description of the dataset.
dc:publisher An organization publishing the dataset.
dc:contributor The contributors to the dataset.
dc:date The dataset publication or creation date.
dc:type The type (recommended: "dataset").
dc:format The format (recommended: "com.io7m.laurel").
dc:identifier The dataset identifier.
dc:source The dataset source URI.
dc:language The dataset language.
dc:rights The dataset rights/license.
The application stores the dataset in a structure known as the file model.
The file model's underlying representation is an SQLite database. The database contains all of the images, captions, categories, metadata, and the undo history.
The database uses the following schema:

5.6.2.2.2. Schema

CREATE TABLE schema_version (
  version_lock            INTEGER NOT NULL DEFAULT 1,
  version_application_id  TEXT    NOT NULL,
  version_number          INTEGER NOT NULL,

  CONSTRAINT check_lock_primary
    PRIMARY KEY (version_lock),

  CONSTRAINT check_lock_locked
    CHECK (version_lock = 1)
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE metadata (
  meta_name  TEXT NOT NULL,
  meta_value TEXT NOT NULL

-- [jooq ignore start]
  ,
  CONSTRAINT metadata_primary_key
    PRIMARY KEY (meta_name)
-- [jooq ignore stop]
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE image_blobs (
  image_blob_id      INTEGER PRIMARY KEY NOT NULL,
  image_blob_data    BLOB                NOT NULL,
  image_blob_sha256  TEXT                NOT NULL,
  image_blob_type    TEXT                NOT NULL
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE images (
  image_id      INTEGER PRIMARY KEY NOT NULL,
  image_name    TEXT                NOT NULL,
  image_file    TEXT,
  image_source  TEXT,
  image_blob    INTEGER             NOT NULL,

-- [jooq ignore start]
  CONSTRAINT images_name_unique
    UNIQUE (image_name),
-- [jooq ignore stop]

  CONSTRAINT images_blob_exists
    FOREIGN KEY (image_blob) REFERENCES image_blobs (image_blob_id)
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE captions (
  caption_id    INTEGER PRIMARY KEY NOT NULL,
  caption_text  TEXT                NOT NULL

-- [jooq ignore start]
  ,
  CONSTRAINT captions_text_unique
    UNIQUE (caption_text)
-- [jooq ignore stop]
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE global_captions (
  global_caption_id     INTEGER PRIMARY KEY NOT NULL,
  global_caption_text   TEXT                NOT NULL,
  global_caption_order  INTEGER             NOT NULL

-- [jooq ignore start]
  ,
  CONSTRAINT global_captions_text_unique
    UNIQUE (global_caption_text)
-- [jooq ignore stop]
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE categories (
  category_id        INTEGER PRIMARY KEY NOT NULL,
  category_text      TEXT                NOT NULL,
  category_required  INTEGER             NOT NULL

-- [jooq ignore start]
  ,
  CONSTRAINT categories_text_unique
    UNIQUE (category_text)
-- [jooq ignore stop]
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE caption_categories (
  caption_caption_id   INTEGER NOT NULL,
  caption_category_id  INTEGER NOT NULL,

  CONSTRAINT caption_categories_caption_exists
    FOREIGN KEY (caption_caption_id)
      REFERENCES captions (caption_id)
        ON DELETE CASCADE,

  CONSTRAINT caption_categories_category_exists
    FOREIGN KEY (caption_category_id)
      REFERENCES categories (category_id)
        ON DELETE CASCADE,

  CONSTRAINT caption_categories_primary_key
    PRIMARY KEY (caption_caption_id, caption_category_id)
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE image_captions (
  image_caption_image    INTEGER NOT NULL,
  image_caption_caption  INTEGER NOT NULL,

  CONSTRAINT image_captions_image_exists
    FOREIGN KEY (image_caption_image)
      REFERENCES images (image_id)
        ON DELETE CASCADE,

  CONSTRAINT image_captions_caption_exists
    FOREIGN KEY (image_caption_caption)
      REFERENCES captions (caption_id)
        ON DELETE CASCADE,

  CONSTRAINT image_captions_primary_key
    PRIMARY KEY (image_caption_image, image_caption_caption)
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE VIEW image_captions_counts AS
  SELECT
    image_captions.image_caption_caption         AS count_caption_id,
    count (image_captions.image_caption_caption) AS count_caption_count
  FROM
    image_captions
  GROUP BY image_captions.image_caption_caption;
CREATE TABLE undo (
  undo_id           INTEGER PRIMARY KEY NOT NULL,
  undo_data         BLOB                NOT NULL,
  undo_description  TEXT                NOT NULL,
  undo_time         INTEGER             NOT NULL
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
CREATE TABLE redo (
  redo_id           INTEGER PRIMARY KEY NOT NULL,
  redo_data         BLOB                NOT NULL,
  redo_description  TEXT                NOT NULL,
  redo_time         INTEGER             NOT NULL
)
-- [jooq ignore start]
STRICT
-- [jooq ignore stop];
The schema_version table's single row MUST contain com.io7m.laurel in the version_application_id column.
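As an illustration, such a check can be performed from any language with SQLite bindings. This sketch uses Python's sqlite3 module against a throwaway in-memory database built from the schema above (the real tables are additionally STRICT, omitted here for brevity); a real check would open the dataset file instead, e.g. sqlite3.connect("dataset.ldb"):

```python
import sqlite3

def is_laurel_dataset(conn):
    """Check the single schema_version row for the required application id."""
    row = conn.execute(
        "SELECT version_application_id FROM schema_version").fetchone()
    return row is not None and row[0] == "com.io7m.laurel"

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE schema_version (
      version_lock           INTEGER NOT NULL DEFAULT 1 PRIMARY KEY
                             CHECK (version_lock = 1),
      version_application_id TEXT    NOT NULL,
      version_number         INTEGER NOT NULL
    )""")
conn.execute(
    "INSERT INTO schema_version (version_application_id, version_number) "
    "VALUES (?, ?)", ("com.io7m.laurel", 1))
print(is_laurel_dataset(conn))  # True
```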
Limitations in SQLite mean that it is, unfortunately, impractical to enforce invariants such as category and caption formats at the database level.
When an undoable command is successfully executed on the file model, the parameters of the original command, and the data that was modified, are stored in the undo table. When a command is undone, that same data is moved to the redo table.
The data and parameters are serialized to Java Properties format, but the precise names and types of the keys are currently unspecified. This means that, although applications other than Laurel can open and manipulate datasets, they will currently need to do some mild reverse engineering to manipulate the history.

Footnotes

1
Tables are required to be STRICT. Flexible typing is a bug and not a feature, regardless of how many times the SQLite documentation extols the virtues of being able to accidentally insert malformed data into database tables.
The validation process checks a number of properties of the underlying file model.
The validation process checks to see if the category requirements are satisfied for all images in the dataset. In pseudocode, the process is:

5.7.2.2. Required Categories (Pseudocode)

let Images             = { All images in the dataset }
let CategoriesRequired = { All categories in the dataset that are marked as "required" }

for Image in Images do
  let CaptionsPresent = CaptionsAssigned(Image);
  for Category in CategoriesRequired do
    let CaptionsRequired = CaptionsInCategory(Category);
    if IsEmpty (CaptionsRequired ∩ CaptionsPresent) then
      Fail("At least one caption is required from the category")
    end if;
  done;
done;
Informally, for each image i, for each required category c, validation succeeds if at least one caption in c is assigned to i.
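The pseudocode above transcribes directly into executable form; the dataset contents below are hypothetical:

```python
def validate_required_categories(images, required_categories,
                                 captions_assigned, captions_in_category):
    """Transcription of the pseudocode above: returns the list of
    (image, category) pairs that fail the requirement."""
    failures = []
    for image in images:
        present = captions_assigned(image)
        for category in required_categories:
            if not (captions_in_category(category) & present):
                failures.append((image, category))
    return failures

# Hypothetical toy dataset: one image captioned for lighting, one not.
assigned = {"a.png": {"rose bush", "outdoor lighting"},
            "b.png": {"rose bush"}}
categories = {"lighting": {"outdoor lighting", "dramatic lighting"}}

failures = validate_required_categories(
    ["a.png", "b.png"], ["lighting"],
    lambda i: assigned[i], lambda c: categories[c])
print(failures)  # [('b.png', 'lighting')]
```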

6.1. License

Copyright © 2024 Mark Raynsford <code@io7m.com> https://www.io7m.com

Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Copyright © 2024 Mark Raynsford <code@io7m.com> https://www.io7m.com.
This book is placed into the public domain for free use by anyone for any purpose. It may be freely used, modified, and distributed.
In jurisdictions that do not recognise the public domain this book may be freely used, modified, and distributed without restriction.
This book comes with absolutely no warranty.