LiquiDoc is a documentation build utility for true single-sourcing of technical content and data. It is especially suited for documentation projects with various required output formats from complex, single-sourced codebases, but it is intended for any project with complex, versioned input data for use in docs, user interfaces, and even back-end code. The highly configurable command-line utility (and Ruby gem) engages template engines to parse complex data into rich text output, from blogs to books to knowledge bases to slide presentations.
Content source is formatted in the incredible AsciiDoc lightweight markup language. Data sources can be flat files in formats such as XML (eXtensible Markup Language), JSON (JavaScript Object Notation), CSV (comma-separated values), and our preferred human-editable format: YAML (acronym in dispute). LiquiDoc also accepts regular expressions to parse unconventionally formatted files.
LiquiDoc relies heavily on the Asciidoctor rendering engine, which produces HTML and PDF documents as well as complete static websites, the latter via Jekyll. Output can be pretty much any flat file, with automatic data conversions to JSON and YAML, as well as rich-text/multimedia formats like HTML, PDF, slide decks, and more.
NOTE: While the first two releases of LiquiDoc were published under the MIT license by my former employer, I do not believe the originating repo will be maintained. Therefore, as of version 0.3.0, I maintain this fork under the MIT license. More in Contributing and Licensing.
LiquiDoc is a build tool for software-documentation projects or for the documentation component of a larger software project. Unlike tools that are mere converters, LiquiDoc can be configured to perform multiple consecutive routines for generating content from multiple data/content sources, each output in various formats based on distinct templates and themes. It can be integrated into build- and package-management systems and deployed for continuous integration (CI).
LiquiDoc pulls together the underlying “AJYL” technologies: AsciiDoc technical markup (via Asciidoctor), YAML data structures, and the Liquid templating format/engine, built using the Jekyll static-site generator and JAMstack components and services for publishing and delivery. It is developed in coordination with the LiquiDoc Content Management Framework, a recommended set of architecture, strategies, and conventions for building robust documents with LiquiDoc. LiquiDoc itself is fairly open-ended, supporting various configurations of its dependent platforms, Jekyll and Asciidoctor.
The utility currently provides for basic configuration of build jobs, and it can be incorporated into build toolchains. The gem does not have a formalized Ruby API yet, but the command-line interface is very powerful, especially combined with build configs formatted in YAML enhanced by Liquid markup for dynamic parsing of routines at buildtime. (See Dynamic LiquiDoc Build Configurations.) From any given data file, multiple template-driven parsing operations can be performed to produce totally different output formats from the same content and data sources.
Upcoming capabilities include a secondary publish function for generating Asciidoctor output from data-driven AsciiDoc-formatted files, including ePub and even HTML/JavaScript slide presentations.
See this project’s GitHub issues for upcoming features, and feel free to add your own requests.
NOTE: Your system must be running Ruby 2.3 or later. Linux and macOS users should be okay. See rubyinstaller.org if you’re on Windows.
1. Create a file called Gemfile in your project’s root directory.

2. Populate the file with LiquiDoc dependencies.

A LiquiDoc project Gemfile:

source 'https://rubygems.org'

gem 'json'
gem 'liquid'
gem 'asciidoctor'
gem 'asciidoctor-pdf'
gem 'logger'
gem 'crack'
gem 'liquidoc'

TIP: A version of this file is included in the LiquiDoc CMF bootstrap repo, which is a recommended way to quickstart or demo a LiquiDoc CMF application.

3. Open a terminal (command prompt). If you don’t have a preferred terminal application, use your OS’s magic search and look for terminal.

4. Navigate to your project root directory. Example: cd Documents/workspace/my_project

5. Run bundle install to prepare dependencies. If you do not have Bundler installed, Ruby will tell you. Enter gem install bundler, let Bundler install, then repeat this step.
Cool! LiquiDoc should now be ready to run with Bundler support, which is the strongly recommended approach.
LiquiDoc provides a Ruby command-line tool for processing source files into new text files based on templates you define. These definitions can be command-line options, or they can be instructed by preset configurations you define in separate configuration files.
TIP: Quickstart: If you want to try the tool out with dummy data and templates, clone the LiquiDoc CMF bootstrap repo and run the suggested commands. This will set you up with an architecture and starter files, including a basic build config.
Give LiquiDoc (1) any proper YAML, JSON, XML, or CSV (with header row) data file and (2) a template mapping any of the data to token variables with Liquid markup — LiquiDoc returns STDOUT feedback or writes a new file (or multiple files) based on that template.
bundle exec liquidoc -d _data/sample.yml -t _templates/liquid/sample.asciidoc -o _output/sample.adoc
This single-action invocation of LiquiDoc ingests data from the YAML file sample.yml, reads the Liquid-formatted template sample.asciidoc, and generates the AsciiDoc-formatted file sample.adoc.
TIP: Add --verbose to any liquidoc command to see the steps the utility is taking.
The best way to use LiquiDoc is with a configuration file. This not only makes the command line much easier to manage (requiring just a configuration file path argument), it also adds the ability to perform more complex build routines and manage them with source control.
Here is a very simple build routine instructed by a LiquiDoc config:
- action: parse # (1)
  data: source_data_file.json # (2)
  builds: # (3)
    - template: liquid_template.html # (4)
      output: _output/output_file.html # (5)
    - template: liquid_template.markdown # (4)
      output: _output/output_file.md # (5)

1. The top-level - denotes a new, consecutively executed “step” in the build. The action: parameter determines what type of action this step will perform. The options are parse, migrate, render, and deploy.
2. If the data: setting’s value is a string, it must be the filename of a format automatically recognized by LiquiDoc: .yml, .json, .xml, or .csv. Otherwise, data: must contain subordinate settings for file: and type:.
3. The builds: section contains a list of procedures to perform on the data. It can include as many subroutines as you wish to perform. This one instructs two builds.
4. The template: setting should be a Liquid-formatted file (see Templating with Liquid).
5. The output: setting is a path and filename where you wish the output to be saved. It can also be stdout to write to the console.
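For orientation, the dispatch loop at the heart of such a config can be sketched in a few lines of plain Ruby. This is only an illustration of the step/action/builds shape described above, not LiquiDoc’s actual implementation; all variable names here are made up.

```ruby
require 'yaml'

# The one-step, two-build config from the example above, as a YAML string.
config_yaml = <<~YAML
  - action: parse
    data: source_data_file.json
    builds:
      - template: liquid_template.html
        output: _output/output_file.html
      - template: liquid_template.markdown
        output: _output/output_file.md
YAML

steps = YAML.safe_load(config_yaml)

# Walk the steps consecutively, dispatching on the action: key.
log = []
steps.each do |step|
  case step['action']
  when 'parse'
    # Each build subroutine pairs one template with one output target.
    step['builds'].each do |build|
      log << "parse #{step['data']} -> #{build['output']} via #{build['template']}"
    end
  when 'migrate', 'render', 'deploy'
    log << "#{step['action']} step"
  else
    raise "Unrecognized action: #{step['action']}"
  end
end
```

One parse step with two entries under builds: yields two output files from the same data source.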
Here is a more complex parse step using a free-form data source:

- action: parse
  data: # (1)
    file: source_data_file.txt # (2)
    type: regex # (3)
    pattern: (?<kee>[A-Z0-9_]+)\s(?<valu>.*)\n # (4)
  builds:
    - template: liquid_template.html
      output: _output/output_file.html
    - template: liquid_template.markdown
      output: _output/output_file.md
  stage: parse-my-file # (5)

1. In this format, the data: setting contains several other settings.
2. The file: setting accepts any text file, regardless of the file extension or the data formatting within the file. This field is required.
3. The type: field can be set to regex if you will be using a regular expression pattern to extract data from lines in the file. It can also be set to yml, json, xml, or csv if your file is in one of these formats but uses a nonstandard extension.
4. If your type is regex, you must supply a regular expression pattern. This pattern is applied to each line of the file, scanning for matches to turn into key-value pairs. Your pattern must contain at least one named group: unescaped ( and ) markers delimit the group, and ?<string> names it, where string is the variable to assign to any content matching the rest of the group (everything else between the unescaped parentheses).
5. Optionally, you can tag any top-level step with a label. This will be expressed during logging, and eventually it will enable suppressing or reordering steps by name (see Issue #33).
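To make the named-group mechanics concrete, here is roughly what that per-line extraction looks like in plain Ruby. The sample lines are invented for illustration; the kee/valu group names come from the pattern in the example config.

```ruby
# The named groups ?<kee> and ?<valu> from the pattern above, applied to
# each line of a (made-up) source file.
pattern = /(?<kee>[A-Z0-9_]+)\s(?<valu>.*)/

lines = ['TIMEOUT_SECS 300', 'RETRY_LIMIT 5 attempts max']

# Each matching line becomes one set of key-value pairs.
data = lines.map do |line|
  m = pattern.match(line)
  { 'kee' => m[:kee], 'valu' => m[:valu] }
end
# data[0] #=> {"kee"=>"TIMEOUT_SECS", "valu"=>"300"}
```

Everything the first group captures lands under kee, and the remainder of the line lands under valu, one record per line.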
When you have established a configuration file, you can call it with the option -c on the command line.
bundle exec liquidoc -c _configs/cfg-sample.yml --stdout
TIP: Repeat without the --stdout flag, and you’ll find the generated files in _output/, as defined in the configuration.
The primary type of action performed by LiquiDoc during a build step is parsing semi-structured data into any flat format desired.
Valid data sources come in a few different types.
There are the built-in data types (YAML, JSON, XML, CSV) versus the free-form type (files processed using regular expressions, designated by the regex data type).
There is also a divide between simple one-record-per-line data types (CSV and regex), which produce one set of parameters for every line in the source file, and nested data types that can reflect far more complex structures.
The native nested formats are actually the most straightforward.
So long as your filename has a conventional extension, you can just pass a file path for this setting.
That is, if your file ends in .yml, .json, or .xml, and your data is properly formatted, LiquiDoc will parse it appropriately.
For standard-format files that have non-standard file extensions (for example, .js rather than .json for a JSON-formatted file), you must declare a type explicitly.
- action: parse
  data:
    file: _data/source_data_file.js
    type: json
  builds:
    - template: _templates/liquid_template.html
      output: _output/output_file.html

Once LiquiDoc knows the right file type, it will parse the file into a Ruby object for further processing.
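The extension-based detection amounts to something like the following Ruby sketch. This is a simplified illustration, not LiquiDoc’s internals; the ingest helper and its arguments are made up for the example.

```ruby
require 'yaml'
require 'json'

# Hypothetical sketch: pick a parser from the declared type:, or fall
# back to the file extension. An explicit declaration wins.
def ingest(filename, raw, declared_type: nil)
  type = declared_type || File.extname(filename).delete_prefix('.')
  case type
  when 'yml', 'yaml' then YAML.safe_load(raw)
  when 'json'        then JSON.parse(raw)
  else raise "Unrecognized extension; declare a type: for #{filename}"
  end
end

ingest('sample.yml', 'name: enabled')
# A .js file holding JSON needs the explicit type, as in the config above.
ingest('source_data_file.js', '{"name": "enabled"}', declared_type: 'json')
```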
Data ingested from CSV files will use the first row as key names for columnar data in the subsequent rows, as shown below.
name,description,default,required
enabled,Whether project is active,,true
timeout,The duration of a session (in seconds),300,false

The above source data, parsed as a CSV file, will yield an array of hashes. Each array item is a structure (what Ruby calls a hash) representing a row from the source file (except the first row, which establishes parameter keys). As represented in the CSV example above, if the structure contains more than one key-value pair (more than one “column” in the source), all such pairs will be siblings, not nested or hierarchical.
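You can reproduce that header-row-to-keys mapping with Ruby’s standard CSV library; this illustrates the mapping rather than LiquiDoc’s exact code path.

```ruby
require 'csv'

csv_source = <<~CSV
  name,description,default,required
  enabled,Whether project is active,,true
  timeout,The duration of a session (in seconds),300,false
CSV

# headers: true makes the first row supply the key names for each record.
data = CSV.parse(csv_source, headers: true).map(&:to_h)

data[0]['name']    #=> "enabled"
data[0]['default'] #=> nil (empty cell)
data[1]['default'] #=> "300"
```

Note that CSV values arrive as strings (or nil for empty cells); any typecasting happens later, in templates or downstream logic.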
data[0].name #=> enabled
data[0].description #=> Whether project is active
data[0].default #=> nil
data[0].required #=> true
data[1].name #=> timeout
data[1].description #=> The duration of a session (in seconds)
data[1].default #=> 300
data[1].required #=> false

Unstructured data files can be ingested as well, as long as records are delineated by lines (as with CSV) and each line meets a consistent pattern we can “scrape” for data to organize. This method generates arrays of structures similarly to the CSV approach.
Unstructured records are parsed using regular expression (“regex”) patterns. Any file organized with one record per line may be consumed and parsed by LiquiDoc, provided you tell the parser which variables to extract from where. The parser reads each line individually, applying your regex pattern to extract data using named groups, then storing the results as variables for the associated parsing action.
TIP: Learn regular expressions. If you deal with docs but are not a regex user, become one. Regular expressions are incredibly powerful and can save hours of error-prone manual work such as complex find-and-replace.
A_B A thing that *SnASFHE&"\|+1Dsaghf true
G_H Some text for &hdf'" 1t`F false
- action: parse
  data:
    file: _data/sample.free
    type: regex
    pattern: ^(?<code>[A-Z_]+)\s(?<description>.*)\s(?<required>true|false)\n
  builds:
    - template: _templates/liquid_template.html
      output: _output/output_file.html

Let’s take a closer look at that regex pattern.
^(?<code>[A-Z_]+)\s(?<description>.*)\s(?<required>true|false)\n

We see the named groups code, description, and required.
This maps nicely to a new array.
data[0].code #=> A_B
data[0].description #=> A thing that *SnASFHE&"\|+1Dsaghf
data[0].required #=> true
data[1].code #=> G_H
data[1].description #=> Some text for &hdf'" 1t`F
data[1].required #=> false

Free-form/regex parsing is obviously more complicated than the other data types. Its use case is usually when you simply cannot control the form your source takes.
The regex type is also handy when the content of some fields would be burdensome to store in conventional semi-structured formats like those natively parsed by LiquiDoc. This is the case for jumbled content containing characters that require escaping, so you can store source matter like that from the example above in the rawest possible form.
LiquiDoc can directly convert any supported semi-structured data input format to either YAML or JSON output.
Simply provide no template parameter, and make sure the output file has a proper extension (.yml or .json).
- action: parse
  data: _data/testdata.xml
  output: _build/frontend/testdata.json

NOTE: This feature is in need of validation. XML and CSV output will be added in a future release if direct conversions prove useful.
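Under the hood, such a conversion is a small operation. Here is a rough sketch in stdlib Ruby, assuming a YAML source for simplicity (an XML source would go through the crack gem that the Gemfile bundles); this is an illustration, not LiquiDoc’s code.

```ruby
require 'yaml'
require 'json'

yaml_source = <<~YAML
  product:
    name: LiquiDoc
    formats: [yml, json]
YAML

# Ingest the source into a Ruby object, as any parse action would.
data = YAML.safe_load(yaml_source)

# With no template given, pick the serializer from the output extension.
output_file = 'testdata.json'
converted =
  case File.extname(output_file)
  when '.json' then JSON.pretty_generate(data)
  when '.yml'  then data.to_yaml
  end
```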
Shopify’s open-source Liquid templating language and engine are used for parsing complex variable data in plaintext markup, typically for generating iterated (looping) output. For instance, a data structure of glossary terms and definitions can be looped over and pressed into more publish-ready markup, such as Markdown, AsciiDoc, reStructuredText, LaTeX, or HTML.
Any valid Liquid-formatted template is accepted, in the form of a text file with any extension.
For data sourced in CSV format or extracted through regex source parsing, all data is passed to the Liquid template parser as an array called data:, containing one or more rows to be iterated through.
Data sourced in YAML, XML, or JSON may be passed as complex structures with custom names determined in the file contents.
Looping through known data formats is fairly straightforward. A for loop iterates through your data, item by item. Each item or row contains one or more key-value pairs.
{% for row in data %}{{ row.name }}::
{{ row.description }}
+
[horizontal.simple]
Required:: {% if row.required == "true" %}*Yes*{% else %}No{% endif %}
{% endfor %}

In the rows.asciidoc Liquid template above, we’re instructing Liquid to iterate through our data items, generating a data structure called row each time.
The double-curly-bracketed tags convey variables to evaluate.
This means {{ row.name }} is intended to express the value of the name parameter in the item presently being parsed.
The other curious marks such as :: and [horizontal.simple] are AsciiDoc markup — they are the formatting we are trying to introduce to give the content form and semantic relevance.
In Liquid and most templating systems, any row containing a non-printing “tag” will leave a blank line in the output after parsing.
One solution is to stack tags horizontally when you do not wish to generate a blank line, as with the first row above.
However, a non-printing tag such as {% endfor %} will generate a blank line that can be inconvenient in the output.
This side effect of templating is unfortunate, as it discourages elegant, “accordion-style” code nesting, like you see in the HTML example below (AsciiDoc parsed into HTML). Unlike most templating formats, however, Liquid offers highly effective whitespace-control markup. This additional markup is not always worth the time but can come in quite handy, especially when generating markup where indentation matters. In the end, ugly Liquid templates can generate quite elegant markup output with exquisite precision.
The rows.asciidoc template above would generate the following:
A_B::
A thing that *SnASFHE&"\|+1Dsaghf
+
[horizontal.simple]
Required::: *Yes*
G_H::
Some text for &hdf'" 1t`F
+
[horizontal.simple]
Required::: No

The generically styled AsciiDoc rich text reflects the distinctive structure with (very little) more elegance.
- A_B
-
A thing that *SnASFHE&"\|+1Dsaghf
Required Yes
- G_H
-
Some text for &hdf'" 1t`F
Required No
The implied structures are far more evident when displayed as HTML derived from Asciidoctor parsing of the LiquiDoc-generated AsciiDoc source (from Example — AsciiDoc-formatted output).
<div class="dlist data-line-1">
<dl>
<dt class="hdlist1">A_B</dt>
<dd>
<p>A thing that *SnASFHE&"\|+1Dsaghf</p>
<div class="hdlist data-line-5 simple">
<table>
<tr>
<td class="hdlist1">
Required
</td>
<td class="hdlist2">
<p><strong>Yes</strong></p>
</td>
</tr>
</table>
</div>
</dd>
<dt class="hdlist1">G_H</dt>
<dd>
<p>Some text for &hdf'" 1t`F</p>
<div class="hdlist data-line-11 simple">
<table>
<tr>
<td class="hdlist1">
Required
</td>
<td class="hdlist2">
<p>No</p>
</td>
</tr>
</table>
</div>
</dd>
</dl>
</div>

Remember, all this started out as that little old free-form text file.
A_B A thing that *SnASFHE&"\|+1Dsaghf true
G_H Some text for &hdf'" 1t`F false
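To tie the walkthrough together, here is the whole pipeline approximated in stdlib Ruby, with ERB standing in for Liquid. LiquiDoc itself parses with the liquid gem; ERB is used here only so the sketch stays dependency-free, and the iteration-plus-substitution pattern is the same.

```ruby
require 'erb'

# The same two free-form records as above.
raw = <<~'RAW'
  A_B A thing that *SnASFHE&"\|+1Dsaghf true
  G_H Some text for &hdf'" 1t`F false
RAW

# The pattern from the config, anchored per line.
pattern = /^(?<code>[A-Z_]+)\s(?<description>.*)\s(?<required>true|false)$/

data = raw.lines.map { |line| pattern.match(line.chomp).named_captures }

# An ERB template emitting the same AsciiDoc definition-list markup
# that the rows.asciidoc Liquid template produces.
template = <<~'TPL'
  <% data.each do |row| -%>
  <%= row['code'] %>::
  <%= row['description'] %>
  +
  [horizontal.simple]
  Required::: <%= row['required'] == 'true' ? '*Yes*' : 'No' %>
  <% end -%>
TPL

output = ERB.new(template, trim_mode: '-').result(binding)
puts output
```

The result matches the AsciiDoc-formatted output shown earlier: one definition-list entry per source line.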
In addition to data files, parse operations accept fixed variables and environment variables.
Fixed variables are defined using a per-build structure called variables: in the config file.
Each build operation can accept a distinct set of variables.
- action: parse
  data: schema.yml
  builds:
    - name: parse-basic-nav
      template: _templates/side-nav.html
      output: _output/side-nav-basic.html
      variables:
        product:
          edition: basic
    - name: parse-premium-nav
      template: _templates/side-nav.html
      output: _output/side-nav-prem.html
      variables:
        product:
          edition: premium

This configuration will use the same data and template to generate two distinct output files.
Each build uses an identical Liquid template (side-nav.html) to parse its distinct side-nav-<edition>.html file.
Inside that template, we might find a block of Liquid code hiding some navigation items from the basic edition, and vice versa.
<li><a href="home">Home</a></li>
<li><a href="dash">Dashboard</a></li>
{% if vars.product.edition == "basic" %}
<li><a href="upgrade">Upgrade!</a></li>
{% elsif vars.product.edition == "premium" %}
<li><a href="billing">Billing</a></li>
{% endif %}

This portion of the example config presses the single Liquid template side-nav.html into two different nav menus, either to be served on two parallel sites or on one site with the ability to select front-end elements depending on user status.
In this example, only the menu shown to premium users will contain the billing link; basic users will see an upgrade prompt.
After this parsing, files are written in any of the given output formats, or else just written to console as STDOUT (when you add the --stdout flag to your command or set output: stdout in your config file).
Liquid templates can be used to produce any plaintext format imaginable.
Just format valid syntax with your source data and Liquid template, then save with the proper extension, and you’re all set.
During the build process, different tools handle file assets variously, so your images and other embedded files are not always where they need to be relative to the current procedure. Migrate actions copy resource files to a temporary/uncommitted directory during the build procedure so they can be readily accessed by subsequent steps.
In addition to designating action: migrate, migrate operations require just a few simple settings.
- action: migrate
  source: assets/images
  target: _build/img
  options:
    inclusive: false
- action: migrate
  source: index-map.adoc
  target: _build/index-map.adoc

The first action step above copies all the files and folders in assets/images and adds them to _build/img.
It will only recreate the contents of the source directory, not the directory path itself, because the inclusive: option is set to false (its default value is true).
When both the source and target paths are directories and inclusive is true, the files are copied to target/source/.
When inclusive is false, they copy to target/.
Individual files must be listed in individual steps, one per step, as in the second step above.
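The copy semantics can be sketched with Ruby’s stdlib FileUtils. This illustrates the inclusive: behavior described above, not LiquiDoc’s actual migrate code; the helper name is made up.

```ruby
require 'fileutils'
require 'tmpdir'

# Hypothetical sketch of the inclusive: semantics.
def migrate(source, target, inclusive: true)
  FileUtils.mkdir_p(target)
  if File.directory?(source) && inclusive
    FileUtils.cp_r(source, target)                   # -> target/<source basename>/...
  elsif File.directory?(source)
    FileUtils.cp_r(Dir.glob("#{source}/*"), target)  # -> target/... (contents only)
  else
    FileUtils.cp(source, target)                     # single file, one per step
  end
end

dir = Dir.mktmpdir
src = File.join(dir, 'assets/images')
FileUtils.mkdir_p(src)
File.write(File.join(src, 'logo.png'), 'fake image bytes')

migrate(src, File.join(dir, '_build/img'),  inclusive: false)
migrate(src, File.join(dir, '_build/img2'), inclusive: true)
# _build/img/logo.png exists; _build/img2/images/logo.png exists
```

With inclusive: false only the directory’s contents land in the target; with inclusive: true the source directory itself is recreated inside the target.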
Presently, all render actions convert AsciiDoc-formatted source files into rich-text documents, such as PDFs and HTML pages. LiquiDoc uses Asciidoctor’s Ruby engine and various other plugins to generate output in a few supported formats.
First let’s look at a render action configuration step.
- action: render
  source: book-index.adoc
  data: _configs/asciidoctor.yml
  builds:
    - output: _build/publish/codewriting-book-draft.pdf
      theme: theme/pdf-theme.yml
    - output: _build/publish/codewriting-book-draft.html
      theme: theme/site.css

Each action for rendering a conventionally structured book-style document requires an index: the primary AsciiDoc file to process, labeled source: in our configuration.
This file can contain all of your AsciiDoc content, if you wish.
Alternatively, it can be made up entirely of include:: macros, creating a linear map of your document’s contents, which may themselves be more AsciiDoc files, code examples, and so forth.
= This File Can Contain Regular AsciiDoc Markup

include::chapter-01.adoc[]

include::code-sample.rb[tags="booksample"]

include::code-sample.js[lines="22..33"]

After the title line, the first macro instruction in this example will embed the entire file chapter-01.adoc, parsing and rendering its AsciiDoc-formatted contents in the process.
The second instruction extracts part of the file code-sample.rb and embeds it here.
Inside code-sample.rb, content is tagged with comment code to mark what we wish to extract.
In the case of a Ruby file, you would expect to find code like the following in the source.
# tag::booksample[]
def exampleblock
  puts "This is an example for my book."
end
# end::booksample[]

For AsciiDoc source code, you would use the // comment notation.
// tag::booksample[]
purpose::
to demonstrate inclusion.
// end::booksample[]

The third instruction in our example AsciiDoc index file, include::code-sample.js[lines="22..33"], is a dangerous little bugger: it extracts a fixed span of code lines, as designated. Unlike tagged regions, fixed line ranges silently fall out of sync when the source file changes, so use them with caution.
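Tag-based extraction is simple enough to sketch in a few lines of Ruby. This illustrates the mechanism (keep only the lines between the markers), not Asciidoctor’s exact implementation.

```ruby
# Keep only the lines between the tag::name[] and end::name[] markers.
def extract_tag(source, tag)
  inside = false
  kept = []
  source.each_line do |line|
    if line.include?("tag::#{tag}[]")
      inside = true
    elsif line.include?("end::#{tag}[]")
      inside = false
    elsif inside
      kept << line
    end
  end
  kept.join
end

ruby_source = <<~RUBY
  # Preamble that is not included.
  # tag::booksample[]
  def exampleblock
    puts "This is an example for my book."
  end
  # end::booksample[]
RUBY

snippet = extract_tag(ruby_source, 'booksample')
```

Because the markers live in comments, the tagged file remains valid, runnable source in its own language.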
Static-site generators are critical tools for just about any docs-as-code infrastructure. Support starts with Jekyll, with more to come (Awestruct and possibly Grain next); each generator added will retain all of its capabilities and do most of the heavy lifting.
LiquiDoc’s role is primarily to help your preferred SSG handle your source in ways consistent with any other rendering and file managing your docs codebase requires. For example, the jekyll-asciidoc extension that enables Jekyll builds to parse AsciiDoc markup only honors attributes set in Jekyll config files. Therefore, just before triggering the build, LiquiDoc writes a new config file from which Jekyll draws AsciiDoc attribute assignments.
Jekyll::
A Jekyll render operation calls bundle exec jekyll build from the command line, pretty much the way you would do it manually. You still need a Jekyll configuration file with the usual settings in it. This is established in your build-config block:
- action: render
  data: globals.yml
  builds:
    - backend: jekyll
      properties:
        files:
          - _configs/jekyll-global.yml
          - _configs/jekyll-portal-1.yml
      arguments:
        destination: build/site/user-basic
      attributes:
        portal_term: Guide

The backend: designation of jekyll is required, and at least one file under properties: files: is strongly encouraged for proper Jekyll behavior.
LiquiDoc will write an additional YAML file containing all of the Asciidoctor attributes, to be appended to this list when the build command is run.
This captures attributes offered up in the action-level data: file and in the attributes: section of the build step.
The arguments: block is made up of key-value parameters that establish or override any Jekyll config settings.
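The config-writing handoff described above can be sketched like so. The variable names and file layout are invented for illustration; the asciidoctor: attributes: key shape is the one the jekyll-asciidoc plugin reads from Jekyll config files, which is why LiquiDoc writes attributes there at all.

```ruby
require 'yaml'
require 'tmpdir'

# Hypothetical sketch: gather attributes from the action-level data file
# and the build's attributes: block, write them in the form jekyll-asciidoc
# reads, and append that file to the --config list for the build command.
global_data  = { 'company' => 'ACME' }       # stand-in for data: globals.yml
build_attrs  = { 'portal_term' => 'Guide' }  # from the attributes: block
config_files = ['_configs/jekyll-global.yml', '_configs/jekyll-portal-1.yml']

attrs_file = File.join(Dir.mktmpdir, 'asciidoc-attributes.yml')
File.write(attrs_file,
  { 'asciidoctor' => { 'attributes' => global_data.merge(build_attrs) } }.to_yaml)

# Later files in --config override earlier ones, so the generated
# attributes file goes last.
command = "bundle exec jekyll build " \
          "--config #{(config_files + [attrs_file]).join(',')} " \
          "--destination build/site/user-basic"
```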
NOTE: The action-level parameter source: is left blank in this example. This setting cannot be used to designate a Jekyll source path. If the above action had a second build step, such as a single output doc, the source would have relevance as the index file for that document.
For basic render actions, the source: file and other .adoc files determine most of the rest of the content source files (if any) using AsciiDoc includes.
But Asciidoctor renderings can be configured and manipulated by attribute settings at other stages.
Basically, we are trying to maximize our readiness to ingest document data and build properties from a wide range of sources.
This way inline substitutions can be made out of data living outside the source tree of any particular document, passed into the document build in the form of YAML data converted into — you guessed it — AsciiDoc attributes.
NOTE: AsciiDoc attributes are not the same as Asciidoctor configuration properties. While both kinds create substitutions that are expressed the same way ({property_name}), they are set differently in your LiquiDoc configuration.
LiquiDoc provides several means for adding attributes to your documents, in addition to the ways you might be used to setting attributes (inside your docfiles and command line). They are listed below in the order of assignment/substitution. Therefore, an identical value defined explicitly in each subsequent space will overwrite any set in the previous stages.
The order of substitution is as follows.
After that, we’ll demonstrate even more ways to ingest datasets.
AsciiDoc document inline::
The most common way to set variables is inside your AsciiDoc source files, typically at the top of your index.adoc file or the equivalent. Any parameters set there will cascade through your included files for parsing. This is a good place to establish defaults, but they can be overwritten by the other four means of setting AsciiDoc attributes.

Example — Setting AsciiDoc attributes inline:

:some_var: My value
:imagesdir: ./img
Document data file::
A YAML-formatted data file containing a stack of key-value pairs can be passed to Asciidoctor.

Example AsciiDoc attributes data file:

imagesdir: assets/images
basedir: _build
my_custom_var: Some text, can include spaces and most punctuation

This file must be called out in your configuration using the top-level data: setting.

Example AsciiDoc data file setting for attributes ingest:

- action: render
  source: my_index.adoc
  data: _data/asciidoctor.yml
  builds:
    - output: myfile.html

You may also pass multiple files and/or just a sub-block of a given file (a named variable with its own nested data). See below.
Per-build properties files::
With document-wide attributes set, we begin overwriting them on a per-build basis for different renderings of that same source document. For starters, LiquiDoc can extract attributes from still more data files at this stage, like so:

Example — Attribute extraction from build-specific data files:

- output: _build/publish/manual-europe.pdf
  properties:
    files: _conf/jekyll.yml,_data/europe.yml
- output: _build/publish/manual-china.pdf
  properties:
    files: _conf/jekyll.yml,_data/china.yml

The properties: files: setting can take the form of a comma-delimited list or a YAML array, and it can filter to specific subdata (see below). These per-build properties files are meant to carry document settings, so for static-site renderings (e.g., Jekyll) they should be YAML files formatted for Jekyll configuration reads.
Per-build in LiquiDoc config::
If your document is a book, and your builds are an HTML edition and a PDF edition, you can pass distinct settings to each.

Example per-build attribute settings in config file:

- action: render
  source: my_book.adoc
  data: _data/asciidoctor.yml
  builds:
    - output: my_book.html
      attributes:
        edition: HTML
    - output: my_book.pdf
      attributes:
        edition: PDF
    - output: my_book_special.pdf
      attributes:
        edition: Special

Imagine this affecting content in the book file.

Example book index with variable content:

= My Awesome Book: {edition} Edition

include::chapter-1.adoc[]
include::chapter-2.adoc[]

ifeval::["{edition}" == "Special"]
include::chapter-3.adoc[]
endif::[]

The AsciiDoc markup above that might be least familiar to you is the conditional block, represented by ifeval::[] and endif::[]. Here we see how passing attributes at the build-iteration level gives us all kinds of cool powers. Not only are we setting the subtitle with a variable; if we’re building the special edition, we add a chapter the other two editions ignore.
Command-line arguments::
There is yet a way to override all of this, which is also handy for testing variables without editing any files: pass arguments via the -a option on the command line. The -a option flag accepts an argument in the format key=value, where key is the name of your attribute and value is your optional assignment for that attribute. You may pass as many attributes as you like this way, up to the capacity of your shell’s command line, which is probably plenty.

Example — Setting global build attributes on the CLI:

bundle exec liquidoc -c _configs/my_book.yml -a edition='Very Special NSFW'
Multiple attribute files::
You may also specify more than one attribute file by separating filenames with commas. They will be ingested in order.
Specific subdata::
You may specify a particular block in your data file by designating it with a colon.

Example — Listing multiple data files & designating a nested block:

data:
  - asciidoc.yml
  - product.yml:settings.attributes

Example — Designating a data block — alternate format:

properties:
  files: asciidoc.yml,product.yml:settings.attributes

Here we see , used as a delimiter between files and : as an indicator that a block designator follows. In this case, the render action will load the settings.attributes block from the product.yml file.

Example — Designating data blocks within a properties files setting:

properties:
  files:
    - countries.yml:cn
    - edition.yml:enterprise.premium
In this last case, we’re passing locale settings for a premium edition targeted to a Chinese audience.
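Resolving such a designator is a split-and-dig operation, sketched below. The helper name and the in-memory sources hash are made up for illustration; LiquiDoc reads real files.

```ruby
require 'yaml'

# Hypothetical sketch of the file.yml:block.sub designator: split on the
# first colon, load the file, then dig into the named block.
def load_block(designator, raw_sources)
  file, block = designator.split(':', 2)
  data = YAML.safe_load(raw_sources.fetch(file))
  block ? data.dig(*block.split('.')) : data
end

product_yml = <<~YAML
  settings:
    attributes:
      edition: premium
  other_settings: ignored for this build
YAML

sources = { 'product.yml' => product_yml }

load_block('product.yml:settings.attributes', sources)
#=> {"edition"=>"premium"}
```

Only the designated block reaches the build; sibling data in the same file is left behind.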
Certain AsciiDoc/Asciidoctor settings are significant enough that they can be set using parameters in the build config. Establishing these as per-build settings in your config file will override anywhere else they are set, except on the command line.
IMPORTANT: These settings do not necessarily have 1:1 correspondence to AsciiDoc(tor) attributes.
output::
The filename for saving rendered content. This build setting is required for render operations that generate a single file. Static-site generation renders, however, target a directory set in the SSG’s config.

backend::
The backend determines the rendering context. When building single-file output, the backend is typically determined from the output: filename and/or the doctype:. Some renderers, such as Jekyll, require specific backend designations (jekyll). Valid options are html5, pdf, and jekyll, with more to come.

doctype::
Overrides the Asciidoctor doctype attribute. Valid values are:

book:::
Generates a book-formatted document in PDF, HTML, or ePub.

article:::
Generates an article-formatted document in PDF, HTML, or ePub.

manpage:::
Generates Linux man page format.

deck:::
Generates an HTML/JavaScript slide deck. (Not yet implemented.)

style::
Points either to a YAML configuration for PDF styles or a CSS stylesheet for HTML rendering.

variables::
Designates one or more nested variables alongside ingested data in parse actions.

properties::
Designates a file or files for settings and additional explicit configuration at the build level for render actions.
Mainstream deployment platforms are probably better suited to tying all your operations together, but we plan to bake a few common operations in to help you get started. For true build-and-deployment control, consider build tools such as Make, Rake, and Gradle, or deployment tools like Travis CI, CircleCI, and Jenkins.
For testing purposes, however, spinning up a local webserver with the same stroke that you build a site is pretty rewarding and time saving, so we’ll start there.
For now, this functionality is limited to adding a --deploy flag to your liquidoc command.
This will attempt to serve files from the destination: set for the associated Jekyll build.
|
Warning
|
LiquiDoc-automated deployment of Jekyll sites is both limited and untested under nonstandard conditions. Non-local deployment should be handled by external continuous-integration/continuous-delivery (CI/CD) tools. |
If you’re using Jekyll to build sites, LiquiDoc makes indexing your files with the Algolia cloud search service a matter of configuration. The heavy lifting is performed by the jekyll-algolia plugin, but LiquiDoc can handle indexing even a complex site by using the same configuration that built your HTML content (which is what Algolia actually indexes).
|
Note
|
You will need a free community (or premium) Algolia account to take advantage of Algolia’s indexing service and REST API. Simply create a named index, then visit the API Keys to collect the rest of the info you’ll need to get going. |
Two hard-coding steps are required to prep your source to handle Algolia index pushes.
-
Add a block to your main Jekyll configuration file.

Example Jekyll Algolia configuration

algolia:
  application_id: 'your-application-id' # (1)
  search_only_api_key: 'your-search-only-api-key' # (2)
  extensions_to_index: [adoc] # (3)

(1) From the top bar of your Algolia interface.
(2) From the API Keys screen of your Algolia interface.
(3) List as many extensions as apply, separated by commas.
-
Add a block to your build config.
- action: render
  data: globals.yml
  builds:
    - backend: jekyll
      properties:
        files:
          - _configs/jekyll-global.yml
          - _configs/jekyll-portal-1.yml
        arguments:
          destination: build/site/user-basic
      attributes:
        portal_term: Guide
      search:
        index: 'portal-1'
The index: parameter is for the name of the index you are pushing to. (An Algolia “app” can have multiple “indices”.) This entry configures but does not trigger an indexing operation.
Indexing is invoked by command-line flags.
Add --search-index-push or --search-index-dry along with the --search-api-key='your-admin-api-key-here' argument in order to invoke the indexing operation.
The --search-index-dry flag merely tests content packaging, whereas --search-index-push connects to the Algolia REST API and attempts to push your content for indexing and storage.
bundle exec liquidoc -c _configs/build-docs.yml --search-index-push --search-index-api-key='90f556qaa456abh6j3w7e8c10t48c2i57'

This operation performs a complete build, including each render operation, before the Algolia plugin processes content and pushes each build to the indexing service, in turn.
|
Tip
|
To add modern site search for your users, add Algolia’s InstantSearch functionality to your front end! |
As with any software or documentation build tool, routine configuration is everything. Everything needs to be just so in a build: order matters, and resources must be used wisely.
Rather than discuss build strategies broadly here, I have opted to move all my recommendations to the LiquiDoc Content Management Framework. LiquiDoc CMF’s bootstrap repository has more, but the LiquiDoc CMF Guides are the real authority. For now, look there for LDCMF-specific as well as broader strategic build insights.
For non-geniuses like myself, it can be really helpful to have a plain-English accounting of what is happening during a build procedure. During builds, LiquiDoc creates a secondary log as it churns through a configuration.
If you add no documentation fields to your build config’s YAML file, this secondary logger will still generate a plain-language description of the steps it is taking. But each step can be enhanced with customized comments as well, to pass along the reasoning behind it.
By default, these “config explainers” are written to a file stored under your build directory (_build/pre/config-explainer.adoc unless otherwise established).
Alternatively, the log will print to screen (console) during a configured LiquiDoc build procedure.
Simply add the --explicit flag to your command.
bundle exec liquidoc -c _configs/build-docs.yml --explicit

This feature will explain which sources are used to produce what output, but it won’t say why. LiquiDoc administrators can state the purpose of each action step and each build sub-step. There are two ways to intervene with the automated log message.
- message
-
Add a custom message: key. The contents of this parameter will appear instead of the automated message.
- reason
-
The reason will be integrated with the automated message (it’s moot with a custom message as described above). Usually it will be appended as a comma-demarcated phrase at the end of the automated statement or in a sensible place in the middle, depending on the structure of the automated message.
_configs/build-docs.yml

- action: migrate
source: theme/
target: _build/
reason: so `theme/` dir will be subordinate to the SSG source path
- action: parse
data: data/product.yml
message: . Performs the first round of product-data parsing to build two structurally vital files, sourcing data in `data/product.yml`.
builds:
- template: _templates/liquid/index-by-user-stories.asciidoc
output: _build/_built_index-stories.adoc
message: |
.. Builds the stories index file used to give order to the PDF index file's inclusion of topic files (`_build/includes/_built_page-meta.adoc`)

|
Tip
|
In custom message: fields, adding AsciiDoc ordered-list markup maintains the ordered lists this feature generates for automated steps (the ones where you don’t explicitly declare a message:).
You may also use bullets (*), add styling directives or other markers, etc.
|
-
Copies theme/ to _build/, so the theme/ dir will be subordinate to the SSG source path.
-
Performs the first round of product-data parsing to build two structurally vital files, sourcing data in data/product.yml.
-
Builds the stories index file used to give order to the PDF index file’s inclusion of topic files (_build/includes/_built_page-meta.adoc).
This config explainer feature is mainly intended to feed into documentation about your primary docs build. The AsciiDoc-formatted explainers can be included anywhere in a document about your docs infrastructure.
As long as we are invoking Liquid to manipulate files with templates in our parse operations, we might as well use it to parse our config files themselves. This is an advanced procedure for injecting programmatic functionality into your builds. If you are comfortable with Liquid templating and basic LiquiDoc build-config structure, you are ready to learn dynamic configuration.
As of LiquiDoc 0.9.0, config files can be parsed (preprocessed) at the top of a build. That is, your config files can contain variables, conditionals, and iterative loops — any Liquid tags and filters supported by LiquiDoc.
All you have to do is add Liquid tags to your YAML configuration file. If the Liquid markup in your config file expects variables, pass them on the liquidoc CLI using --var key=value.
Dynamic configurations typically expect variables to be passed in, either to directly populate values in the config file or to differentially trigger conditional tags in the config file.
Let’s first take a look at a sample dynamic configuration to see if we can understand what it is trying to do.
build-config.yml dynamic LiquiDoc configuration for alternate builds

- action: parse
data: data/products.yml:{{ vars.product_slug }}
builds:
- template: product-datasheet.asciidoc
output: product-datasheet_{{ vars.product_slug }}.adoc

This config file wants to build a product datasheet for a specific product, which it expects to be indicated by a config variable called product_slug.
Config variables are passed using the --var varname='var val' format, where varname is any key that exists as a Liquid variable in your config file, and 'var val' is its value, wrapped in single quotes.
Let’s say in this case, we want to generate the datasheet for the Windows Enterprise edition of our product.
bundle exec liquidoc -c _configs/build-config.yml -v product_slug=win-ent

|
Note
|
The -v option is an alias for --var.
|
This will cause our dynamic configuration to look for a data block formatted like so: data/products.yml:win-ent.
So long as our products.yml file contains a top-level data structure called win-ent, we’re off to the races.
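To see why this works, here is a rough Ruby sketch of the preprocess-then-load idea: the config file is a template first and YAML second. It uses the standard library's ERB as a stand-in for Liquid (LiquiDoc itself uses Liquid), and the preprocess_config method name is invented for illustration.

```ruby
require 'erb'
require 'yaml'

# The config as a template. ERB's <%= %> stands in for Liquid's {{ }} here.
CONFIG_TEMPLATE = <<~ERB
  - action: parse
    data: data/products.yml:<%= product_slug %>
    builds:
      - template: product-datasheet.asciidoc
        output: product-datasheet_<%= product_slug %>.adoc
ERB

# Render the template with CLI-style variables, then load the result as YAML.
def preprocess_config(template, vars)
  rendered = ERB.new(template).result_with_hash(vars)
  YAML.safe_load(rendered)
end

config = preprocess_config(CONFIG_TEMPLATE, product_slug: 'win-ent')
puts config.first['data'] # prints data/products.yml:win-ent
```

The key point is the ordering: variables are substituted before the YAML parser ever sees the file, so the loaded config contains only concrete values.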
Just as cool as enabling custom builds by accepting what amount to environment variables, we can also handle big, repetitive builds with Liquid looping. Let’s try that file again with some powerful tweaks.
build-config.yml dynamic LiquiDoc configuration for iterative builds

{% assign products = "win-exp,win-ent,mac-exp,mac-ent,ubu-exp,ubu-ent" | split: "," %}
{% for slug in products %}
- action: parse
data: data/products.yml:{{ slug }}
builds:
- template: product-datasheet.asciidoc
output: product-datasheet_{{ slug }}.adoc
{% endfor %}

Now we are building six data sheets using eight lines of code. And notice what is missing: no more vars.-scoped variables, just local ones.
Dynamic configurations are limited only by your imagination.
- action: parse
data: schema.yml
builds:
- name: parse-basic-nav
template: _templates/side-nav.html
output: _output/side-nav-basic.html
variables:
product:
edition: {{ vars.edition }}
environment: {{ vars.env }}

With a configuration like this, our side-nav.html template can further process variables, such as base_url in the example snippet below.
Liquid template (side-nav.html) with variables passed

{% if vars.env == "staging" %}
{% assign base_url = "http://staging.int.example.com" %}
{% elsif vars.env == "production" %}
{% assign base_url = "http://example.com" %}
{% endif %}
LiquiDoc {{ vars.product.edition }}
<ul class="nav">
{% for page in data.pages %}
<li><a href="{{ base_url }}/{{ page.path }}">{{ page.name }}</a>
{% endfor %}
</ul>

To set the values of vars.edition and vars.env in the config file, add for instance --var edition=basic --var env=staging to your command.
Another way to use dynamic configuration is to conditionalize steps in the build. Recipe-based configuration will eventually be added to LiquiDoc, but for now you can toggle parts of your build on and off using conditionals governed by environment variables. For instance,
build-config.yml with conditionalized steps

{% assign build_pdf = true %}
{% assign build_html = true %}
{% case vars.recipe %}
{% when 'pdfonly' %}
{% assign build_html = false %}
{% when 'nopdf' %}
{% assign build_pdf = false %}
{% endcase %}
- action: render
data: _configs/asciidoctor.yml
source: content/product-datasheet.adoc
builds:
{% if build_html %}
- backend: html5
output: product-datasheet.html
{% endif %}
{% if build_pdf %}
- backend: pdf
output: product-datasheet.pdf
{% endif %}With a build config like this, optionally invoking --var recipe=nopdf, for instance, will suppress the PDF substep during the build routine.
Aside from implementing conditional elements in your configs, dynamism also introduces looping. Repetitive procedures written out sequentially, with largely the same specifics each time, take up lots of vertical space and can be difficult to manage. If you’re building lots of parallel documents from the same source with minimal differences in each configuration action or build step, you may find yourself wishing you could write once and execute five times.
With Liquid’s for loops, you can do just that. Review this code and imagine how much vertical space is saved.
{% assign products = "one,two,three,four,five" | split: "," %}
{% assign langs = "en,es" | split: "," %}
- stage: parse-strings
action: parse
data: data/strings.yml
builds:
{% for prod in products %}{% for lang in langs %}
- output: strings-{{prod}}-{{lang}}.yml
template: string-processing.yaml
variables:
portal: {{prod}}
lang: {{lang}}
{% endfor %}{% endfor %}

This code saves the space and maintenance of ten separate output: blocks.
|
Tip
|
In Liquid, loops can only iterate through arrays.
Comma-delimited lists can be converted to arrays using the split filter, which divides their contents into items.
The | split: "," notation here tells Liquid we wish to apply this filter so the variable products can become an array.
|
LiquiDoc supports all standard Liquid tags and filters, as well as all of Jekyll’s custom Liquid filters. Support for Jekyll’s include tag should be coming soon.
Here is a table of all the established configuration settings, as they pertain to each key LiquiDoc action.
| Setting | Parse | Migrate | Render | Deploy |
|---|---|---|---|---|
| Main Per-stage Settings | | | | |
| action | Required | Required | Required | |
| data | Optional | N/A | Optional | |
| source | N/A | Required | Required | |
| target | N/A | Required | N/A | |
| options | N/A | Optional | Optional | |
| stage | Optional | Optional | Optional | |
| builds | Required | N/A | Required | |
| Per-Build Settings | | | | |
| output | Required | N/A | Optional* | |
| backend | N/A | N/A | Optional | |
| config | N/A | N/A | Optional | |
| template | Optional | N/A | N/A | |
| style | N/A | N/A | Optional | |
| attributes | N/A | N/A | Optional | |
| variables | Optional | N/A | N/A | |
| properties | N/A | N/A | Optional | |
| search | N/A | N/A | Optional | |
*The output setting is considered optional for render operations because static-site generation targets a directory set in the SSG’s config file.
I get that this is the least sexy tool anyone has ever built. I truly do.
Except I kind of disagree. To me, it’s one of the most elegant ideas I’ve ever worked on, and I actually adore it.
Maybe it’s due to my love of flat files. The simplicity of anything in / anything out for plaintext files is such a holy grail in my mind. I am a huge fan of the universal converter Pandoc, which has saved me countless hours of struggle.
I totally dig markup languages and dynamic template engines, both of which I’ve been using to build cool shit for about 20 years. These form the direct sublayers of everything done with textual content in computing, and I want to help others play in the sandbox of dynamic markup.
You don’t have to love LiquiDoc to use it, or even to contribute. But if you get what I’m trying to do, give a holler.
The reason I’m developing LiquiDoc is to most flexibly handle common single-sourcing challenges posed by divergent output needs. I intend to experiment with other toolchains, datasource types, and template engines, but the point of this utility is to pull together great technologies to solve tough, recurring problems.
Contributions are very welcome.
This repo is maintained by the former Technical Documentation Manager at Rocana (formerly ScalingData, now mostly acquired by Splunk), which is the original copyright holder of LiquiDoc. I am teaching myself basic Ruby scripting just to code LiquiDoc and related tooling. Therefore, instructional pull requests are encouraged. I have no ego around the code itself. I know this isn’t the best, most consistent Ruby scripting out there, and I confess I’m more interested in what the tool does than how it does it. Help will be appreciated.
That said, because this utility is also made to go along with my book Codewriting, I prefer not to overcomplicate the source code, as I want relative beginners to be able to intuitively follow and maybe even modify it. I guess by that I mean, I’m resisting over-abstracting the source — I must be the beginner I have in mind.
I am very eager to collaborate, and I actually have extensive experience with collective authorship and product design, but I’m not a very social programmer. If you want to contribute to this tool, please get in touch. A pull request is a great way to reach out.
LiquiDoc originated under the copyright of Rocana, Inc, released under the MIT License. This fork is maintained by Brian Dominick, the original author. Rocana has been acquired by Splunk, but the author and driving maintainer of this tooling chose not to continue on with the rest of Rocana engineering, precisely in order to openly explore what tooling of this kind can do in various environments.
I am not sure whether the copyright for the prime source transferred to Splunk, but it does not matter. This fork repository will be actively maintained by the original author, and my old coworkers and their new employer can make use of my upgrades like everyone else.
|
Note
|
The LiquiDoc gem at rubygems.org has been published out of this repo starting with version 0.2.0. |
LiquiDoc and Codewriting author Brian Dominick is now available for contract work around implementation of advanced docs-as-code infrastructure. I am eager to work with engineering and support teams at software companies. I’m also seeking opportunities to innovate management of documentation and presentations at non-software organizations — especially if you’re working to make the world a better place! Check out codewriting.org for more info.